Many studies suggest that creatine helps in treating cognitive decline in individuals when combined with other therapies. It also helps people suffering from Parkinson's and Huntington's disease. Though there are minimal side effects associated with creatine, much like any nootropic, it is not entirely free of them. An overdose of creatine can lead to gastrointestinal issues, weight gain, stress, and anxiety.
MarketInsightsReports provides syndicated market research reports to industries, organizations and individuals with the aim of helping them in their decision-making process. These reports include in-depth market research studies, i.e. market share analysis, industry analysis, information on products, countries, market size, trends, business research details and much more. MarketInsightsReports provides global and regional market intelligence coverage, a 360-degree market view which includes statistical forecasts, competitive landscape, detailed segmentation, key trends, and strategic recommendations.
Similarly, Mehta et al. (2000) noted that the positive effects of methylphenidate (40 mg) on spatial working memory performance were greatest in those volunteers with lower baseline working memory capacity. In a study of the effects of ginkgo biloba in healthy young adults, Stough et al. (2001) found improved performance in the Trail-Making Test A only in the half with the lower verbal IQ.
Some suggested that the lithium would turn me into a zombie, recalling the complaints of psychiatric patients. But at 5mg elemental lithium x 200 pills, I'd have to eat 20 to get up to a single clinical dose (a psychiatric dose might be 500mg of lithium carbonate, which translates to ~100mg elemental), so I'm not worried about overdosing. To test this, I took on day 1 & 2 no less than 4 pills/20mg as an attack dose; I didn't notice any large change in emotional affect or energy levels. And it may've helped my motivation (though I am also trying out the tyrosine).
The U.S. Centers for Disease Control and Prevention estimates that gastrointestinal diseases affect between 60 and 70 million Americans every year. This translates into tens of millions of endoscopy procedures. Millions of colonoscopy procedures are also performed to diagnose or screen for colorectal cancers. Conventional, rigid scopes used for these procedures are uncomfortable for patients and may cause internal bruising or lead to infection because of reuse on different patients. Smart pills eliminate the need for invasive procedures: wireless communication allows the transmission of real-time information; advances in batteries and on-board memory make them useful for long-term sensing from within the body. The key application areas of smart pills are discussed below.
Running low on gum (even using it weekly or less, it still runs out), I decided to try patches. Reading through various discussions, I couldn't find any clear verdict on what patch brands might be safer (in terms of nicotine evaporation through a cut or edge) than others, so I went with the cheapest Habitrol I could find as a first try of patches (Nicotine Transdermal System Patch, Stop Smoking Aid, 21 mg, Step 1, 14 patches) in May 2013. I am curious to what extent nicotine might improve performance over a longer period like several hours or a whole day, compared to the shorter-acting nicotine gum, which feels like it helps for an hour at most and then tapers off (which is very useful in its own right for kicking me into starting something I have been procrastinating on). I have not decided whether to try another self-experiment.
Not that everyone likes to talk about using the drugs. People don't necessarily want to reveal how they get their edge and there is stigma around people trying to become smarter than their biology dictates, says Lawler. Another factor is undoubtedly the risks associated with ingesting substances bought on the internet and the confusing legal statuses of some. Phenylpiracetam, for example, is a prescription drug in Russia. It isn't illegal to buy in the US, but the man-made chemical exists in a no man's land where it is neither approved nor outlawed for human consumption, notes Lawler.
This research is in contrast to the other substances I like, such as piracetam or fish oil. I knew about withdrawal of course, but it was not so bad when I was drinking only tea. And the side effects like jitteriness are worse on caffeine without tea; I chalk this up to the lack of theanine. (My later experiences with theanine seem to confirm this.) These negative effects mean that caffeine doesn't satisfy the strictest definition of nootropic (having no negative effects), but is merely a cognitive enhancer (with both benefits & costs). One might wonder why I use caffeine anyway if I am so concerned with mental ability.
There is no official data on their usage, but nootropics as well as other smart drugs appear popular in Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, an LA-based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says.
If I stop tonight and do nothing Monday (and I sleep the normal eight hours and do not pay any penalty), then that'll be 4 out of 5 days on modafinil, each saving 3 or 4 hours. Each day took one pill which cost me $1.20, but each pill saved let's call it 3.5 hours; if I value my time at the federal minimum wage of $7.25/hr, then that 3.5 hours is worth $25.38, which is much more than $1.20, ~21x more.
In addition, the cognitive enhancing effects of stimulant drugs often depend on baseline performance. So whilst stimulants enhance performance in people with low baseline cognitive abilities, they often impair performance in those who are already at optimum. Indeed, in a study by Randall et al., modafinil only enhanced cognitive performance in subjects with a lower (although still above-average) IQ.
Natural nootropic supplements derive from various nutritional studies. Research shows the health benefits of isolated vitamins, nutrients, and herbs. By increasing your intake of certain herbal substances, you can enhance brain function. Below is a list of the top categories of natural and herbal nootropics. These supplements are mainstays in many of today's best smart pills.
A key ingredient of Noehr's chemical "stack" is a stronger racetam called Phenylpiracetam. He adds a handful of other compounds considered to be mild cognitive enhancers. One supplement, L-theanine, a natural constituent in green tea, is claimed to neutralise the jittery side-effects of caffeine. Another supplement, choline, is said to be important for experiencing the full effects of racetams. Each nootropic is distinct and there can be a lot of variation in effect from person to person, says Lawler. Users semi-anonymously compare stacks and get advice from forums on sites such as Reddit. Noehr, who buys his powder in bulk and makes his own capsules, has been tweaking chemicals and quantities for about five years, accumulating more than two dozen jars of substances along the way. He says he meticulously researches anything he tries, buys only from trusted suppliers and even blind-tests the effects (he gets his fiancée to hand him either a real or inactive capsule).
Metabolic function smart drugs provide mental benefits by generally facilitating the body's metabolic processes related to the production of new tissues and the release of energy from food and fat stores. Creatine, a long-time favorite performance-enhancement drug for competitive athletes, was in the news recently when it was found in a double-blind, placebo-controlled crossover trial to have significant cognitive benefits – including both general speed of cognition and improvements in working memory. Ginkgo biloba is another metabolic function smart drug used to increase memory and improve circulation – however, news from recent studies raises questions about these purported effects.
Similar to the way in which some athletes used anabolic steroids (muscle-building hormones) to artificially enhance their physique, some students turned to smart drugs, particularly Ritalin and Adderall, to heighten their intellectual abilities. A 2005 study reported that, at some universities in the United States, as many as 7 percent of respondents had used smart drugs at least once in their lifetime and 2.1 percent had used smart drugs in the past month. Modafinil was used increasingly by persons who sought to recover quickly from jet lag and who were under heavy work demands. Military personnel were given the same drug when sent on missions with extended flight times.
Schroeder, Mann-Koepke, Gualtieri, Eckerman, and Breese (1987) assessed the performance of subjects on placebo and MPH in a game that allowed subjects to switch between two different sectors seeking targets to shoot. They did not observe an effect of the drug on overall level of performance, but they did find fewer switches between sectors among subjects who took MPH, and perhaps because of this, these subjects did not develop a preference for the more fruitful sector.
Vinpocetine walks a line between herbal and pharmaceutical product. It's a synthetic derivative of a chemical from the periwinkle plant, and due to its synthetic nature we feel it's more appropriate as a 'smart drug'. Plus, it's illegal in the UK. Vinpocetine is purported to improve cognitive function by improving blood flow to the brain, which is why it's used in some 'study drugs' or 'smart pills'.
COGNITUNE is for informational purposes only, and should not be considered medical advice, diagnosis or treatment recommendations. Always consult with your doctor or primary care physician before using any nutraceuticals, dietary supplements, or prescription medications. Seeking a proper diagnosis from a certified medical professional is vital for your health.
One item always of interest to me is sleep; a stimulant is no good if it damages my sleep (unless that's what it is supposed to do, like modafinil) - anecdotes and research suggest that it does. Over the past few days, my Zeo sleep scores continued to look normal. But that was while not taking nicotine much later than 5 PM. In lieu of a different ml measurer to test my theory that my syringe is misleading me, I decide to more directly test nicotine's effect on sleep by taking 2ml at 10:30 PM, and go to bed at 12:20; I get a decent ZQ of 94 and I fall asleep in 16 minutes, a bit below my weekly average of 19 minutes. The next day, I take 1ml directly before going to sleep at 12:20; the ZQ is 95 and time to sleep is 14 minutes.
And yet aside from anecdotal evidence, we know very little about the use of these drugs in professional settings. The Financial Times has claimed that they are "becoming popular among city lawyers, bankers, and other professionals keen to gain a competitive advantage over colleagues." Back in 2008 the narcolepsy medication Modafinil was labeled the "entrepreneur's drug of choice" by TechCrunch. That same year, the magazine Nature asked its readers whether they use cognitive-enhancing drugs; of the 1,400 respondents, one in five responded in the affirmative.
Power times prior times benefit minus cost of experimentation: (0.20 × 0.30 × 540) − 41 = −9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from −$9 to +$23.8).
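For concreteness, here is a minimal Python sketch of that value-of-information arithmetic, using the numbers above (a 30% prior that fish oil doesn't work, a $540 benefit from stopping it, a $41 experiment cost); the function name is mine, not from the original analysis.

```python
# Minimal sketch of the value-of-information arithmetic above; the
# function name is illustrative. Inputs: the probability the experiment
# detects a true negative result (power x prior), the $540 benefit of
# stopping fish oil if it doesn't work, and the $41 experiment cost.
def voi(power: float, prior: float = 0.30, benefit: float = 540.0, cost: float = 41.0) -> float:
    """Expected dollar value of running the experiment."""
    return power * prior * benefit - cost

print(voi(0.20))  # -8.6: negative, so not worth running at 20% power
print(voi(0.40))  # 23.8: positive, so worth running at 40% power
```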
As Sulbutiamine crosses the blood-brain barrier very easily, it has a positive effect on the cholinergic and the glutamatergic receptors that are responsible for essential activities impacting memory, concentration, and mood. The compound is also fat-soluble, which means it circulates rapidly and widely throughout the body and the brain, ensuring positive results. Thus, patients with schizophrenia and Parkinson's disease will find the drug to be very effective.
A number of different laboratory studies have assessed the acute effect of prescription stimulants on the cognition of normal adults. In the next four sections, we review this literature, with the goal of answering the following questions: First, do MPH (e.g., Ritalin) and d-AMP (by itself or as the main ingredient in Adderall) improve cognitive performance relative to placebo in normal healthy adults? Second, which cognitive systems are affected by these drugs? Third, how do the effects of the drugs depend on the individual using them?
To make things more interesting, I think I would like to try randomizing different dosages as well: 12mg, 24mg, and 36mg (1-3 pills); on 5 May 2014, because I wanted to finish up the experiment earlier, I decided to add 2 larger doses of 48 & 60mg (4-5 pills) as options. Then I can include the previous pilot study as 10mg doses, and regress over dose amount.
Nevertheless, a drug that improved your memory could be said to have made you smarter. We tend to view rote memory, the ability to memorize facts and repeat them, as a dumber kind of intelligence than creativity, strategy, or interpersonal skills. "But it is also true that certain abilities that we view as intelligence turn out to be in fact a very good memory being put to work," Farah says.
Another well-known smart drug classed as a cholinergic is Sulbutiamine, a synthetic derivative of thiamine which crosses the blood-brain barrier and has been shown to improve memory while reducing psycho-behavioral inhibition. While Sulbutiamine has been shown to exhibit cholinergic regulation within the hippocampus, the reasons for the drug's discernable effects on the brain remain unclear. This smart drug, available over the counter as a nutritional supplement, has a long history of use, and appears to have no serious side effects at therapeutic levels.
Specifically, the film is completely unintelligible if you had not read the book. The best I can say for it is that it delivers the action and events one expects in the right order and with basic competence, but its artistic merits are few. It seems generally devoid of the imagination and visual flights of fancy that animated movies 1 and 3 especially (although Mike Darwin disagrees), copping out on standard imagery like a Star Wars-style force field over Hogwarts Castle, or luminescent white fog when Harry was dead and in his head; I was deeply disappointed to not see any sights that struck me as novel and new. (For example, the aforementioned dead scene could have been done in so many interesting ways, like why not show Harry & Dumbledore in a bustling King's Cross shot in bright sharp detail, but with not a single person in sight and all the luggage and equipment animatedly moving purposefully on their own?) The ending in particular boggles me. I actually turned to the person next to me and asked them whether that really was the climax and Voldemort was dead, his death was so little dwelt upon or laden with significance (despite a musical score that beat you over the head about everything else). In the book, I remember it feeling like a climactic scene, with everyone watching and little speeches explaining why Voldemort was about to be defeated, and a suitable victory celebration; I read in the paper the next day a quote from the director or screenwriter who said one scene was cut because Voldemort would not talk but simply try to efficiently kill Harry. (This is presumably the explanation for the incredible anti-climax. Hopefully.) I was dumbfounded by the depths of dishonesty or delusion or disregard: Voldemort not only does that in Deathly Hallows multiple times, he does it every time he deals with Harry, exactly as the classic villains (he is numbered among) always do! How was it possible for this man to read the books many times, as he must have, and still say such a thing?↩
As scientific papers become much more accessible online due to Open Access, digitization by publishers, and cheap hosting for pirates, the available knowledge about nootropics increases drastically. This reduces the perceived risk by users, and enables them to educate themselves and make much more sophisticated estimates of risk and side-effects and benefits. (Take my modafinil page: in 1997, how could an average person get their hands on any of the papers available up to that point? Or get detailed info like the FDA's prescribing guide? Even assuming they had a computer & Internet?)
It can easily pass through the blood-brain barrier and is known to protect the nerve tissues present in the brain. There is evidence that the acid plays an instrumental role in preventing strokes in adults by decreasing the number of free radicals in the body. It increases the production of acetylcholine, a neurotransmitter in which most Alzheimer's patients are deficient.
Both nootropics startups provide me with samples to try. In the case of Nootrobox, it is capsules called Sprint designed for a short boost of cognitive enhancement. They contain caffeine – the equivalent of about a cup of coffee – and L-theanine – about 10 times what is in a cup of green tea – in a ratio that is supposed to have a synergistic effect (all the ingredients Nootrobox uses are either regulated as supplements or have a "generally regarded as safe" designation by US authorities).
Some people aren't satisfied with a single supplement—the most devoted self-improvers buy a variety of different compounds online and create their own custom regimens, which they call "stacks." According to Kaleigh Rogers, writing in Vice last year, companies will now take their customers' genetic data from 23andMe or another source and use it to recommend the right combinations of smart drugs to optimize each individual's abilities. The problem is that there's no evidence the practice works. (And remember, the FDA doesn't regulate supplements.)
First was a combination of L-theanine and aniracetam, a synthetic compound prescribed in Europe to treat degenerative neurological diseases. I tested it by downing the recommended dosages and then tinkering with a story I had finished a few days earlier, back when caffeine was my only performance-enhancing drug. I zoomed through the document with renewed vigor, striking some sentences wholesale and rearranging others to make them tighter and punchier.
Not all drug users are searching for a chemical escape hatch. A newer and increasingly normalized drug culture is all about heightening one's current relationship to reality—whether at work or school—by boosting the brain's ability to think under stress, stay alert and productive for long hours, and keep track of large amounts of information. In the name of becoming sharper traders, medical interns, or coders, people are taking pills typically prescribed for conditions including ADHD, narcolepsy, and Alzheimer's. Others down "stacks" of special "nootropic" supplements.
Some of the newest substances being used as 'smart drugs' are medically prescribed for other conditions. For example, methylphenidate, commonly known as Ritalin, is used to treat attention deficit hyperactivity disorder (ADHD). So is Adderall, a combination drug containing two forms of amphetamine. These are among a suite of pharmaceuticals now being used by healthy people, particularly university students, to enhance their capabilities for learning or working.
The experiment then is straightforward: cut up a fresh piece of gum, randomly select from it and an equivalent dry piece of gum, and do 5 rounds of dual n-back to test attention/energy & WM. (If it turns out to be placebo, I'll immediately use the remaining active dose: no sense in wasting gum, and this will test whether nigh-daily use renders nicotine gum useless, similar to how caffeine may be useless if taken daily. If there's 3 pieces of active gum left, then I wrap it very tightly in Saran wrap which is sticky and air-tight.) The dose will be 1mg or 1/4 a gum. I cut up a dozen pieces into 4 pieces for 48 doses and set them out to dry. Per the previous power analyses, 48 groups of DNB rounds likely will be enough for detecting small-medium effects (partly since we will be only looking at one metric - average % right per 5 rounds - with no need for multiple correction). Analysis will be one-tailed, since we're looking for whether there is a clear performance improvement and hence a reason to keep using nicotine gum (rather than whether nicotine gum might be harmful).
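As a sketch of what that planned one-tailed analysis might look like, here is a hypothetical Python version comparing average % correct per 5-round block between nicotine and placebo gum; the data are simulated stand-ins, not real results.

```python
# Hypothetical sketch of the planned one-tailed analysis: compare average
# % correct per 5-round DNB block between nicotine and placebo gum.
# The data below are simulated placeholders, not real results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nicotine = rng.normal(72, 8, size=24)  # 24 blocks on active gum
placebo = rng.normal(70, 8, size=24)   # 24 blocks on dried placebo gum

t, p_two_sided = stats.ttest_ind(nicotine, placebo)
# One-tailed: only an improvement on nicotine counts as evidence.
p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
print(f"t = {t:.2f}, one-tailed p = {p_one_sided:.3f}")
```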
Piracetam boosts the function of acetylcholine, a neurotransmitter involved in memory consolidation. Consequently, it improves memory in people who suffer from age-related dementia, which is why it is commonly prescribed to Alzheimer's patients and people struggling with pre-dementia symptoms. When it comes to healthy adults, it is believed to improve focus and memory, enhancing the learning process altogether.
Eugeroics (armodafinil and modafinil) are classified as "wakefulness promoting" agents; modafinil increased alertness, particularly in sleep deprived individuals, and was noted to facilitate reasoning and problem solving in non-ADHD youth.[23] In a systematic review of small, preliminary studies where the effects of modafinil were examined, when simple psychometric assessments were considered, modafinil intake appeared to enhance executive function.[27] Modafinil does not produce improvements in mood or motivation in sleep deprived or non-sleep deprived individuals.[28]
A personalized intervention to prevent depression in primary care: cost-effectiveness study nested into a clustered randomized trial
Anna Fernández1,2,26,
Juan M. Mendive3,
Sonia Conejo-Cerón4,
Patricia Moreno-Peral4,
Michael King5,
Irwin Nazareth6,
Carlos Martín-Pérez7,
Carmen Fernández-Alonso8,
Antonina Rodríguez-Bayón9,
Jose Maria Aiarzaguena10,
Carmen Montón-Franco11,
Antoni Serrano-Blanco12,26,
Inmaculada Ibañez-Casas13,
Emiliano Rodríguez-Sánchez14,
Luis Salvador-Carulla15,26,
Paola Bully Garay16,
María Isabel Ballesta-Rodríguez17,
Pilar LaFuente18,
María del Mar Muñoz-García4,
Pilar Mínguez-Gonzalo19,
Luz Araujo4,
Diego Palao20,
María Cruz Gómez16,
Fernando Zubiaga21,
Desirée Navas-Campaña4,
Jose Manuel Aranda-Regules22,
Alberto Rodriguez-Morejón23,
Juan de Dios Luna24 &
Juan Ángel Bellón4,25
Depression is viewed as a major and increasing public health issue, as it causes high distress in the people experiencing it and considerable financial costs to society. Efforts are being made to reduce this burden by preventing depression. A critical component of this strategy is the ability to assess the individual level and profile of risk for the development of major depression. This paper presents the cost-effectiveness of a personalized intervention, based on individual risk of developing depression and carried out in primary care, compared with usual care.
Cost-effectiveness analyses are nested within a multicentre, clustered, randomized controlled trial of a personalized intervention to prevent depression. The study was carried out in 70 primary care centres from seven cities in Spain. Two general practitioners (GPs) were randomly sampled from those prepared to participate in each centre (i.e. 140 GPs), and 3326 participants consented and were eligible to participate. The intervention included the GP communicating to the patient his/her individual risk for depression and personal risk factors, and the construction by both GP and patient of a psychosocial programme tailored to prevent depression. In addition, GPs carried out measures to activate and empower the patients, who also received a leaflet about preventing depression. GPs were trained in a 10- to 15-h workshop. Costs were measured from societal and National Health System perspectives. Quality-adjusted life years were assessed using the EuroQol five dimensions questionnaire. The time horizon was 18 months.
With a willingness-to-pay threshold of €10,000 (£8568), the probability of cost-effectiveness ranged from 83% (societal perspective) to 89% (health system perspective). If the threshold was increased to €30,000 (£25,704), the probability of being considered cost-effective was 94% (societal perspective) and 96% (health system perspective). The sensitivity analysis confirmed these results.
Compared with usual care, an intervention based on personal predictors of risk of depression implemented by GPs is a cost-effective strategy to prevent depression. This type of personalized intervention in primary care should be further developed and evaluated.
ClinicalTrials.gov, NCT01151982. Registered on June 29, 2010.
In Western societies depression is viewed as a major and increasing public health issue, as it causes high levels of distress for those who experience it and their relatives, as well as considerable financial costs to society. Indeed, in 2013 major depression ranked fourth in the top ten causes of years lived with disability in Europe [1], with an estimated economic burden of €113.4 billion, explained by the use of services, losses in productivity and premature death due to suicide [2].
In an effort to reduce this burden, governments have supported increases in clinical services for mental disorders. However, despite this investment, the prevalence of depression has not changed. In community-representative studies major depression reached an incidence rate that is high (3.0%) relative to the number of prevalent cases (4.7%) [3]; therefore, it will be very difficult to reduce the prevalence unless the incidence is also reduced, and this is only possible by primary prevention. In addition, evidence suggests that although effective treatments for depression are available, they can reduce the burden by only 20% [4] because not all cases are recognized and not all people with recognized depression are treated and adhere to treatment.
Different studies have shown that depression is preventable [5,6,7]. In addition, prevention of depression is relatively good value for money [8]. However, the effect sizes of prevention are small, and most of the interventions are implemented by mental health specialists [5,6,7]. This jeopardizes its translation to primary health care centres, which may be a good setting for implementing preventive interventions [9]. Primary prevention aims to avoid the occurrence of disease by either eliminating the risk factors or increasing resistance to disease, so its application requires that its target population does not have the disease (depression in our case). Classically, primary prevention of depression is classified as 'universal' when it is applied to the general population, 'selective' when applied to participants with some risk factor(s) for depression and 'indicated' when applied to patients with subthreshold depression (they have some symptoms of depression but do not meet the criteria for diagnosis). The best primary prevention programme is likely to be one which targets modifiable risk factors, empowers individuals to address their risks and is inexpensive and capable of large-scale dissemination [10]. The PredictD algorithm [11, 12] provides a quantification of major depression risk as well as information on risk factors for each individual that could guide prevention. We have recently evaluated the effectiveness of this strategy: compared with usual care, this new preventive intervention reduced the incidence of depression by more than 20% at 18 months [13]. This has been the first trial evaluating the effectiveness of a preventive intervention for depression based on the level and profile of risk and conducted by general practitioners (GPs). In this paper we present the results of the cost-effectiveness analysis of this intervention.
The PredictD trial
Full details of the PredictD protocol and effectiveness analysis are available elsewhere [13, 14]. Briefly, the PredictD-Cluster, Controlled, Randomized Trial (CCRT) was a national, multicentre, randomized controlled trial with two parallel arms, cluster assignment by primary care centre and 18 months of follow-up (from October 2010 to July 2012).
A total of 220 primary care centres from seven Spanish cities (Barcelona, Bilbao, Granada, Jaen, Málaga, Valladolid and Zaragoza) were approached. We conducted meetings in each centre to explain the project and invite physicians to participate; 118 (53.64%) out of the 220 centres were interested in participating. Seventy centres (10 per city) out of the 118 were randomly selected. A total of 193 physicians from the 70 centres consented to participate. Of those who accepted, we randomly selected two physicians per centre (i.e. 140 physicians). Random selection was conducted using sealed opaque envelopes by an independent researcher who was not part of the research team. Randomization to intervention or control group was conducted at the centre level. In each city five centres were assigned to the control group and five centres to the intervention group.
Research assistants randomly selected four to six patients per day from the patients with an appointment with the GP, using random starting points for each day, generated using a random number generator. GPs reviewed the list each day, excluding those patients who were not eligible for the trial. A total of 8292 participants were selected. Of these, 3056 were excluded in this first stage for the following reasons: 1479 were < 18 or > 75 years old; 1039 attended the surgery on behalf of the person who had the appointment; 153 would be away (> 4 months) during the follow-up; 122 had a documented severe mental disorder; 121 did not speak or understand Spanish; 88 had cognitive impairment; 54 had terminal illnesses. The process ended when there were 26–27 eligible patients for each GP.
A total of 5236 persons were invited to participate in the study by the research assistants. Of these, 1453 patients (27.75%) declined to participate. When compared with participants, these non-participants were slightly more likely to be male (38.4% versus 36.5%) but were of similar age (50.5 versus 50.7 years).
The 3783 patients who agreed to participate were then interviewed to detect major depression using the Composite International Diagnostic Interview (CIDI). Of the 3783 patients, 457 (12.08%) met criteria for major depression in the last 6 months and were consequently excluded.
A total of 3326 patients (1663 in each arm), nested in 140 GPs from 70 primary care centres, composed the sample of the trial. The total number of patients with missing data in any of the outcome variables at any point was 577 (17.35%).
Although patients did not consent to randomization, patients at the intervention centres consented to receive the intervention, and all patients agreed to data collection. Neither the patients nor the GPs were blind to the intervention, which is common for trials that evaluate psychosocial interventions [15]. The interviewers who assessed outcomes, however, were masked regarding allocation to study group. Local Ethics and Human Research Committees at each city approved the protocol.
The Spanish primary care context
The National Health System of Spain provides universal coverage for citizens and foreign nationals (including undocumented immigrants). It is funded through taxes and free at the point of contact. Health care services are distributed into Health Areas and Basic Health Zones according to geographical, epidemiological and socio-economic criteria. Each Health Area covers a population of 200,000–400,000 inhabitants and is composed of several Basic Health Zones, which are the minimum units of health care organization. Basic Health Zones are organized around a primary care centre covering 5000–35,000 inhabitants. The primary care teams are composed of GPs, paediatricians, nurses and, in some cases, social workers. They provide a broad range of services, including the treatment of common mental disorders (shared with mental health specialists in severe cases) such as anxiety or depression [16], health promotion and preventive services. All the primary care centre staff members, including the GPs, are salaried. GP salaries contain two elements: a larger fixed payment and a smaller incentive, based on elements such as numbers of patients assigned, fulfilment of objectives, patterns of prescription and pay-for-performance incentives [17].
The PredictD intervention has been described in detail elsewhere [14]. Briefly, the intervention started with the physician receiving the patient's risk factors for depression and overall probability of developing depression in the next 12 months, using the Spanish version of the PredictD algorithm [11]. The PredictD algorithm is composed of 12 risk factors: six are patient characteristics or past events (sex, age, sex*age interaction, education, childhood physical abuse, probable lifetime depression), and six refer to current status (Short Form Health Survey (SF-12) physical score, SF-12 mental score, dissatisfaction with unpaid work, number of serious problems in very close persons, dissatisfaction with living together at home, taking medication for stress, anxiety or depression). The PredictD algorithm provides, in addition to the quantification of the overall risk of depression, knowledge of the risk factors influencing a given patient, which could guide a possible preventive intervention. Once the risk was calculated, the GP communicated this risk to the patient, and they worked together on a plan to manage the patient's individual risk factors. This plan was tailored to the patient following a bio-psycho-family-social framework, emphasizing measures to empower and activate the patient. In addition, all patients received a patient-oriented booklet on preventing depression, based on basic recommendations for self-care, including advice on exercise and sleep. All the GPs in the intervention arm were trained in a 10- to 15-h workshop on the prevention of depression using the PredictD risk algorithm.
GPs in the control arm did not receive the training or any information on their patients' risk factors for depression or their probability of developing depression. They were simply asked to treat their patients as usual.
The economic evaluation was conducted from two perspectives: (1) societal perspective, including the costs of all types of health services (direct costs) and the costs that stem from production losses (indirect costs), and (2) a National Health System perspective (including only direct costs from the Spanish public health services). The time frame of this study was 18 months. Therefore, we discounted both costs and effects at 3.5% following National Institute for Health and Care Excellence (NICE) recommendations [18]. All costs were expressed in euros (€) for the reference year 2012.
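As a worked illustration of that discounting step, a minimal Python sketch of the standard present-value formula follows; the euro amounts are made up for the example.

```python
# Sketch of 3.5% annual discounting over the 18-month horizon, per the
# NICE convention cited above. The euro amounts are illustrative.
def discount(value: float, years: float, rate: float = 0.035) -> float:
    """Present value of a cost or effect accruing `years` in the future."""
    return value / (1 + rate) ** years

print(discount(100.0, 1.0))  # ~96.62: a cost of 100 euros at month 12
print(discount(100.0, 1.5))  # ~94.97: a cost of 100 euros at month 18
```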
We used a modified version of the Client Service Receipt Inventory (CSRI) [19] to collect information about use of health care resources, use of psychotropic drugs (antidepressants, anxiolytics and sedative-hypnotics) and lost productivity.
Direct health costs were calculated by multiplying the number of health service contacts/units (consultations, hospital days, etc.) by their standard cost price. This unit cost was retrieved from the 'Oblikue dataset (esalud)' (http://www.oblikue.com/), which includes the official health services tariffs of the different Spanish autonomous communities. Cost of medication was calculated by multiplying the cost per daily dose by the number of prescription days recommended, as recalled by the patient. Information about medication costs was obtained from the Spanish Pharmaceutical Vademecum (http://www.vademecum.es/). Indirect costs consisted of the costs of being on sick leave from paid work. Costs of work loss were calculated by multiplying the days on sick leave by the minimum daily wage in Spain, according to the human capital approach. In addition, self-reported presenteeism was assessed using some questions from the World Health Organization (WHO) Health and Work Performance Questionnaire (HPQ) [20]. For this assessment, respondents first estimated how many days during the past 4 weeks they had been at work while not being able to perform their job as usual (A), and then rated their overall work performance during these days using a 0–100 scale, where 0 corresponds to doing no work at all (while at work) and 100 signifies top work performance (B). A score was calculated as ((100 − B) × A) × 6, where the factor 6 scales the 4-week recall period to the 6-month follow-up.
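A direct transcription of that presenteeism score in Python; the function name and example values are illustrative, not from the trial.

```python
# Transcription of the HPQ-based presenteeism score described above.
# days_impaired (A): days in the past 4 weeks at work but unable to
# perform as usual; performance (B): self-rated 0-100 performance on
# those days; the factor 6 scales the 4-week recall window to the
# 6-month follow-up period, per the text.
def presenteeism_score(days_impaired: float, performance: float) -> float:
    return (100.0 - performance) * days_impaired * 6.0

print(presenteeism_score(days_impaired=3, performance=60))  # 720.0
```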
Intervention costs included the cost of the booklet (€0.16 per patient) and the cost associated with the training of the physicians. Training was included in the GPs' general training programme during working time, so no extra hours were worked or locums needed. No charges were incurred for training venues, as the training was conducted in the health centres or other health sector free venues. We included the cost of the trainer (€100 per h), estimated at 10 h and 7 groups (€7000), and the cost of the 70 dossiers delivered to the GPs with the basic information (€700). The intervention was embedded in current practice. Participants in the intervention group were required to meet at least three times during the intervention (at baseline and at 6 and 12 months): in each of the three GP-patient interviews the GP communicated to the patient specific and updated information on his/her risk of depression, and then the patient and GP worked on a personalized plan for prevention of depression. These visits lasted approximately 5 to 15 min, and this time was generally proportional to the level of risk. If the GP considered that the complexity of the case required more visits, this was proposed to the patient. The patient could also request new visits to the GP. All visits made during the follow-up, both compulsory and optional, were taken into account for costs.
The unit costs used are given in Table 1.
Table 1 Unadjusted costs and effects, by group
Quality-adjusted life years (QALYs) were measured using the EuroQol five dimensions questionnaire (EQ-5D). The EQ-5D instrument has two parts. Part 1 is a self-reported description of health problems according to a five-dimensional classification (mobility, self-care, usual activities, pain/discomfort and anxiety/depression). Patients mark one of three levels of severity (1 = no problems, 2 = some/moderate problems and 3 = severe/extreme problems) in each dimension. Combinations of these categories define a total of 243 different health states. For instance, perfect health is coded as '11111'. Each one of these health states has a 'weight' or 'utility' based on community preferences (i.e. social tariffs), where 1 represents perfect health, 0 death and negative numbers symbolize health states that are considered worse than death. Spanish social tariffs were used to estimate the utility of health states described by patients [21]. QALYs were calculated by multiplying the utility by the amount of time a patient spent in a particular health state. Linear interpolation was used for transitions between health states at baseline and at 6, 12 and 18 months. Part 2 is a visual analogue scale (VAS), graded from 0 (worst imaginable health status) to 100 (best imaginable health status), which is used by patients to estimate the 'value' of their health status. We transformed the VAS to a scale from 0 to 1 and used it as a 'proxy' for an 'individual tariff' to calculate QALYs (referred to in the results as QALYs-VAS) [22].
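Computed this way, QALYs reduce to the area under the utility curve between assessments; a short Python sketch follows, with illustrative utility values rather than trial data.

```python
# Sketch of the QALY calculation: utilities at baseline and 6, 12 and
# 18 months, linearly interpolated between assessments (equivalent to
# the trapezoidal rule). The utility values are illustrative.
import numpy as np

times_years = np.array([0.0, 0.5, 1.0, 1.5])    # assessment points
utilities = np.array([0.78, 0.82, 0.85, 0.84])  # EQ-5D tariff values

qalys = np.trapz(utilities, times_years)        # area under the curve
print(f"QALYs over 18 months: {qalys:.2f}")     # 1.24
```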
Analyses were performed based on the intention-to-treat (ITT) principle, analysing all participants according to their randomized treatment and using multiple imputation when outcomes were missing. Incremental cost-effectiveness ratios (ICERs) were calculated as the difference in cost between the intervention and the control group, divided by the difference in QALYs. The incremental costs and incremental health effects were modelled by generalized linear models (GLMs). We calculated the intraclass correlation coefficients (ICCs) of the health centre, the GP and both, taking the costs and QALYs as dependent variables. The ICCs for the health centre were significant for the effect, while the ICC for the GP was significant for the costs. Thus, we used multilevel GLMs to account for such clustering effects. GLMs were fitted using different distribution families (Gaussian, inverse Gaussian, Poisson and gamma) and link functions (identity and log). Modified Park tests were used to select the appropriate family. To identify the correct link function, we compared the model performance of all permutations of candidate link and variance functions using different diagnostic tests [23]. The best solution for costs was obtained using a gamma family and a log link. For QALYs, the most adequate family was Gaussian with an identity link.
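For illustration, a minimal Python sketch of a gamma-family, log-link cost model follows. The variable names and simulated data are assumptions made for the sketch, and the trial's actual multilevel structure (clustering by centre and GP) is omitted here.

```python
# Illustrative sketch of a cost GLM with gamma family and log link.
# Variable names and data are simulated assumptions; the trial's real
# models were multilevel (clustered by centre/GP), omitted here.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),            # 0 = control, 1 = intervention
    "baseline_cost": rng.gamma(2.0, 300.0, n),
    "risk_score": rng.uniform(0.0, 0.5, n),
})
mu = np.exp(6.0 - 0.1 * df["group"] + 5e-4 * df["baseline_cost"] + df["risk_score"])
df["total_cost"] = rng.gamma(2.0, mu / 2.0)    # right-skewed cost outcome

model = smf.glm(
    "total_cost ~ group + baseline_cost + risk_score",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
print(model.summary())
```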
All models were adjusted by their respective baseline values (i.e. QALYs or cost), the individual risk of depression (i.e. risk score from the PredictD algorithm) and the following variables, which were unbalanced at baseline (and were not included in the PredictD algorithm): employment status, owner/occupier accommodation, perception of safety inside/outside the home, anxiety disorder, experiences of discrimination, city.
We accounted for missing outcomes by using multiple imputations with chained equations under a missing at random (MAR) framework. We generated 50 imputed samples. Estimates for the descriptive analysis were combined using Rubin's rules [24].
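For reference, a small sketch of Rubin's rules for pooling an estimate across imputed datasets; the numbers are made up for the example.

```python
# Sketch of Rubin's rules: pool per-imputation estimates by averaging,
# and combine within- and between-imputation variance. Values made up.
import numpy as np

estimates = np.array([1310.0, 1350.0, 1290.0, 1370.0, 1330.0])
variances = np.array([900.0, 950.0, 880.0, 940.0, 910.0])  # squared SEs
m = len(estimates)

pooled = estimates.mean()
within = variances.mean()               # W: average within-imputation variance
between = estimates.var(ddof=1)         # B: between-imputation variance
total_variance = within + (1 + 1 / m) * between
print(pooled, np.sqrt(total_variance))  # pooled estimate and its SE
```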
The analytic focus on cost-effectiveness (or cost-utility) analysis emphasizes the estimation of the joint density of cost and effect differences, the quantification of uncertainty surrounding the ICER and the presentation of cost-effectiveness acceptability curves (CEACs). In that sense, in economic evaluations it is considered inappropriate to carry out separate and sequential hypothesis tests on differences in effects and costs to determine whether incremental cost-effectiveness should be estimated (i.e. hypothesis testing is not conducted, so P values are not taken into consideration) [25].
To deal with uncertainty, non-parametric bootstraps were used to simulate 1000 ICERs per imputed database (i.e. 50,000 ICERs in total), which were plotted on the cost-effectiveness plane. CEACs were then constructed. Each CEAC was derived from the net benefit approach:
$$ \text{Net monetary benefit} = \lambda \times \Delta \text{Effect} - \Delta \text{Cost}, $$
where λ represents the amount of money society is willing to pay to gain one extra unit of effect. All bootstrapped pairs of Δ Effect and Δ Cost (i.e. 50,000) were used to calculate the CEACs. Willingness-to-pay values ranged from €0 to €100,000 [26]. We selected an optimal threshold of €30,000 per QALY ($32,058, £25,704), following Spanish recommendations [27]. This threshold fits within the cost-effectiveness range of £20,000 to £30,000 used by NICE [18]. However, it is lower than the $50,000 suggested in the USA [28].
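A compact Python sketch of how such a CEAC is traced out from bootstrapped pairs; the incremental-effect and incremental-cost draws below are simulated stand-ins, not the trial's results.

```python
# Sketch of CEAC construction: for each willingness-to-pay value lambda,
# compute the share of bootstrapped (delta-effect, delta-cost) pairs
# with positive net monetary benefit. Draws are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(42)
d_effect = rng.normal(0.012, 0.006, 50_000)  # incremental QALYs
d_cost = rng.normal(-20.0, 150.0, 50_000)    # incremental cost (euros)

for wtp in (0, 10_000, 30_000, 100_000):
    nmb = wtp * d_effect - d_cost            # lambda * dE - dC
    print(f"lambda = {wtp:>7}: P(cost-effective) = {(nmb > 0).mean():.2f}")
```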
A number of sensitivity analyses were conducted in order to assess the robustness of the results:
Modifying the perspective of analysis, i.e. including only costs related to the outcome (primary health services, mental health services and psychotropic drugs) and, alternatively, all possible costs, i.e. private costs, absenteeism and presenteeism, and potential intervention-related costs, including the cost of hiring the venue (€100 per day, for 2 days in 7 cities) plus the cost associated with the time the physicians spent attending the course (€10.5 per visit for 60 visits over two days for 70 physicians)
Modifying the discount rate, both in costs and effects, from 0 to 6%, following NICE recommendations [18]
Modifying the unit costs by doubling and halving them
Modifying the statistical analyses: using seemingly unrelated regressions (SURs), a method that consists of a system of regression equations that recognizes the correlation between individual costs and outcomes [29]; a completers approach (applying inverse probability weighting to address attrition bias); and models adjusted only by cost or QALYs at baseline
The participants in the two groups were similar with regard to gender (63.6% female in the control group and 63.5% in the intervention group), age (51.5 and 50 years in the control and intervention groups, respectively), marital status (68.4% and 69.9% were married in the control and intervention groups, respectively) and educational level (42.2% and 44.3% had primary level education, respectively). However, they differed in key aspects related to the trial. Participants in the intervention group had a higher risk of depression, a slightly worse mental health-related quality of life, more anxiety-related symptoms and a greater proportion of people who answered affirmatively to the two questions we used as a lifetime screen for depression [30]. In addition, there were differences in employment status, owner-occupier of an accommodation, perception of safety inside/outside the home and experiences of discrimination. Further details of trial participants are given by Bellon et al. [13].
Table 1 presents the unadjusted mean costs and effects for the intervention and control groups.
Cost-effectiveness analyses
Table 2 shows the adjusted means and the ICERs.
Table 2 Incremental cost-effectiveness ratio (ICER)-adjusted analysis: main scenarios
From a societal perspective the new intervention is dominant, as the increment in cost is negative and the effects are positive. However, although most (97.4%) of the incremental effects were plotted in the Eastern quadrants (new intervention more effective), the level of uncertainty related to the costs was quite high, with half of them in the Northern quadrants (new intervention more expensive) and the other half in the Southern quadrants (new intervention less expensive). These results are depicted in Fig. 1, left column.
Cost-effectiveness planes
The acceptability curve is shown in Fig. 2. At the €30,000 per QALY ($32,058, £25,704) threshold the probability that the PredictD intervention would be seen as cost-effective was 94%. This probability increased to 98% when considering the effect in terms of the QALYs-VAS. However, these values decreased to 83% and 89%, respectively, when a threshold of €10,000 ($10,686, £8568) was used.
Cost-effectiveness acceptability curves (CEACs): societal perspective
From a National Health System perspective, the incremental cost per QALY gained was €1326. The cost per QALY-VAS was €1085.45. As with the societal perspective, although most of the incremental effects were plotted in the Eastern quadrants, the level of uncertainty related to the costs was quite high (Fig. 1, right column). Figure 3 depicts the acceptability curve from a National Health System perspective. At the €30,000 per QALY threshold ($32,058, £25,704), the probability that the PredictD intervention is cost-effective was 96%, increasing to almost 100% when the effect was measured using QALYs-VAS. These values decreased to 89% and 96% when a threshold of €10,000 ($10,686, £8568) was used.
Cost-effectiveness acceptability curves (CEACs): National Health System perspective
Table 3 summarizes the sensitivity analysis. The scenario that considered only the costs directly related to primary and mental health care was the most favourable; the least favourable was the one in which the costs were doubled. However, the values were quite similar across scenarios.
Table 3 Sensitivity analysis
Over the 18-month evaluation period, the PredictD intervention was found to be efficient. The cost-effectiveness advantage arises from the finding that the PredictD intervention increases quality of life while not significantly increasing overall costs. The sensitivity analyses confirm the robustness of the results.
Strengths and limitations
This is the first economic evaluation nested in a randomized trial to evaluate the effectiveness of universal prevention of depression in adults implemented by GPs. Major strengths include the large sample (more than 3000 primary care attendees) and a follow-up time of 18 months, which is longer than that for most depression prevention trials.
Nonetheless, the results of this study should be considered with the following limitations in mind. First, due to the recruitment procedure, our study may have under-represented patients who attend infrequently [31]; however, frequent attenders are more at risk of major depression [32] and are most in need of prevention. Second, intervention and control groups were unbalanced on some individual variables, such that participants in the intervention group had a higher risk of depression. This is not unusual in cluster randomized controlled trials, where an imbalance in the characteristics of participants can creep in because randomization occurs at the level of the centre [33]. To address this, we adjusted the results for the unbalanced variables. Third, the patients were not blind to the intervention. They may have modified their responses to satisfy the researchers/GPs. Fourth, the information on use of services was collected by means of self-report. Some recall bias may be expected, although it is quite likely that this bias was equally distributed between the intervention and control groups. In addition, we did not take into account informal care-related costs or costs from general medication. As depression has an impact on physical health, it is possible that this was affected, making the costs associated with depression possibly higher than we have calculated in our study. On the other hand, the cost associated with the training of the GPs was translated to the patient level by dividing by the number of participants in the trial and not by the total number of patients, which would be more appropriate in real practice. Consequently, the costs associated with the intervention in real practice would be even lower. Fifth, in our study, only 32.4% and 36.6% of patients in the control and the intervention group, respectively, answered yes to the two questions of the lifetime screen for depression (except in the 6 months prior to the baseline interview, in which no patient suffered major depression according to the CIDI) [13]. The positive predictive value of responding yes to these two questions is 18% [30], and in our study the proportion of patients who truly suffered a first episode of depression before recruitment was 5.8% and 6.6% in the control and the intervention group, respectively. Therefore, from this point of view, our study is largely based (approximately 94% of participants) on primary prevention of the onset of depression (first episode). Lastly, the generalizability of our findings may be limited because primary care processes in Spain are less costly than in other Western countries, due to the fact that GPs are salaried [17]. However, in these other countries, such as the USA, the cost-effectiveness threshold is also higher.
To the best of our knowledge, there are only two economic evaluations focused on the prevention of depression that can be compared with ours. Hunter et al. [34] carried out an economic modelling study based on the PredictD risk algorithm, concluding that identifying non-depressed general practice attendees at high risk of depression using the PredictD algorithm and providing them with a psychosocial preventive programme was potentially more cost-effective than current practice. At a threshold of £25,000 (€30,000, $31,200) per QALY the probability of being cost-effective was around 70%. Our analysis showed that the probability of being cost-effective at this threshold is even higher.
Similarly, Van den Berg et al. [35] built a model to estimate the cost-effectiveness of preventing depression in people with subthreshold symptoms of depression opportunistically approached in primary care practices. The intervention consisted of a self-help manual with instructions on cognitive-behavioural self-help in mood management, plus up to six short telephone calls to support the participants while working through the manual. Given a willingness to pay of €30,000 ($32,058, £25,704) per disability-adjusted life year (DALY), the probability that the intervention was cost-effective was around 80%. Again, our intervention had a higher probability (96%) of being cost-effective at the same threshold. It may be hypothesized that participants in our trial benefited from a tailored face-to-face intervention as well as from the opportunistic reviews when they consulted their GPs about other issues.
However, in both economic studies [34, 35] strategies for preventing depression in high-risk patients were evaluated (selective or indicated prevention), whereas in our study we followed a universal and personalized prevention implemented by GPs.
Implications for practice
This intervention differs from other interventions to prevent depression because it is tailored to each patient's individual level and profile of risk. This approach parallels the primary prevention of cardiovascular diseases in primary care, although the risk factors involved and their management are different. Indeed, our intervention did not increase costs because it was embedded into day-to-day practice. Our study showed that universal prevention of depression in adults, using the PredictD intervention and implemented by GPs, has a high probability of being cost-effective compared to usual care. However, with our study we cannot know whether this type of universal prevention would be more cost-effective than other types of primary prevention (selective or indicated). Further trials comparing these types of prevention and different frequencies of risk evaluation are needed. GPs generally perceive that they do not have enough time to perform preventive activities, and they may agree to do so only for high-risk patients. Another possibility would be to provide universal prevention to patients by means of a scalable and cheap strategy (e.g. through apps and smartphones), reserving GP intervention for cases at high risk of depression. Our research team is currently conducting a new trial with the latter strategy (the e-PredictD study).
The PredictD intervention is likely to be perceived as cost-effective, from both a societal and a National Health System perspective, compared to usual care. There is, therefore, both a clinical and an economic case for supporting the implementation of this intervention, which is based on the level and profile of risk for depression, in primary care practices. However, this intervention should be further developed and evaluated in other countries.
CCRT:
Cluster controlled randomized trial
CEAC:
Cost-effectiveness acceptability curve
CSRI:
Client Service Receipt Inventory
DALY:
Disability-adjusted life year
GLM:
Generalized linear model
GP:
General practitioner
ICC:
Intraclass correlation coefficient
ICER:
Incremental cost-effectiveness ratio
NICE:
National Institute for Health and Care Excellence
QALY:
Quality-adjusted life year
SUR:
Seemingly unrelated regression
VAS:
Visual analogue scale
WTP:
Willingness to pay
Global Burden of Disease Study 2013 Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 301 acute and chronic diseases and injuries in 188 countries, 1990-2013: a systematic analysis for the Global Burden of Disease Study 2013. Lancet. 2015;386(9995):743–800.
Gustavsson A, Svensson M, Jacobi F, Allgulander C, Alonso J, Beghi E, Dodel R, Ekman M, Faravelli C, Fratiglioni L, et al. Cost of disorders of the brain in Europe 2010. Eur Neuropsychopharmacol. 2011;21(10):718–79.
Ferrari AJ, Somerville AJ, Baxter AJ, Norman R, Patten SB, Vos T, Whiteford HA. Global variation in the prevalence and incidence of major depressive disorder: a systematic review of the epidemiological literature. Psychol Med. 2013;43(3):471–81.
Andrews G, Issakidis C, Sanderson K, Corry J, Lapsley H. Utilising survey data to inform public policy: comparison of the cost-effectiveness of treatment of ten mental disorders. Br J Psychiatry. 2004;184:526–33.
Bellon JA, Moreno-Peral P, Motrico E, Rodriguez-Morejon A, Fernandez A, Serrano-Blanco A, Zabaleta-del-Olmo E, Conejo-Ceron S. Effectiveness of psychological and/or educational interventions to prevent the onset of episodes of depression: a systematic review of systematic reviews and meta-analyses. Prev Med. 2015;76(Suppl):S22–32.
van Zoonen K, Buntrock C, Ebert DD, Smit F, Reynolds 3rd CF, Beekman AT, Cuijpers P. Preventing the onset of major depressive disorder: a meta-analytic review of psychological interventions. Int J Epidemiol. 2014;43(2):318–29.
Conejo-Cerón S, Moreno-Peral P, Rodríguez-Morejón A, Motrico E, Navas-Campaña D, Rigabert A, Bellon JA. Effectiveness of psychological and/or educational interventions to prevent depression in primary care: a systematic review and meta-analysis. Ann Fam Med. 2017;15:262–71.
Mihalopoulos C, Chatterton ML. Economic evaluations of interventions designed to prevent mental disorders: a systematic review. Early Interv Psychiatry. 2015;9(2):85–92.
Marshall M. A precious jewel—the role of general practice in the English NHS. N Engl J Med. 2015;372(10):893–7.
Jacka FN, Reavley NJ, Jorm AF, Toumbourou JW, Lewis AJ, Berk M. Prevention of common mental disorders: what can we learn from those who have gone before and where do we go next? Aust N Z J Psychiatry. 2013;47(10):920–9.
Bellon JA, de Dios LJ, King M, Moreno-Kustner B, Nazareth I, Monton-Franco C, Gil de Gomez-Barragan MJ, Sanchez-Celaya M, Diaz-Barreiros MA, Vicens C, et al. Predicting the onset of major depression in primary care: international validation of a risk prediction algorithm from Spain. Psychol Med. 2011;41(10):2075–88.
King M, Walker C, Levy G, Bottomley C, Royston P, Weich S, Bellon-Saameno JA, Moreno B, Svab I, Rotar D, et al. Development and validation of an international risk prediction algorithm for episodes of major depression in general practice attendees: the PredictD study. Arch Gen Psychiatry. 2008;65(12):1368–76.
Bellon JA, Conejo-Ceron S, Moreno-Peral P, King M, Nazareth I, Martin-Perez C, Fernandez-Alonso C, Rodriguez-Bayon A, Fernandez A, Aiarzaguena JM, et al. Intervention to prevent major depression in primary care: a cluster randomized trial. Ann Intern Med. 2016;164:656–65.
Bellon JA, Conejo-Ceron S, Moreno-Peral P, King M, Nazareth I, Martin-Perez C, Fernandez-Alonso C, Ballesta-Rodriguez MI, Fernandez A, Aiarzaguena JM, et al. Preventing the onset of major depression based on the level and profile of risk of primary care attendees: protocol of a cluster randomised trial (the predictD-CCRT study). BMC Psychiatry. 2013;13:171.
Boutron I, Moher D, Altman DG, Schulz KF, Ravaud P, Group C. Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med. 2008;148(4):295–309.
Salvador-Carulla L, Costa-Font J, Cabases J, McDaid D, Alonso J. Evaluating mental health care and policy in Spain. J Ment Health Policy Econ. 2010;13(2):73–86.
Borkan J, Eaton CB, Novillo-Ortiz D, Rivero Corte P, Jadad AR. Renewing primary care: lessons learned from the Spanish health care system. Health Aff (Millwood). 2010;29(8):1432–41.
National Institute for Health and Care Excellence (NICE). Guide to the methods of technology appraisal 2013. London: NICE; 2013.
Vazquez-Barquero JL, Gaite L, Cuesta MJ, Garcia-Usieto E, Knapp M, Beecham J. Spanish version of the CSRI: a mental health cost evaluation interview. Archivos de Neurobiologia (Madrid). 1997;60:171–84.
Kessler RC, Barber C, Beck A, Berglund P, Cleary PD, McKenas D, Pronk N, Simon G, Stang P, Ustun TB, et al. The World Health Organization Health and Work Performance Questionnaire (HPQ). J Occup Environ Med. 2003;45(2):156–74.
Herdman M, Badia X, Berra S. EuroQol-5D: a simple alternative for measuring health-related quality of life in primary care. Aten Primaria. 2001;28:425–30.
Parkin D, Devlin N. Is there a case for using visual analogue scale valuations in cost-utility analysis? Health Econ. 2006;15(7):653–64.
Glick HA, Doshi JA, Sonnad SS, Polsky D. Economic evaluation in clinical trials. Oxford: Oxford University Press; 2007.
Rubin D. Multiple imputation for nonresponse in surveys. New York: Wiley; 1987.
Briggs AH, O'Brien BJ. The death of cost-minimization analysis? Health Econ. 2001;10:179–84.
Fenwick E, Byford S. A guide to cost-effectiveness acceptability curves. Br J Psychiatry. 2005;187:106–8.
Sacristan JA, Oliva J, Del Llano J, Prieto L, Pinto JL. What is an efficient health technology in Spain? Gac Sanit. 2002;16:334–43.
Neumann PJ, Cohen JT, Weinstein MC. Updating cost-effectiveness—the curious resilience of the $50,000-per-QALY threshold. N Engl J Med. 2014;371(9):796–7.
Gomes M, Ng ES, Grieve R, Nixon R, Carpenter J, Thompson SG. Developing appropriate methods for cost-effectiveness analysis of cluster randomized trials. Med Decis Making. 2012;32(2):350–61.
Arroll B, Khin N, Kerse N. Screening for depression in primary care with two verbally asked questions: cross sectional study. BMJ. 2003;327(7424):1144–6.
Lee ML, Yano EM, Wang M, Simon BF, Rubenstein LV. What patient population does visit-based sampling in primary care settings represent? Med Care. 2002;40:761–70.
Dowrick CF, Bellón JA, Gómez MJ. GP frequent attendance in Liverpool and Granada: the impact of depressive symptoms. Br J Gen Pract. 2000;50:361–5.
Puffer S, Torgerson D, Watson J. Evidence for risk of bias in cluster randomised trials: review of recent trials published in three general medical journals. BMJ. 2003;327:785–9.
Hunter RM, Nazareth I, Morris S, King M. Modelling the cost-effectiveness of preventing major depression in general practice patients. Psychol Med. 2014;44(7):1381–90.
van den Berg M, Smit F, Vos T, van Baal PH. Cost-effectiveness of opportunistic screening and minimal contact psychotherapy to prevent depression in primary care patients. PLoS ONE. 2011;6(8):e22884.
The authors thank Bonaventura Bolíbar and the Spanish Network of Primary Care Research for their economic and logistic support. The authors are also grateful to all the patients and primary care physicians who participated in the trial.
This work was supported by grants from the Spanish Ministry of Health, the Institute of Health Carlos III (ISCIII) and the European Regional Development Fund (ERDF) 'A way to build Europe' (grant references PS09/02272, PS09/02147, PS09/01095, PS09/00849 and PS09/00461); the Andalusian Council of Health (grant reference PI-0569-2010); the Spanish Network of Primary Care Research 'redIAPP' (RD06/0018, RD12/0005/0001); the 'Aragón group' (RD06/0018/0020, RD12/0005/0006); the 'Bizkaya group' (RD06/0018/0018, RD12/0005/0010); the Castilla-León Group (RD06/0018/0027); the Mental Health (SJD) Barcelona Group (RD06/0018/0017, RD12/0005/0008); and the Mental Health, Services and Primary Care (SAMSERAP) Málaga Group (RD06/0018/0039, RD12/0005/0005).
The statistical code and data set are available from Dr. Bellón (e-mail: [email protected]) and Dr. Fernandez ([email protected]).
Parc Sanitari Sant Joan de Déu, Fundació Sant Joan de Déu, C/ Dr. Antoni Pujadas, 42, 08830, Sant Boi de Llobregat, Barcelona, Spain
Anna Fernández
Mental Health Policy Unit, Brain and Mind Centre, Faculty of Health Sciences, University of Sydney, Sydney, Australia
Centro de Salud La Mina, C/Mar s/n, 08930, Barcelona, Spain
Juan M. Mendive
Distrito de Atención Primaria Málaga-Guadalhorce, Unidad de Investigación, C/ Sevilla, 23, 3ª Planta, 29009, Málaga, Spain
Sonia Conejo-Cerón, Patricia Moreno-Peral, María del Mar Muñoz-García, Luz Araujo, Desirée Navas-Campaña & Juan Ángel Bellón
Division of Psychiatry, University College London, Charles Bell House, 67-73 Riding House Street, London, W1W 7EH, UK
Department of Primary Care & Population Health, University College London, Royal Free Site, Rowland Hill Street, London, NW3, UK
Irwin Nazareth
Centro de Salud Marquesado, Distrito Sanitario Granada Nordeste, Avenida Mariana Pineda s/n, 18500, Granada, Spain
Carlos Martín-Pérez
Gerencia Regional de Salud de Castilla y León, Paseo de Zorrilla, 1, 47007, Valladolid, Spain
Carmen Fernández-Alonso
Centro de Salud San José, Plaza Juanfra Garrido Romera s/n, 23700, Linares, Jaén, Spain
Antonina Rodríguez-Bayón
Centro de Salud San Ignacio, Larrakotorre Kalea, 9, 48015, Bilbao, Bizkaia, Spain
Jose Maria Aiarzaguena
Centro de Salud Casablanca, C/Viñedo Viejo, 10, 50009, Zaragoza, Spain
Carmen Montón-Franco
Parc Sanitari Sant Joan de Déu, C/ Dr. Antoni Pujadas, 42, 08830, Sant Boi de Llobregat, Barcelona, Spain
Antoni Serrano-Blanco
Centro de Investigación Biomédica en Red de Salud Mental de la Universidad de Granada, Facultad de Medicina PTS, Avda. de la Investigación (Departamento de Psiquiatría, Torre A, Planta 9ª), 11, 18016, Granada, Spain
Inmaculada Ibañez-Casas
Centro de Salud La Alamedilla, Unidad de Investigación, Avenida Comuneros 27-31, 37003, Salamanca, Spain
Emiliano Rodríguez-Sánchez
Centre for Mental Health Research, Research School of Population Health, ANU College of Health and Medicine, Australian National University, 63 Eggleston Rd, Acton, ACT, 2601, Australia
Luis Salvador-Carulla
Unidad de Investigación de Atención Primaria, C/ Luis Power, 18, 4ª Planta, 48014, Bilbao, Spain
Paola Bully Garay & María Cruz Gómez
Centro de Salud Federico del Castillo, C/Ramón Espantaleón s/n, 23005, Jaén, Spain
María Isabel Ballesta-Rodríguez
Centro de Salud Andorra, C/de Huesca, 0, 44500, Teruel, Spain
Pilar LaFuente
Gerencia Regional de Salud de Castilla y León, Unidad de Investigación, Paseo de Zorrilla, 1, 47007, Valladolid, Spain
Pilar Mínguez-Gonzalo
Hospital Universitari Parc Taulí, Servei de Salut Mental, Parc Taulí, 1, 08208, Sabadell, Universitat Autònoma de Barcelona, CIBERSAM, Barcelona, Spain
Diego Palao
Centro de Salud Arrabal, Unidad de Investigación de Atención Primaria, Andador Aragues Puerto, 2-4, 50015, Zaragoza, Spain
Fernando Zubiaga
Centro San Andrés-Torcal, C/José Palanca, 29003, Málaga, Spain
Jose Manuel Aranda-Regules
Departamento de Personalidad, Evaluación y Tratamiento Psicológico, Facultad de Psicología, Universidad de Málaga, Campus Teatinos s/n, 29590, Málaga, Spain
Alberto Rodriguez-Morejón
Departamento de Bioestadística, Facultad de Medicina, Universidad de Granada, Parque Tecnológico de Ciencias de la Salud, Avda. de la Investigación 11, 18016, Granada, Spain
Juan de Dios Luna
Centro de Salud El Palo, Departamento de Medicina Preventiva y Psiquiatría, Universidad de Málaga, Málaga, Spain
Juan Ángel Bellón
Consorcio de Investigación Biomédica en Red de Epidemiología y Salud Pública, CIBERESP, Madrid, Spain
Antoni Serrano-Blanco & Luis Salvador-Carulla
The study was conceived and designed by JÁB, SCC, PMP, MK, IN, CMP, CFA, ARB, AF, JMA, CMF, MIBR, IIC, ERS, ASB, MCG, PLF, MMMG, PMG, LA, DP, PB, FZ, DNC, JM, JMAR, ARM, LSC and JDL. The data were analysed and interpreted by AF, JÁB, MK, IN, ASB, JM and LSC. AF drafted the article. The article was critically revised for important intellectual content by JÁB, SCC, PMP, MK, IN, CMP, CFA, ARB, AF, JMA, CMF, IIC, ERS, MIBR, ASB, DP, JM, JMAR, ARM, LSC and JDL. JAB, AF and JDL provided statistical expertise. Funding was obtained by JÁB, CMP, CFA, LSC, CMF and JMA. Administrative, technical or logistic support was provided by SCC, PMP, IIC, MCG, MMMG, PB, DNC, PLF, PMG, LA and FZ. SCC, PMP, IIC, MCG, PLF, MMMG, PMG, LA, FZ, PB and DNC collected and assembled the data. All authors read and approved the final manuscript.
Correspondence to Anna Fernández.
The PredictD-CCRT study has been approved by the relevant ethics committees in each participating Spanish city: Ethics Committee on Human Research of the University of Granada, Ethics and Research Committee of Primary Health District of Malaga, Ethics Committee for Clinical Research of Sant Joan de Deu Foundation (Barcelona) (PIC CEIC-62-09), Ethics Committee for Clinical Research of Aragon (CEICA) (CP06/05/2009), Ethics Committee for Health Research of the Jaen Hospital, Ethics Committee for Clinical Research of Euskadi (CEIC-E) (03/2009) and Ethics Committee for Clinical Research of the Rio Hortega Hospital of Valladolid (04/2009).
This work was only supported by public grants. The funders had no role in the study design, data collection and analysis, decision to publish or preparation of the manuscript. No conflicts of interest were reported by the authors of this paper. Dr. Antoni Serrano-Blanco reports grants from Ferrer International outside the submitted work, but none of the other authors have financial relationships with any organizations that might have an interest in the submitted work in the previous 3 years. There are no other relationships or activities that could appear to have influenced the submitted work.
Fernández, A., Mendive, J.M., Conejo-Cerón, S. et al. A personalized intervention to prevent depression in primary care: cost-effectiveness study nested into a clustered randomized trial. BMC Med 16, 28 (2018) doi:10.1186/s12916-018-1005-y
DOI: https://doi.org/10.1186/s12916-018-1005-y
One of the World's Most Powerful Supercomputers Uses Light Instead of Electric Current
Posted by Kelvin Dafiaghor in categories: quantum physics, robotics/AI, supercomputing
France's Jean Zay supercomputer, one of the most powerful computers in the world and part of the Top500, is now the first HPC to have a photonic coprocessor, meaning it transmits and processes information using light. The development represents a first for the industry.
The breakthrough was made during a pilot program that saw LightOn collaborate with GENCI and IDRIS. Igor Carron, LightOn's CEO and co-founder said in a press release: "This pilot program integrating a new computing technology within one of the world's Supercomputers would not have been possible without the particular commitment of visionary agencies such as GENCI and IDRIS/CNRS. Together with the emergence of Quantum Computing, this world premiere strengthens our view that the next step after exascale supercomputing will be about hybrid computing."
The technology will now be offered to select users of the Jean Zay research community over the next few months who will use the device to undertake research on machine learning foundations, differential privacy, satellite imaging analysis, and natural language processing (NLP) tasks. LightOn's technology has already been successfully used by a community of researchers since 2018.
Quantum Mechanics and Machine Learning Used To Accurately Predict Chemical Reactions at High Temperatures
Posted by Saúl Morales Rodriguéz in categories: chemistry, quantum physics, robotics/AI, sustainability
Method combines quantum mechanics with machine learning to accurately predict oxide reactions at high temperatures when no experimental data is available; could be used to design clean carbon-neutral processes for steel production and metal recycling.
Extracting metals from oxides at high temperatures is essential not only for producing metals such as steel but also for recycling. Because current extraction processes are very carbon-intensive, emitting large quantities of greenhouse gases, researchers have been exploring new approaches to developing "greener" processes. This work has been especially challenging to do in the lab because it requires costly reactors. Building and running computer simulations would be an alternative, but currently there is no computational method that can accurately predict oxide reactions at high temperatures when no experimental data is available.
A Columbia Engineering team reports that they have developed a new computation technique that, through combining quantum mechanics and machine learning, can accurately predict the reduction temperature of metal oxides to their base metals. Their approach is computationally as efficient as conventional calculations at zero temperature and, in their tests, more accurate than computationally demanding simulations of temperature effects using quantum chemistry methods. The study, led by Alexander Urban, assistant professor of chemical engineering, was published on December 1, 2021, in Nature Communications.
Continue reading "Quantum Mechanics and Machine Learning Used To Accurately Predict Chemical Reactions at High Temperatures" »
Scientists Are Investigating If Time Warps Near a Nuclear Reactor
Posted by Dan Kummer in categories: nuclear energy, quantum physics
A team of theoretical physicists at Griffith University in Australia is investigating a radical quantum theory of time which posits that there is an asymmetry between time and space.
To explain why time points from the past to the future, scientists have proposed that under the second law of thermodynamics, time itself moves towards increased entropy, a measure of disorder in a system.
Continue reading "Scientists Are Investigating If Time Warps Near a Nuclear Reactor" »
The quantum mechanics of time travel through post-selected teleportation
Posted by Dan Kummer in categories: information science, quantum physics, time travel
This paper discusses the quantum mechanics of closed timelike curves (CTCs) and of other potential methods for time travel. We analyze a specific proposal for such quantum time travel, the quantum description of CTCs based on post-selected teleportation (P-CTCs). We compare the theory of P-CTCs to previously proposed quantum theories of time travel: the theory is physically inequivalent to Deutsch's theory of CTCs, but it is consistent with path-integral approaches (which are best suited for analyzing quantum field theory in curved spacetime). We derive the dynamical equations that a chronology-respecting system interacting with a CTC will experience. We discuss the possibility of time travel in the absence of general relativistic closed timelike curves, and investigate the implications of P-CTCs for enhancing the power of computation.
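The P-CTC prescription mentioned above has a compact linear-algebra form: if the chronology-respecting (CR) system and the CTC system interact through a unitary U, the CR state evolves as ρ → CρC†/Tr(CρC†), where C = Tr_CTC(U). Below is a toy NumPy sketch of that rule (not code from the paper); the choice of a CNOT interaction is purely illustrative.

```python
import numpy as np

def partial_trace_second(U, d1, d2):
    """Trace out the second (CTC) subsystem of an operator U
    acting on a d1*d2-dimensional space."""
    U = U.reshape(d1, d2, d1, d2)
    return np.einsum('ikjk->ij', U)

# Interaction between the CR qubit and the CTC qubit: a CNOT with the
# CR qubit as control (illustrative choice, basis order 00,01,10,11).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

C = partial_trace_second(CNOT, 2, 2)        # effective (non-unitary) map

# CR qubit starts in an equal superposition.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

out = C @ rho @ C.conj().T
out /= np.trace(out)                         # post-selection renormalization
print(np.round(out, 3))                      # collapses onto |0><0|
```

The state-dependent renormalization is what gives P-CTCs their nonlinear character: here the post-selection projects an equal superposition entirely onto |0⟩.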
Freaky Physics Proves Parallel Universes Exist
Posted by Dan Kummer in categories: cosmology, quantum physics, time travel
Look past the details of a wonky discovery by a group of California scientists — that a quantum state is now observable with the human eye — and consider its implications: Time travel may be feasible. Doc Brown would be proud.
The strange discovery by quantum physicists at the University of California Santa Barbara means that an object you can see in front of you may exist simultaneously in a parallel universe — a multi-state condition that has scientists theorizing that traveling through time may be much more than just the plaything of science fiction writers.
And it's all because of a tiny bit of metal — a "paddle" about the width of a human hair, an item that is incredibly small but still something you can see with the naked eye.
Participatory Universe
Posted by Dan Kummer in categories: particle physics, quantum physics
John Wheeler, who is mentor to many of today's leading physicists, and the man who coined the term "black hole", suggested that the nature of reality was revealed by the bizarre laws of quantum mechanics. According to the quantum theory, before the observation is made, a subatomic particle exists in several states, called a superposition (or, as Wheeler called it, a 'smoky dragon'). Once the particle is observed, it instantaneously collapses into a single position (a process called 'decoherence').
A new method for testing the performance of quantum computers, designed by Sandia, is faster and more accurate than conventional tests
Posted by Dan Kummer in categories: computing, quantum physics
The so-called "mirror-circuit" testing method will help scientists advance the technology behind these super powerful processors. https://bit.ly/3snkgR8
Entanglement between superconducting qubits and a tardigrade
Posted by Dan Kummer in categories: biological, chemistry, quantum physics
Quantum and biological systems are seldom discussed together as they seemingly demand opposing conditions. Life is complex, "hot and wet" whereas quantum objects are small, cold and well controlled. Here, we overcome this barrier with a tardigrade — a microscopic multicellular organism known to tolerate extreme physiochemical conditions via a latent state of life known as cryptobiosis. We observe coupling between the animal in cryptobiosis and a superconducting quantum bit and prepare a highly entangled state between this combined system and another qubit. The tardigrade itself is shown to be entangled with the remaining subsystems. The animal is then observed to return to its active form after 420 hours at sub 10 mK temperatures and pressure of $6\times 10^{-6}$ mbar, setting a new record for the conditions that a complex form of life can survive.
Examining recent developments in quantum chromodynamics
Posted by Genevieve Klien in categories: engineering, mathematics, particle physics, quantum physics
Created as an analogy for Quantum Electrodynamics (QED) — which describes the interactions due to the electromagnetic force carried by photons — Quantum Chromodynamics (QCD) is the theory of physics that explains the interactions mediated by the strong force — one of the four fundamental forces of nature.
A new collection of papers published in The European Physical Journal Special Topics and edited by Diogo Boito, Instituto de Fisica de Sao Carlos, Universidade de Sao Paulo, Brazil, and Irinel Caprini, Horia Hulubei National Institute for Physics and Nuclear Engineering, Bucharest, Romania, brings together recent developments in the investigation of QCD.
The editors explain in a special introduction to the collection that because the coupling of the strong force described by QCD (carried by gluons between quarks, which form the fundamental building blocks of matter) is much stronger than that of the electromagnetic force, the divergence of perturbation expansions in the mathematical descriptions of a system can have important physical consequences. The editors point out that this has become increasingly relevant with recent high-precision calculations in QCD, due to advances in so-called higher-order loop computations.
Quantum computing: Japan takes step toward light-based technology
NTT, University of Tokyo and Riken aim for full-fledged system by 2030.
TOKYO — A Japanese team of scientists on Wednesday announced a key step in the development of a quantum computer using photons, or particles of light, that eliminates the need for an ultracold environment used to cool existing machines.
Continue reading "Quantum computing: Japan takes step toward light-based technology" »
Structural changes and variability of the ITCZ induced by radiation–cloud–convection–circulation interactions: inferences from the Goddard Multi-scale Modeling Framework (GMMF) experiments
William K. M. Lau (ORCID: orcid.org/0000-0002-3587-3691),
Kyu-Myong Kim,
Jiun-Dar Chern,
W. K. Tao &
L. Ruby Leung
Climate Dynamics volume 54, pages 211–229 (2020)
In this paper, we have investigated the impact of radiation–cloud–convection–circulation interaction (RC3I) on structural changes and variability of the Inter-tropical Convergence Zone (ITCZ) using the Goddard Multi-scale Modeling Framework, where cloud processes are super-parameterized, i.e., explicitly resolved with 2-D cloud resolving models embedded in each coarse grid of the host Goddard Earth Observing System-Version 5 global climate model. Experiments have been conducted under prescribed sea surface temperature conditions for 10 years (2007–2016), with and without cloud radiation feedback in the atmosphere, respectively. Diagnostic analyses separately for January and July show that RC3I leads to an enhanced and expanded Hadley Circulation characterized by (1) a quasi-uniform warming and moistening of the tropical atmosphere and a sharpening of the ITCZ with enhanced deep convection, more intense precipitation and higher clouds, (2) extended drying of the tropical marginal convective zones and the extratropical mid- to lower troposphere, and (3) a cooling of the polar regions, with increased baroclinicity and midlatitude storm track activities. Computations based on the zonal mean thermodynamic energy balance equation show that the radiative warming and cooling are strongly balanced by local adiabatic processes associated with changes in large-scale vertical motions, as well as horizontal atmospheric heat transport. In the tropics, enhanced shortwave absorption and longwave water vapor greenhouse effects by high clouds play key roles in providing strong positive feedback to the tropospheric warming. In the extratropics, increased atmospheric heat transport associated with changes in the Hadley circulation is balanced by strong longwave cooling above, and warming below, due to increased high clouds. We also find a strong positive correlation between daily and pentad heavy rain in the ITCZ core and an expansion of the drier zones coupled to a contraction of the highly convective zones in the ITCZ, indicating a strong tendency toward RC3I-induced convective aggregation in tropical clouds, i.e., wet regions get wetter and contract, while dry areas get drier and expand.
Differential solar and longwave radiative forcing, manifested as a surplus of radiant energy in the tropics and a deficit in the polar regions, is well known to be the fundamental driver of the general circulation and the hydrologic cycle of the earth's climate system (Lorenz 1967; Wallace and Hobbs 1977). Rising motions in the moist atmosphere generate clouds, which interact with the large-scale circulation via feedback processes involving radiative transfer, phase changes, and convective processes (Stephens and Webster 1979; Stephens 2004). This interaction, hereafter referred to as radiation–cloud–convection–circulation interaction (RC3I), further alters the heat and water balance, inducing changes in clouds, precipitation and circulation through dynamical adjustment processes as the earth's climate evolves around its quasi-equilibrium state. Dynamical adjustments are not limited to RC3I, but more generally represent responses and feedback to any internal or external forcing that results in a significant perturbation of the energy and water balance of the climate system. First and foremost, on the global scale, dynamical adjustments stemming from equator-to-pole differential radiative forcing give rise to the development and variability of large-scale atmospheric structures such as the Intertropical Convergence Zone (ITCZ), the Hadley and Walker circulations, mid-latitude storm tracks, and associated regional extreme precipitation. Dynamical adjustments involving the atmosphere, land, ocean, cryosphere, and biosphere can operate on diverse spatial and temporal scales, from diurnal, subseasonal-seasonal, interannual, and inter-decadal to climate change (IPCC 2013).
One of the most prominent circulation features of the atmospheric general circulation is the ITCZ—a narrow band (< 10° latitude in width) of deep clouds spanning nearly the entire circumference of the equatorial regions. Even with state-of-the-art climate models, simulations of the sharpness and the seasonal migration of the ITCZ remain a challenge (Zhang and Wang 2006; Lin 2007; Hwang and Frierson 2013; Li and Xie 2014; Byrne et al. 2018; Shonk et al. 2018; Landu et al. 2014; Bischoff and Schneider 2016; Xiang et al. 2017). Early climate modeling studies (Slingo and Slingo 1988, 1991; Randall et al. 1989), comparing control to no-cloud-radiation experiments, have shown that overall RC3I warms and moistens the tropical atmosphere, enhances deep convection and precipitation in the ITCZ, and strengthens the Hadley Circulation (HC). Recent studies have also shown that anomalous cooling (warming) of the extratropics of the northern hemisphere leads to a southward (northward) shift of the ITCZ, with effects of cloud and water vapor radiation playing important roles (Broccoli et al. 2006; Kang et al. 2008; Frierson and Hwang 2012; Hwang and Frierson 2013). Other studies have indicated the importance of longwave radiative cooling in clear-sky or suppressed cloud regions in the tropics and subtropics in balancing the latent heating by deep convection in the ITCZ (Pierrehumbert 1995; Fu et al. 2002; Larson and Hartmann 2003; Mauritsen and Stevens 2015). More recent studies showed that uncertainties of the ITCZ structure in climate models and in idealized aqua-planet simulations are strongly dependent not only on the parameterization of convective processes, but also on model representation of cloud radiative processes (Voigt et al. 2014; Voigt and Shaw 2015; Talib et al. 2018). Tian (2015) showed that bias in model climate sensitivity to global warming may be traced to similar bias in model simulation of the ITCZ. However, the role of cloud radiation in modulating ITCZ convection, precipitation, large-scale circulation and associated atmospheric heating/cooling processes is still unclear. Bony et al. (2015) argued that cloud radiation feedback in convective organization, and ITCZ structure and variability, are two of the top four most pressing knowledge gaps in climate science (the other two being extratropical storm track variability, and convective self-aggregation) that will have the best chance to be filled in the near future, with significant benefits to society, provided the scientific community can devote focused and coordinated efforts to maximizing the use of available modeling tools and observations. As will become apparent later in this paper, our results show that these four key research areas in climate science are closely intertwined. Specifically, extratropical storm track variability and convective aggregation are components of a global dynamical adjustment, linked to structural changes of the ITCZ and cloud radiation feedback induced by RC3I.
Analyses of long-term satellite-derived rainfall have revealed tantalizing hints that there has been a long-term increase in rainfall, i.e., enhanced latent heating, in the ITCZ of the equatorial central and eastern Pacific since 1979 (Lau and Wu 2007, 2011; Gu and Adler 2013; Tan et al. 2015; Gu et al. 2016). In addition to the increased ITCZ precipitation, a multi-decadal trend indicating a narrowing of the ITCZ in the central and eastern equatorial Pacific has been observed (Zhou et al. 2011; Wodzicki and Rapp 2016). Contemporaneously, observational studies have found evidence under global warming of a multi-decadal trend toward a widening of the subsidence zone of the HC (Hu and Fu 2007; Seidel and Randel 2007; Lu et al. 2009), as well as a drying and poleward expansion of the subtropical arid land regions in the last three decades (Dai 2011, 2013; Feng and Fu 2013; Huang et al. 2017). The observed sharpening of the ITCZ convective core and the drying in the subtropics are likely to be connected to a canonical pattern of changes in global precipitation characteristics, linked by the dynamical adjustments of the general circulation in response to greenhouse warming (Lau et al. 2013). However, the mechanisms of the ITCZ narrowing and intensification, as well as their connection to the widening and drying, are still not well understood. Lau and Kim (2015), based on analyses of CMIP5 models, found evidence of a planetary-scale dynamical adjustment mechanism under greenhouse warming, i.e., the Deep Tropical Squeeze (DTS), which posits that dynamical feedback associated with increased latent heating in deep convection in the ITCZ can spur an intensification and a narrowing of the ascending branch of the HC, coupled to a rise of the level of maximum outflow of the HC, and increased drying and widening of the descending branch of the HC. Others have suggested that the narrowing of the ITCZ stems from an increasing meridional gradient of moist static energy between the tropics and extratropics, and that atmospheric dynamics contribute substantially to all-sky and cloud radiative feedback in the tropics, but are relatively less important at higher latitudes (Byrne and Schneider 2016, 2018; Byrne et al. 2018). Still others have found evidence from CMIP5 model analyses and aqua-planet simulations that RC3I may play important roles in changing clouds and precipitation efficiency in the ITCZ, in connection with changes in the large-scale circulation (Su et al. 2014, 2017; Zhao 2014; Voigt et al. 2014; Voigt and Shaw 2015; Harrop and Hartman 2016). While a scientific consensus has yet to emerge regarding the relative importance of the myriad processes giving rise to changing structures of the ITCZ and the large-scale circulation, there is little doubt that all studies point to the need for a better understanding of the fundamental processes governing RC3I.
The objective of this work is to provide, using climate model simulation, a more fundamental understanding of the effects of RC3I on structural changes of the ITCZ and associated changes in the large-scale circulation, clouds and precipitation, including aspects of convective aggregation, from the perspective of cloud radiation effects on diabatic heating processes, and energy balance of the global atmosphere.
Model description and experimental design
This study is based on numerical experiments using the Goddard Multi-scale Modeling Framework (GMMF), which consists of a coupling of the Goddard Cumulus Ensemble model (GCE) and the Goddard Earth Observing System-Version 5 (GEOS-5) global atmospheric model (Tao et al. 2014). The GMMF belongs to the class of global climate models often referred to as super-parameterized global climate models (SP-GCMs), where the subgrid-scale cumulus parameterization schemes are replaced by two-dimensional cloud resolving models at high resolution (2–4 km) embedded in the coarse-resolution (100–250 km) GCM grids (Randall et al. 2003; Khairoutdinov et al. 2005, 2008; Tao et al. 2009; Li et al. 2012; Chern et al. 2016). Because of the more realistic representation of cloud microphysical processes by resolving convection at high resolution, SP-GCMs have been used in a variety of studies on multi-scale interactions involving radiation, cloud-scale convection and circulation on scales from diurnal to climate change (Luo and Stephens 2006; Marchand et al. 2009; Pritchard and Somerville 2009; Pritchard et al. 2011; Tao and Chern 2017).
For the present study, we used the recently updated GMMF Version 2.0, configured to run with 2° × 2.5° (latitude × longitude) horizontal grid spacing in the host GEOS-5 GCM. Embedded in each GEOS-5 grid column is a 64-column two-dimensional GCE with a horizontal spacing of 4 km, running with a 10 s time step and cyclic lateral boundaries (Tao et al. 2009; Chern et al. 2016). GMMF Version 2.0 includes several important improvements compared to previous versions. Briefly, the vertical layers within the GEOS model and GCE are increased to 48 and 44 respectively, with higher resolution in the lower atmosphere (17 layers below 700 hPa) to better represent boundary layer cloud processes. The GCE model top height has been extended, raising the lowest damping level upward from 16 to 20 km to allow for highly penetrative deep convection. Fully compressible dynamics (Klemp and Wilhelmson 1978) are used instead of the anelastic flow (Ogura and Phillips 1962) used previously. The radiation code is called from the embedded GCE to obtain better cloud radiation interaction at the natural temporal and spatial resolution of clouds (Chern et al. 2016). Both longwave and shortwave radiation calculations include improved schemes for absorption due to water vapor, O3, O2, CO2, minor trace gases (N2O, CH4, and CFCs), clouds, and aerosols, as well as scattering by clouds, aerosols, molecules (Rayleigh scattering), and the surface (Chou et al. 1999, 2002; Chou and Suarez 1999). Radiative fluxes are integrated over nearly the entire spectrum, including eight bands in the ultraviolet and visible region and three in the infrared region. Most important for this study, GMMF Version 2.0 includes a new four-class ice scheme (cloud ice, snow, graupel, and frozen drops/hail) with capabilities that include ice supersaturation microphysics, depositional growth of cloud ice to snow, varying cloud ice fall speeds, limiting of cloud ice particle size, and new size-mapping schemes for snow and graupel (Lang et al. 2014; Tao et al. 2016). The 4ICE scheme produces better overall spatial distributions of cloud ice amount, total cloud fraction, net radiation, and total cloud radiative forcing than earlier three-class ice schemes when compared to observations from CloudSat/CALIPSO and radar data (Chern et al. 2016). Since graupel and frozen drops/hail can occur simultaneously in nature, both in the same storm and in different storm systems at different locations, a 4ICE scheme is necessary to simulate the diverse array of cloud systems over the entire globe. Version 2.0 also includes an improved land surface model (Mohr et al. 2013) that has been computationally enhanced for high impact weather studies (Shen et al. 2013).
Worth noting is that the configuration of GMMF v.2 used for our model simulations contains 13,100 copies of 2D cloud resolving models at 4 km horizontal resolution covering the entire globe, running on a time step of 10 s, coupled to the GEOS5 global model on a 2° × 2.5° latitude-longitude grid with complex physics packages running at a 1-h time step. As a result, while the use of GMMF for multi-scale interaction studies represents a clear advantage over GCMs with parameterized subgrid-scale convection, and a considerable saving compared to running cloud resolving models globally (Satoh et al. 2008), the computational resources for running the GMMF and storing outputs from the experiments are still very demanding. Computational cost is an important consideration in deciding on the following experimental design.
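As a rough consistency check, the quoted scale of the configuration follows directly from the numbers above. A minimal sketch, assuming a pole-to-pole 91 × 144 point grid (the exact count depends on grid staggering):

```python
# Back-of-the-envelope check of the GMMF configuration described above,
# assuming a pole-to-pole 91 x 144 point latitude-longitude grid at
# 2 x 2.5 degrees (an assumption; the exact count depends on staggering).
nlat, nlon = 91, 144
n_crm = nlat * nlon
print(f"embedded 2-D CRMs: {n_crm}")                # 13104, ~ the 13,100 quoted

cols_per_crm, dx_km = 64, 4
print(f"CRM domain per GCM column: {cols_per_crm * dx_km} km")        # 256 km

gcm_dt_s, crm_dt_s = 3600, 10
print(f"CRM substeps per 1-h GCM physics step: {gcm_dt_s // crm_dt_s}")   # 360
print(f"CRM columns advanced per GCM step: {n_crm * cols_per_crm:,}")     # 838,656
```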
As a first step in our use of GMMF to investigate cloud radiative feedback processes on the ITCZ, the experiment design in this study is aimed at identifying the roles of atmospheric processes only, i.e., RC3I, but not surface feedback. Surface processes over land and ocean are known to provide important climate feedback (Qu and Hall 2006; Andrew et al. 2009; Lloyd et al. 2012; Stephens et al. 2018), and their modulation of RC3I is the subject of an ongoing investigation outside the scope of this study. Here, in the control experiment (Control), GMMF v.2.0 was integrated with full atmospheric physics, including cloud radiative feedback and interactive land, under prescribed sea surface temperature forcing for 10 years (2007–2016). For the anomaly experiment (NoCRF), the same experiment as Control was conducted, except that (a) the cloud optical thickness was set to zero in the radiative code, and (b) all surface heat fluxes, including net shortwave and longwave radiation, sensible and latent heat from the land surface to the atmosphere, were restored to the hourly values of Control at every GEOS5 time step, i.e., the land surface model was essentially turned off. Step (b) is necessary to prevent climate drift in NoCRF due to fast feedback of land processes. Clouds are still generated in NoCRF by the embedded cloud resolving models, except that the cloud radiative (both shortwave and longwave) effects are turned off. Due to the prescribed land surface fluxes, there is some energy imbalance at the lowest layer of the atmosphere in NoCRF that the atmosphere has to adjust to at every GCE (10 s) and every GCM (1 h) time step. The imbalance results in a sharpening of the climatological precipitation diurnal cycle over land, i.e., approximately a 10–20% increase in the daily maximum in late afternoon and a slight decrease at other times of the day, but no significant diurnal cycle change over the ocean, in NoCRF compared to Control (see Fig. S1). Given that the focus of our study is on the changing structure of ITCZ precipitation, which is mostly over the ocean, the nearly identical climatological precipitation diurnal cycle over the ocean between Control and NoCRF provides assurance that the impact of the imbalance on our results is not likely to be too large.
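To make the two NoCRF modifications concrete, here is a deliberately toy single-column sketch of the protocol; the model, variable names and coefficients are all invented for illustration and bear no relation to the actual GMMF code:

```python
def interactive_flux(T_sfc, T_air):
    """Toy interactive land: sensible flux responds to the evolving state."""
    return 10.0 * (T_sfc - T_air)

def run(no_crf=False, control_fluxes=None, n_steps=24):
    """Toy single-column loop illustrating the NoCRF protocol:
    (a) zero cloud optical thickness in the radiation call, and
    (b) prescribe land surface fluxes from stored Control values."""
    T_air, T_sfc, tau_cloud = 285.0, 290.0, 2.0
    flux_record = []
    for step in range(n_steps):
        tau = 0.0 if no_crf else tau_cloud       # modification (a): clouds
        lw_trap = 0.2 * tau                      # become radiatively inert
        if no_crf and control_fluxes is not None:
            sh = control_fluxes[step]            # modification (b)
        else:
            sh = interactive_flux(T_sfc, T_air)
        T_air += 0.01 * (sh + lw_trap) - 0.05    # crude heating/relaxation
        flux_record.append(sh)
    return T_air, flux_record

_, control_sh = run(no_crf=False)                # Control: store hourly fluxes
T_nocrf, _ = run(no_crf=True, control_fluxes=control_sh)
print(round(T_nocrf, 2))
```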
Because of the relatively fast time scales of atmospheric adjustment processes in GMMF, the results of the 10-year integration are reasonably stable, with all the climatological heating and cooling processes in quasi-equilibrium. Differences in various dynamic and thermodynamic quantities between the control and NoCRF (Control-minus-NoCRF) will be referred to as anomalies induced by RC3I in the following discussion.
Because the ITCZ migrates seasonally towards and away from the equator every year, changes in its structure due to RC3I, such as the narrowing or sharpening of the ITCZ, may be masked by the seasonal movement, or by changes in the seasonal movement, of the ITCZ. To better delineate the fundamental changes in the ITCZ structure due to RC3I, we have conducted analyses separately for January and July, when the ITCZ is at approximately the same location each year and at its maximum strength. All analyses for January have been repeated for July. Results indicate that key features in July are very similar to those in January, except for the magnitude, the location of the ITCZ, and a switch of the more dynamically active (winter) hemisphere to the southern hemisphere. To avoid repetition, most results for July are shown only in the Supplementary Information (SI). A summary description comparing salient RC3I features, with respect to ITCZ structural changes and convective aggregation, for January and July is included in Sect. 4.
Changes in temperature, moisture, precipitation and circulation
To begin, we compare the global climatological 10-year (2007–2016) mean January precipitation to TRMM observations for the same years. To facilitate discussion, we define the ITCZ domain as the region within 30°S–30°N where the monthly mean precipitation exceeds 6 mm day−1. The climatological GMMF global precipitation compares well to the TRMM observations in terms of the location and width of the ITCZ domain and the mid-latitude storm tracks, except that the model over-estimates the peak ITCZ precipitation rate by approximately 20–30% compared to TRMM (Fig. 1a). The over-estimation of tropical precipitation appears to be a common feature of SP-GCMs, likely due to the inherently limited degrees of freedom of atmospheric motions in the embedded 2-D CRMs with cyclic boundary conditions, i.e., the inability of deep convective systems to propagate to neighboring GCM grids, and the singular orientation of convection that could bias momentum transport by convection (Randall et al. 2003; Khairoutdinov et al. 2005; Cheng and Xu 2011). From the difference maps (Fig. 1b–d), a substantial change in the general circulation can be discerned. During January, RC3I enhances precipitation in the ITCZ core located near 5–8°S, and reduces precipitation near the boundaries of the ITCZ, including regions of southern Africa, northeastern Australia, and South America (Fig. 1b). Precipitation over the subtropical and extratropical storm track regions of the North Pacific, North Atlantic, and the South Pacific Convergence Zone is also enhanced. These features are manifested in the zonal mean as a sharpening of the ITCZ, i.e., enhanced precipitation in the ITCZ core near the equator and reduced precipitation on its flanks (Fig. 1b, right panel), consistent with the "deep tropical squeeze" (Lau and Kim 2015) and the "upped-ante" mechanism (Neelin et al. 2003; Chou and Neelin 2004; Lintner and Neelin 2007). This change in the ITCZ structure is coupled to enhanced precipitation in the midlatitudes associated with changes in the subsiding and extended branches of the HC. The global precipitation anomalies are associated with a near-uniform warming of the mid-troposphere in the tropics, and cooling at polar latitudes (60°–90°) in both hemispheres, with the cooling more pronounced in the northern (winter) hemisphere (Fig. 1c). The differential warming/cooling creates zones of increased baroclinicity in the extratropics (30–60°N and 35–60°S) of both hemispheres (Fig. 1c, right panel), enhancing midlatitude storm track activities over the oceanic areas in both hemispheres and the western regions of the northern hemisphere continents (Fig. 1d).
Spatial distribution of a climatological (2007–2016) precipitation (mm day−1) for the model Control (contoured) and for TRMM, and b climatological precipitation difference (Control-minus-NoCRF). Zonal mean profiles are shown in the right subpanels. c, d As in b, but for 500 hPa temperature (°K) and storm track activity, represented by the mean square variance of the deviations of the 500 hPa meridional wind from its zonal mean (m2 s−2)
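For readers wanting to reproduce the storm-track metric of Fig. 1d, a minimal sketch follows. It interprets the metric as the time-mean squared deviation of the 500-hPa meridional wind from its zonal mean (our reading of the caption; storm-track studies often also band-pass filter to 2–8-day periods first), and the synthetic input array simply stands in for model output.

```python
import numpy as np

def storm_track_activity(v500):
    """Time-mean square of the deviation of the 500-hPa meridional wind
    from its zonal mean (m^2 s^-2).
    v500: array of shape (time, lat, lon), e.g. daily means for January."""
    v_dev = v500 - v500.mean(axis=-1, keepdims=True)  # deviation from zonal mean
    return (v_dev ** 2).mean(axis=0)                  # time-mean variance map

# Synthetic demo field standing in for model output on a 2 x 2.5 grid.
rng = np.random.default_rng(0)
v500 = rng.normal(0.0, 8.0, size=(31, 91, 144))       # 31 days of daily means
activity = storm_track_activity(v500)
print(activity.shape, float(activity.mean()))
```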
The RC3I-induced warming in the tropics and cooling in the polar regions represent a substantial global perturbation to the thermal structure of the atmosphere, extending from the lower troposphere to the upper troposphere and lower stratosphere (UTLS, 200–100 hPa), where the temperature gradients are most pronounced in the northern (winter) hemisphere near the polar edge (30–45°N) of the subsiding branch of the HC (Fig. 2a, b). The sharpening of the ITCZ is reflected in enhanced ascent in the rising branch of the HC near 5–10°S, coupled to strong anomalous sinking motion at approximately 5° north of the equator. Also found are regions of alternating anomalous ascent and descent in the subtropics and extratropics, featuring increased subsidence near the poleward edges of the subsiding branches of the HC, and overall weakly enhanced ascent in the Ferrel cells at high latitudes (40–90°N, S), consistent with the moderate increase in zonal mean precipitation there (see Fig. 1b, right panel). Strong anomalous cross-equatorial flow from the southern hemisphere (SH) to the northern hemisphere (NH) is found in the upper troposphere, coupled to a strong return flow in the lower troposphere and near the surface (Fig. 2c). Noting that the NH is anomalously colder due to RC3I compared to the SH, the direction of the cross-equatorial meridional flow is consistent with the notion that the ascending branch of the ITCZ tends to stay, and be enhanced, in the warmer hemisphere (the SH in January), in order to allow for heat balance by atmospheric transport to ameliorate excessive cooling at higher latitudes (Broccoli et al. 2006; Kang et al. 2008; Frierson and Hwang 2012). As indicated earlier (Fig. 1d), and discussed in more detail later based on component contributions using the thermodynamic energy equation, storm track activities effecting poleward heat transport are strongly enhanced in the extratropics due to RC3I. In conjunction with the aforementioned changes in the overturning circulations, there is substantial moistening (increased specific humidity, q) and drying (decreased q) of the tropical atmosphere coinciding with the regions of anomalous ascent and descent, respectively (Fig. 2d). Also notable is that the polar regions (> 60°N, S) of both hemispheres are generally drier, in connection with the development of colder temperatures there. However, atmospheric convection is not solely controlled by moisture abundance; it is more strongly controlled by relative humidity, Rh, which depends on both thermodynamics and dynamics. Based on the Clausius–Clapeyron relationship, dRh = dq/qs − αRh dT, where α = 6.5% K−1, an increase in specific humidity will enhance Rh, but an increase in temperature will reduce Rh. Furthermore, atmospheric advective processes, stemming from anomalous subsidence bringing drier air from aloft, can also lead to reduced Rh in the mid- and lower troposphere. Hence, changes in Rh represent the competing effects of changes in q, T and vertical motions (Fig. 2e). These effects give rise to strong Rh increases (decreases) in regions with strong anomalous ascent (descent) in the tropics, and mostly decreased Rh in the mid- and lower troposphere of the subtropics and polar regions. In the UTLS (200–100 hPa), where convective moisture transport is limited, the change in Rh is mostly controlled by the temperature effect, i.e., the warmer (colder) the air, the stronger the decrease (increase) in Rh.
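To see how the two terms in the relation above compete, consider a quick worked example (the background values of Rh and qs are illustrative choices, not numbers from the paper):

```python
# Worked example of the relative-humidity decomposition quoted above:
# dRh = dq/qs - alpha * Rh * dT, with alpha = 6.5 % per kelvin.
alpha = 0.065          # fractional change of qs per kelvin
Rh = 0.70              # background relative humidity (70%), illustrative
qs = 10.0              # saturation specific humidity, g/kg, illustrative

def d_rh(dq, dT, Rh=Rh, qs=qs, alpha=alpha):
    """Fractional change in relative humidity from changes in
    specific humidity dq (g/kg) and temperature dT (K)."""
    return dq / qs - alpha * Rh * dT

# Moistening of 0.3 g/kg with 1 K of warming: the moistening only
# partially offsets the warming effect, so Rh still drops slightly.
print(f"dRh = {100 * d_rh(dq=0.3, dT=1.0):+.1f} %")   # -1.6 %
# Same warming with no moisture increase: Rh drops by ~4.6%.
print(f"dRh = {100 * d_rh(dq=0.0, dT=1.0):+.1f} %")   # -4.6 %
```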
Likewise, cloud water (liquid + ice) is strongly increased (decreased) in the anomalous ascending (descending) regions of the deep tropics (Fig. 2f). The large reduction in cloud ice over extensive areas in the UTLS of the tropics and extratropics (60°S–60°N) is due to the large reduction in relative humidity associated with the much warmer temperature over this region. Elsewhere, in the subtropics and midlatitudes (30–60°N, S), a reduction in boundary layer (1000–850 hPa) cloud water and an increase in mid- to high clouds (850–300 hPa) signal a lifting of the clouds, with a greater abundance of high clouds and increased precipitation over these regions. This is due, respectively, to the drying of the lower troposphere from mean circulation and temperature changes, and to enhanced poleward transport of heat and moisture in the mid- and upper troposphere associated with the increased storm track activities. Overall, the aforementioned features signal an enhanced HC with extended influence on the extratropics due to RC3I (see further discussion in the next section).
Height-latitude cross-sections of a temperature (°K), b negative p-velocity (hPa s−1), c meridional wind (m s−1), d specific humidity (g kg−1), e relative humidity (%), and f cloud water (mg kg−1). Control-minus-NoCRF values are shaded and NoCRF values contoured. In b solid (dotted) contours denote rising (sinking) climatological motion; positive deviations (red) denote anomalous rising motion, and negative deviations (blue) denote anomalous sinking motion. The same color code, positive (red) and negative (blue), applies to anomalies in specific humidity, RH and cloud water
Diabatic heating and dynamical tendencies
In this section, we examine the RC3I-induced changes in the heat balance of the atmosphere via the thermodynamic energy equation
$$\frac{\partial \bar{s}}{\partial t} + \bar{v} \cdot \nabla \bar{s} + \bar{\omega}\,\frac{\partial \bar{s}}{\partial p} = Q_{MP} + Q_{R} - \nabla \cdot \overline{s^{\prime}v^{\prime}} - \frac{\partial\, \overline{s^{\prime}\omega^{\prime}}}{\partial p},$$
which can be re-written as:
$$\frac{\partial \bar{s}}{\partial t} = Q_{DYN} + Q_{MP} + Q_{SW} + Q_{LW} = 0 \quad \text{for steady-state balance}.$$
In Eq. (1), the overbar represents the monthly mean, and the prime daily deviations from the mean; s = (CpT + gz) is the dry static energy, QMP represents moist heating associated with condensation, evaporation, deposition and sublimation, and freezing and melting processes in liquid and/or ice-phase precipitation derived from the embedded GCEs, and QR is the radiative heating. Both QMP and QR are computed within the GCE codes, and translated onto the GEOS5 coarse grids. In Eq. (2), \(Q_{DYN} = -\left(\bar{v} \cdot \nabla \bar{s} + \bar{\omega}\,\frac{\partial \bar{s}}{\partial p}\right) - \left(\nabla \cdot \overline{s^{\prime}v^{\prime}} + \frac{\partial\, \overline{s^{\prime}\omega^{\prime}}}{\partial p}\right)\) represents the dynamic tendency, and QR is further decomposed into heating by shortwave (QSW) and longwave (QLW) radiation. Each of the heating terms in Eq. (2) has been computed for Control and NoCRF respectively. In the Control climatology, in the deep tropics and near the ITCZ core, there is a strong local balance between QMP and QDYN (adiabatic cooling of ascending air) in the troposphere up to 200 hPa (Fig. 3a, b), in accord with the well-known weak temperature gradient approximation of the tropical atmosphere (Sobel et al. 2001; Raymond and Zeng 2005). In the mid-latitudes (30–60°N and 30–60°S), heating by QMP associated with storm tracks is strong, but with a shallower heating profile compared to the deep tropics (Fig. 3a). QDYN heats the atmosphere in the subsidence branch of the HC through adiabatic warming of descending air, and in the mid-latitude and polar regions by poleward heat transport via transient and stationary eddies associated with storm tracks (Fig. 3b). However, strong cooling in the tropics and warming in the northern (winter) hemisphere by QDYN are not balanced by QMP above 200 hPa, because climatologically convection rarely reaches such high altitudes. The net cooling and heating imbalance in the UTLS implies a net transport of heat by QDYN from the tropics to the extratropics. This is evidenced in the pattern of the sum of QMP and QDYN (Fig. 3c), which can be interpreted as the net atmospheric heat transport (AHT) after the local heating/cooling balance between QMP and QDYN has been accounted for. Here, it is clear that the overall effect of AHT is to transport solar radiation absorbed near the earth's surface to the lower and mid-troposphere, and from the tropics to higher latitudes. The AHT impact on the UTLS is most pronounced in the northern hemisphere, likely due to enhanced upward propagation of wave energy by transient as well as quasi-stationary planetary-scale waves during boreal winter (Charney and Drazin 1961; Simmons 1974; Holton 2004). For a steady state, the AHT must be balanced by the total radiative heating QR (Fig. 3d). Here, QR shows a pattern almost exactly opposite to that of AHT, with a magnitude difference of less than a few percent. A breakdown of QR into QSW and QLW (Fig. 3e, f) shows clearly that QLW is the dominant process, highly inversely correlated with AHT. QSW contributes to warming of the troposphere in the southern (summer) hemisphere, but has little or no contribution to the northern (winter) hemisphere due to the low inclination of the sun.
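As a concrete illustration of how the dynamic tendency of Eq. (2) might be diagnosed from daily output, here is a minimal NumPy sketch for zonally averaged fields. It keeps only the meridional part of the horizontal operators and omits spherical metric factors, so it is a simplified reading of the budget rather than the diagnostics actually used in the paper; array names and shapes are assumptions.

```python
import numpy as np

Cp, g = 1004.0, 9.81

def heat_budget_terms(T, z, v, omega, lat, p):
    """Diagnose the zonal-mean dynamic tendency Q_DYN of Eq. (2) from
    daily, zonally averaged fields of shape (time, plev, lat).
    Horizontal operators are reduced to their meridional part for brevity."""
    a = 6.371e6                                   # earth radius, m
    y = a * np.deg2rad(lat)                       # meridional coordinate
    s = Cp * T + g * z                            # dry static energy

    s_m, v_m, w_m = (x.mean(axis=0) for x in (s, v, omega))   # monthly means
    s_p, v_p, w_p = s - s_m, v - v_m, omega - w_m             # daily deviations

    # Mean-flow advection: vbar * ds/dy + wbar * ds/dp
    adv = v_m * np.gradient(s_m, y, axis=-1) + w_m * np.gradient(s_m, p, axis=0)
    # Eddy flux divergence: d(s'v')/dy + d(s'w')/dp (monthly means of products)
    eddy = (np.gradient((s_p * v_p).mean(axis=0), y, axis=-1)
            + np.gradient((s_p * w_p).mean(axis=0), p, axis=0))
    return -(adv + eddy)                          # Q_DYN; AHT = Q_MP + Q_DYN

# Shape check with random stand-in fields: 30 days, 10 levels, 91 latitudes.
rng = np.random.default_rng(1)
shp = (30, 10, 91)
args = [rng.normal(size=shp) for _ in range(4)]
lat = np.linspace(-90, 90, 91)
p = np.linspace(1000e2, 100e2, 10)
print(heat_budget_terms(*args, lat, p).shape)     # (10, 91)
```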
These results provide assurance that the physical processes governing the general circulation, convection, precipitation and clouds in GMMF v.2.0 are reasonably realistic, and consistent with fundamental conservation principles for atmospheric heat balance (Peixoto and Oort 1992).
Height-latitude cross-sections of the climatological Control a moist heating (QMP), b dynamical tendency (QDYN), c atmospheric heat transport (AHT), d radiative heating (QR), e shortwave heating (QSW), and f longwave heating (QLW). Units are in K day−1
The sharpening of the ITCZ induced by RC3I can be identified with the strong increase in anomalous QMP at the ITCZ core about 5° south of the equator, and a decrease on both its flanks (Fig. 4a). Moderate positive QMP with a lower scale height, associated with enhanced storm track activities, can also be found in the extratropics of both hemispheres. Through adiabatic cooling (warming) of the ascending (descending) air, QDYN provides a strong balance to the QMP warming/cooling in the ITCZ region (Fig. 4b), as well as heat transport from the tropics to the extratropics. The AHT (Fig. 4c) indicates increasing heat transport from the tropics to the extratropical and polar latitudes in the lower to mid-troposphere, reflecting an intensification of the HC and increased storm track activities in the extratropics. In the UTLS, the strong AHT cooling in the northern hemisphere polar region signals a reduction of the upward propagation of planetary-scale waves into the lower stratosphere (see Fig. 3c). The anomalous QR pattern (Fig. 4d) shows almost identical features to AHT everywhere, except with the sign switched, indicating a high degree of balance between QR and AHT. A decomposition of QR shows that QSW (Fig. 4e) contributes significantly to the heating of the upper troposphere in the tropics of the southern (summer) hemisphere. Here, the upper-level QSW heating is likely contributed by high (ice) clouds via increased absorption of near-infrared SW radiation and enhanced absorption by increased super-cooled water, while opposed by cooling below clouds due to increased ice cloud scattering of SW radiation (Randall et al. 1989; Hong et al. 2016). On the other hand, QLW (Fig. 4f) contributes to warming below clouds via the trapping of longwave radiation by increased water vapor and cloud droplets, i.e., the greenhouse effect, leading to strong warming below clouds in the tropics. Together, QLW and QSW maintain a strong column heating by QR (Fig. 4d) in the ITCZ region (10°S–5°N), providing positive feedback to QMP. In the extratropics, ice and mixed-phase high clouds form at increasingly lower elevations at higher latitudes due to lower climatological tropospheric temperatures. As a result, QLW displays a characteristic downward-sloping pattern towards the polar regions, with cooling above and warming below clouds due to the increased greenhouse effect of water vapor and cloud water (Lacis and Hansen 1974; Stephens 2004). Near-surface warming and cloud-top cooling due to low clouds can also be seen in the lower troposphere below 700 hPa (Fig. 4f). Overall, QLW provides the substantial balance to the AHT heating/cooling due to the large-scale dynamical adjustment. The close relationship between the variability of outgoing longwave radiation and atmospheric diabatic heating components has been noted in studies using reanalysis and satellite data products (Zhang et al. 2017).
Same as in Fig. 3, except for Control-minus-NoCRF
Cloud–radiation–precipitation feedback in the tropics
In this subsection, we examine in more detail the functional relationships between RC3I-induced changes in precipitation and the heating components that underpin the structural change in the ITCZ described previously. Vertical cross-sections of the heating components averaged over the tropics (30°S–30°N), as a function of precipitation, have been constructed. Figure 5a shows that strong positive QMP induced by RC3I is associated with heavy precipitation and deep convection over the ITCZ core region (P > 10 mm day−1), while a weak but widespread reduction in QMP is found over the marginal convective zone (MCZ: 1 < P < 10 mm day−1) and the dry zone (DZ: P < 1 mm day−1). Clearly, QMP is largely balanced by cooling (warming) via adiabatic ascent (descent) in QDYN, as evidenced in the near-mirror image of the two heating components (Fig. 5a, b). The AHT (Fig. 5c), represented by the sum of QMP and QDYN, indicates a net heat loss (cooling) in the ITCZ core region, a net transport of heat from the ITCZ to the drier regions in the UTLS (warming), and an uptake of heat (warming) in the lower troposphere from the surface in the DZ. The near-mirror image of the warming/cooling pattern in QR (Fig. 5d) compared to the AHT (Fig. 5c) reflects a new quasi-equilibrium induced by RC3I, involving enhanced radiative heating by high clouds in the ITCZ core region and low-level cooling in surrounding areas, which are strongly balanced by heat transport by atmospheric motions. QSW contributes a sizable amount of radiative warming of the upper troposphere of the ITCZ core region (Fig. 5e), due to absorption by enhanced cloud ice and super-cooled water (Fig. 6c, d), while inducing cooling below by cloud shielding (Fig. 5e). Comparing Fig. 5d–f, QLW is clearly the dominant contributor to QR, providing positive feedback to warming by QMP over the ITCZ core region, while responsible for strong cooling near the top of marine low clouds in the lower troposphere (850–700 hPa) over the drier part of the MCZ and the DZ. The strong QLW warming in the ITCZ core (Fig. 5f) is likely associated with increased trapping of longwave radiation by increased water vapor (positive ∆RH in Fig. 6a) and increased cloud liquid water and ice (Fig. 6c–e), associated with strongly enhanced precipitation (Fig. 6f). The strong low-level QLW cooling over the MCZ and the DZ (Fig. 5f) occurs near cloud tops, in conjunction with increased dryness (negative ∆RH in Fig. 6a) in the middle and upper troposphere near the edge of the ITCZ, as well as in the UTLS, and near the surface over the DZ where clouds and precipitation are reduced (Fig. 6d–f). Notice that while the reductions in clouds and precipitation appear small in magnitude over the DZ and MCZ (Fig. 6e, f), these two regions cover a large fractional area (~ 90% of the tropics; see Tables 1, 2 and later discussion), so the area-weighted cooling over these regions is comparable to the heating in the ITCZ region (< 10% areal coverage). In the UTLS, the strong negative QLW stems from more cooling to space by the warmer upper atmosphere (Fig. 6b). Overall, QR amplifies QMP in the heavy-precipitation region while increasing the radiative cooling in the dry region, i.e., increasing the diabatic heating contrast that supports a stronger overturning circulation between the wet and dry regions.
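Operationally, the cross-sections in Fig. 5 amount to compositing daily heating profiles by precipitation intensity. The following sketch illustrates the binning; the array names, shapes, and bin edges are hypothetical choices of ours, not the GMMF diagnostic code.

```python
import numpy as np

def heating_by_precip(q, precip, nbins=30):
    """Composite heating profiles by log10 precipitation bins.

    q:      heating rate, shape (ntime, nlev, npoint) [K/day]
    precip: daily precipitation, shape (ntime, npoint) [mm/day]
    Returns (bin centers, mean profile per bin, shape (nlev, nbins)).
    """
    logp = np.log10(np.clip(precip, 1e-2, None))      # avoid log of zero
    edges = np.linspace(-2, 2, nbins + 1)             # 0.01 to 100 mm/day
    idx = np.digitize(logp, edges) - 1                # bin index per sample
    nlev = q.shape[1]
    comp = np.full((nlev, nbins), np.nan)
    qT = q.transpose(1, 0, 2)                         # (nlev, ntime, npoint)
    for b in range(nbins):
        mask = idx == b                               # (ntime, npoint) mask
        if mask.any():
            # average all profiles falling in this precipitation bin
            comp[:, b] = qT[:, mask].mean(axis=1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, comp
```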
Control-minus-NoCRF height-precipitation functional relationships for a QMP, b QDYN, c AHT, d QR, e QSW and f QLW. Units of heating are in K day−1. Contours indicate the NoCRF climatology. Precipitation is in mm day−1, log10 units
Same as in Fig. 5, except for a relative humidity (%), b tropospheric temperature (K), c cloud ice (mg/kg), d cloud liquid (mg/kg), e total column water and ice (mg/kg), and f total precipitation (mm day−1). Contours indicate the NoCRF climatology
Table 1 January fractional areal coverage of tropical precipitation regimes defined by DZ (P1 < 1 mm day−1), MCZ-dry (1 < P2 < 5 mm day−1), MCZ-wet (5 < P3 < 10 mm day−1), and ITCZ-core (P4 > 10 mm day−1), for Control, NoCRF and Control-minus-NoCRF difference (% change)
Table 2 Same as in Table 1, except for July
July vs. January
The same calculations and analyses described in the previous sections for January have been carried out for July. As in January, the model shows an excess (20–30%) in maximum ITCZ precipitation compared to TRMM in July (Fig. 7a). Except for the position of the ITCZ core, which is situated at approximately 5–8°N in July instead of 5–10°S in January, similar features induced by RC3I are found. These include: (1) the sharpening of the ITCZ, with suppressed precipitation on its flanks (Fig. 7b), (2) warming of the tropics and cooling of the polar regions (Fig. 7c), and (3) increased baroclinicity and enhanced midlatitude storm track activities (Fig. 7d). All the diagnostic features pertaining to Figs. 2 through 6 have been computed for July and are provided in Figs. S2 through S6 in the Supplementary Information (SI). Except for differences in the location of the ITCZ and regional details, all our previous discussion regarding the contributions by the various processes represented in the thermodynamic energy equation, as well as the functional relationships of RC3I variables with tropical precipitation, for January generally holds also for July.
Same as Fig. 1, except for July
In the following, we further illustrate the mechanisms underlying RC3I for both seasons. First, we examine changes of the HC based on an analysis of the meridional mass streamfunction \(\psi\), defined as:
$$\psi \left( \phi, p \right) = 2\pi R\cos\phi \int_{0}^{p} \langle v \rangle \,\frac{dp}{g}$$
where R is the radius of the earth, \(\phi\) the latitude, and \(\langle v \rangle\) the zonal-mean meridional velocity. During January (Fig. 8a, b), the DTS, i.e., the sharpening of the ITCZ near 5–10°S over the warmer (southern) hemisphere, is evident in the tightly packed anomalous streamfunction lines, indicating strong low-level cross-equatorial flow from the NH to the SH coupled to a reversed upper-level meridional flow, strong anomalous ascent near the rising center of the climatological HC, and strong descent near 5°N where the climatological streamfunction is maximum and the mean vertical motion is near zero. Most of the low-level "squeeze" appears to come from the northern (cooler) hemisphere. During July (Fig. 8c, d), the DTS leads to an enhanced ITCZ over the warmer (northern) hemisphere near 5–10°N. Here, the "squeeze" seems to come from both hemispheres, with a stronger contribution from the southern (cooler) hemisphere. Even though the perturbation of the zonal-mean mass streamfunction is relatively small in the extratropics, atmospheric heat transport by stationary and transient eddies (see discussion pertaining to Figs. 3 and 4) plays an important role in the heating and cooling of the extratropics, which could provide strong feedback to the ITCZ in the presence of cloud and water vapor radiation feedback (Kang et al. 2009; Seo et al. 2014).
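As a reference implementation of this definition, the sketch below integrates a hypothetical zonal-mean meridional wind field in pressure; the variable names and the trapezoidal discretization are our own choices.

```python
import numpy as np

def mass_streamfunction(vbar, p, lat):
    """psi(lat, p) = 2*pi*R*cos(lat) * integral_0^p <v> dp/g.

    vbar: zonal-mean meridional wind, shape (nlev, nlat) [m/s],
          with p increasing downward from the first level [Pa].
    Integration starts at the top model level; the small
    contribution above it is neglected.
    Returns psi in units of 10^10 kg/s (as in Fig. 8).
    """
    R, g = 6.371e6, 9.81
    # trapezoidal cumulative integral of vbar in pressure (axis 0)
    dp = np.diff(p)
    layer = 0.5 * (vbar[1:, :] + vbar[:-1, :]) * dp[:, None]
    integral = np.concatenate([np.zeros((1, vbar.shape[1])),
                               np.cumsum(layer, axis=0)])
    coslat = np.cos(np.deg2rad(lat))[None, :]
    psi = 2 * np.pi * R * coslat * integral / g
    return psi / 1e10
```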
Mass streamfunction showing a NoCRF climatology (contour), b Control-NoCRF (color shaded) for January. c, d Are the same as a, b respectively, except for July. Units are in 1010 kg s−1
Next, we explore the underlying causes of the invariance in the functional relationships between ITCZ precipitation and various cloud-scale and large-scale controls between January and July. Recent studies have suggested the importance of convective aggregation, i.e., the natural tendency of convection to cluster into smaller areas, creating larger, drier and less cloudy areas between convective clusters and thus allowing more efficient cooling to space by longwave radiation (Bretherton et al. 2005; Tobin et al. 2012; Muller and Held 2012; Bony et al. 2015). Here, we explore this idea using the mean 500 hPa ascending motion (negative p-velocity) as a proxy for the ITCZ domain, which matches well with that defined earlier in Sect. 3.1. A comparison of the spatial pattern of the ITCZ in the Control (Fig. 9a) and in NoCRF (Fig. 9b) shows that the former appears to be more organized and concentrated over smaller areas than the latter, i.e., RC3I promotes convective aggregation. The difference map (Fig. 9c) shows that much of the reduced convection, as implied by the weakened ascending motion, is found mostly in the MCZ near the perimeter of the ITCZ and over the land regions of South America and South Africa, with increased ascent near the center of the ITCZ over the oceans. Overall, RC3I strongly enhances ascent in the ITCZ region, but suppresses ascent or enhances descent in the MCZ and DZ (Fig. 9d). On shorter (daily to pentad) time scales, heavy rainfall occurs only over small fractions of the tropics (< 10%), with large fractions of no rain or light rain (> 90%). A scatter plot of the pentad rainfall change over regions where RC3I increases rainfall against the fractional area where rainfall decreases (Fig. 9e) shows a strong positive correlation of 0.75 (p value < 0.0001), indicating an expansion of the drier area outside the ITCZ with increasing rainfall in the ITCZ. For July (Fig. 10a–d), even with a major shift of the ITCZ domain due to the seasonal change and different regional interactions compared to January, the reduced vertical ascent (suppressed convection) in the MCZ is still evident. The correlation between ITCZ precipitation and the fractional drier areas outside the ITCZ remains strong at 0.74 (p value < 0.0001) (Fig. 10d). The invariance of the correlations between precipitation rate and MCZ and DZ areal fractions between January and July suggests an intrinsic tendency for convective aggregation induced by RC3I, independent of the season.
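The scatter analysis of Fig. 9e can be sketched as follows; the masks, the cosine-latitude weighting, and the use of the sign of the rainfall change to separate wetting and drying points are our simplifying assumptions.

```python
import numpy as np

def aggregation_scatter(dp_pentad, lat):
    """Sketch of Fig. 9e: ITCZ rain increase vs. drying-area fraction.

    dp_pentad: Control-minus-NoCRF pentad rainfall, shape
               (npentad, nlat, nlon) [mm/day]; lat in degrees.
    """
    w = np.cos(np.deg2rad(lat))[None, :, None]        # area weights
    wet = dp_pentad > 0                               # points getting wetter
    dry = dp_pentad < 0                               # points getting drier

    # area-weighted mean rainfall increase over wetting points, per pentad
    rain_up = (dp_pentad * wet * w).sum((1, 2)) / np.maximum(
        (wet * w).sum((1, 2)), 1e-12)
    # fraction of tropical area where rainfall decreases, per pentad
    total = (np.ones_like(dp_pentad) * w).sum((1, 2))
    dry_frac = (dry * w).sum((1, 2)) / total

    r = np.corrcoef(rain_up, dry_frac)[0, 1]          # Pearson correlation
    return rain_up, dry_frac, r
```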
January spatial distribution of 500 hPa climatologically ascending regions for a Control, b NoCRF, and c Control-minus-NoCRF; d vertical motion as a function of precipitation (log10 mm day−1) for Control and NoCRF, respectively; and e scatter plot of increased precipitation in the ITCZ region vs. the area fraction where precipitation decreases
Same as in Fig. 9, except for July
To further illustrate the convective aggregation concept, the fractional areal coverage of the tropics (30°S–30°N) for four rainfall sub-regimes, i.e., DZ (P1 < 1 mm day−1), MCZ-dry (1 < P2 < 5 mm day−1), MCZ-wet (5 < P3 < 10 mm day−1) and ITCZ-core (P4 > 10 mm day−1), has been computed based on daily model rainfall for January and July, respectively. Additionally, a convective aggregation index (CAI) has been computed as the inverse ratio of the area of the ITCZ-core (P4) to the relatively dry areas (P1 and P2). A large CAI signals a strong tendency for convective aggregation in the ITCZ-core, relative to the expansion of the surrounding drier areas. Tables 1 and 2 show, to a high degree of stability (relatively small standard deviation of the mean compared to the mean), the invariance of the statistics between January and July. Climatologically, for both Control and NoCRF, P1 occupies the largest fraction (~ 70%) and P4 the least (~ 10%) of the total tropical area, with the rest of the area (~ 20%) in the MCZ. If the P2 (MCZ-dry) areas are also included as dry areas, the dry fraction rises to 85% of the entire tropics. The CAI shows nearly tenfold more dry area compared to ITCZ-core area. The effects of RC3I, as evident in the difference between the Control and NoCRF, are highly significant (p value < 0.0001), indicating an expansion of the dry areas in P1 and P2 (1–5%) and a contraction in P4 (~ 11%) under RC3I, in both January and July. The wetter regimes (P3, P4) are contracting and the drier regimes (P1 and P2) are expanding at nearly the same rate for January and July. An exception is P3 (MCZ-wet), which is contracting at a much faster rate in July (12.3%) than in January (5.7%). The reason for the difference is unknown, and may be related to the more frequent occurrence of mesoscale complexes associated with the development of deep convection and heavy precipitation over northern hemisphere land during boreal summer under RC3I, compared to boreal winter (Laing and Fritsch 1997; Houze 2004). Overall, the CAI shows a robust percentage increase due to RC3I for January (15.3%) and July (15.7%). A further decomposition of the CAI (columns 6 and 7 in Tables 1, 2) shows that, percentage-wise, a large portion of the contraction of the ITCZ-core region is associated with the synchronized expansion of the MCZ-dry (P2) region: 18.0% and 19.3% for January and July, respectively. These expansions of the MCZ-dry regions are consistent with the drying at the outer edge of the ITCZ core noted in Figs. 9c and 10c. We have carried out sensitivity calculations, adjusting the rainfall thresholds within various reasonable limits, and found the above results to be highly robust and not sensitive to the choice of thresholds.
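A minimal sketch of the regime bookkeeping just described, assuming a daily rainfall array restricted to 30°S–30°N (thresholds as in the text; the cosine-latitude area weighting is our assumption):

```python
import numpy as np

def regime_fractions(rain, lat, thresholds=(1.0, 5.0, 10.0)):
    """Daily area fractions of DZ, MCZ-dry, MCZ-wet, ITCZ-core and CAI.

    rain: daily rainfall over 30S-30N, shape (nday, nlat, nlon) [mm/day].
    """
    t1, t2, t3 = thresholds
    w = np.cos(np.deg2rad(lat))[None, :, None]
    total = (np.ones_like(rain) * w).sum((1, 2))

    def frac(mask):
        return (mask * w).sum((1, 2)) / total

    p1 = frac(rain < t1)                        # dry zone
    p2 = frac((rain >= t1) & (rain < t2))       # MCZ-dry
    p3 = frac((rain >= t2) & (rain < t3))       # MCZ-wet
    p4 = frac(rain >= t3)                       # ITCZ core
    cai = (p1 + p2) / np.maximum(p4, 1e-12)     # dry-to-core area ratio
    return p1, p2, p3, p4, cai
```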
Using the Goddard Multi-scale Modeling Framework, we have investigated the multi-scale interactions involving radiation, clouds, convection, and circulation (RC3I) in affecting the structure and variability of the ITCZ. A 10-year (2007–2016) control simulation with full RC3I physics under prescribed sea surface temperature (SST) forcing, and an anomaly experiment with identical SST forcing but with the cloud radiation feedback in the atmosphere disabled, have been carried out. The results are summarized in Fig. 11 and briefly described in the following. On a global scale, RC3I leads to:
Schematic showing key features and processes involving changes in ITCZ structure induced by radiation–cloud–convection–circulation interactions. Anomalous longwave cooling, upward motions, horizontal atmospheric heat transport are represented by broad arrows (blue), solid black arrows, and dashed black arrows, respectively. Size of arrow symbolizes relative magnitude of the effect
A near-uniform warmer and moister tropics, with a sharpened ITCZ characterized by increased deep clouds and intensified precipitation, in association with increased ascent and a narrowing of the rising branch of the Hadley Circulation (HC).
Reduced precipitation in the tropical marginal convective zones, coupled to a widened and drier descending branch of the HC in the subtropics and mid-latitudes.
A cooler and drier polar region, with increased baroclinicity and enhanced storm track activities, together with enhanced mixed-phase clouds in mid-latitudes.
Computation of the various terms in the thermodynamic energy equation shows that, in the zonal mean, the anomalous tropical latent heating/cooling is strongly balanced by adiabatic processes associated with changes in vertical motions. The atmospheric heat transport (AHT, after the local balance between latent heating and adiabatic processes has been accounted for) moves heat out of the tropics towards the subtropics and higher latitudes via a strengthened HC, as well as increased extratropical storm track activities. Our results show the profound importance of high clouds in contributing to RC3I. In the tropics, trapping of longwave radiation by increased high clouds and water vapor, and absorption of shortwave radiation by increased cloud ice and water vapor in the upper troposphere, provide a strong positive feedback to the warming. In the extratropics, increased high clouds induced by the cloud radiation feedback maintain longwave cooling above, and warming below, the clouds. The changes in radiative energy are balanced by the atmospheric heat transport associated with the aforementioned structural changes in the ITCZ and the large-scale circulation. Within the tropics, the positive feedback between the radiative effects of clouds and water vapor and latent heating is strongest in the ITCZ core region. In the marginal convective and dry zones, longwave radiation from marine boundary layer cloud tops provides strong cooling. In addition, we find a strong correlation between increased daily precipitation rate and contracting areal coverage of deep convection in the ITCZ, coupled to expanding drier areas in the marginal convective zones and dry zones outside the ITCZ, indicating a tendency for convective aggregation. Our results suggest the notion of "wet regions getting wetter and contracted" coupled to "dry regions getting drier and expanded" as a fundamental way the large-scale circulation adjusts to cloud radiation feedback on a global scale, in agreement with the "Deep Tropical Squeeze" hypothesis on the changing structure of the ITCZ under greenhouse warming (Lau and Kim 2015). This notion is also consistent with previous studies indicating the importance of cloud radiation in intensifying and maintaining the diabatic heating contrast between cloudy (wet) and clear (dry) regions in the mean climate (Gray and Jacobson 1977; Webster 1994; Webster and Stephens 1980) and in perturbed climate states such as the El Niño-Southern Oscillation (Rädel et al. 2016; Stephens et al. 2008, 2018).
Notably, because of its ability to resolve convection near its native scales, an SP-GCM, in this case the GMMF, can provide a powerful modeling tool for better understanding of RC3I and related climate change science. However, a cloud-resolving model (CRM) with 4 km resolution still cannot actually resolve clouds; rather, it represents an improvement in the simulation of convective processes compared to traditional convective parameterization. Additionally, SP-GCMs are also known to possess a bias of too-strong tropical precipitation, probably related to limitations stemming from the 2-D nature and cyclic boundary conditions of the embedded CRMs. Most importantly, because simulations of shallow convection and low clouds in state-of-the-art CRMs are sensitive to grid resolution (Noda et al. 2010; Muller and Held 2012), the effects of subgrid turbulent moist processes still need to be included, and higher-resolution CRMs, or large-eddy simulation (LES) models, may need to be used. Finally, we note that the omission of surface flux feedbacks from the ocean and land in the present experimental design limits the application of the model results to atmosphere-only cloud radiation feedback. Hence the present results, especially their quantitative aspects, should be taken with caution when compared to observations and coupled model studies. Further studies are needed to examine how surface feedbacks from ocean and land may modulate the RC3I features shown here (Andrews et al. 2009; Qu and Hall 2006). As such, the present study can only be considered as providing a unifying theme underlying a number of current theories on the impacts of RC3I on ITCZ maintenance and variability (e.g. Fu et al. 2002; Lintner and Neelin 2007; Kang et al. 2008, 2009; Frierson and Hwang 2012; Lau and Kim 2015; Fu 2015; Bischoff and Schneider 2016; Su et al. 2014, 2017; Byrne and Schneider 2018). This theme needs to be ascertained with more reliable present-day observations, inter-comparison with theoretical studies using aqua-planet models, as well as advanced tools such as cloud-permitting variable-resolution models and, eventually, global cloud-resolving coupled models.
Andrews T, Forster PM, Gregory JM (2009) A surface energy perspective on climate change. J Clim 22:2557–2570. https://doi.org/10.1175/2008JCLI2759.1
Bischoff T, Schneider T (2016) The equatorial energy balance, ITCZ position, and double-ITCZ bifurcations. J Clim 29:2997–3013
Bony et al (2015) Clouds, circulation and climate sensitivity. Nat Geosci 8:261–268. https://doi.org/10.1038/ngeo2398
Bretherton CS, Blossey PN, Khairoutdinov M (2005) An energy-balance analysis of deep convective self-aggregation above uniform SST. J Atmos Sci 62:4273–4292
Broccoli AJ, Dahl KA, Stouffer RJ (2006) Response of the ITCZ to Northern Hemisphere cooling. Geophys Res Lett 33:L01702. https://doi.org/10.1029/2005GL024546
Byrne MP, Schneider T (2016) Narrowing of the ITCZ in a warming climate: Physical mechanisms. Geophys Res Lett 43:11350–11357. https://doi.org/10.1002/2016gl070396
Byrne MP, Schneider T (2018) Atmospheric dynamics feedback: concept, simulations, and climate implications. J Clim 31:3249–3264. https://doi.org/10.1175/JCLI-D-17-0470.1
Byrne MP, Pendergrass AG, Rapp AD, Wodzicki KR (2018) Response of the Intertropical Convergence Zone to climate change: location, width, and strength. Curr Clim Change Rep 4:355. https://doi.org/10.1007/s40641-018-0110-5
Charney JG, Drazin PG (1961) Propagation of planetary-scale disturbances from the lower into the upper atmosphere. J Geophys Res 66:83–109. https://doi.org/10.1029/JZ066i001p00083
Cheng A, Xu KM (2011) Improved low-cloud simulation from a multiscale modeling framework with a third-order turbulence closure in its cloud-resolving model component. J Geophys Res 116:D14101. https://doi.org/10.1029/2010JD015362
Chern JD, Tao WK, Lang SE, Li JLF, Mohr KI, Skofronick-Jackson GM, Peters-Lidard CD (2016) Performance of the goddard multiscale modeling framework with goddard microphysical schemes. J Adv Model Earth Syst 8:66–95. https://doi.org/10.1002/2015MS000469
Chou C, Neelin JD (2004) Mechanisms of global warming impacts on regional tropical precipitation. J Clim 17:2688–2701
Chou MD, Suarez MJ (1999) A shortwave radiation parameterization for atmospheric studies, Rep. NASA/TM-104606, pp. 40, NASA, Center for AeroSpace Information, Hanover, Maryland
Chou MD, Lee KT, Tsay SC, Fu Q (1999) Parameterization for cloud longwave scattering for use in atmospheric models. J Clim 12:159–169
Chou MD, Lee KT, Yang P (2002) Parameterization of shortwave cloud optical properties for a mixture of ice particle habits for use in atmospheric models. J Geophys Res 107:4600. https://doi.org/10.1029/2002JD002061
Dai A (2011) Drought under global warming: a review. WIREs Clim Change 2:45–65. https://doi.org/10.1002/wcc.81
Dai A (2013) Increasing drought under global warming in observations and models. Nat Clim Change 3:52–58. https://doi.org/10.1038/nclimate1633
Feng S, Fu Q (2013) Expansion of global drylands under a warming climate. Atmos Chem Phys 13:10081–10094. https://doi.org/10.5194/acp-13-10081-2013
Frierson DMW, Hwang YT (2012) Extratropical influence on ITCZ shifts in slab Ocean simulations of global warming. J Clim 25:720–733. https://doi.org/10.1175/JCLI-D-11-00116.1
Fu R (2015) Global warming-accelerated drying in the tropics. Proc Natl Acad Sci 112:3593–3594. https://doi.org/10.1073/pnas.1503231112
Fu Q, Baker M, Hartmann DL (2002) Tropical cirrus and water vapor: an effective Earth infrared iris feedback? Atmos Chem Phys 2:31–37. https://doi.org/10.5194/acp-2-31-2002
Gray WM, Jacobson RW (1977) Diurnal variation of deep cumulus convection. Mon Weather Rev 105:1171–1188. https://doi.org/10.1175/1520-0493(1977)105%3C1171:DVODCC%3E2.0.CO;2
Gu G, Adler RF (2013) Interdecadal variability/long-term changes in global precipitation patterns during the past three decades: global warming and/or pacific decadal variability? Clim Dyn 40:3009. https://doi.org/10.1007/s00382-012-1443-8
Gu G, Adler RF, Huffman GJ (2016) Long term changes/trends in surface temperature and precipitation during the satellite era 1979–2012. Clim Dyn 46:1091. https://doi.org/10.1007/s00382-015-2634-x
Harrop BE, Hartman DL (2016) The role of cloud radiative heating in determining the location of the ITCZ in aqua-planet simulations. J Clim 29:2741–2763
Holton JR (2004) An introduction to dynamic meteorology, 4th edn. Elsevier Academic Press, Inc., Oxford. ISBN-13:978-0-12-354015-7
Hong Y, Liu G, Li JLF (2016) Assessing the radiative effects of global ice clouds based on CloudSat and CALIPSO measurements. J Clim 29:7651–7674
Houze RA (2004) Mesoscale convective systems. Rev Geophys 42:RG4003. https://doi.org/10.1029/2004rg000150
Hu Y, Fu Q (2007) Observed poleward expansion of the Hadley Circulation since 1979. Atmos Chem Phys 7:5229–5236
Huang et al (2017) Dryland climate change: recent progress and challenges. Rev Geophys 55:719–778. https://doi.org/10.1002/2016RG000550
Hwang YT, Frierson DMW (2013) Link between the double-intertropical convergence zone problem and cloud biases over the Southern Ocean. Proc Natl Acad Sci 110:4935–4940
IPCC (2013) The physical science basis. In: Stocker TF et al (eds) Contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change. Cambridge University Press, Cambridge
Kang SM, Held IM, Frierson DM, Zhao M (2008) The response of the ITCZ to extratropical thermal forcing: idealized slab-ocean experiments with a GCM. J Clim 21:3521–3532. https://doi.org/10.1175/2007JCLI2146.1
Kang SM, Frierson DM, Held IM (2009) The tropical response to extratropical thermal forcing in an idealized GCM: the importance of radiative feedbacks and convective parameterization. J Atmos Sci 66:2812–2827
Khairoutdinov M, Randall D, DeMott C (2005) Simulations of the atmospheric general circulation using a cloud-resolving model as a superparameterization of physical processes. J Atmos Sci 62:2136–2154. https://doi.org/10.1175/JAS3453.1
Khairoutdinov M, DeMott C, Randall D (2008) Evaluation of the simulated interannual and subseasonal variability in an AMIP-style simulation using the CSU Multiscale Modeling Framework. J Clim 21:413–431. https://doi.org/10.1175/2007JCLI1630.1
Klemp JB, Wilhelmson RB (1978) The simulation of three dimensional convective storm dynamics. J Atmos Sci 35:1070–1096
Lacis AA, Hansen J (1974) A parameterization for the absorption of solar radiation in the earth's atmosphere. J Atmos Sci 31:118–133
Laing AG, Fritsch JM (1997) The global population of mesoscale convective complexes. Q J R Meteorol Soc 123:389–405
Landu K, Leung LR, Hagos S, Vinoj V, Rauscher SA, Ringler T, Taylor M (2014) The dependence of ITCZ structure on model resolution and dynamical core in aquaplanet simulations. J Clim 27:2375–2385
Lang S, Tao W-K, Chern JD, Wu D, Li X (2014) Benefits of a 4th ice class in the simulated radar reflectivities of convective systems using a bulk microphysics scheme. J Atmos Sci 71:3583–3612. https://doi.org/10.1175/JAS-D-13-0330.1
Larson K, Hartmann DL (2003) Interactions among cloud, water vapor, radiation, and large-scale circulation in the tropical climate. Part I: sensitivity to uniform sea surface temperature changes. J Clim 16:1425–1440
Lau KM, Kim KM (2015) Robust responses of the Hadley circulation and global dryness from CMIP5 model CO2 warming projections. Proc Natl Acad Sci 112:3630–3635. https://doi.org/10.1073/pnas.1418682112
Lau KM, Wu HT (2007) Detecting trends in tropical rainfall characteristics, 1979–2003. Int J Climatol 27:979–988. https://doi.org/10.1002/joc.1454
Lau KM, Wu HT (2011) Climatology and changes in tropical oceanic rainfall characteristics inferred from Tropical Rainfall Measuring Mission (TRMM) data (1998–2009). J Geophys Res 116:D17111. https://doi.org/10.1029/2011JD015827
Lau KM, Wu HT, Kim KM (2013) A canonical response in rainfall characteristics to global warming from CMIP5 model projections. Geophys Res Lett 40:3163–3169. https://doi.org/10.1002/grl.50420
Li G, Xie S (2014) Tropical biases in CMIP5 multimodel ensemble: the excessive equatorial Pacific cold tongue and double ITCZ problems. J Clim 27:1765–1780. https://doi.org/10.1175/JCLI-D-13-00337.1
Li F, Rosa D, Collins WD, Wehner MF (2012) "Super-parameterization": a better way to simulate regional extreme precipitation? J Adv Model Earth Syst 4:M04002. https://doi.org/10.1029/2011MS000106
Lin J (2007) The double-ITCZ problem in IPCC AR4 coupled GCMs: ocean-atmosphere feedback analysis. J Clim 20:4497–4525. https://doi.org/10.1175/JCLI4272.1
Lintner BR, Neelin JD (2007) A prototype for convective margin shifts. Geophys Res Lett 34:L05812. https://doi.org/10.1029/2006GL027305
Lloyd J, Guilyardi E, Weller H (2012) The role of atmosphere feedbacks during ENSO in the CMIP3 models. Part III: the shortwave flux feedback. J Clim 25:4275–4293
Lorenz E (1967) The nature and theory of the general circulation of the atmosphere. World Meteorological Organization, Geneva
Lu J, Deser C, Reichler T (2009) Cause of the widening of the tropical belt since 1958. Geophys Res Lett 36:L03803. https://doi.org/10.1029/2008GL036076
Luo Z, Stephens GL (2006) An enhanced convection-wind-evaporation feedback in a superparameterization GCM (SP-GCM) depiction of the Asian summer monsoon. Geophys Res Lett 33:L06707. https://doi.org/10.1029/2005GL025060
Marchand R, Haynes J, Mace GG, Ackerman T, Stephens G (2009) A comparison of simulated cloud radar output from the multiscale modeling framework global climate model with CloudSat cloud radar observations. J Geophys Res 114:D00A20
Mauritsen T, Stevens B (2015) Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models. Nat Geosci 8:346–351. https://doi.org/10.1038/NGEO2414
Mohr KI, Tao WK, Chern JD, Kumar SV, Peters-Lidard CD (2013) The NASA-Goddard Multi-scale Modeling Framework-Land Information System: global land/atmosphere interaction with resolved convection. Environ Model Softw 39:103–115. https://doi.org/10.1016/j.envsoft.2012.02.023
Muller CJ, Held I (2012) Detailed investigation of the self aggregation of convection in cloud resolving simulations. J Atmos Sci 69:2551–2565
Neelin JD, Chou C, Su H (2003) Tropical drought regions in global warming and El Niño teleconnections. Geophys Res Lett 30:2275. https://doi.org/10.1029/2003GL018625
Noda AT, Oouchi K, Satoh M, Tomita H, Iga S, Tsushima Y (2010) Importance of the subgrid-scale turbulent moist processes: cloud distribution in global cloud-resolving simulations. Atmos Res 96:208–217. https://doi.org/10.1016/j.atmosres.2009.05.007
Ogura Y, Phillips NA (1962) Scale analysis of deep and shallow convection in the atmosphere. J Atmos Sci 19:173–179
Peixoto JP, Oort AH (1992) Physics of climate, American Institute of Physics. ISBN:0-88318-712-4
Pierrehumbert RT (1995) Thermostats, radiator fins, and the local runaway greenhouse. J Atmos Sci 52:1784–1806
Pritchard MS, Somerville RCJ (2009) Assessing the diurnal cycle of precipitation in a multi-scale climate model. J Adv Model Earth Syst 1:12. https://doi.org/10.3894/JAMES.2009.1.12
Pritchard MS, Moncrieff MW, Somerville RCJ (2011) Orogenic propagating precipitation systems over the United States in a global climate model with embedded explicit convection. J Atmos Sci 68:1821–1840. https://doi.org/10.1175/2011JAS3699.1
Qu X, Hall A (2006) Assessing snow Albedo feedback in simulated climate change. J Clim 19:2617–2630. https://doi.org/10.1175/JCLI3750.1
Rädel G, Mauritsen T, Stevens B, Dommenget D, Matei D, Bellomo K, Clement A (2016) Amplification of El Niño by cloud longwave coupling to atmospheric circulation. Nat Geosci 9:106–110. https://doi.org/10.1038/NGEO2630
Randall DA, Harshvardhan, Dazlich DA, Corsetti TG (1989) Interactions among radiation, convection, and large-scale dynamics in a general circulation model. J Atmos Sci 46:1943–1970. https://doi.org/10.1175/1520-0469(1989)046%3c1943:IARCAL%3e2.0.CO;2
Randall D, Khairoutdinov M, Arakawa A, Grabowski W (2003) Breaking the cloud parameterization deadlock. Bull Am Meteorol Soc 84:1547–1564
Raymond DJ, Zeng X (2005) Modeling tropical atmospheric convection in the context of the weak temperature gradient approximation. Q J R Meteorol Soc 131:1301–1320
Satoh M, Matsuno T, Tomita H, Miura H, Nasuno T, Iga S (2008) Nonhydrostatic icosahedral atmospheric model (NICAM) for global cloud resolving simulations. J Comput Phys 227:3486–3514
Seidel DJ, Randel WJ (2007) Recent widening of the tropical belt: evidence from tropopause observations. J Geophys Res 112:D20113. https://doi.org/10.1029/2007JD008861
Seo J, Kang SM, Frierson DM (2014) Sensitivity of intertropical convergence zone movement to the latitudinal position of thermal forcing. J Clim 27:3035–3042
Shen BW, Nelson B, Cheung S, Tao WK (2013) Improving NASA's Multiscale Modeling Framework for tropical cyclone climate study. Comput Sci Eng 15:56–67. https://doi.org/10.1109/MCSE.2012.90
Shonk JKP, Guilyardi E, Toniazzo T, Woolnough SJ, Stockdale T (2018) Identifying causes of Western Pacific ITCZ drift in ECMWF System 4 hindcasts. Clim Dyn 50:939–954
Simmons AJ (1974) Planetary-scale disturbances in the polar winter stratosphere. Q J R Meteorol Soc 100:76–108. https://doi.org/10.1002/qj.49710042309
Slingo JM, Slingo A (1988) The response of the general circulation model to longwave radiative forcing, I: introduction and initial experiment. Q J R Meteorol Soc 114:1027–1062. https://doi.org/10.1002/qj.49711448209
Slingo JM, Slingo A (1991) The response of the general circulation model to longwave radiative forcing, II: further experiment. Q J R Meteorol Soc 117:333–364. https://doi.org/10.1002/qj.49711749805
Sobel AH, Nilsson J, Polvani LM (2001) The weak temperature gradient approximation and balanced tropical moisture waves. J Atmos Sci 58:3650–3665
Stephens GL (2004) Cloud feedback in the climate system: a critical review. J Clim 18:232–273
Stephens GL, Webster PJ (1979) Sensitivity of radiative forcing to variable cloud and moisture. J Atmos Sci 36:1542–1556. https://doi.org/10.1175/1520-0469(1979)036%3c1542:SORFTV%3e2.0.CO;2
Stephens GL, van den Heever S, Pakula LA (2008) Radiative convective feedback in idealized states of radiative-convective equilibrium. J Atmos Sci 65:3899–3916. https://doi.org/10.1175/2008JAS2524.1
Stephens GL, Hakuba MZ, Webb MJ, Lebsock M, Yue Q, Kahn BH et al (2018) Regional intensification of the tropical hydrological cycle during ENSO. Geophys Res Lett 45:4361–4370. https://doi.org/10.1029/2018GL077598
Su H, Jiang JH, Zhai C, Shen T, Neelin JD, Stephens GL, Yung Y (2014) Weakening and strengthening structures in the Hadley Circulation change under global warming and implications for cloud response and climate sensitivity. J Geophys Res 119:5787–5805. https://doi.org/10.1002/2014JD021642
Su H, Jiang JH, Neelin JD, Shen TJ, Zhai C, Yue Q, Wang Z, Huang L, Choi YS, Stephens GL, Yung YL (2017) Tightening of the tropical ascent and high clouds key to precipitation change in a warmer climate. Nat Commun 8:15771. https://doi.org/10.1038/ncomms15771
Talib J, Woolnough SJ, Klingaman NP, Holloway CE (2018) The role of the cloud radiative effect in the sensitivity of the Intertropical Convergence Zone to convective mixing. J Clim 31:6821–6838
Tan J, Jakob C, Rossow W, Tselioudis G (2015) Increases in tropical rainfall driven by increases in frequency of organized deep convection. Nature 519:451–454
Tao WK, Chern J (2017) The impact of mesoscale convective systems on global precipitation: a modeling study. J Adv Model Earth Syst 9:790–809. https://doi.org/10.1002/2016MS000836
Tao W-K, Chern J, Atlas R, Randall D, Lin X, Khairoutdinov M, Li JL, Waliser DE, Hou A, Peters-Lidard C, Lau KM, Simpson J (2009) Multi-scale modeling system: development, applications and critical issues. Bull Am Meteorol Soc 90:515–534
Tao W-K, Lang S, Zeng X, Li X, Matsui T, Mohr K, Posselt D, Chern J, Peters-Lidard C, Norris P, Kang IS, Choi I, Hou A, Lau KM, Yang YM (2014) The Goddard Cumulus Ensemble model (GCE): improvements and applications for studying precipitation processes. Atmos Res 143:392–424
Tao W-K, Wu D, Lang S, Chern J, Fridlind A, Peters-Lidard C, Matsui T (2016) High-resolution model simulations of MC3E, deep convective-precipitation systems: comparisons between Goddard microphysics schemes and observations. J Geophys Res 121:1278–1306. https://doi.org/10.1002/2015JD023986
Tian B (2015) Spread of model climate sensitivity linked to double-Intertropical Convergence Zone bias. Geophys Res Lett 42:4133–4141. https://doi.org/10.1002/2015GL064119
Tobin I, Bony S, Roca R (2012) Observational evidence for relationships between the degree of aggregation of deep convection, water vapor, surface fluxes, and radiation. J Clim 25:6885–6904
Voigt A, Shaw TA (2015) Circulation response to warming shaped by radiative change of cloud and water vapor. Nat Geosci 8:102–106
Voigt A, Stevens B, Bader J, Mauritsen T (2014) Compensation of hemispheric albedo asymmetries by shifts of the ITCZ and tropical clouds. J Clim 27:1029–1045
Wallace JM, Hobbs PV (1977) Atmospheric sciences: an introductory survey. Academic, New York
Webster PJ (1994) The role of hydrological processes in ocean-atmosphere interactions. Rev Geophys 32:427–476. https://doi.org/10.1029/94RG01873
Webster PJ, Stephens GL (1980) Tropical upper-tropospheric extended clouds: inferences from winter MONEX. J Atmos Sci 37:1521–1541. https://doi.org/10.1175/1520-0469-37.7.1521
Wodzicki KR, Rapp AD (2016) Long-term characterization of the Pacific ITCZ using TRMM, GPCP, and ERA-Interim. J Geophys Res Atmos 121:3153–3170. https://doi.org/10.1002/2015JD024458
Xiang B, Zhao M, Held IM, Golaz JC (2017) Predicting the severity of spurious "double ITCZ" problem in CMIP5 coupled models from AMIP simulations. Geophys Res Lett 44:1520–1527. https://doi.org/10.1002/2016GL071992
Zhang GJ, Wang H (2006) Toward mitigating the double ITCZ problem in NCAR CCSM3. Geophys Res Lett 33:L06709. https://doi.org/10.1029/2005GL025229
Zhang K, Randel WJ, Fu R (2017) Relationships between outgoing longwave radiation and diabatic heating in reanalysis. Clim Dyn 49:2911–2929. https://doi.org/10.1007/s00382-016-3501-0
Zhao M (2014) An investigation of the connections among convection, clouds, and climate sensitivity in a global climate model. J Clim 27:1845–1862. https://doi.org/10.1175/JCLI-D-13-00145.1
Zhou Y, Xu KM, Sud Y, Betts A (2011) Recent trends of the tropical hydrological cycle inferred from Global Precipitation Climatology Project and International Satellite Cloud Climatology Project data. J Geophys Res 116:D09101. https://doi.org/10.1029/2010JD015197
This work was supported jointly by the NASA Precipitation Measuring Mission (PMM) Grant NNX16AE45G to the University of Maryland, and by the Department of Energy, Office of Science, Biological and Environmental Research. The Pacific Northwest National Laboratory is operated for the Department of Energy by Battelle Memorial Institute under contract DE-AC05-76RL01830. Partial support was also provided by the NASA Modeling, Analysis and Prediction (MAP) program.
Earth System Science Interdisciplinary Center, University of Maryland, College Park, 20740, USA
William K. M. Lau
Jiun-Dar Chern
Climate and Radiation Laboratory, NASA/Goddard Space Flight Center, Greenbelt, USA
Kyu-Myong Kim
Mesoscale Atmospheric Processes Laboratory, NASA/Goddard Space Flight Center, Greenbelt, USA
W. K. Tao
Pacific Northwest National Laboratory, Richland, USA
L. Ruby Leung
Correspondence to William K. M. Lau.
Supplementary material 1 (DOCX 1840 kb)
Lau, W.K.M., Kim, K., Chern, J. et al. Structural changes and variability of the ITCZ induced by radiation–cloud–convection–circulation interactions: inferences from the Goddard Multi-scale Modeling Framework (GMMF) experiments. Clim Dyn 54, 211–229 (2020) doi:10.1007/s00382-019-05000-y
DOI: https://doi.org/10.1007/s00382-019-05000-y
Deterministic Time Hierarchy Theorem
Let $f \left({n}\right)$ be a time-constructible function.
Then there exists a decision problem which:
can be solved in worst-case deterministic time $f \left({2n + 1}\right)^3$
cannot be solved in worst-case deterministic time $f \left({n}\right)$.
In other words, the complexity class $\mathsf{DTIME} \left({ f \left({n}\right) }\right) \subsetneq \mathsf{DTIME} \left({ f \left({2n+1}\right)^3 }\right)$.
Let $H_f$ be a set defined as follows:
$H_f = \left\{ { \left({ \left[{M}\right], x}\right): \text{$M$ accepts $x$ in $f \left({\left\vert{x}\right\vert}\right)$ steps} }\right\}$
$M$ is a (deterministic) Turing machine
$x$ is its input (the initial contents of its tape)
$\left[{M}\right]$ denotes an input that encodes the Turing machine $M$
Let $m$ be the size of $\left({ \left[{M}\right], x }\right)$.
We know that we can decide membership of $H_f$ by way of a (deterministic) Turing machine that:
$(1): \quad$ calculates $f \left({\left\vert{x}\right\vert}\right)$
$(2): \quad$ writes out a row of $0$s of that length
$(3): \quad$ uses this row of $0$s as a counter to simulate $M$ for at most that many steps.
At each step, the simulating machine needs to look through the definition of $M$ to decide what the next action would be.
It is safe to say that this takes at most $f \left({m}\right)^3$ operations, so:
$ H_f \in \mathsf{DTIME} \left({ f \left({m}\right)^3 }\right)$
Now assume:
$H_f \in \mathsf{DTIME} \left({ f \left({ \left\lfloor{ \dfrac m 2 }\right\rfloor }\right) }\right)$
Then we can construct some machine $K$ which:
given some machine description $\left[{M_K} \right]$ and input $x$
decides within $ \mathsf{DTIME} \left({ f \left({ \left\lfloor{ \dfrac m 2 }\right\rfloor }\right) }\right)$ whether $\left({ \left[{ M_K }\right], x }\right) \in H_f$.
Construct another machine $N$ (a code sketch of this construction follows the list below) which:
takes a machine description $\left[{M_N}\right]$
runs $K$ on $\left({ \left[{M_N}\right], \left[{M_N}\right] }\right)$
accepts only if $K$ rejects, and rejects if $K$ accepts.
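To see the shape of this construction in code, here is a Python sketch of $N$; the helper `K_decides` is purely hypothetical (it stands for the machine $K$ assumed to exist), and the sketch ignores all running-time accounting, which is the substance of the proof.

```python
def K_decides(machine_desc: str, x: str) -> bool:
    """Hypothetical decider for H_f: does the encoded machine accept x
    within f(|x|) steps?  Assumed to run in DTIME(f(floor(m/2)))."""
    raise NotImplementedError  # placeholder for the assumed machine K

def N(machine_desc: str) -> bool:
    """Diagonalizing machine: run K on ([M], [M]) and flip the answer."""
    return not K_decides(machine_desc, machine_desc)

# Feeding N its own description yields the contradiction in the proof:
# if N accepts [N], then K rejects ([N], [N]), so N does not accept [N]
# within f(m_n) steps -- and symmetrically if N rejects.
```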
Let $m_n$ be the length of $\left[{M_N}\right]$.
Then $m$ (the length of the input to $K$) is twice $m_n$ plus one for a delimiter symbol, so:
$ m = 2m_n + 1 $
$N$'s running time is thus:
$\mathsf{DTIME} \left({f \left({\left\lfloor{\dfrac m 2}\right\rfloor}\right)}\right) = \mathsf{DTIME} \left({f \left({\left\lfloor{\dfrac{2 m_n + 1} 2}\right\rfloor}\right)}\right) = \mathsf{DTIME} \left({f \left({m_n}\right)}\right)$
Now consider the case $M_N = N$.
That is, we feed $\left[{N}\right]$ as input into $N$ itself.
In this case $m_n$ is the length of $\left[{N}\right]$.
If $N$ accepts $\left[{N}\right]$ (which we know it does in at most $f \left( {m_n} \right)$ operations):
By the definition of $N$, $K$ rejects $\left({ \left[{N}\right], \left[{N}\right] }\right)$
Therefore, by the definition of $K$, $ \left({ \left[{N}\right], \left[{N}\right] }\right) \notin H_f $
Therefore, by the definition of $H_f$, $N$ does not accept $\left[{N}\right]$ in $f \left( {m_n} \right)$ steps -- a contradiction.
If $N$ rejects $\left[{N}\right]$ (which we know it does in at most $f \left( {m_n} \right)$ operations):
By the definition of $N$, $K$ accepts $\left({ \left[{N}\right], \left[{N}\right] }\right)$
Therefore, by the definition of $K$, $ \left({ \left[{N}\right], \left[{N}\right] }\right) \in H_f $
Therefore, by the definition of $H_f$, $N$ does accept $\left[{N}\right]$ in $f \left( {m_n} \right)$ steps -- a contradiction.
Therefore, $K$ does not exist, and so:
$H_f \notin \mathsf{DTIME}\left({f \left({\left\lfloor{\dfrac m 2}\right\rfloor}\right)}\right)$
Substituting $2n + 1$ for $m$, we get:
$H_f \notin \mathsf{DTIME} \left({f \left({n}\right)}\right)$
and, from the earlier result:
$H_f \in \mathsf{DTIME} \left({f \left({2n+1}\right)^3}\right)$
3.E: Classification (Exercises)
[ "article:topic", "showtoc:no" ]
Book: Partial Differential Equations (Miersemann)
3: Classification
Q3.1
Q3.10
Q3.15
These are homework exercises to accompany Miersemann's "Partial Differential Equations" Textmap. This is a textbook targeted for a one semester first course on differential equations, aimed at engineering students. Partial differential equations are differential equations that contains unknown multivariable functions and their partial derivatives. Prerequisite for the course is the basic calculus sequence.
Let \(\chi\): \({\mathbb{R}^n}\to {\mathbb{R}^1}\) in \(C^1\), \(\nabla\chi\not=0\). Show that for given \(x_0\in {\mathbb{R}^n}\) there is in a neighborhood of \(x_0\) a local diffeomorphism \(\lambda=\Phi(x)\), \(\Phi:\ (x_1,\ldots,x_n)\mapsto(\lambda_1,\ldots,\lambda_n)\), such that \(\lambda_n=\chi(x)\).
Show that the differential equation
$$a(x,y)u_{xx}+2b(x,y)u_{xy}+c(x,y)u_{yy}+\mbox{lower order terms}=0$$
is elliptic if \(ac-b^2>0\), parabolic if \(ac-b^2=0\) and hyperbolic if \(ac-b^2<0\).
Show that in the hyperbolic case there exists a solution of \(\phi_x+\mu_1\phi_y=0\), see equation (3.9), such that \(\nabla\phi\not=0\).
Hint: Consider an appropriate Cauchy initial value problem.
Show equation (3.4).
Find the type of
$$Lu:=2u_{xx}+2u_{xy}+2u_{yy}=0$$
and transform this equation into an equation with vanishing mixed derivatives by using the orthogonal mapping (transform to principal axis) \(x=Uy,\ U\) orthogonal.
Determine the type of the following equation at \((x,y)=(1,1/2)\).
$$Lu:=xu_{xx}+2yu_{xy}+2xyu_{yy}=0.$$
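For a quick check of this exercise, one can compute the discriminant from Q3.2 symbolically. The following SymPy sketch (symbol names are ours) evaluates \(b^2-ac\) at the given point; a negative value indicates the elliptic case.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Lu = a u_xx + 2b u_xy + c u_yy with a = x, b = y, c = 2xy
a, b, c = x, y, 2*x*y
disc = sp.simplify(b**2 - a*c)          # b^2 - ac decides the type
val = disc.subs({x: 1, y: sp.Rational(1, 2)})
print(disc, val)                        # -3/4 < 0 => elliptic at (1, 1/2)
```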
Find all \(C^2\)-solutions of
$$u_{xx}-4u_{xy}+u_{yy}=0.$$
Hint: Transform to principal axis and stretching of axis lead to the wave equation.
Oscillations of a beam are described by
\begin{eqnarray*}
w_x-{1\over E}\sigma_t&=& 0\\
\sigma_x-\rho w_t&=&0,
\end{eqnarray*}
where \(\sigma\) denotes stresses, \(w\) the deflection of the beam, and \(E,\ \rho\) are positive constants.
Determine the type of the system.
Transform the system into two uncoupled equations, that is, \(w,\ \sigma\) occur only in one equation, respectively.
Find non-zero solutions.
Find nontrivial solutions (\(\nabla \chi\not=0\)) of the characteristic equation to
$$x^2u_{xx}-u_{yy}=f(x,y,u,\nabla u),$$
where \(f\) is given.
Determine the type of
$$u_{xx}-xu_{yx}+u_{yy}+3u_x=2x,$$
where \(u=u(x,y)\).
Transform equation
$$u_{xx}+(1-y^2)u_{xy}=0,$$
\(u=u(x,y)\), into its normal form.
Transform the Tricomi-equation
$$yu_{xx}+u_{yy}=0,$$
\(u=u(x,y)\), where \(y<0\), into its normal form.
Transform equation
$$x^2u_{xx}-y^2u_{yy}=0,$$
\(u=u(x,y)\), into its normal form.
Show that
$$\lambda=\dfrac{1}{\left(1+|p|^2\right)^{3/2}},\ \ \Lambda=\dfrac{1}{\left(1+|p|^2\right)^{1/2}}.$$
are the minimum and maximum of eigenvalues of the matrix \((a^{ij})\), where
$$a^{ij}=\left(1+|p|^2\right)^{-1/2}\left(\delta_{ij}-\dfrac{p_ip_j}{1+|p|^2}\right).$$
Show that Maxwell equations are a hyperbolic system.
Consider Maxwell equations and prove that \(\text{div}\ E=0\) and \(\text{div}\ H=0\) for all \(t\) if these equations are satisfied for a fixed time \(t_0\).
Hint. \(\text{div}\ \text{rot} \ A=0\) for each \(C^2\)-vector field \(A=(A_1,A_2,A_3)\).
Assume a characteristic surface \(\mathcal{S}(t)\) in \(\mathbb{R}^3\) is defined by \(\chi(x,y,z,t)=const.\) such that \(\chi_t=0\) and \(\chi_z\not=0\). Show that \(\mathcal{S}(t)\) has a nonparametric representation \(z=u(x,y,t)\) with \(u_t=0\), that is \(\mathcal{S}(t)\) is independent of \(t\).
Prove formula (3.22) for the normal on a surface.
Prove formula (3.23) for the speed of the surface \(\mathcal{S}(t)\).
Write the Navier-Stokes system as a system of type (3.4.1).
Show that the following system (linear elasticity, stationary case of (3.4.1.1) in the two-dimensional case) is elliptic
$$\mu\triangle u+(\lambda+\mu)\,\mathrm{grad}(\mathrm{div}\ u)+f=0,$$
where \(u=(u_1,u_2)\). The vector \(f=(f_1,f_2)\) is given and
\(\lambda,\ \mu\) are positive constants.
Discuss the type of the following system in stationary gas dynamics (isentropic flow) in \(\mathbb{R}^2\).
\rho u u_x+\rho v u_y+ a^2\rho_x&=&0\\
\rho u v_x+\rho v v_y+ a^2\rho_y&=&0\\
\rho (u_x+v_y)+u\rho_x+ v\rho_y&=&0.
Here \((u,v)\) is the velocity vector, \(\rho\) the density, and
\(a=\sqrt{p'(\rho)}\) the sound velocity.
Show formula 7 (directional derivative).
Hint: Induction with respect to \(m\).
Let \(y=y(x)\) be the solution of:
\begin{eqnarray*}
y'(x)&=&f(x,y(x))\\
y(x_0)&=&y_0,
\end{eqnarray*}
where \(f\) is real analytic in a neighborhood of \((x_0,y_0)\in \mathbb{R}^2\).
Find the polynomial \(P\) of degree 2 such that
$$y(x)=P(x-x_0)+O(|x-x_0|^3)$$
as \(x\to x_0\).
Let \(u\) be the solution of
\begin{eqnarray*}
\triangle u&=&1\\
u(x,0)&=&u_y(x,0)=0.
\end{eqnarray*}
$$u(x,y)=P(x,y)+O((x^2+y^2)^{3/2})$$
as \((x,y)\to(0,0)\).
Solve the Cauchy initial value problem
\begin{eqnarray*}
V_t&=&{Mr\over r-s-NV}(1+N(n-1)V_s)\\
V(s,0)&=&0.
\end{eqnarray*}
Hint: Multiply the differential equation with \((r-s-NV)\).
Write \(\triangle^2 u=-u\) as a system of first order.
Hint: \(\triangle^2 u\equiv\triangle(\triangle u)\).
Write the minimal surface equation
$${\partial\over\partial x}\left({u_x\over\sqrt{1+u_x^2+u_y^2}}\right)+{\partial\over\partial y}\left({u_y\over\sqrt{1+u_x^2+u_y^2}}\right)=0$$
as a system of first order.
Hint: \(v_1:= u_x/\sqrt{1+u_x^2+u_y^2},\ v_2:=u_y/\sqrt{1+u_x^2+u_y^2}.\)
Let \(f:\ \mathbb{R}^1\times\mathbb{R}^m\to\mathbb{R}^m\) be real analytic in \((x_0,y_0)\). Show that a real analytic solution in a neighborhood of \(x_0\) of the problem
\begin{eqnarray*}
y'(x)&=&f(x,y)\\
y(x_0)&=&y_0
\end{eqnarray*}
exists and is equal to the unique \(C^1[x_0-\epsilon, x_0+\epsilon]\)-solution from the Picard-Lindelöf theorem, \(\epsilon>0\) sufficiently small.
Show (see the proof of Proposition A7)
$$\dfrac{\mu\rho(r-x_1-\ldots-x_n)}{\rho r-(\rho+mM)(x_1+\ldots+x_n)} <<\dfrac{\mu\rho r}{\rho r-(\rho+mM)(x_1+\ldots+x_n)}.$$
Hint: Leibniz's rule.
Prof. Dr. Erich Miersemann (Universität Leipzig)
Integrated by Justin Marshall.
3.5.1 Appendix: Real Analytic Functions
4: Hyperbolic Equations | CommonCrawl |
How to estimate the growth of a "savage" function near 1?
Say I have a function which exists within the unit disk, say $$f(x)=a_0+a_1x+a_2x^2+...$$ If we know sufficient information about the coefficients, say we know the growth rate of $\sum\limits_{k=0}^{n}a_k$ or something similar, can we describe the growth rate of $f$ near 1? Let's give some examples. $$1+x+x^2+...\approx \frac{1}{1-x}, x\to1^-$$ $$2+3x+4x^2+5x^3+...=\frac{1}{1-x}+\frac{1}{(1-x)^2}\approx \frac{1}{(1-x)^2},x\to1^-$$ Even less simple functions, such as $$\zeta(s)\approx\frac{1}{s-1}, s\to1^+$$ Etc. But what about less $elementary$ functions? What about, say, $$f(x)=x+x^2+x^4+x^8+x^{16}+x^{32}+...$$ Or $$f(x)=x^2+x^3+x^5+x^7+x^{11}+x^{13}+...$$ How can we estimate the growth rate of these noble savages? I am aware that certain $nice$ functions can be expanded in Laurent series around 1, like the first two given. But what makes you think Mathematics cares to be nice?
The purpose of the excursion is to investigate the relationship between the growth rate of $f(n)=\sum\limits_{k=0}^{n}a_k$ and that of $f(x)=a_0+a_1x+a_2x^2+...$. Once this is done I will look at the growth of $f(s)=a_1/1^s+a_2/2^s+a_3/3^s+...$ near 1.
Essentially what I am asking for is a link between $f(n)=\sum\limits_{k=0}^{n}a_k$ and $f(x)=a_0+a_1x+a_2x^2+...$. If $f(s)$ were a Dirichlet series, such a link would be Perron's formula: $$\sum\limits_{k=0}^{n}a_k=\frac{1}{2\pi i}\int\limits_{c-i\infty}^{c+i\infty}f(z)\frac{n^z}{z}dz$$
asymptotics power-series laurent-series
Elie Bergman
$\begingroup$ @Mhenni, both "disk" and "disc" are correct depending on where you're from. Please respect the original author's choice and don't edit that kind of thing. $\endgroup$ – Antonio Vargas Mar 26 '14 at 23:42
$\begingroup$ @AntonioVargas: What I did does not need all of this attention! $\endgroup$ – Mhenni Benghorbal Mar 26 '14 at 23:46
If the coefficients of the series are positive and can be described by a nice formula then you can get pretty far by comparing the series to an integral.
For example, if the terms of the series are eventually strictly decreasing and the series has radius of convergence $1$ with a singularity at $x=1$ then
$$ \sum_{n=0}^{\infty} a_n x^n \sim \int_0^\infty a_n x^n \,dn $$
as $x \to 1^-$. This can be proved using the idea behind the integral test for convergence.
Applying this to the first two series yields
$$ \sum_{n=0}^{\infty} x^n \sim \int_0^\infty x^n\,dn = -\frac{1}{\log x} \sim \frac{1}{1-x} \tag{1} $$
$$ \sum_{n=0}^{\infty} (n+2)x^n \sim \int_0^\infty (n+2)x^n\,dn = \frac{1}{(\log x)^2} - \frac{2}{\log x} \sim \frac{1}{(1-x)^2} \tag{2} $$
as $x \to 1^-$. Note that in both cases we used the fact that
$$ \log x = x-1 + O\left((x-1)^2\right) $$
as $x \to 1$.
A similar argument leads to the asymptotic
$$ \sum_{n=1}^{\infty} \frac{1}{n^s} \sim \int_1^\infty \frac{dn}{n^s} = \frac{1}{s-1} \tag{3} $$
as $s \to 1^+$.
Sometimes the resulting integral can't be done in closed form but we can still obtain an asymptotic after some additional analysis. To address another of your examples let's study the estimate
$$ \sum_{n=0}^{\infty} x^{b^n} \sim \int_0^\infty x^{b^n}\,dn = \int_0^\infty \exp\Bigl[-b^n (-\log x)\Bigr]\,dn \tag{4} $$
where $b > 1$ is fixed. Making the change of variables $(-\log x) b^n = t$ yields
$$ \int_0^\infty \exp\Bigl[-b^n (-\log x)\Bigr]\,dn = \frac{1}{\log b} \int_{-\log x}^\infty e^{-t}t^{-1}\,dt. \tag{5} $$
The integral blows up as $-\log x$ approaches zero. For $t \approx 0$ the integrand is
$$ e^{-t} t^{-1} \approx t^{-1}, $$
so we expect that the integral has a logarithmic singularity here. We'll proceed by pulling out this term from the integral:
$$ \begin{align} &\int_{-\log x}^\infty e^{-t}t^{-1}\,dt \\ &\qquad = \int_{-\log x}^1 e^{-t}t^{-1}\,dt + \int_{1}^\infty e^{-t}t^{-1}\,dt \\ &\qquad = \int_{-\log x}^1 t^{-1}\,dt + \int_{-\log x}^1 \left(e^{-t}-1\right)t^{-1}\,dt + \int_{1}^\infty e^{-t}t^{-1}\,dt \\ &\qquad = -\log(-\log x) + \int_{-\log x}^1 \left(e^{-t}-1\right)t^{-1}\,dt + \int_{1}^\infty e^{-t}t^{-1}\,dt. \end{align} $$
The first integral in the last expression converges as $-\log x \to 0$, so the only unbounded term is the first. Thus
$$ \int_{-\log x}^\infty e^{-t}t^{-1}\,dt \sim -\log(-\log x) $$
as $x \to 1^-$. By combining this with $(5)$ we get
$$ \int_0^\infty \exp\Bigl[-b^n (-\log x)\Bigr]\,dn \sim -\log_b(-\log x) $$
and so, returning to the original sum through $(4)$ and once again using the asymptotic $\log x \sim x-1$, we have arrived at the conclusion that
$$ \sum_{n=0}^{\infty} x^{b^n} \sim -\log_b(1-x) \tag{6} $$
as $x \to 1^-$.
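The asymptotic $(6)$ can be probed numerically. A small mpmath sketch (the truncation tolerance and sample points are our own choices) compares the series against $-\log_2(1-x)$; note that the convergence of the ratio is slow, of order $1/\log(1/(1-x))$.

```python
import mpmath as mp

def lacunary(x, b=2, tol=mp.mpf('1e-30')):
    """Sum x^(b^n) for n >= 0, truncated when terms drop below tol."""
    s, n = mp.mpf(0), 0
    while True:
        t = x ** (b ** n)
        if t < tol:
            return s
        s += t
        n += 1

mp.mp.dps = 50
for eps in [mp.mpf('1e-3'), mp.mpf('1e-6'), mp.mpf('1e-9')]:
    x = 1 - eps
    ratio = lacunary(x) / (-mp.log(1 - x) / mp.log(2))
    print(eps, ratio)   # ratio slowly tends to 1 as x -> 1-
```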
What follows has been added in response to the comments below.
The series $\sum_p x^p$, where $p$ ranges over the prime numbers, is more tricky to deal with. If we call the $n^\text{th}$ prime $p_n$ then it is known that
$$ p_n \sim n\log n $$
as $n \to \infty$. If we knew ahead of time that
$$ \sum_{n=1}^{\infty} x^{p_n} \sim \sum_{n=1}^{\infty} x^{n\log n} \tag{7} $$
as $x \to 1^-$ then we could directly obtain an asymptotic equivalent for $\sum_p x^p$ by studying the behavior of the integral $\int_1^\infty x^{n\log n}\,dn$. Unfortunately I don't know how to prove $(7)$ directly. I've actually asked a question about the topic here. We can, however, proceed by using the idea presented in an answer to that posted question.
(Interestingly the equivalence $(7)$ will be a corollary of our calculations. Combine $(8)$ with $\lambda = 1$ with $(10)$.)
First, by comparing the series with the corresponding integral it's possible to show that, for $\lambda > 0$ fixed,
$$ \sum_{n=1}^{\infty} x^{\lambda n \log n} \sim \frac{1}{\lambda(x-1)\log(1-x)} \tag{8} $$
Fix $0 < \epsilon < 1$ and choose $N \in \mathbb N$ such that
$$ \left|\frac{p_n}{n\log n} - 1\right| < \epsilon $$
for all $n \geq N$. For $0 < x < 1$ we have
$$ \sum_{n=N}^{\infty} x^{(1+\epsilon)n\log n} < \sum_{n=N}^{\infty} x^{p_n} < \sum_{n=N}^{\infty} x^{(1-\epsilon)n\log n}. $$
By completing the three series we see that the above inequality is equivalent to
$$ \begin{align} &\sum_{n=1}^{\infty} x^{(1+\epsilon)n\log n} + \sum_{n=1}^{N} \left(x^{p_n} - x^{(1+\epsilon)n\log n}\right) \\ &\qquad < \sum_{n=1}^{\infty} x^{p_n} \\ &\qquad < \sum_{n=1}^{\infty} x^{(1-\epsilon)n\log n} + \sum_{n=1}^{N} \left(x^{p_n} - x^{(1-\epsilon)n\log n}\right). \end{align} \tag{9} $$
Note that the two error sums are each bounded independently of $x$:
$$ \left|\sum_{n=1}^{N} \left(x^{p_n} - x^{(1 \pm \epsilon)n\log n}\right)\right| \leq 2N. $$
Now, multiply $(9)$ by $(x-1)\log(1-x)$. Taking the limits infimum and supremum as $x \to 1^-$ and using $(8)$ yields
$$ \begin{align} \frac{1}{1+\epsilon} &\leq \liminf_{x \to 1^-} (x-1)\log(1-x) \sum_{n=1}^{\infty} x^{p_n} \\ &\leq \limsup _{x \to 1^-} (x-1)\log(1-x) \sum_{n=1}^{\infty} x^{p_n} \\ &\leq \frac{1}{1-\epsilon}. \end{align} $$
This is true for all $0 < \epsilon < 1$, so by allowing $\epsilon \to 0$ we obtain
$$ \lim_{x \to 1^-} (x-1)\log(1-x) \sum_{n=1}^{\infty} x^{p_n} = 1. $$
Thus, changing the notation of the sum back to $\sum_p x^p$,
$$ \sum_p x^p \sim \frac{1}{(x-1)\log(1-x)} \tag{10} $$
as $x \to 1^-$, which is what we wanted to show.
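Similarly, $(10)$ can be checked numerically by truncating the prime series; the cutoff heuristic below (terms beyond it are smaller than about $e^{-60}$) is our own choice.

```python
from sympy import primerange
import math

def prime_series(x, cutoff=None):
    """Sum of x^p over primes p, truncated where terms are negligible."""
    if cutoff is None:
        cutoff = int(60 / -math.log(x)) + 10   # x^cutoff ~ e^-60
    return sum(x ** p for p in primerange(2, cutoff))

for eps in [1e-2, 1e-3, 1e-4]:
    x = 1 - eps
    approx = 1 / ((x - 1) * math.log(1 - x))
    print(eps, prime_series(x) / approx)       # ratio slowly tends to 1
```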
Antonio Vargas
$\begingroup$ Suppose instead of considering x^2+x^3+x^5+x^7+..., we considered x^(1ln1)+x^(2ln2)+x^(3ln3)+x^(4ln4)+... do you think we could find the nature of the singularity of the original series since P(n)~nln(n). How would one estimate x^(1ln1)+x^(2ln2)+x^(3ln3)+x^(4ln4)+... with x~1? The integral comparison is no less simple than the original series in my eyes. $\endgroup$ – Elie Bergman Mar 26 '14 at 13:26
$\begingroup$ Hmm, I would have to think about whether $\sum_p x^p \sim \sum_n x^{n\log n}$ as $x \to 1^-$. I suspect it would be true. Handling the latter series using this method is possible, at least; I get $$\sum_{n=1}^{\infty} x^{n \log n} \sim \frac{1}{(x-1)\log(1-x)}$$ as $x \to 1^-$ by comparing to the corresponding integral. $\endgroup$ – Antonio Vargas Mar 26 '14 at 17:06
$\begingroup$ In regards to whether $\sum_p x^p \sim \sum_n x^{n\log n}$ as $x \to 1^-$, I just remembered that I asked a question here on the topic a while back: math.stackexchange.com/q/277051/5531 $\endgroup$ – Antonio Vargas Mar 26 '14 at 19:34
$\begingroup$ Okay, I was able to use your idea to obtain an equivalent to $\sum_p x^p$. I've added this to the answer. $\endgroup$ – Antonio Vargas Mar 26 '14 at 23:10
$\begingroup$ Very nice indeed. Can you explain your steps when you "compare with the integral" to arrive at your expression? I tried comparing with the integral but couldn't derive any expression for the corresponding result. $\endgroup$ – Elie Bergman Mar 27 '14 at 17:55
Impact of geographic distance on appraisal delay for active TB treatment seeking in Uganda: a network analysis of the Kawempe Community Health Cohort Study
Kyle Fluegge (ORCID: orcid.org/0000-0003-1822-4224)1,2,3,
LaShaunda L. Malone4,
Mary Nsereko5,
Brenda Okware5,
Christian Wejse6,
Hussein Kisingo5,
Ezekiel Mupere7,
W. Henry Boom4,5 &
Catherine M. Stein8,5
Appraisal delay is the time a patient takes to consider a symptom as not only noticeable, but a sign of illness. The study's objective was to determine the association between appraisal delay in seeking tuberculosis (TB) treatment and geographic distance measured by network travel (driving and pedestrian) time (in minutes) and distance (Euclidean and self-reported) (in kilometers) and to identify other risk factors from selected covariates and how they modify the core association between delay and distance.
This was part of a longitudinal cohort study known as the Kawempe Community Health Study based in Kampala, Uganda. The study enrolled households from April 2002 to July 2012. Multivariable interval regression with multiplicative heteroscedasticity was used to assess the impact of time and distance on delay. The delay interval outcome was defined using a comprehensive set of 28 possible self-reported symptoms. The main independent variables were network travel time (in minutes) and Euclidean distance (in kilometers). Other covariates were organized according to the Andersen utilization conceptual framework.
A total of 838 patients with both distance and delay data were included in the network analysis. Bivariate analyses did not reveal a significant association of any distance metric with the delay outcome. However, adjusting for patient characteristics and cavitary disease status, the multivariable model indicated that each minute of driving time to the clinic significantly (p = 0.02) and positively predicted 0.25 days' delay. At the median driving time of 47 min, this represented an additional delay of about 12 days (95% CI: [3, 21]) on top of the mean of 40 days (95% CI: [25, 56]). Increasing Euclidean distance significantly predicted (p = 0.02) reduced variance in the delay outcome, thereby increasing the precision of the mean delay estimate. At the median Euclidean distance of 2.8 km, the variance in the delay was reduced by more than 25%.
Of the four geographic distance measures, network travel driving time was a better and more robust predictor of mean delay in this setting. Including network travel driving time with other risk factors may be important in identifying populations especially vulnerable to delay.
Tuberculosis (TB) remains a global disease burden, especially for developing countries with high prevalence of individuals co-infected with HIV. In 2015, Uganda had an overall TB incidence rate of 202 per 100,000 and 66 per 100,000 among HIV-positive individuals. This rate placed the country in the top twenty of disease burden among all countries assessed for the double epidemics of TB and HIV [1]. Long delay in starting treatment, especially among HIV-positive TB patients, has been associated with unfavorable treatment outcomes [2]. Previous research has identified a common reason for delay: many patients, including those co-infected, view initial symptoms as not serious [3, 4] and, in some cases, not even reflective of TB [5].
Appraisal delay is the time a patient takes to consider a symptom as not only noticeable, but a sign of illness [6]. The occurrence over time of more than one symptom is an indicator to many patients of the presence of illness necessitating medical intervention. Multi-symptom appraisal delay has been suggested when considering symptom clusters in chronic disease [7]. In the case of TB, many early symptoms are non-specific and therefore not immediately perceived as signals of disease among individuals who experience them [8]. However, the co-occurrence of symptoms can influence patients' disease perception. For example, cough is often not recognized as possible TB unless accompanied by more serious symptoms like hemoptysis and weight loss [9, 10], after which patients are more likely to seek health care [11]. Symptom duration is defined as the number of days from the first day of onset of any symptom attributed to tuberculosis until the first day of appropriate TB therapy [12]. This definition frequently encompasses illness, utilization and system delay (see Fig. 1). Several studies of TB patients have considered this definition when deriving a quantitative (generally binary) measure of patient delay [13,14,15,16]. However, such a definition obscures the occurrence of existing, albeit nonspecific, symptoms that preceded the appraisal date (see Fig. 1).
Appraisal delay is the time a person takes to evaluate a symptom as a sign of illness. Illness delay is the time the person takes from the first sign of illness until deciding to seek professional medical care. Utilization delay is the time from the decision to seek care until the consult at a health facility. System delay is the time from the first consultation to initiation of treatment. The red arrow indicates the appraisal date, at which time the patient recognizes possible TB as the explanation for his or her symptoms
Our goal was to investigate whether distance to healthcare facility influences the patient's appraisal delay. We did so by assessing the period before that used to typically define delay in the TB literature, what we refer to as the appraisal interval, a period when the illness is perceived by the patient to be either non-existent and/or non-threatening. Stock [17] examined the impact of distance to health facility on health care utilization in sub-Saharan Africa. He discovered the association depended upon illness perception; the more serious the disease (i.e., TB), the less distance to facility impeded utilization. However, this finding was based on data from the 1970s, a pre-HIV/AIDS era in which the TB burden was comparatively lower [18] and stigma not as great. In recent decades, however, as technology has enhanced our ability to categorize not only disease but also its severity, research findings generally flip Stock's assessment [19]. We hypothesized that a greater distance to clinic extends the appraisal interval, contributing to a longer period of overall delay. If confirmed, it restores Stock's [17] initial finding that a patient's perception of illness severity is an important modifier of the relationship between distance to health facility and treatment utilization.
To measure distance, we considered Euclidean distance and network travel time. Euclidean distance, owing to its computational simplicity, has been commonly used to measure distance and is calculated as the straight-line distance between two geographic locations [20]. Network travel time, derived from network analysis, uses distance and speed to systematically create the fastest (or least costly) travel time route between two geographic locations in a given road network [20]. Deriving a more sophisticated measure of geographic distance may allow a more accurate assessment of access to health services, leading to more effective interventions.
The data from this study were obtained from a longitudinal cohort study called the 'Kawempe Community Health Study' (KCHS), which enrolled households from April 2002 to July 2012 in Kampala, Uganda [21]. Participants resided within Kawempe and contiguous divisions, representative of other sub-Saharan low-resource settings.
Study participants
Eligible participants (index cases) were 18 years or older, had an initial pulmonary TB diagnosis that was confirmed based on growth of Mycobacterium tuberculosis in culture, resided in Kawempe Division or contiguous divisions for at least three consecutive months and provided HIV testing and informed consent. Referral sources included direct self-referral to the Ugandan National Tuberculosis and Leprosy Program (NTLP), community sensitization outreach programs, community/private clinics or some other source.
TB screening
Eligible patients received a baseline evaluation consisting of a standard history, physical examination and a comprehensive clinical work-up, which included chest radiography and acid-fast bacilli (AFB) sputum smear/culture. Patients were asked questions by a trained nurse or counselor, who then recorded the patient's responses onto the case report forms. Patients were instructed to return to the clinic in 7 days to determine enrollment into the study. Enrolled patients met the eligibility criteria and had household members willing to participate. Individuals who were not enrolled in KCHS were referred back to the NTLP for the completion of their medical care.
Delay interval
The dependent variable was a patient's appraisal delay. It was constructed from two variables: the number of days after the appearance of the most recent symptom and the number of days from the appearance of the initial symptom to the first point of contact with the NTLP. It is an interval construction, where the number of days after the most recent symptom is always less than or equal to the number of days after the appearance of the initial symptom. Equality indicates no appraisal delay. Rather than delay only being defined as a specific number of days since the appearance of one symptom, this approach allows us to model the appraisal delay as occurring within a range, where appropriate, for patients reporting multiple symptoms occurring over a period of time. There were twenty-eight possible symptom categories from which these intervals were constructed. Numbers of days' delay reported for each symptom were self-reported by patients upon clinical intake.
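To make the construction concrete, here is a minimal sketch (ours, not the study's code) of how the interval follows from a patient's self-reported symptom durations:

```python
# Lower bound: days since the most recent symptom began; upper bound:
# days since the initial symptom began. Equal bounds yield point data,
# i.e., no appraisal delay.
def delay_interval(symptom_days):
    return min(symptom_days), max(symptom_days)

print(delay_interval([90, 60, 60, 30]))  # (30, 90): interval-censored delay
print(delay_interval([14]))              # (14, 14): point data, no appraisal delay
```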
Geographic distance
ArcGIS® Network Analyst was used to determine the network travel time using a Kampala road network obtained from a local office [22]. OpenStreetMap was used to supplement the road network for recruited patients living outside the study catchment area [23]. Network travel time was computed from road distance and speed, as the fastest (least costly) driving network travel time in minutes from the patient home to the TB clinic. Surrogate speeds were applied where speed limits were not available and averaged actual peak time travel speeds were used to reflect traffic congestion. Pedestrian network travel time was computed similarly, using a standard travel speed of three kilometers per hour.
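The study used ArcGIS® Network Analyst for these computations; purely to illustrate the underlying idea, the sketch below uses the open-source networkx library, with every node name, segment length, and speed hypothetical:

```python
import networkx as nx

G = nx.Graph()
# (from, to, length in km, speed in km/h) for hypothetical road segments
segments = [
    ("home", "junction", 1.2, 30),
    ("junction", "main_road", 0.8, 50),
    ("main_road", "clinic", 2.5, 40),
    ("home", "back_road", 2.0, 20),
    ("back_road", "clinic", 1.5, 20),
]
for u, v, km, kmh in segments:
    # Edge weights are travel times in minutes; Dijkstra's algorithm then
    # returns the fastest (least costly) route, as in a network analysis.
    G.add_edge(u, v, drive_min=60 * km / kmh, walk_min=60 * km / 3)  # 3 km/h walking

drive = nx.dijkstra_path_length(G, "home", "clinic", weight="drive_min")
walk = nx.dijkstra_path_length(G, "home", "clinic", weight="walk_min")
print(f"driving: {drive:.1f} min, pedestrian: {walk:.1f} min")
```

Note that the fastest driving route need not be the shortest in kilometers, whereas pedestrian time at a flat 3 km/h simply tracks total route length.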
ArcGIS® Proximity tool was used to compute the Euclidean distance as a straight-line distance from the patient home to the NTLP clinic. Detailed logistical descriptions of the ArcGIS® software and extensions can be found in this resource [24]. Figure 2 displays the Kampala road map. Yellow and purple roads indicate higher travel speeds. The yellow square identifies the NTLP clinic. The most variable speeds in the map are those surrounding this clinic.
Dark green paths identify roads. Travel speeds were highlighted around areas including study households (not identified). In order of speeds, highlighted yellow and purple paths indicate higher travel speeds. The yellow square identifies the NTLP clinic. The most variable road speeds in the map are those surrounding this clinic. The Kampala, Uganda digitized base map was sourced from the Uganda Bureau of Statistics in 2009 and displayed in ArcGIS [22]. OpenStreetMap was used to supplement road travel speeds in areas not covered by the digitized maps [23]
Twenty-five covariates were selected to assess the relationship of delay and geographic distance as well as to identify potential risk factors. The covariates were organized according to the Andersen utilization conceptual framework [25]:
predisposing characteristics: age, sex, tribe, religion, marital status;
enabling: patient education, social support (family size), and type of residence, such as a Muzigo (i.e., a typical housing structure for a slum area) or a multi-family housing unit [26];
perceived needs: indicator variable describing if cough was the most recent reported symptom, total number of symptoms reported and whether the patient or any other household members were previously treated for TB;
evaluated needs: AFB smear, chest cavities, physical examination findings (body mass index (BMI) & BCG vaccination scar), Karnofsky performance score, modified Bandim TBscore for disease severity, comorbidities: HIV status;
and personal health practices: smoking, drinking alcohol.
The Karnofsky score was segmented by a threshold score of 80, which distinguishes between patients who are able to carry on normal activity and to work, and those who are unable to work [27]. The Bandim TB score was included to assess disease severity [27]. The derivation, use and analysis of a modified version of this score are presented in the Additional file 1.
In descriptive analyses, continuous variables were expressed as median and interquartile range (IQR). Categorical variables were expressed in proportions. Chi-square and t-tests were used to evaluate potential differences in enabling, predisposing, evaluated and self-perceived needs and personal health behaviors, as well as the delay endpoints, between patients with and without GPS data. We used interval regression to assess the association between distance and delay. We distinguished between mean and variance effects on the delay outcome by estimating an interval regression model with multiplicative heteroscedasticity [28]. Estimating the variance allows us to assess how the boundaries of the delay interval change in relation to the mean. The mean delay and the log of the variance in delay were each specified as linear functions of the regressors. Estimation was by maximum likelihood (ML) with robust standard errors.
The interval regression model is specified as follows. We let $y = X\beta + \epsilon$ be the model, where $y$ represents the unobserved continuous delay outcome and $X$ is a matrix of our covariates of interest. The model assumes $\epsilon \sim N(0, \sigma^2)$. For observations $j \in C$, we observe the true $y_j$, that is, point data for individual $j$. These uncensored delays occur either when patients report multiple symptoms with the same number of days' duration or when they report only one symptom. In the latter case, the most recent symptom is the initial symptom, rendering the interval point data. Delays represented by these point data indicate no appraisal delay. Observations $j \in I$ are intervals: we know only that the unobserved $y_j$ lies in the interval $[y_{1j}, y_{2j}]$. These observations include patients reporting multiple symptoms with a different number of days' duration associated with each one. The model assumes no right- or left-censoring.
The likelihood is proportional to the probability of observing the data, treating the parameters of the distribution as variables and the data as fixed. The goal of ML methods is to find the estimate(s) of the parameter(s) that maximizes the probability of observing the data we have. The log-likelihood of the interval regression model is specified as
$$ \ln L=-\frac{1}{2}\sum \limits_{j\in \mathrm{C}}\left\{{\left(\frac{y_j- x\beta}{\sigma}\right)}^2+\log 2\pi {\sigma}^2\right\}+\sum \limits_{j\in \mathrm{I}}\log \left\{\varPhi \left(\frac{y_{2_j}- x\beta}{\sigma}\right)-\varPhi \left(\frac{y_{1_j}- x\beta}{\sigma}\right)\right\} $$
where Φ is the standard cumulative normal.
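As a sketch only (the study used Stata's ML routines, not this code), the displayed log-likelihood translates directly into Python; the names Xc/yc (point observations) and Xi/y1/y2 (interval bounds) are ours:

```python
import numpy as np
from scipy.stats import norm

def interval_loglik(beta, log_sigma, Xc, yc, Xi, y1, y2):
    sigma = np.exp(log_sigma)  # parameterize ln(sigma) so sigma stays positive
    # Point (uncensored) observations: ordinary normal log-density.
    z = (yc - Xc @ beta) / sigma
    ll_point = -0.5 * np.sum(z**2 + np.log(2 * np.pi * sigma**2))
    # Interval observations: normal probability mass between the two bounds.
    mu = Xi @ beta
    ll_interval = np.sum(np.log(norm.cdf((y2 - mu) / sigma)
                                - norm.cdf((y1 - mu) / sigma)))
    return ll_point + ll_interval
```

Minimizing the negative of this function (e.g., with scipy.optimize.minimize) recovers the ML estimates of (β, ln σ).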
Three interval regression models using the log-likelihood were analyzed: an intercept-only model, a multivariable (MV) model without multiplicative heteroscedasticity and a final multivariable model where the log of the variance was specified as linear functions of the regressors (MV + MH). The ML parameters (β, ln(σ)) for each model were compared. Additionally, we used these parameters to calculate (1) the expected delay for each individual, conditional on it being within the defined interval, and (2) the probability that the expected delay would fall in the observed interval. We posited that the final MV + MH model would maximize the mean probability of observing our data. A test of equivalence was used to assess the expected delay and probabilities of the MV + MH model [29].
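The MV + MH variant changes only the variance piece of the sketch above: instead of a single constant σ, the log of the variance is a linear index in the same covariates, giving each observation its own σ. A minimal, assumed rendering:

```python
import numpy as np

def sigma_mh(Z, gamma):
    # ln(sigma_j^2) = z_j @ gamma  =>  sigma_j = exp(z_j @ gamma / 2)
    return np.exp(Z @ gamma / 2.0)
```

Because γ is estimated jointly with β, a regressor such as Euclidean distance can shrink or widen the delay variance independently of its effect on the mean delay.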
To determine the set of covariates included in the final multivariable model, a series of bivariate interval regression models were fit. Significant variables (at p ≤ 0.10) from these models were included in the multivariable model. All distance variables were included in the model regardless of the statistical significance of their association with the delay outcome. The same variable set was used to specify the conditional variance of the delay outcome. Crude and adjusted marginal effects on delay and 95% confidence intervals (CI) were reported. An alpha of 0.05 was used for the threshold of statistical significance of primary distance predictors in the multivariable models. Probit plotting (e.g., using normal Q-Q plot) was used to assess normality of the residuals [30]. The statistical analyses were performed using Stata, the Statistics/Data Analysis statistical package, version 13 [31].
Ethical approval for the research was provided to the Tuberculosis Research Unit (TBRU) based in Case Western Reserve University and received from Institutional Review Boards at University Hospitals of Cleveland in Cleveland Ohio, USA and Uganda Council for Science and Technology in Kampala, Uganda. Participant consent was written.
Overall description of study participants
Figure 3 identifies the enrollment and eligibility flow for the Kawempe cohort study. A total of 878 newly diagnosed TB cases were enrolled during the period of April 2002 to July 2012. All but two of these eligible cases (who were missing all symptom reports) were retained in defining the interval delay outcome (see below). Of the 878 individuals, thirty-eight (4%) had either erroneous or missing GPS data, leaving 840; excluding the two cases missing symptom reports resulted in 838 eligible TB cases in the network analysis. Among the interval patient delays, none were left- or right-censored, thereby meeting the model assumption. There were 708 patients with interval delays, producing an appraisal delay rate of 84%. Among the 838 patients with GPS data, 130 had uncensored delay with an average of 5.6 symptoms (SD = 2.3, maximum of 14) and 708 had interval delay with an average of 7.7 symptoms (SD = 2.2, maximum of 17), a statistically significant difference (two-sample t-statistic = 10.5, p < 0.01). The only variables that were significantly different between the patient groups with (n = 838) and without (n = 38) GPS data were: marital status (p = 0.03), religion (p = 0.01), and culture result (p = 0.04).
There were 984 households enrolled in the study. Of these, 878 (89%) were eligible for analysis. Of these eligible households, 840 (96%) had global positioning system waypoints available, making them eligible for inclusion in the network analysis sample
The distributions of interval delay by symptom category for 876 patients are listed in Table 1. The data for cough were most complete: 875 patients (99.8%) had this symptom, and 460 patients (53%) reported a cough duration of exactly 90 days, the median value. There were seven other symptom categories that 50% or more of all patients reported experiencing. All of these additional categories had a median delay of 60 days. They included loss of appetite, chest pain, fever, production of sputum, purulent sputum, night sweats and weight loss. For each of these categories, 20% or fewer of the patients identifying the symptom reported a duration of exactly the median 60 days. The median minimum delay for the 838 patients used in the network analysis was 30 days (mean of 36 days with standard deviation of 40.9 and range of 0 to 365 days). The median maximum delay was 90 days (mean of 122 days with standard deviation of 122.9 and range of 10 to 999 days).
Table 1 Symptoms and associated patient self-reported delays
Among the predisposing factors, TB patients were mostly young adults (median age of 27 years), unmarried (54%), men (53%) who were either Roman Catholic or Anglican (61%). The predominant tribe in this setting was the Buganda tribe (56%). Among the enabling factors, patients reported a median level of 11 years of education. Overall, households consisted of a median of three members, with 2.5 members per room. Most patients lived in Muzigos (70%) with poor ventilation. Most of the patients neither smoked (81%) nor consumed alcohol (77%).
Among the patient's perceived need factors, 18% of patients self-reported cough being the most recent symptom. This result was significantly associated with delay of more than 2 weeks (beta = 15.5, p = 0.001). The median number of symptoms was seven per patient. However, this variable was not significantly associated with delay (beta = − 0.68, p = 0.20). Only two patients reported being previously treated for TB; this variable was therefore removed from inclusion in the regression models. The need factors evaluated by the doctor identified a majority of patients with advanced disease. Most patients had a tuberculin skin test (TST) induration greater than 10 mm (85%). A majority of the patients had evident chest disease, including 63% with cavitary TB disease; 88% with moderate to far advanced TB disease extent on chest radiographs. AFB sputum smear results were positive for 93% of the patients, with 86% producing confluent growth to innumerable colonies on media. Thirty percent of the patients were HIV-positive. The median Bandim TBscore was 6, with a maximum score of 12.
Factors associated with interval delay in bivariate models (Table 2)
All distance variables predicted increased delay in the bivariate models; however, none reached statistical significance. Each minute of network driving travel time was associated with 0.13 (standard error (SE) = 0.09) days' delay. At median values, this represented 6.2 total days' delay. Each kilometer of Euclidean (self-reported) distance measure was associated with 0.78 (SE = 1.03) days' delay. At median values, this represented 2.2 total days' delay.
Table 2 Bivariate associations with delay interval (in days)
Among the predisposing factors, only older age significantly predicted increased delay: being 1 year older predicted 0.60 days' delay (95% confidence interval (CI): 0.03, 1.12). Of the enabling factors, more years of education marginally predicted reduced days' delay: 1 year of additional education was associated with reduction in delay by 0.75 days (95% CI: -1.5, 0.02). Among the need factors from the patient's perspective, cough being the most recent symptom (i.e., cough duration was equivalent to the minimum delay value of the appraisal interval) was associated with more than 2 weeks' delay (15.5 days) (95% CI: 6, 25). This covariate was the most significant predictor of delay (p = 0.001) among the full set of covariates considered. Other need factors evaluated by a doctor also predicted increased number of days' delay. Patients with cavitary disease experienced almost 10 days' delay (p = 0.003), especially patients with far advanced disease (p = 0.03) and high AFB Grade smear (p = 0.01). No personal behaviors were significantly associated with delay. A higher Bandim TBscore was associated with increased delay, although the result was not statistically significant (beta = 0.31, p = 0.73). We further consider analysis of the TBscore, including its significant association with the delay variance, in the Additional file 1.
Factors associated with interval delay in multivariable model (Table 3)
The final multivariable model included a sample of 798 observations, consisting of 123 uncensored delays and 675 interval observations, representing 89 and 91% of the available patient data on delay. Nine covariates deemed significant (p ≤ 0.10) from the bivariate results were included in the MV + MH model. These included patient age, years of education, cough being the most recent symptom, BMI, HIV status, cavitary disease, cavitary disease extent, AFB grade smear and culture result. For interpretation of the model intercept, age and BMI were mean-centered. In this adjusted model, there was a more pronounced and greater effect of distance on delay. Driving network travel time significantly predicted increased delay (p = 0.02): each minute of driving time was associated with 0.25 days' delay (95% CI: [0.07, 0.44]). At median values, this represented 11.8 days' delay. Thus, adjusting for other patient and clinical factors, the median driving time added 12 (95% CI: [3, 21]) days to the average patient delay of 40 days (95% CI: [25, 56]), an increase of 30%. However, increasing Euclidean distance was associated with reduced variability in the delay interval (beta = − 0.32, p = 0.02). Adjusting for the same factors, at the median Euclidean distance of 2.8 km, the variance in the delay was reduced by more than 25% (beta × median distance / constant). These results demonstrate that while driving time influenced changes in the mean delay, Euclidean distance was associated with precision of the delay interval length.
Table 3 Multivariable associations with appraisal delay interval (in days) using multiplicative heteroscedasticity
Overall, the log-likelihood for the fully specified MV + MH model (− 1318) suggested a better fit, compared to − 1426 for the MV model and − 1608 for the intercept-only model. Sensitivity checks revealed the robustness of these main effects. The statistical significance of the driving time predictor was attenuated (p = 0.07) and the marginal effect was reduced by more than 20% when not also modeling the delay variance. This supports the importance of accounting for the variability associated with delay in urban areas with locations of particularly congested traffic zones. Second, in validating assumptions of the MV + MH model, we observed non-normality of the residuals using a normal Q-Q plot. To inspect the impact of this violation, we re-analyzed the model including only observations whose residuals were within the interquartile range of the distribution (N = 399). This check revealed a 50% increase in the effect estimate for driving time (beta = 0.38, p < 0.001). Furthermore, self-reported distance in kilometers was also now statistically significant (beta = 1.1, p = 0.003). These results revealed that driving time as a distance metric was more robust to model misspecification than other measures.
Among the other covariates included in the multivariable model, several significantly (p ≤ 0.05) and positively tracked both mean delay and the variability in the interval: increasing patient age, cough being the most recent symptom reported, presence of cavitary disease, and higher culture result. Using the median value, continuous age was associated with 17.3 days' delay, cough being the most recent symptom indicated 13.4 days' delay, and having advanced disease contributed between nine (for cavitary disease) to 12.5 (for culture result of 50+ colonies) days' delay. Collectively assessed, older and sicker patients accumulated the greatest appraisal delay. Controlling for these patient and disease characteristics, driving time distance significantly modified the mean delay outcome.
Figure 4 shows the post-estimation results of the interval regression models using ML for all 675 interval delay observations. The expected delay (in days) was calculated conditional on the value being within the interval identified for each individual. The mean probability that this expected value was contained in the interval observed in the data is also shown. For the intercept model, the MV model and the MV + MH model, the respective estimates were as follows: 59.5 days with probability 0.46, 59 days with probability 0.47 and 56.6 days with probability 0.52.
The expected delay (in days) was calculated for each patient under each model scenario. The per-patient probability that this expected value was contained in the observed delay interval was derived. The means of both outcomes are shown in the figure. Bars represent one standard error from the mean. Abbreviations: MV, multivariable model; MV + MH, multivariable model with multiplicative heteroscedasticity
A test of equivalence was performed on the mean estimates from the MV and MV + MH models. To define the equivalence margins, we used the standard deviations of the estimate differences between the intercept and MV models (0.05 for probability and 4 days in delay). Results revealed equivalence in the delay outcome, but not the probability. The ML estimates that achieved the 0.52 probability are contained in Table 3. Modeling the variance of the delay outcome therefore significantly increased the probability that we observed our data.
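The equivalence procedure referenced here is Schuirmann's two one-sided tests (TOST) [29]; a sketch of a paired version, assuming per-patient differences d between the two models' estimates and the margins quoted above (e.g., 4 days for delay):

```python
import numpy as np
from scipy import stats

def tost_paired(d, margin):
    # Equivalence is declared when both one-sided nulls (mean <= -margin
    # and mean >= +margin) are rejected; report the larger p-value.
    d = np.asarray(d, dtype=float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    p_low = 1 - stats.t.cdf((d.mean() + margin) / se, n - 1)
    p_high = stats.t.cdf((d.mean() - margin) / se, n - 1)
    return max(p_low, p_high)
```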
Our findings suggest that, adjusting for TB disease severity, patients with longer driving times to access TB treatment may be more vulnerable to delay and its concomitant morbidity. However, this association was not present when omitting these disease status characteristics in the bivariate model. That is, distance in driving time was not statistically associated with the appraisal delay outcome among our cohort of patients, most of whom had advanced disease. This result conforms to Stock's [17] assessment on delay and distance, suggesting patients with less serious disease are prone to delay as a result of the barriers associated with driving time.
Our patients present a particular public health challenge as longer delay results in continued MTB exposure and transmission. Given the available GIS software, TB control programs can identify populations experiencing greater travel times (which may not reflect distance traveled) and provide the appropriate interventions in order to reduce the travel burden. Our novel application of interval regression with multiplicative heteroscedasticity provides an estimate of the effect of the main predictors on mean delay, while also taking into account the increased variability associated with shorter kilometer distance from the clinic.
A number of previous studies [9, 32,33,34] have identified geographic distance as an important factor for delay. However, most of these studies perform a cursory assessment of distance (e.g., urban versus rural, or only self-reported travel time or distance). This study increases the rigor and sophistication by using both network travel time and Euclidean distance to characterize a more complete picture of the impact of distance on delay. The Euclidean distance measure alone, though computationally simple, may be limited in settings of high traffic congestion [35]. Indeed, in this study where traffic congestion is common in areas close to the clinic, we found that shorter Euclidean distance predicted increased variability of delay in the multivariable analyses (likely dependent on the designated road speeds intended to accommodate such congestion). In contrast, increasing network travel driving time was associated with increased patient delay. These findings were consistent with previous research that closely examined geographic distance and delay [36]. Among the predisposing factors examined in this study, increasing patient age (but not sex) was a significant risk factor for delay, mostly aligning with existing literature [33, 37, 38]. Although more years of education were protective against delay, our results were not significant, matching others who found no relationship between education and delay [34, 39]. However, almost all patients reported cough, and the recentness of this symptom was positively and significantly associated with delay. Notably, this is a considerably higher proportion than that reported in other sub-Saharan nations [40, 41], but likely reflects our cohort's more advanced disease.
The study represented individuals selected for research purposes according to the inclusion and exclusion criteria, which would limit the generalizability to a population with similar characteristics. Among these criteria was the inclusion of mostly newly diagnosed TB patients. Patients with recurrent episodes of TB may experience a different set of risk factors based on their previous knowledge of TB disease and encounters with the health system. Patients with recurrent TB constituted less than 1% of our study sample. Sensitivity analyses revealed that our results did not change by excluding these patients; however, future studies should further assess what impact recurrent TB has on our understanding of distance on treatment delay. Furthermore, the study enrolled patients within Kawempe and contiguous counties in a 20 km radius; this may also limit generalizability to similarly urban and congested areas.
We used the earliest and most recent number of days since the start of a list of symptoms to calculate the appraisal delay interval. Recall bias may therefore have played a role. This bias is further complicated by similarly presenting infectious diseases endemic in this region. To minimize the bias, patients were interviewed by trained and experienced medical doctors who correlated the presenting signs and symptoms with the patient's disease progression. Furthermore, the derived appraisal delay interval may be biased upwards, as some patients may have sought care from other providers before their arrival at the NTLP clinic. Unfortunately, we do not have data on whether or when subjects visited other health care providers in relation to their symptom reports. However, because very few of our index cases were previously treated for TB, it is possible that any prior treatment-seeking behaviors were not a result of the patient's awareness of their TB status.
Our study finds that geographic distance was associated with delay. Of the four geographic distance measures, network travel driving time was a better and more robust predictor of mean delay in this setting. We find that increasing network travel driving time increases the number of days' delay. Other important contributors to delay include patient age and disease progression. We conclude that, in addition to the use of traditional risk factors, TB control programs should consider network travel time in identifying vulnerable populations, with the caveat that increasing variability in congested areas may make it more difficult to discern the influence of distance on patient appraisal delay.
Abbreviations
AFB:
Acid-fast bacilli
BCG:
Bacillus Calmette–Guérin
BMI:
Body mass index
FMI:
Fat mass index
HIV/AIDS:
Human immunodeficiency virus / acquired immunodeficiency syndrome
IQR:
Interquartile range
KCHS:
Kawempe Community Health Study
LMI:
Lean mass index
ML:
Maximum likelihood
MUAC:
Mid-upper arm circumference
NTLP:
National Tuberculosis and Leprosy Program
TB:
Tuberculosis
TBRU:
Tuberculosis Research Unit
TST:
Tuberculin skin test
References
World Health Organization. (2016). Global tuberculosis report 2016. http://www.who.int/tb/publications/global_report/en/ Accessed 20 Feb 2017.
Gebreegziabher SB, Bjune GA, Yimer SA. Total delay is associated with unfavorable treatment outcome among pulmonary tuberculosis patients in west Gojjam zone, Northwest Ethiopia: a prospective cohort study. PLoS One. 2016;11(7):e0159579.
Van der Werf MJ, Chechulin Y, Yegorova OB, Marcinuk T, Stopolyanskiy A, Voloschuk V, Zlobinec M, Vassall A, Veen J, Hasker E, Turchenko LV. Health care seeking behaviour for tuberculosis symptoms in Kiev City, Ukraine. Int J Tuberc Lung Dis. 2006;10(4):390–5.
Paz-Soldan VA, Alban RE, Dimos Jones C, Powell AR, Oberhelman RA. Patient reported delays in seeking treatment for tuberculosis among adult and pediatric TB patients and TB patients co-infected with HIV in Lima, Peru: a qualitative study. Front Public Health. 2014;2:281.
Watkins RE, Plant AJ. Pathways to treatment for tuberculosis in Bali: patient perspectives. Qual Health Res. 2004;14(5):691–703.
Safer MA, Tharps QJ, Jackson TC, Levknthal H. Determinants of three stages of delay in seeking care at a medical clinic. Med Care. 1979;17(1):11–29.
Dobson CM, Russell AJ, Rubin GP. Patient delay in cancer diagnosis: what do we really mean and can we be more specific? BMC Health Serv Res. 2014;14(1):387.
Dick WP. Significance of symptoms in diagnosis of pulmonary tuberculosis. Br Med J. 1946;1(4449):571.
Demissie M, Lindtjorn B, Berhane Y. Patient and health service delay in the diagnosis of pulmonary tuberculosis in Ethiopia. BMC Public Health. 2002;2(1):23.
Nair DM, George A, Chacko KT. Tuberculosis in Bombay: new insights from poor urban patients. Health Policy Plan. 1997;12(1):77–85.
Hannay DR. The symptom iceberg. In: A study of community health. London: Boston and Henley; 1979.
Giordano TP, Soini H, Teeter LD, Adams GJ, Musser JM, Graviss EA. Relating the size of molecularly defined clusters of tuberculosis to the duration of symptoms. Clin Infect Dis. 2004;38(1):10–6.
Ayuo PO, Diero LO, Owino-Ong'or WD, Mwangi AW. Causes of delay in diagnosis of pulmonary tuberculosis in patients attending a referral hospital in western Kenya. East Afr Med J. 2008;85(6):263–8.
Laohasiriwong W, Mahato RK, Koju R, Vaeteewootacharn K. Delay for first consultation and its associated factors among new pulmonary tuberculosis patients of Central Nepal. Tuberc Res Treat. 2016; https://doi.org/10.1155/2016/4583871.
Sreeramareddy CT, Qin ZZ, Satyanarayana S, Subbaraman R, Pai M. Delays in diagnosis and treatment of pulmonary tuberculosis in India: a systematic review. Int J Tuberc Lung Dis. 2014;18(3):255–66.
Takarinda KC, Harries AD, Nyathi B, Ngwenya M, Mutasa-Apollo T, Sandy C. Tuberculosis treatment delays and associated factors within the Zimbabwe national tuberculosis programme. BMC Public Health. 2015;15(1):29.
Stock R. Distance and the utilization of health facilities in rural Nigeria. Soc Sci Med. 1983;17(9):563–70.
World Health Organization (WHO). Global Tuberculosis Control: Surveillance, Planning and Financing. WHO/HTM/TB/2006.362. Geneva: WHO; 2006.
Virenfeldt J, Rudolf F, Camara C, Furtado A, Gomes V, Aaby P, Petersen E, Wejse C. Treatment delay affects clinical severity of tuberculosis: a longitudinal cohort study. BMJ Open. 2014;4(6):e004818.
Allen DW. GIS tutorial 2: spatial analysis workbook. New York: ESRI Press; 2010.
Stein C, Hall NB, Malone L, Mupere E. The household contact study design for genetic epidemiological studies of infectious diseases. Front Genet. 2013;4:61.
Uganda Bureau of Statistics [Internet] [cited 2010 10/27/2010]. Available from: http://www.ubos.org/.
OpenStreetMap contributors. (2015) Planet dump [Data file from April 2016 of database dump]. Retrieved from https://planet.openstreetmap.org
ESRI. ArcGIS Desktop Help, Release 9.2. Redlands, CA, USA: Environmental Systems Research Institute; 2007.
Andersen RM. Revisiting the behavioral model and access to medical care: does it matter? J Health Soc Behav. 1995;36(1):1–10.
Guwatudde D, Nakakeeto M, Jones-Lopez EC, Maganda A, Chiunda A, Mugerwa RD, Ellner JJ, Bukenya G, Whalen CC. Tuberculosis in household contacts of infectious cases in Kampala, Uganda. Am J Epidemiol. 2003;158(9):887–98.
Péus D, Newcomb N, Hofer S. Appraisal of the Karnofsky performance status and proposal of a simple algorithmic system for its evaluation. BMC Med Inform Decis Mak. 2013;13(1):72.
Harvey AC. Estimating regression models with multiplicative heteroscedasticity. Econometrica. 1976;44(3):461–5.
Schuirmann DJ. A comparison of the two one-sided tests procedure and the power approach for assessing the equivalence of average bioavailability. J Pharmacokinet Pharmacodyn. 1987;15(6):657–80.
Miller RG Jr. Beyond ANOVA: basics of applied statistics: CRC Press; 1997.
StataCorp. Stata statistical software: release 13. College Station, TX: StataCorp LP; 2013.
Mesfin MM, Newell JN, Walley JD, Gessessew A, Madeley RJ. Delayed consultation among pulmonary tuberculosis patients: a cross sectional study of 10 DOTS districts of Ethiopia. BMC Public Health. 2009;9(1):53.
Godfrey-Faussett P, Kaunda H, Kamanga J, Van Beers S, Van Cleeff M, Kumwenda-Phiri R, Tihon V. Why do patients with a cough delay seeking care at Lusaka urban health centres? A health systems research approach. Int J Tuberc Lung Dis. 2002;6(9):796–805.
Basnet R, Hinderaker SG, Enarson D, Malla P, Mørkve O. Delay in the diagnosis of tuberculosis in Nepal. BMC Public Health. 2009;9(1):236.
Phibbs CS, Luft HS. Correlation of travel time on roads versus straight line distance. Med Care Res Rev. 1995;52(4):532–42.
Lin X, Chongsuvivatwong V, Geater A, Lijuan R. The effect of geographical distance on TB patient delays in a mountainous province of China. Int J Tuberc Lung Dis. 2008;12(3):288–93.
Gokce C, Gokce O, Erdogmus Z, Arisoy E, Arisoy S, Koldas O, Altinisik ME, Tola M, Goral F, Asikoglu H, Arslan N. Problems in running a tuberculosis dispensary in a developing country: Turkey. Tubercle. 1991;72(4):268–76.
Storla DG, Yimer S, Bjune GA. A systematic review of delay in the diagnosis and treatment of tuberculosis. BMC Public Health. 2008;8(1):15.
Kiwuwa MS, Charles K, Harriet MK. Patient and health service delay in pulmonary tuberculosis patients attending a referral hospital: a cross-sectional study. BMC Public Health. 2005;5(1):122.
Rudolf F, Haraldsdottir TL, Mendes MS, Wagner AJ, Gomes VF, Aaby P, Østergaard L, Eugen-Olsen J, Wejse C. Can tuberculosis case finding among health-care seeking adults be improved? Observations from Bissau. Int J Tuberc Lung Dis. 2014;18(3):277–85.
Ayles H, Schaap A, Nota A, Sismanidis C, Tembwe R, De Haas P, Muyoyeta M, Beyers N. Prevalence of tuberculosis, HIV and respiratory symptoms in two Zambian communities: implications for tuberculosis control in the era of HIV. PLoS One. 2009;4(5):e5602.
We would also like to acknowledge the contributions made by senior physicians, medical officers, health visitors, laboratory and data personnel: Dr. Lorna Nshuti, Dr. Roy Mugerwa, Dr. Alphonse Okwera, Dr. Deo Mulindwa, Dr. Christopher Whalen, Denise Johnson, Allan Chiunda, Mark Breda, Dennis Dobbs, Mary Rutaro, Albert Muganda, Richard Bamuhimbisa, Yusuf Mulumba, Deborah Nsamba, Barbara Kyeyune, Faith Kintu, Gladys Mpalanyi, Janet Mukose, Grace Tumusiime, Pierre Peters, Annet Kawuma, Saidah Menya, Joan Nassuna, Keith Chervenak, Karen Morgan, Alfred Etwom, Micheal Angel Mugerwa, and Lisa Kucharski. We would like to acknowledge Dr. Francis Adatu Engwau, former Head of the Uganda National Tuberculosis and Leprosy Program, for supporting this project. We would like to acknowledge the medical officers, nurses and counselors at the National Tuberculosis Treatment Centre, Mulago Hospital, the Ugandan National Tuberculosis and Leprosy Program and the Uganda Tuberculosis Investigation Bacteriological Unit, Wandegeya, for their contributions to this study.
Map data copyrighted OpenStreetMap contributors and available from https://www.openstreetmap.org.
Funding for this work was provided by the Tuberculosis Research Unit (grant N01-AI95383 and HHSN266200700022C/ N01-AI70022 from the NIAID) and NIH National Heart Lung and Blood Institute Grant T32HL007567.
Department of Population and Quantitative Health Sciences, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
Kyle Fluegge
Present address: Office of Strategic Data Use, New York City Department of Health and Mental Hygiene, 42-09 28th Street, Long Island City, NY, 11101-4132, USA
Present address: Institute of Health and Environmental Research, Cleveland, OH, 44118, USA
Tuberculosis Research Unit, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
LaShaunda L. Malone & W. Henry Boom
Case Western Reserve University Research Collaboration, Kampala, Uganda
Mary Nsereko, Brenda Okware, Hussein Kisingo, W. Henry Boom & Catherine M. Stein
Department of Infectious Diseases, Institute for Clinical Medicine / Center for Global Health, Department of Public Health, Aarhus University, Aarhus, Denmark
Christian Wejse
Department of Pediatrics and Child Health College of Health Sciences, Makerere University, Kampala, Uganda
Ezekiel Mupere
Department of Population and Quantitative Health Sciences and Tuberculosis Research Unit, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
Catherine M. Stein
LaShaunda L. Malone
Mary Nsereko
Brenda Okware
Hussein Kisingo
W. Henry Boom
KF designed the study, conducted the data analysis, conducted the literature review, adapted the TBscore for this study, and wrote all sections of the paper. LLM supervised data quality and control of the study, created the analysis dataset, and edited all sections of the paper. MN and BO clinically characterized all of the study subjects, and supervised the field activities and data quality assurance and control. CW helped adapt the TBscore for this study and helped write sections of the paper. HK collated the GIS waypoints and was involved in data management. EM helped supervise field activities of the study, helped adapt the TBscore for this study, and helped write the paper. WHB designed the study and directed its implementation, and helped write the paper. CMS helped design the study and directed its implementation, helped design the study's analytic strategy including adaptation of the TBscore, and helped write the paper. All authors read and approved the final manuscript.
Correspondence to Kyle Fluegge.
Ethical approval for the research was provided to the TBRU based in Case Western Reserve University and received from Institutional Review Boards at University Hospitals of Cleveland in Cleveland Ohio, USA and Uganda Council for Science and Technology in Kampala, Uganda. Participant consent was written.
Adaptation and analysis of the TBscore in the Kawempe Community Health Cohort Study. (DOCX 30 kb)
Fluegge, K., Malone, L.L., Nsereko, M. et al. Impact of geographic distance on appraisal delay for active TB treatment seeking in Uganda: a network analysis of the Kawempe Community Health Cohort Study. BMC Public Health 18, 798 (2018). https://doi.org/10.1186/s12889-018-5648-6
Healthcare access
Treatment delay | CommonCrawl |
MCOM Theory Final
lbrezinski13
Foucault *
The French philosopher whose 1969 essay "What Is an Author?" was written in response to Barthes's "The Death of the Author"; it argued that the concept of "author" did not always exist and probably would pass out of relevance.
Uses the concept of the author function rather than just the author. We adapt this concept as a means of thinking about the producer function.
The producer function is a set of beliefs that lead us to have certain expectations about a work with regard to the status of its producer. (ex: when we speak of a Nike ad, we attribute the producer function to Nike because the corporation, and not the actual creative director of the ad is the entity that owns and appears to speak through the work)
Realism *
Typically refers to a set of conventions or a style of art or representation that is understood at a given historical moment to accurately represent nature or the real, or to convey and interpret accurate or universal meanings about people, objects, and events of the world.
Origins of mass society
"Mass society describes social formations in Europe and United States that began during the early period of industrialization and culminated after World War 2. The rise of mass culture is usually characterized much like modernity: with the increased industrialization and mechanization of modern society, populations consolidated around urban centers" (Sturken and Cartwright 224).
Convergence
A combination of media together into one point of access or one conglomerate form. Ex: the combination of a telephone, wireless email, camera, etc. all in one device.
Frankfurt school
A group of scholars and social theorists, working first in Germany in the 1930s and then primarily in the U.S., who were interested in applying Marxist theory to the new forms of cultural production and social life in twentieth-century capitalist societies.
Key figures associated with the Frankfurt School were Theodor Adorno, Max Horkheimer, Herbert Marcuse, and Jürgen Habermas.
Created "medium is the message." The medium for which the message is sent influences the interpretation of the message. Developed the theory "global village" which predicted the internet. The idea that we'd become one interconnected society.
The Public Sphere
Social/virtual space in which citizens come together to debate and discuss the pressing issues of their society. Habermas defined this as an ideal space in which well-informed citizens would discuss matters of common, not private, interest.
Jürgen Habermas
German sociologist and philosopher. Formulated the concept of the public sphere. Also associated with the Frankfurt School.
Consumer Society
Emerged in the late 19th and early 20th centuries with the rise of mass production, the industrial revolution, and the consolidation of populations. Constant demand for new products; characteristics of goods constantly changing.
Commodification
The process by which material objects are turned into marketable goods with exchange value (Marxist theory).
Pierre Bourdieu
Sociologist and philosopher from France. Developed the theory of cultural capital and theorized about power in society. Also popularized the term habitus. As far as cultural capital goes, he identified different forms of capital in addition to economic capital (material wealth and access to material goods), including social capital (whom you know and your social network), symbolic capital (prestige, celebrity, honors), and cultural capital, which refers to the forms of cultural knowledge that give you social advantages.
Cultural Capital
The forms of cultural knowledge that give you social advantages. Can come in the form of rare taste, connoisseurship, and a competence in deciphering cultural relations and artifacts.
Commodity Fetishism
The process through which commodities are emptied of the meaning of their production, and instead filled with abstract meaning.
Pop Art
Movement in the 1950s-60s that used the images and materials of popular or "low" culture for art. Took mass-culture objects and reworked them as art objects and in paintings. Ex: Andy Warhol.
Surrealism
Focused on the role of the unconscious in representation and on dismantling the opposition between the real and the imaginary. Ex: Salvador Dali, Rene Magritte.
Abstraction
In art, a non-representational set of styles that respectively focus on material and formal qualities (composition, shape, color, line, texture); in advertising, the term is used to describe the fantasy world, separated out from reality, that is created by ads.
Marketing Social Awareness
Understanding how you react to social situations and effectively modifying your interaction (in relation to marketing, understanding that you're being marketed to).
Jean Baudrillard
French sociologist, philosopher, and cultural theorist. He is best known for his analysis of media, contemporary culture, and technological communication, as well as his formulation of concepts such as simulation and hyperreality.
Simulacrum
Used by Baudrillard to refer to a sign that does not clearly have a real-life counterpart, referent, or precedent. (Ex: to simulate a disease was to acquire its symptoms, thus making it difficult to distinguish between the simulation and the actual disease.)
Fredric Jameson
describes postmodernism as a historical period that is the cultural outcome of the logic of late capitalism
Postmodernism
A term used to capture life during a period marked by radical transformation of the social, economic, and political aspects of modernity. Has been characterized as a critique of modernist concepts such as universalism, the idea of presence, the traditional notion of the subject as unified and self-aware, and faith in progress.
High Culture/Low Culture
Both terms have traditionally been used to make distinctions between different kinds of culture. High culture designates culture that only an elite can appreciate, such as classical art, music and literature, as opposed to commercially produced mass culture presumed to be accessible to the lower classes.
Irony
The deliberate contradiction between the literal meaning and the intended meaning.
Parody
Cultural productions that make fun of more serious works through humor and satire while maintaining some of their elements, such as plot or character.
Pastiche
A style of plagiarizing, quoting, and borrowing from previous styles with no reference to history or a sense of rules. (Ex: in architecture, a mixing of classical motifs with modern elements in an aesthetic that does not reference the historical meanings of those styles.)
Reflexivity
The practice of making viewers aware of the material and technical means of production by featuring those aspects as the "content" of cultural production. Both a part of the tradition of modernism, with its emphasis on form and structure, and of postmodernism, with its array of intertextual references and ironic marking of the frame of the image and its status as a cultural product. Prevents viewers from being completely absorbed in the illusion of an experience of a film or image, distancing viewers from that experience.
Physiognomy
Interpreting the outward appearance and configuration of the body (the face in particular). In the 1900s, it was used in art, criminal investigations and social classification.
Biopower
A term used by Foucault to describe technologies used to subjugate and control human bodies. Example: footbinding in China.
Discourse
The socially organized process of talking about a particular subject matter. According to Foucault, it is a body of knowledge that both defines and limits what can be said about something. The term tends to be used for broad bodies of social knowledge. Discourses are specific to particular social and historical contexts, and they change over time. It is fundamental to Foucault's theory that discourses produce certain kinds of subjects and knowledge and that we occupy, to varying degrees, the subject positions defined within a broad array of discourses.
Genetic Mapping
Used to locate and identify the gene or group of genes that determines a particular inherited trait. One of the key influences on scientific and postmodern concepts of the body has been the Human Genome Project, a global scientific endeavor that aims to create a complete genetic map of the human genome.
Globalization
Related to the rise of communications technologies, economic interdependence, and conceptions of universal human rights; sometimes considered more market- than culture-driven. Largely understood as a 20th-century phenomenon: discourses began circulating in the late 70s and escalated in the early 90s; facilitated by increased rates of migration, the rise of multinational corporations, the globalization of capital and financial networks, the development of global communications and transportation systems, the perceived decline of the sovereign nation-state, and the formation of communities not bound by geography (web-based communities). Impacted the distribution, exhibition, and production of art.
The Whole Earth image (the most famous picture of earth by itself in space) was the product of the U.S imperial mission in space, which was fueled by the Cold War in its space race with the Soviet Union, and prompted popular discourse about world unity and the idea of "one world". It became an icon of the peace movement, symbolizing global unity and harmony. Satellites changed perspectives on visualizing the earth. Satellites were mean to spy on other nations, as a means of transmission for TV and news, and most telecommunications like cellphones. A form of satellite panopticon also evolved during the Cold War, where countries were able to view other countries. Example: Cuban missile crisis was discovered by the U.S with aerial photos as proof that Cuba had missile silos. Viewing the weather also changed by changing the perspective of looking down, instead of looking up at the sky. This changed our relationship in regard to knowledge about ourselves and the world around us. Google Earth changed the perspective of how we see the world and places all around the world.
Cultural Imperialism
Refers to how ways of life and cultural aspects are exported into other territories (influencing other cultures or asserting dominance over them) through cultural products and popular culture. Examples: film and music.
Diaspora
The existence of different communities, usually sharing a particular ethnicity, culture, or nationality, scattered across different places outside of their land of origin. Example: there are large diasporic communities of South Asians living throughout England and the United States.
Bond Women
An example of globalization in the James Bond series: as global power changed, the films' visual culture and values changed with it. European beauty norms had become globalized, and Bond's partners came to be shown as more powerful and more culturally varied; his desire began focused on European women, but as culture globalized it shifted toward women of other ethnicities (Asian, Black).
Cosmopolitanism
Being a citizen of the world and having an identity that is more broadly defined than in a provincial or national context. Used today most often in relation to theories of globalization.
Semiotics
The ways in which things are vehicles for meaning. A tool for analyzing cultural signs and how their meaning is produced. The study of signs, both visual and linguistic.
Connotation
All the social, cultural, and historical meanings that are added to a sign's literal meaning. Connotations rely on the cultural and historical context of the image and its viewers' lived, felt knowledge of those circumstances.
Denotation
In semiotics, the literal, face-value meaning of a sign. Think of a dictionary definition, as opposed to a sign's connotations.
Sign
A semiotic term describing the relationship between a vehicle of meaning, such as a word, image, or object, and its specific meaning in a particular context. Can be interpreted.
Signified
The mental concept of the referent, which together with the signifier makes the sign.
Signifier
The word, image, or object within a sign that conveys meaning.
Kitsch
Objects that are trite, cheaply sentimental, and formulaic. A good example is the lava lamp: it was seen as gaudy on its original release, but once it made its comeback in the 60s it was seen as a cool and in-style thing to have, and then it went right back to being gaudy in the 80s.
Interpellation
The way in which images call out to people, inviting viewers to imagine themselves as the one the image is addressing. In popular culture, it refers to the ways that cultural products address their consumers and recruit them into a particular ideological position.
Potassium, 19K
Chemical element, symbol K, atomic number 19.
Pronunciation: /pəˈtæsiəm/ (pə-TASS-ee-əm)
Appearance: silvery gray (sample pictured: potassium pearls in paraffin oil, ~5 mm each)
Standard atomic weight Ar,std(K): 39.0983(1)[1]
Position in the periodic table: group 1 (hydrogen and alkali metals), s-block; argon ← potassium → calcium
Atomic number (Z): 19
Electron configuration: [Ar] 4s1
Electrons per shell: 2, 8, 8, 1
Phase at STP: solid
Melting point: 336.7 K (63.5 °C, 146.3 °F)
Boiling point: 1032 K (759 °C, 1398 °F)
Density (near r.t.): 0.89 g/cm3; when liquid (at m.p.): 0.828 g/cm3
Critical point: 2223 K, 16 MPa[2]
Heat of vaporization: 76.9 kJ/mol
Molar heat capacity: 29.6 J/(mol·K)
Oxidation states: −1, +1 (a strongly basic oxide)
Ionization energies: 1st: 418.8 kJ/mol; 2nd: 3052 kJ/mol; 3rd: 4420 kJ/mol
Covalent radius: 203±12 pm
Natural occurrence: primordial
Crystal structure: body-centered cubic (bcc)
Speed of sound: 2000 m/s (at 20 °C)
Thermal expansion: 83.3 µm/(m·K) (at 25 °C)
Thermal conductivity: 102.5 W/(m·K)
Electrical resistivity: 72 nΩ·m (at 20 °C)
Magnetic ordering: paramagnetic[3]
Molar magnetic susceptibility: +20.8×10−6 cm3/mol (298 K)[4]
Young's modulus: 3.53 GPa
Brinell hardness: 0.363 MPa
Discovery and first isolation: Humphry Davy (1807)
Name origin: "K" from New Latin kalium
Main isotopes of potassium:
39K: abundance 93.258%, stable
40K: abundance 0.012%, half-life 1.248×109 y; decays by β− to 40Ca, by ε to 40Ar, and by β+ to 40Ar
41K: abundance 6.730%, stable
Potassium is a chemical element with the symbol K (from Neo-Latin kalium) and atomic number 19. Potassium is a silvery-white metal that is soft enough to be cut with a knife with little force.[5] Potassium metal reacts rapidly with atmospheric oxygen to form flaky white potassium peroxide in only seconds of exposure. It was first isolated from potash, the ashes of plants, from which its name derives. In the periodic table, potassium is one of the alkali metals, all of which have a single valence electron in the outer electron shell, which is easily removed to create an ion with a positive charge, a cation, which combines with anions to form salts. Potassium in nature occurs only in ionic salts. Elemental potassium reacts vigorously with water, generating sufficient heat to ignite the hydrogen emitted in the reaction, and burning with a lilac-colored flame. It is found dissolved in sea water (which is 0.04% potassium by weight[6][7]), and occurs in many minerals such as orthoclase, a common constituent of granites and other igneous rocks.[8]
Potassium is chemically very similar to sodium, the previous element in group 1 of the periodic table. They have a similar first ionization energy, which allows each atom to give up its sole outer electron. It was suspected in 1702 that they were distinct elements that combine with the same anions to make similar salts,[9] and this was proven in 1807 using electrolysis. Naturally occurring potassium is composed of three isotopes, of which 40K is radioactive. Traces of 40K are found in all potassium, and it is the most common radioisotope in the human body.
Potassium ions are vital for the functioning of all living cells. The transfer of potassium ions across nerve cell membranes is necessary for normal nerve transmission; potassium deficiency and excess can each result in numerous signs and symptoms, including an abnormal heart rhythm and various electrocardiographic abnormalities. Fresh fruits and vegetables are good dietary sources of potassium. The body responds to the influx of dietary potassium, which raises serum potassium levels, with a shift of potassium from outside to inside cells and an increase in potassium excretion by the kidneys.
Most industrial applications of potassium exploit the high solubility in water of potassium compounds, such as potassium soaps. Heavy crop production rapidly depletes the soil of potassium, and this can be remedied with agricultural fertilizers containing potassium, accounting for 95% of global potassium chemical production.[10]
The English name for the element potassium comes from the word potash,[11] which refers to an early method of extracting various potassium salts: placing in a pot the ash of burnt wood or tree leaves, adding water, heating, and evaporating the solution. When Humphry Davy first isolated the pure element using electrolysis in 1807, he named it potassium, which he derived from the word potash.
The symbol K stems from kali, itself from the root word alkali, which in turn comes from Arabic: القَلْيَه al-qalyah 'plant ashes'. In 1797, the German chemist Martin Klaproth discovered "potash" in the minerals leucite and lepidolite, and realized that "potash" was not a product of plant growth but actually contained a new element, which he proposed calling kali.[12] In 1807, Humphry Davy produced the element via electrolysis: in 1809, Ludwig Wilhelm Gilbert proposed the name Kalium for Davy's "potassium".[13] In 1814, the Swedish chemist Berzelius advocated the name kalium for potassium, with the chemical symbol K.[14]
The English and French-speaking countries adopted Davy and Gay-Lussac/Thénard's name Potassium, whereas the Germanic countries adopted Gilbert/Klaproth's name Kalium.[15] The "Gold Book" of the International Union of Pure and Applied Chemistry has designated the official chemical symbol as K.[16]
The flame test of potassium.
Potassium is the second least dense metal after lithium. It is a soft solid with a low melting point, and can be easily cut with a knife. Freshly cut potassium is silvery in appearance, but it begins to tarnish toward gray immediately on exposure to air.[17] In a flame test, potassium and its compounds emit a lilac color with a peak emission wavelength of 766.5 nanometers.[18]
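As a quick numerical aside, the 766.5 nm peak quoted above corresponds to a photon energy of roughly 1.6 eV. The snippet below is a minimal worked check of that conversion using E = hc/λ; the constants are standard CODATA values, and the calculation itself is an illustration, not part of the source text.

```python
# Photon energy at potassium's flame-test emission peak (766.5 nm).
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

wavelength_m = 766.5e-9
energy_J = h * c / wavelength_m
print(f"{energy_J:.3e} J = {energy_J / eV:.3f} eV")  # about 1.617 eV
```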
Neutral potassium atoms have 19 electrons, one more than the configuration of the noble gas argon. Because of its low first ionization energy of 418.8 kJ/mol, the potassium atom is much more likely to lose the last electron and acquire a positive charge, although negatively charged alkalide K− ions are not impossible.[19] In contrast, the second ionization energy is very high (3052 kJ/mol).
Potassium reacts with oxygen, water, and carbon dioxide components in air. With oxygen it forms potassium peroxide. With water potassium forms potassium hydroxide. The reaction of potassium with water can be violently exothermic, especially since the coproduced hydrogen gas can ignite. Because of this, potassium and the liquid sodium-potassium (NaK) alloy are potent desiccants, although they are no longer used as such.[20]
Structure of solid potassium superoxide (KO2).
Three oxides of potassium are well studied: potassium oxide (K2O), potassium peroxide (K2O2), and potassium superoxide (KO2).[21] These binary potassium–oxygen compounds react with water to form potassium hydroxide.
Potassium hydroxide (KOH) is a strong base. Illustrating its hydrophilic character, as much as 1.21 kg of KOH can dissolve in a single liter of water.[22][23] Anhydrous KOH is rarely encountered. KOH reacts readily with carbon dioxide to produce potassium carbonate and in principle could be used to remove traces of the gas from air. Like the closely related sodium hydroxide, potassium hydroxide reacts with fats to produce soaps.
In general, potassium compounds are ionic and, owing to the high hydration energy of the K+ ion, have excellent water solubility. The main species in water solution are the aquated complexes [K(H2O)n]+ where n = 6 and 7.[24] Potassium heptafluorotantalate is an intermediate in the purification of tantalum from the otherwise persistent contaminant niobium.[25]
Organopotassium compounds illustrate nonionic compounds of potassium. They feature highly polar covalent K–C bonds. Examples include benzyl potassium. Potassium intercalates into graphite to give a variety of compounds, including KC8.
There are 25 known isotopes of potassium, three of which occur naturally: 39K (93.3%), 40K (0.0117%), and 41K (6.7%). Naturally occurring 40K has a half-life of 1.250×109 years. It decays to stable 40Ar by electron capture or positron emission (11.2%) or to stable 40Ca by beta decay (88.8%).[26] The decay of 40K to 40Ar is the basis of a common method for dating rocks. The conventional K-Ar dating method depends on the assumption that the rocks contained no argon at the time of formation and that all the subsequent radiogenic argon (40Ar) was quantitatively retained. Minerals are dated by measurement of the concentration of potassium and the amount of radiogenic 40Ar that has accumulated. The minerals best suited for dating include biotite, muscovite, metamorphic hornblende, and volcanic feldspar; whole rock samples from volcanic flows and shallow intrusives can also be dated if they are unaltered.[26][27] Apart from dating, potassium isotopes have been used as tracers in studies of weathering and for nutrient cycling studies because potassium is a macronutrient required for life.[28]
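To make the dating method concrete, the sketch below applies the standard K-Ar age equation, t = (1/λ) ln(1 + (40Ar*/40K)/B), where λ comes from the 1.250×10^9-year half-life quoted above and B ≈ 0.112 is the fraction of 40K decays that yield 40Ar. The measured ratio is a made-up example, not real data, and real laboratories apply corrections this sketch omits.

```python
import math

HALF_LIFE_40K_YEARS = 1.250e9  # half-life of 40K, from the text
DECAY_CONST = math.log(2) / HALF_LIFE_40K_YEARS
BRANCH_TO_AR = 0.112           # fraction of decays producing 40Ar (electron capture / beta+)

def k_ar_age_years(ar40_per_k40: float) -> float:
    """Age from the measured radiogenic-40Ar to remaining-40K atom ratio."""
    return math.log(1.0 + ar40_per_k40 / BRANCH_TO_AR) / DECAY_CONST

# Hypothetical rock with 0.01 radiogenic 40Ar atoms per 40K atom:
print(f"{k_ar_age_years(0.01):.2e} years")  # roughly 1.5e8 years
```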
40K occurs in natural potassium (and thus in some commercial salt substitutes) in sufficient quantity that large bags of those substitutes can be used as a radioactive source for classroom demonstrations. 40K is the radioisotope with the largest abundance in the body. In healthy animals and people, 40K represents the largest source of radioactivity, greater even than 14C. In a human body of 70 kg mass, about 4,400 nuclei of 40K decay per second.[29] The activity of natural potassium is 31 Bq/g.[30]
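The two figures just quoted, about 4,400 decays per second in a 70 kg person and 31 Bq/g for natural potassium, are mutually consistent and can be rederived from the half-life and isotopic abundance. A rough sketch; the 140 g whole-body potassium figure is an assumption (about 0.2% of body mass, matching the abundance figures given later in the text):

```python
import math

AVOGADRO = 6.02214076e23
MOLAR_MASS_K = 39.0983            # g/mol
ABUNDANCE_40K = 0.000117          # atom fraction of 40K in natural potassium
HALF_LIFE_S = 1.248e9 * 3.1557e7  # 40K half-life converted to seconds

decay_const = math.log(2) / HALF_LIFE_S
atoms_40k_per_g = ABUNDANCE_40K * AVOGADRO / MOLAR_MASS_K
activity_bq_per_g = decay_const * atoms_40k_per_g
print(f"specific activity of natural K: {activity_bq_per_g:.1f} Bq/g")  # ~31 Bq/g

body_k_grams = 140.0  # assumed total potassium in a 70 kg adult (~0.2% of body mass)
print(f"whole-body 40K activity: {activity_bq_per_g * body_k_grams:.0f} Bq")  # ~4,400 Bq
```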
Potassium in feldspar
Potassium is formed in supernovae by nucleosynthesis from lighter atoms. Potassium is principally created in Type II supernovae via an explosive oxygen-burning process.[31] 40K is also formed in s-process nucleosynthesis and the neon burning process.[32]
Potassium is the 20th most abundant element in the solar system and the 17th most abundant element by weight in the Earth. It makes up about 2.6% of the weight of the earth's crust and is the seventh most abundant element in the crust.[33] The potassium concentration in seawater is 0.39 g/L[6] (0.039 wt/v%), about one twenty-seventh the concentration of sodium.[34][35]
Potash is primarily a mixture of potassium salts because plants have little or no sodium content, and the rest of a plant's major mineral content consists of calcium salts of relatively low solubility in water. While potash has been used since ancient times, its composition was not understood. Georg Ernst Stahl obtained experimental evidence that led him to suggest the fundamental difference of sodium and potassium salts in 1702,[9] and Henri Louis Duhamel du Monceau was able to prove this difference in 1736.[36] The exact chemical composition of potassium and sodium compounds, and the status as chemical element of potassium and sodium, was not known then, and thus Antoine Lavoisier did not include the alkali in his list of chemical elements in 1789.[37][38]

For a long time the only significant applications for potash were the production of glass, bleach, soap and gunpowder as potassium nitrate.[39] Potassium soaps from animal fats and vegetable oils were especially prized because they tend to be more water-soluble and of softer texture, and are therefore known as soft soaps.[10]

The discovery by Justus Liebig in 1840 that potassium is a necessary element for plants and that most types of soil lack potassium[40] caused a steep rise in demand for potassium salts. Wood-ash from fir trees was initially used as a potassium salt source for fertilizer, but, with the discovery in 1868 of mineral deposits containing potassium chloride near Staßfurt, Germany, the production of potassium-containing fertilizers began at an industrial scale.[41][42][43] Other potash deposits were discovered, and by the 1960s Canada became the dominant producer.[44][45]
Sir Humphry Davy
Pieces of potassium metal
Potassium metal was first isolated in 1807 by Humphry Davy, who derived it by electrolysis of molten KOH with the newly discovered voltaic pile. Potassium was the first metal that was isolated by electrolysis.[46] Later in the same year, Davy reported extraction of the metal sodium from a mineral derivative (caustic soda, NaOH, or lye) rather than a plant salt, by a similar technique, demonstrating that the elements, and thus the salts, are different.[37][38][47][48] Although the production of potassium and sodium metal should have shown that both are elements, it took some time before this view was universally accepted.[38]
Because of the sensitivity of potassium to water and air, air-free techniques are normally employed for handling the element. It is unreactive toward nitrogen and saturated hydrocarbons such as mineral oil or kerosene.[49] It readily dissolves in liquid ammonia, up to 480 g per 1000 g of ammonia at 0 °C. Depending on the concentration, the ammonia solutions are blue to yellow, and their electrical conductivity is similar to that of liquid metals. Potassium slowly reacts with ammonia to form KNH2, but this reaction is accelerated by minute amounts of transition metal salts.[50] Because it can reduce the salts to the metal, potassium is often used as the reductant in the preparation of finely divided metals from their salts by the Rieke method.[51] Illustrative is the preparation of magnesium:
MgCl2 + 2 K → Mg + 2 KCl
Elemental potassium does not occur in nature because of its high reactivity. It reacts violently with water (see section Precautions below)[49] and also reacts with oxygen. Orthoclase (potassium feldspar) is a common rock-forming mineral. Granite, for example, contains 5% potassium, which is well above the average in the Earth's crust. Sylvite (KCl), carnallite (KCl·MgCl2·6(H2O)), kainite (MgSO4·KCl·3H2O) and langbeinite (MgSO4·K2SO4) are the minerals found in large evaporite deposits worldwide. The deposits often show layers starting with the least soluble at the bottom and the most soluble on top.[35] Deposits of niter (potassium nitrate) are formed by decomposition of organic material in contact with atmosphere, mostly in caves; because of the good water solubility of niter the formation of larger deposits requires special environmental conditions.[52]
Potassium is the eighth or ninth most common element by mass (0.2%) in the human body, so that a 60 kg adult contains a total of about 120 g of potassium.[53] The body has about as much potassium as sulfur and chlorine, and only calcium and phosphorus are more abundant (with the exception of the ubiquitous CHON elements).[54] Potassium ions are present in a wide variety of proteins and enzymes.[55]
Potassium levels influence multiple physiological processes, including:[56][57][58]
resting cellular-membrane potential and the propagation of action potentials in neuronal, muscular, and cardiac tissue (K+ ions are larger than Na+ ions, and because of their differing electrostatic and chemical properties, ion channels and pumps in cell membranes can differentiate between the two ions, actively pumping or passively passing one of the two while blocking the other)[59]
hormone secretion and action
vascular tone
systemic blood pressure control
acid–base homeostasis
glucose and insulin metabolism
mineralocorticoid action
renal concentrating ability
Potassium homeostasis denotes the maintenance of the total body potassium content, plasma potassium level, and the ratio of the intracellular to extracellular potassium concentrations within narrow limits, in the face of pulsatile intake (meals), obligatory renal excretion, and shifts between intracellular and extracellular compartments.
Plasma levels
Plasma potassium is normally kept at 3.5 to 5.0 millimoles (mmol) [or milliequivalents (mEq)] per liter by multiple mechanisms. Levels outside this range are associated with an increasing rate of death from multiple causes,[60] and some cardiac, kidney,[61] and lung diseases progress more rapidly if serum potassium levels are not maintained within the normal range.
An average meal of 40–50 mmol presents the body with more potassium than is present in all plasma (20–25 mmol). However, this surge causes the plasma potassium to rise only 10% at most as a result of prompt and efficient clearance by both renal and extra-renal mechanisms.[62]
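A rough sanity check on those numbers, using midpoints of the stated ranges (45 mmol per meal, 22.5 mmol total plasma potassium, a 10% maximum rise); all three inputs are assumptions drawn from the ranges above:

```python
meal_mmol = 45.0          # midpoint of the 40-50 mmol meal range
plasma_pool_mmol = 22.5   # midpoint of the 20-25 mmol plasma pool
max_observed_rise = 0.10  # plasma potassium rises at most ~10%

# If the whole meal stayed in plasma, the pool would triple:
print(f"naive rise with no clearance: {meal_mmol / plasma_pool_mmol:.0%}")
# The observed rise accounts for only a small fraction of the load:
retained = plasma_pool_mmol * max_observed_rise
print(f"fraction promptly cleared or shifted into cells: {1 - retained / meal_mmol:.0%}")  # ~95%
```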
Hypokalemia, a deficiency of potassium in the plasma, can be fatal if severe. Common causes are increased gastrointestinal loss (vomiting, diarrhea), and increased renal loss (diuresis).[63] Deficiency symptoms include muscle weakness, paralytic ileus, ECG abnormalities, decreased reflex response; and in severe cases, respiratory paralysis, alkalosis, and cardiac arrhythmia.[64]
Control mechanisms
Potassium content in the plasma is tightly controlled by four basic mechanisms, which have various names and classifications. The four are 1) a reactive negative-feedback system, 2) a reactive feed-forward system, 3) a predictive or circadian system, and 4) an internal or cell membrane transport system. Collectively, the first three are sometimes termed the "external potassium homeostasis system";[65] and the first two, the "reactive potassium homeostasis system".
The reactive negative-feedback system refers to the system that induces renal secretion of potassium in response to a rise in the plasma potassium (potassium ingestion, shift out of cells, or intravenous infusion.)
The reactive feed-forward system refers to an incompletely understood system that induces renal potassium secretion in response to potassium ingestion prior to any rise in the plasma potassium. This is probably initiated by gut cell potassium receptors that detect ingested potassium and trigger vagal afferent signals to the pituitary gland.
The predictive or circadian system increases renal secretion of potassium during mealtime hours (e.g. daytime for humans, nighttime for rodents) independent of the presence, amount, or absence of potassium ingestion. It is mediated by a circadian oscillator in the suprachiasmatic nucleus of the brain (central clock), which causes the kidney (peripheral clock) to secrete potassium in this rhythmic circadian fashion.
The action of the sodium-potassium pump is an example of primary active transport. The two carrier proteins embedded in the cell membrane on the left are using ATP to move sodium out of the cell against the concentration gradient; the two proteins on the right are using secondary active transport to move potassium into the cell. The pump itself is driven by ATP hydrolysis.
The ion transport system moves potassium across the cell membrane using two mechanisms. One is active and pumps sodium out of, and potassium into, the cell. The other is passive and allows potassium to leak out of the cell. Potassium and sodium cations influence fluid distribution between intracellular and extracellular compartments by osmotic forces. The movement of potassium and sodium through the cell membrane is mediated by the Na+/K+-ATPase pump.[66] This ion pump uses ATP to pump three sodium ions out of the cell and two potassium ions into the cell, creating an electrochemical gradient and electromotive force across the cell membrane. The highly selective potassium ion channels (which are tetramers) are crucial for hyperpolarization inside neurons after an action potential is triggered, to cite one example. The most recently discovered potassium ion channel is KirBac3.1, which makes a total of five potassium ion channels (KcsA, KirBac1.1, KirBac3.1, KvAP, and MthK) with a determined structure. All five are from prokaryotic species.[67]
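The gradients maintained by this pump are what set the resting membrane potential mentioned earlier. Below is a minimal sketch of the Nernst equilibrium potential for K+, assuming textbook-typical concentrations of about 150 mmol/L inside and 4.5 mmol/L outside the cell; those concentrations are illustrative assumptions, not values from this article.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # body temperature, K
F = 96485.0  # Faraday constant, C/mol
z = 1        # valence of K+

k_in_mmol_l = 150.0  # assumed intracellular [K+]
k_out_mmol_l = 4.5   # assumed extracellular [K+]

# E_K = (RT/zF) * ln([K+]out / [K+]in)
e_k_volts = (R * T) / (z * F) * math.log(k_out_mmol_l / k_in_mmol_l)
print(f"K+ equilibrium potential: {e_k_volts * 1000:.0f} mV")  # about -94 mV
```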
Renal filtration, reabsorption, and excretion
Renal handling of potassium is closely connected to sodium handling. Potassium is the major cation (positive ion) inside animal cells [150 mmol/L, (4.8 g)], while sodium is the major cation of extracellular fluid [150 mmol/L, (3.345 g)]. In the kidneys, about 180 liters of plasma is filtered through the glomeruli and into the renal tubules per day.[68] This filtering involves about 600 g of sodium and 33 g of potassium. Since only 1–10 g of sodium and 1–4 g of potassium are likely to be replaced by diet, renal filtering must efficiently reabsorb the remainder from the plasma.
Sodium is reabsorbed to maintain extracellular volume, osmotic pressure, and serum sodium concentration within narrow limits. Potassium is reabsorbed to maintain serum potassium concentration within narrow limits.[69] Sodium pumps in the renal tubules operate to reabsorb sodium. Potassium must be conserved, but because the amount of potassium in the blood plasma is very small and the pool of potassium in the cells is about 30 times as large, the situation is not so critical for potassium. Since potassium is moved passively[70][71] in counter flow to sodium in response to an apparent (but not actual) Donnan equilibrium,[72] the urine can never sink below the concentration of potassium in serum except sometimes by actively excreting water at the end of the processing. Potassium is excreted twice and reabsorbed three times before the urine reaches the collecting tubules.[73] At that point, urine usually has about the same potassium concentration as plasma. At the end of the processing, potassium is secreted one more time if the serum levels are too high.[citation needed]
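The 33 g/day figure quoted above can be recovered from the filtration volume and a mid-normal plasma level. A quick arithmetic check; the 4.5 mmol/L plasma concentration is an assumption taken from the normal range given earlier:

```python
FILTERED_PLASMA_L_PER_DAY = 180.0  # glomerular filtration, from the text
PLASMA_K_MMOL_PER_L = 4.5          # assumed mid-normal plasma potassium
MOLAR_MASS_K = 39.0983             # g/mol

filtered_g = FILTERED_PLASMA_L_PER_DAY * PLASMA_K_MMOL_PER_L * MOLAR_MASS_K / 1000.0
print(f"potassium filtered per day: {filtered_g:.0f} g")  # ~32 g, close to the quoted 33 g
```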
With no potassium intake, it is excreted at about 200 mg per day until, in about a week, potassium in the serum declines to a mildly deficient level of 3.0–3.5 mmol/L.[74] If potassium is still withheld, the concentration continues to fall until a severe deficiency causes eventual death.[75]
Potassium moves passively through pores in the cell membrane. When ions move through ion transporters (pumps), there is a gate in the pump on each side of the cell membrane, and only one gate can be open at once; as a result, approximately 100 ions are forced through per second. Ion channels have only one gate, and only one kind of ion can stream through, at 10 million to 100 million ions per second.[76] Calcium is required to open the pores,[77] although calcium may work in reverse by blocking at least one of the pores.[78] Carbonyl groups on the amino acids inside the pore mimic the water hydration that takes place in water solution[79] by the nature of the electrostatic charges on four carbonyl groups inside the pore.[80]
The U.S. National Academy of Medicine (NAM), on behalf of both the U.S. and Canada, sets Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs), or Adequate Intakes (AIs) for when there is not sufficient information to set EARs and RDAs. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes.
For both males and females under 9 years of age, the AIs for potassium are: 400 mg of potassium for 0-6-month-old infants, 860 mg of potassium for 7-12-month-old infants, 2,000 mg of potassium for 1-3-year-old children, and 2,300 mg of potassium for 4-8-year-old children.
For males 9 years of age and older, the AIs for potassium are: 2,500 mg of potassium for 9-13-year-old males, 3,000 mg of potassium for 14-18-year-old males, and 3,400 mg for males that are 19 years of age and older.
For females 9 years of age and older, the AIs for potassium are: 2,300 mg of potassium for 9-18-year-old females, and 2,600 mg of potassium for females that are 19 years of age and older.
For pregnant and lactating females, the AIs for potassium are: 2,600 mg of potassium for 14-18-year-old pregnant females, 2,900 mg for pregnant females that are 19 years of age and older; furthermore, 2,500 mg of potassium for 14-18-year-old lactating females, and 2,800 mg for lactating females that are 19 years of age and older. As for safety, the NAM also sets tolerable upper intake levels (ULs) for vitamins and minerals, but for potassium the evidence was insufficient, so no UL was established.[81][82]
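For reference, the AI values listed above can be collected into a small lookup table. A sketch with values transcribed from the text (mg/day); the key scheme is an arbitrary choice for illustration:

```python
# Adequate Intakes for potassium, mg/day, transcribed from the NAM figures above.
AI_POTASSIUM_MG = {
    ("any", "0-6 mo"): 400,
    ("any", "7-12 mo"): 860,
    ("any", "1-3 yr"): 2000,
    ("any", "4-8 yr"): 2300,
    ("male", "9-13 yr"): 2500,
    ("male", "14-18 yr"): 3000,
    ("male", "19+ yr"): 3400,
    ("female", "9-18 yr"): 2300,
    ("female", "19+ yr"): 2600,
    ("pregnant", "14-18 yr"): 2600,
    ("pregnant", "19+ yr"): 2900,
    ("lactating", "14-18 yr"): 2500,
    ("lactating", "19+ yr"): 2800,
}

print(AI_POTASSIUM_MG[("male", "19+ yr")])  # 3400
```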
As of 2004, most American adults consume less than 3,000 mg.[83]
Likewise, in the European Union, in particular in Germany and Italy, insufficient potassium intake is somewhat common.[84] The British National Health Service recommends a similar intake, saying that adults need 3,500 mg per day and that excess amounts may cause health problems such as stomach pain and diarrhoea.[85]
Previously the Adequate Intake for adults was set at 4,700 mg per day. In 2019, the National Academies of Sciences, Engineering, and Medicine revised the AI for potassium to 2,600 mg/day for females 19 years and older and 3,400 mg/day for males 19 years and older.[86]
Potassium is present in all fruits, vegetables, meat and fish. Foods with high potassium concentrations include yam, parsley, dried apricots, milk, chocolate, all nuts (especially almonds and pistachios), potatoes, bamboo shoots, bananas, avocados, coconut water, soybeans, and bran.[87]
The USDA lists tomato paste, orange juice, beet greens, white beans, potatoes, plantains, bananas, apricots, and many other dietary sources of potassium, ranked in descending order according to potassium content. A day's worth of potassium is in 5 plantains or 11 bananas.[88]
Diets low in potassium can lead to hypertension[89] and hypokalemia.
Supplements of potassium are most widely used in conjunction with diuretics that block reabsorption of sodium and water upstream from the distal tubule (thiazides and loop diuretics), because this promotes increased distal tubular potassium secretion, with resultant increased potassium excretion. A variety of prescription and over-the counter supplements are available. Potassium chloride may be dissolved in water, but the salty/bitter taste makes liquid supplements unpalatable.[90] Typical doses range from 10 mmol (400 mg), to 20 mmol (800 mg). Potassium is also available in tablets or capsules, which are formulated to allow potassium to leach slowly out of a matrix, since very high concentrations of potassium ion that occur adjacent to a solid tablet can injure the gastric or intestinal mucosa. For this reason, non-prescription potassium pills are limited by law in the US to a maximum of 99 mg of potassium.[citation needed]
Since the kidneys are the site of potassium excretion, individuals with impaired kidney function are at risk for hyperkalemia if dietary potassium and supplements are not restricted. The more severe the impairment, the more severe is the restriction necessary to avoid hyperkalemia.
A meta-analysis concluded that a 1640 mg increase in the daily intake of potassium was associated with a 21% lower risk of stroke.[91] Potassium chloride and potassium bicarbonate may be useful to control mild hypertension.[92] In 2017, potassium was the 37th most commonly prescribed medication in the United States, with more than 19 million prescriptions.[93][94]
Potassium can be detected by taste because it triggers three of the five types of taste sensations, according to concentration. Dilute solutions of potassium ions taste sweet, allowing moderate concentrations in milk and juices, while higher concentrations become increasingly bitter/alkaline, and finally also salty to the taste. The combined bitterness and saltiness of high-potassium solutions makes high-dose potassium supplementation by liquid drinks a palatability challenge.[90][95]
Sylvite from New Mexico
Monte Kali, a potash mining and beneficiation waste heap in Hesse, Germany, consisting mostly of sodium chloride.
Potassium salts such as carnallite, langbeinite, polyhalite, and sylvite form extensive evaporite deposits in ancient lake bottoms and seabeds,[34] making extraction of potassium salts in these environments commercially viable. The principal source of potassium – potash – is mined in Canada, Russia, Belarus, Kazakhstan, Germany, Israel, the United States, Jordan, and other places around the world.[96][97][98] The first mined deposits were located near Staßfurt, Germany, but the deposits span from Great Britain over Germany into Poland. They are located in the Zechstein and were deposited in the Middle to Late Permian. The largest deposits ever found lie 1,000 meters (3,300 feet) below the surface of the Canadian province of Saskatchewan. The deposits are located in the Elk Point Group produced in the Middle Devonian. Saskatchewan, where several large mines have operated since the 1960s, pioneered the technique of freezing wet sands (the Blairmore formation) to drive mine shafts through them. The main potash mining company in Saskatchewan until its merger was the Potash Corporation of Saskatchewan, now Nutrien.[99] The water of the Dead Sea is used by Israel and Jordan as a source of potash, while the concentration in normal oceans is too low for commercial production at current prices.[97][98]
Several methods are used to separate potassium salts from sodium and magnesium compounds. The most-used method is fractional precipitation using the solubility differences of the salts. Electrostatic separation of the ground salt mixture is also used in some mines. The resulting sodium and magnesium waste is either stored underground or piled up in slag heaps. Most of the mined potassium mineral ends up as potassium chloride after processing. The mineral industry refers to potassium chloride either as potash, muriate of potash, or simply MOP.[35]
Pure potassium metal can be isolated by electrolysis of its hydroxide in a process that has changed little since it was first used by Humphry Davy in 1807. Although the electrolysis process was developed and used on an industrial scale in the 1920s, the thermal method of reacting sodium with potassium chloride in a chemical equilibrium reaction became the dominant method in the 1950s.
The production of sodium potassium alloys is accomplished by changing the reaction time and the amount of sodium used in the reaction. The Griesheimer process employing the reaction of potassium fluoride with calcium carbide was also used to produce potassium.[35][100]
Na + KCl → NaCl + K (Thermal method)
2 KF + CaC2 → 2 K + CaF2 + 2 C (Griesheimer process)
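As a stoichiometric sketch of the thermal route, the masses below follow directly from the molar masses, assuming complete conversion; in practice the equilibrium is driven forward by continuously distilling off the more volatile potassium:

```python
M_NA, M_K, M_KCL = 22.990, 39.098, 74.551  # molar masses, g/mol

def feeds_per_kg_potassium() -> tuple[float, float]:
    """Na and KCl consumed per kg of K for: Na + KCl -> NaCl + K (1:1:1 molar)."""
    mol_k = 1000.0 / M_K
    return mol_k * M_NA / 1000.0, mol_k * M_KCL / 1000.0

na_kg, kcl_kg = feeds_per_kg_potassium()
print(f"per 1 kg K: {na_kg:.3f} kg Na and {kcl_kg:.3f} kg KCl")  # ~0.588 and ~1.907
```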
Reagent-grade potassium metal costs about $10.00/pound ($22/kg) in 2010 when purchased by the tonne. Lower purity metal is considerably cheaper. The market is volatile because long-term storage of the metal is difficult. It must be stored in a dry inert gas atmosphere or anhydrous mineral oil to prevent the formation of a surface layer of potassium superoxide, a pressure-sensitive explosive that detonates when scratched. The resulting explosion often starts a fire difficult to extinguish.[101][102]
Potassium is now quantified by ionization techniques, but at one time it was quantitated by gravimetric analysis.
Reagents used to precipitate potassium include sodium tetraphenylborate, hexachloroplatinic acid, and sodium cobaltinitrite, giving respectively potassium tetraphenylborate, potassium hexachloroplatinate, and potassium cobaltinitrite.[49] The reaction with sodium cobaltinitrite is illustrative:
3K+ + Na3[Co(NO2)6] → K3[Co(NO2)6] + 3Na+
The potassium cobaltinitrite is obtained as a yellow solid.
Potassium sulfate/magnesium sulfate fertilizer
Potassium ions are an essential component of plant nutrition and are found in most soil types.[10] They are used as a fertilizer in agriculture, horticulture, and hydroponic culture in the form of chloride (KCl), sulfate (K2SO4), or nitrate (KNO3), representing the 'K' in 'NPK'. Agricultural fertilizers consume 95% of global potassium chemical production, and about 90% of this potassium is supplied as KCl.[10] The potassium content of most plants ranges from 0.5% to 2% of the harvested weight of crops, conventionally expressed as amount of K2O. Modern high-yield agriculture depends upon fertilizers to replace the potassium lost at harvest. Most agricultural fertilizers contain potassium chloride, while potassium sulfate is used for chloride-sensitive crops or crops needing higher sulfur content. The sulfate is produced mostly by decomposition of the complex minerals kainite (MgSO4·KCl·3H2O) and langbeinite (MgSO4·K2SO4). Only a very few fertilizers contain potassium nitrate.[103] In 2005, about 93% of world potassium production was consumed by the fertilizer industry.[98] Furthermore, potassium can play a key role in nutrient cycling by controlling litter composition.[104]
Potassium, in the form of potassium chloride, is used as a medication to treat and prevent low blood potassium.[105] Low blood potassium may occur due to vomiting, diarrhea, or certain medications.[106] It is given by slow injection into a vein or by mouth.[107]
Potassium sodium tartrate (KNaC4H4O6, Rochelle salt) is a main constituent of some varieties of baking powder; it is also used in the silvering of mirrors. Potassium bromate (KBrO3) is a strong oxidizer (E924), used to improve dough strength and rise height. Potassium bisulfite (KHSO3) is used as a food preservative, for example in wine and beer-making (but not in meats). It is also used to bleach textiles and straw, and in the tanning of leathers.[108][109]
Major potassium chemicals are potassium hydroxide, potassium carbonate, potassium sulfate, and potassium chloride. Megatons of these compounds are produced annually.[110]
Potassium hydroxide KOH is a strong base, which is used in industry to neutralize strong and weak acids, to control pH and to manufacture potassium salts. It is also used to saponify fats and oils, in industrial cleaners, and in hydrolysis reactions, for example of esters.[111][112]
Potassium nitrate (KNO3) or saltpeter is obtained from natural sources such as guano and evaporites or manufactured via the Haber process; it is the oxidant in gunpowder (black powder) and an important agricultural fertilizer. Potassium cyanide (KCN) is used industrially to dissolve copper and precious metals, in particular silver and gold, by forming complexes. Its applications include gold mining, electroplating, and electroforming of these metals; it is also used in organic synthesis to make nitriles. Potassium carbonate (K2CO3 or potash) is used in the manufacture of glass, soap, color TV tubes, fluorescent lamps, textile dyes and pigments.[113] Potassium permanganate (KMnO4) is an oxidizing, bleaching and purification substance and is used for production of saccharin. Potassium chlorate (KClO3) is added to matches and explosives. Potassium bromide (KBr) was formerly used as a sedative and in photography.[10]
While potassium chromate (K2CrO4) is used in the manufacture of a host of different commercial products such as inks, dyes, wood stains (by reacting with the tannic acid in wood), explosives, fireworks, fly paper, and safety matches,[114] as well as in the tanning of leather, all of these uses are due to the chemistry of the chromate ion rather than to that of the potassium ion.[115]
Niche uses
There are thousands of uses of various potassium compounds. One example is potassium superoxide, KO2, an orange solid that acts as a portable source of oxygen and a carbon dioxide absorber. It is widely used in respiration systems in mines, submarines and spacecraft as it takes less volume than gaseous oxygen.[116][117]
4 KO2 + 2 CO2 → 2 K2CO3 + 3 O2
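A stoichiometric sketch of that air-revitalization reaction, estimating the oxygen released per kilogram of superoxide; the 22.4 L/mol molar volume is an ideal-gas assumption at standard conditions:

```python
MOLAR_MASS_KO2 = 39.098 + 2 * 15.999  # g/mol, ~71.1
MOLAR_VOLUME_L = 22.4                 # L/mol, ideal gas at STP (assumption)

def o2_liters_from_ko2(mass_kg: float) -> float:
    """4 KO2 + 2 CO2 -> 2 K2CO3 + 3 O2: 3 mol O2 released per 4 mol KO2."""
    mol_ko2 = mass_kg * 1000.0 / MOLAR_MASS_KO2
    return mol_ko2 * (3.0 / 4.0) * MOLAR_VOLUME_L

print(f"{o2_liters_from_ko2(1.0):.0f} L of O2 per kg of KO2")  # ~236 L
```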
Another example is potassium cobaltinitrite, K3[Co(NO2)6], which is used as an artist's pigment under the name of Aureolin or Cobalt Yellow.[118]
The stable isotopes of potassium can be laser cooled and used to probe fundamental and technological problems in quantum physics. The two bosonic isotopes possess convenient Feshbach resonances to enable studies requiring tunable interactions, while 40K is one of only two stable fermions amongst the alkali metals.[119]
Laboratory uses
An alloy of sodium and potassium, NaK, is a liquid used as a heat-transfer medium and as a desiccant for producing dry and air-free solvents. It can also be used in reactive distillation.[120] The ternary alloy of 12% Na, 47% K and 41% Cs has the lowest melting point, −78 °C, of any metallic compound.[17]
Metallic potassium is used in several types of magnetometers.[121]
GHS labelling: hazard statements H260, H314; precautionary statements P223, P231+P232, P280, P305+P351+P338, P370+P378, P422.[122]
Potassium metal can react violently with water producing potassium hydroxide (KOH) and hydrogen gas.
2 K (s) + 2 H2O (l) → 2 KOH (aq) + H2↑ (g)
A reaction of potassium metal with water. Hydrogen is produced, and with potassium vapor, burns with a pink or lilac flame. Strongly alkaline potassium hydroxide is formed in solution.
This reaction is exothermic and releases sufficient heat to ignite the resulting hydrogen in the presence of oxygen. Finely powdered potassium ignites in air at room temperature. The bulk metal ignites in air if heated. Because its density is 0.89 g/cm3, burning potassium floats in water, exposing it to atmospheric oxygen. Many common fire-extinguishing agents, including water, either are ineffective or make a potassium fire worse. Nitrogen, argon, sodium chloride (table salt), sodium carbonate (soda ash), and silicon dioxide (sand) are effective if they are dry. Some Class D dry powder extinguishers designed for metal fires are also effective. These agents deprive the fire of oxygen and cool the potassium metal.[123]
During storage, potassium forms peroxides and superoxides. These peroxides may react violently with organic compounds such as oils. Both peroxides and superoxides may react explosively with metallic potassium.[124]
Because potassium reacts with water vapor in the air, it is usually stored under anhydrous mineral oil or kerosene. Unlike lithium and sodium, however, potassium should not be stored under oil for longer than six months, unless in an inert (oxygen free) atmosphere, or under vacuum. After prolonged storage in air dangerous shock-sensitive peroxides can form on the metal and under the lid of the container, and can detonate upon opening.[125]
Ingestion of large amounts of potassium compounds can lead to hyperkalemia, strongly influencing the cardiovascular system.[126][127] Potassium chloride is used in the United States for lethal injection executions.[126]
^ "Standard Atomic Weights: Potassium". CIAAW. 1979.
^ Haynes, William M., ed. (2011). CRC Handbook of Chemistry and Physics (92nd ed.). Boca Raton, FL: CRC Press. p. 4.122. ISBN 1-4398-5511-0.
^ Magnetic susceptibility of the elements and inorganic compounds, in Lide, D. R., ed. (2005). CRC Handbook of Chemistry and Physics (86th ed.). Boca Raton (FL): CRC Press. ISBN 0-8493-0486-5.
^ Weast, Robert (1984). CRC, Handbook of Chemistry and Physics. Boca Raton, Florida: Chemical Rubber Company Publishing. pp. E110. ISBN 0-8493-0464-4.
^ Augustyn, Adam. "Potassium/ Chemical element". Encyclopedia Britannica. Retrieved 2019-04-17. Potassium Physical properties
^ a b Webb, D. A. (April 1939). "The Sodium and Potassium Content of Sea Water" (PDF). The Journal of Experimental Biology (2): 183.
^ Anthoni, J. (2006). "Detailed composition of seawater at 3.5% salinity". seafriends.org.nz. Retrieved 2011-09-23.
^ Halperin, Mitchell L.; Kamel, Kamel S. (1998-07-11). "Potassium". The Lancet. 352 (9122): 135–140. doi:10.1016/S0140-6736(98)85044-7. ISSN 0140-6736. PMID 9672294. S2CID 208790031.
^ a b Marggraf, Andreas Siegmund (1761). Chymische Schriften. p. 167.
^ a b c d e Greenwood, p. 73
^ Davy, Humphry (1808). "On some new phenomena of chemical changes produced by electricity, in particular the decomposition of the fixed alkalies, and the exhibition of the new substances that constitute their bases; and on the general nature of alkaline bodies". Philosophical Transactions of the Royal Society. 98: 32. doi:10.1098/rstl.1808.0001.
^ Klaproth, M. (1797) "Nouvelles données relatives à l'histoire naturelle de l'alcali végétal" (New data regarding the natural history of the vegetable alkali), Mémoires de l'Académie royale des sciences et belles-lettres (Berlin), pp. 9–13 ; see p. 13. From p. 13: "Cet alcali ne pouvant donc plus être envisagé comme un produit de la végétation dans les plantes, occupe une place propre dans la série des substances primitivement simples du règne minéral, &I il devient nécessaire de lui assigner un nom, qui convienne mieux à sa nature.
La dénomination de Potasche (potasse) que la nouvelle nomenclature françoise a consacrée comme nom de tout le genre, ne sauroit faire fortune auprès des chimistes allemands, qui sentent à quel point la dérivation étymologique en est vicieuse. Elle est prise en effet de ce qu'anciennement on se servoit pour la calcination des lessives concentrées des cendres, de pots de fer (pott en dialecte de la Basse-Saxe) auxquels on a substitué depuis des fours à calciner.
Je propose donc ici, de substituer aux mots usités jusqu'ici d'alcali des plantes, alcali végétal, potasse, &c. celui de kali, & de revenir à l'ancienne dénomination de natron, au lieu de dire alcali minéral, soude &c."
(This alkali [i.e., potash] — [which] therefore can no longer be viewed as a product of growth in plants — occupies a proper place in the originally simple series of the mineral realm, and it becomes necessary to assign it a name that is better suited to its nature.
The name of "potash" (potasse), which the new French nomenclature has bestowed as the name of the entire species [i.e., substance], would not find acceptance among German chemists, who feel to some extent [that] the etymological derivation of it is faulty. Indeed, it is taken from [the vessels] that one formerly used for the roasting of washing powder concentrated from cinders: iron pots (pott in the dialect of Lower Saxony), for which roasting ovens have been substituted since then.
Thus I now propose to substitute for the until now common words of "plant alkali", "vegetable alkali", "potash", etc., that of kali ; and to return to the old name of natron instead of saying "mineral alkali", "soda", etc.)
^ Davy, Humphry (1809). "Ueber einige neue Erscheinungen chemischer Veränderungen, welche durch die Electricität bewirkt werden; insbesondere über die Zersetzung der feuerbeständigen Alkalien, die Darstellung der neuen Körper, welche ihre Basen ausmachen, und die Natur der Alkalien überhaupt" [On some new phenomena of chemical changes that are achieved by electricity; particularly the decomposition of flame-resistant alkalis [i.e., alkalies that cannot be reduced to their base metals by flames], the preparation of new substances that constitute their [metallic] bases, and the nature of alkalies generally]. Annalen der Physik. 31 (2): 113–175. Bibcode:1809AnP....31..113D. doi:10.1002/andp.18090310202. p. 157: In unserer deutschen Nomenclatur würde ich die Namen Kalium und Natronium vorschlagen, wenn man nicht lieber bei den von Herrn Erman gebrauchten und von mehreren angenommenen Benennungen Kali-Metalloid and Natron-Metalloid, bis zur völligen Aufklärung der chemischen Natur dieser räthzelhaften Körper bleiben will. Oder vielleicht findet man es noch zweckmässiger fürs Erste zwei Klassen zu machen, Metalle und Metalloide, und in die letztere Kalium und Natronium zu setzen. — Gilbert. (In our German nomenclature, I would suggest the names Kalium and Natronium, if one would not rather continue with the appellations Kali-metalloid and Natron-metalloid which are used by Mr. Erman [i.e., German physics professor Paul Erman (1764–1851)] and accepted by several [people], until the complete clarification of the chemical nature of these puzzling substances. Or perhaps one finds it yet more advisable for the present to create two classes, metals and metalloids, and to place Kalium and Natronium in the latter — Gilbert.)
^ Berzelius, J. Jacob (1814) Försök, att, genom användandet af den electrokemiska theorien och de kemiska proportionerna, grundlägga ett rent vettenskapligt system för mineralogien [Attempt, by the use of electrochemical theory and chemical proportions, to found a pure scientific system for mineralogy]. Stockholm, Sweden: A. Gadelius., p. 87.
^ 19. Kalium (Potassium) – Elementymology & Elements Multidict. vanderkrogt.net
^ McNaught, A. D. and Wilkinson, A., eds. (1997). Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). IUPAC. Blackwell Scientific Publications, Oxford.
^ a b Greenwood, p. 76
^ Greenwood, p. 75
^ Dye, J. L. (1979). "Compounds of Alkali Metal Anions". Angewandte Chemie International Edition. 18 (8): 587–598. doi:10.1002/anie.197905871.
^ Williams, D. Bradley G.; Lawton, Michelle (2010). "Drying of Organic Solvents: Quantitative Evaluation of the Efficiency of Several Desiccants". The Journal of Organic Chemistry. 75 (24): 8351–8354. doi:10.1021/jo101589h. PMID 20945830. S2CID 17801540.
^ Lide, David R. (1998). Handbook of Chemistry and Physics (87th ed.). Boca Raton, Florida, United States: CRC Press. pp. 477, 520. ISBN 978-0-8493-0594-8.
^ Lide, D. R., ed. (2005). CRC Handbook of Chemistry and Physics (86th ed.). Boca Raton (FL): CRC Press. p. 4–80. ISBN 0-8493-0486-5.
^ Schultz, p. 94
^ Lincoln, S. F.; Richens, D. T. and Sykes, A. G. "Metal Aqua Ions" in J. A. McCleverty and T. J. Meyer (eds.) Comprehensive Coordination Chemistry II, Vol. 1, pp. 515–555, ISBN 978-0-08-043748-4.
^ Anthony Agulyanski (2004). "Fluorine chemistry in the processing of tantalum and niobium". In Anatoly Agulyanski (ed.). Chemistry of Tantalum and Niobium Fluoride Compounds (1st ed.). Burlington: Elsevier. ISBN 9780080529028.
^ a b Audi, Georges; Bersillon, Olivier; Blachot, Jean; Wapstra, Aaldert Hendrik (2003), "The NUBASE evaluation of nuclear and decay properties", Nuclear Physics A, 729: 3–128, Bibcode:2003NuPhA.729....3A, doi:10.1016/j.nuclphysa.2003.11.001
^ Bowen, Robert; Attendorn, H. G. (1988). "Theory and Assumptions in Potassium–Argon Dating". Isotopes in the Earth Sciences. Springer. pp. 203–8. ISBN 978-0-412-53710-3.
^ Anaç, D. & Martin-Prével, P. (1999). Improved crop quality by nutrient management. Springer. pp. 290–. ISBN 978-0-7923-5850-3.
^ "Radiation and Radioactive Decay. Radioactive Human Body". Harvard Natural Sciences Lecture Demonstrations. Retrieved July 2, 2016.
^ Winteringham, F. P. W; Effects, F.A.O. Standing Committee on Radiation, Land And Water Development Division, Food and Agriculture Organization of the United Nations (1989). Radioactive fallout in soils, crops and food: a background review. Food & Agriculture Org. p. 32. ISBN 978-92-5-102877-3. CS1 maint: multiple names: authors list (link)
^ Shimansky, V.; Bikmaev, I. F.; Galeev, A. I.; Shimanskaya, N. N.; et al. (September 2003). "Observational constraints on potassium synthesis during the formation of stars of the Galactic disk". Astronomy Reports. 47 (9): 750–762. Bibcode:2003ARep...47..750S. doi:10.1134/1.1611216. S2CID 120396773.
^ The, L.-S.; Eid, M. F. El; Meyer, B. S. (2000). "A New Study of s-Process Nucleosynthesis in Massive Stars". The Astrophysical Journal. 533 (2): 998. arXiv:astro-ph/9812238. Bibcode:2000ApJ...533..998T. doi:10.1086/308677. ISSN 0004-637X. S2CID 7698683.
^ a b Micale, Giorgio; Cipollina, Andrea; Rizzuti, Lucio (2009). Seawater Desalination: Conventional and Renewable Energy Processes. Springer. p. 3. ISBN 978-3-642-01149-8.
^ a b c d Prud'homme, Michel; Krukowski, Stanley T. (2006). "Potash". Industrial minerals & rocks: commodities, markets, and uses. Society for Mining, Metallurgy, and Exploration. pp. 723–740. ISBN 978-0-87335-233-8.
^ du Monceau, H. L. D. (1702–1797). "Sur la Base de Sel Marin". Mémoires de l'Académie Royale des Sciences (in French): 65–68.
^ a b Weeks, Mary Elvira (1932). "The discovery of the elements. IX. Three alkali metals: Potassium, sodium, and lithium". Journal of Chemical Education. 9 (6): 1035. Bibcode:1932JChEd...9.1035W. doi:10.1021/ed009p1035.
Ensemble deep model for continuous estimation of Unified Parkinson's Disease Rating Scale III
Murtadha D. Hssayeni, Joohi Jimenez-Shahed, Michelle A. Burack & Behnaz Ghoraani (ORCID: orcid.org/0000-0003-0075-7663)
BioMedical Engineering OnLine volume 20, Article number: 32 (2021)
Unified Parkinson Disease Rating Scale-part III (UPDRS III) is part of the standard clinical examination performed to track the severity of Parkinson's disease (PD) motor complications. Wearable technologies could be used to reduce the need for on-site clinical examinations of people with Parkinson's disease (PwP) and provide a reliable and continuous estimation of the severity of PD at home. The reported estimation can be used to successfully adjust the dose and interval of PD medications.
We developed a novel algorithm for unobtrusive and continuous UPDRS-III estimation at home using two wearable inertial sensors mounted on the wrist and ankle. We used the ensemble of three deep-learning models to detect UPDRS-III-related patterns from a combination of hand-crafted features, raw temporal signals, and their time–frequency representation. Specifically, we used a dual-channel, Long Short-Term Memory (LSTM) for hand-crafted features, 1D Convolutional Neural Network (CNN)-LSTM for raw signals, and 2D CNN-LSTM for time–frequency data. We utilized transfer learning from activity recognition data and proposed a two-stage training for the CNN-LSTM networks to cope with the limited amount of data.
The algorithm was evaluated on gyroscope data from 24 PwP as they performed different daily living activities. The estimated UPDRS-III scores had a correlation of \(0.79\, (\textit{p}<0.0001)\) and a mean absolute error of 5.95 with the clinical examination scores without requiring the patients to perform any specific tasks.
Our analysis demonstrates the potential of our algorithm for estimating PD severity scores unobtrusively at home. Such an algorithm could provide the required motor-complication measurements without unnecessary clinical visits and help the treating physician provide effective management of the disease.
Parkinson's disease (PD) is a chronic, progressive neurological disorder. It often occurs in older people and impacts motor as well as non-motor activities of the patients [1]. People with PD (PwP) at mid- and advanced stages of the disease experience motor complications such as troubling motor fluctuations [2]. Motor fluctuations are experienced as levodopa, the main PD medication, wears off between doses and the PD symptoms reappear [3]. At this stage of the disease, an iterative therapeutic adjustment is needed to manage the motor fluctuations through multiple clinical visits. As part of these visits, part III of the Unified Parkinson Disease Rating Scale (UPDRS III) is assessed by a neurologist to measure the severity of PD motor complications such as tremor and bradykinesia (i.e., slowness of voluntary movements) [4]. The UPDRS-III score, besides history-taking and subject reports, is the main contributing factor to a successful therapeutic adjustment. Wearable inertial sensors have the potential to capture complex body movements related to PD symptoms; thus, they can be used to assess UPDRS III. The significance of continuous at-home assessment of UPDRS III is providing a tool for longitudinal monitoring of daily motor fluctuations [5] and managing PD medications [6]. It would limit the need for in-person clinical examinations of PwP and reduce exposure to infectious agents such as COVID-19 [7].
To assess UPDRS III, PwP are required to perform several tasks, such as sitting at rest, finger and toe-tapping, hand movement, gait, and arising from a chair. A home-based system for continuous and unobtrusive PD severity assessment using wearable sensors has to score UPDRS III without requiring the patients' active engagement. However, we cannot achieve such a system without addressing two main limitations in the existing work. First, work in this area has mostly focused on estimating the severity of each of the PD symptoms separately, instead of the total UPDRS-III score. For example, Griffiths et al. [8] and Sama et al. [9] estimate bradykinesia severity and then use the estimated value as the UPDRS-III score. Similarly, Pan et al. [10] and Dia et al. [11] estimate tremor severity instead of UPDRS III directly. Pulliam and colleagues estimate the tremor [12] and bradykinesia [13] subscores. Second, existing methods to estimate the UPDRS-III score are obtrusive, as they require the subjects' active engagement in specific tasks that elicit PD symptoms. For example, Giubert et al. [14] require a sit-to-stand task to estimate UPDRS III. Rodriguez et al. [15] and Zhao et al. [16] propose algorithms to estimate UPDRS III based on gait. Parisi et al. [17] require the patients to perform the UPDRS-III tasks of gait, leg agility, and sit to stand. In another work [18], an approach is developed to estimate a mobile PD score (mPDS) that measures PD severity using a smartphone application as subjects perform five specific tasks (gait, balance, finger tapping, reaction time, and voice). However, the work of Pissadaki et al. [19] shows that complex body movements during ADL can mostly be decomposed into movement primitives performed during the UPDRS-III clinical exams. We, therefore, hypothesize that effective machine-learning algorithms can estimate the UPDRS-III total score unobtrusively during ADL without the limitations of the current approaches.
Most of the methods in the papers mentioned above are based on hand-crafted features and traditional machine learning. However, recent work based on deep learning has been shown to outperform the traditional methods in assessing different aspects of PD. For example, Hammerla and colleagues show that a sequence of Restricted Boltzmann Machines provides a better generalization than the traditional machine-learning methods used for PD medication state detection [20]. Zhao et al. compare the performance of Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and dual-channel deep models with traditional methods and show a high performance using the LSTM networks for PD severity estimation [16]. Artificial Neural Networks have been shown to outperform the traditional methods for classifying PD severity [21] or estimating UPDRS III [22]. In recent work, we also show that LSTM provides promising results for detecting PD motor fluctuations during a variety of daily living activities [23]. Hence, in the present work, we take advantage of deep learning for data-driven feature extraction from raw signals and learning temporal patterns.
Our objective in this paper is to develop a novel algorithm based on deep learning to continuously estimate UPDRS III from the complex ADL movements collected during the subjects' free body movements. Our algorithm is based on the ensemble of three deep models. One is an LSTM network with hand-crafted features trained using transfer learning with an activity recognition dataset. The other two models are based on data-driven features from raw signals and their time–frequency representations. We also proposed a two-stage training method to address challenges of training deep-learning models with limited data. For comparison purposes, we also implemented a traditional model based on Gradient Tree Boosting in this paper.
The developed algorithm for estimating UPDRS III is based on free-movement gyroscope data collected from the most affected wrist and ankle using wearable sensors. We ensured the deep models were diverse and achieved better performance by training them on hand-crafted features that represent experts' knowledge about the presentations of PD symptoms on body movements and on data-driven features extracted from raw signals and their time–frequency representations. One deep-learning model was a dual-channel LSTM used with hand-crafted features. This proposed structure was based on our preliminary work indicating that a dual-channel LSTM network outperforms a single-channel LSTM for estimating the UPDRS-III score [24]. The other two models were used with raw signals: a 1D CNN-LSTM network for raw signals and a 2D CNN-LSTM network for the time–frequency representation of the raw signals. We utilized transfer learning for the hand-crafted LSTM network to cope with the limited amount of data and proposed a novel two-stage training for the data-driven networks.
For our evaluation purposes, we used a dataset of 24 PwP as they performed a variety of ADL in a clinical setting. Fifteen of the subjects completed four rounds of ADL intermittently with a 1-h gap for about 4 h, and the other nine subjects performed ADL continuously for about 2 h. UPDRS III was performed before each round for the 15 subjects and at the beginning and end of the other subjects' experiments. First, we evaluated the performance of each deep-learning model for estimating UPDRS III separately. We also compared their performance against traditional machine learning based on Gradient Tree Boosting. Next, we evaluated the performance of the ensemble of different combinations of deep-learning models.
The proposed models generated a UPDRS-III score for each round of ADL that was about 4 min for 15 subjects and 10 min for the other nine subjects. For the ensemble algorithm, the estimated UPDRS-III scores using the individual models were averaged. All the training and testing steps were performed in subject-based, leave-one-out cross-validation (LOOCV). In each of the 24 cross-validation iterations, the data of one subject were used for testing, and the data of the other subjects were used for training. In addition, an inner split was applied to the training data to select a random 20% for validation. Pearson correlation (\(\rho\)) and Mean Absolute Error (MAE) were used to evaluate the developed network. A high correlation \(\rho\) and low MAE indicate a close estimation of UPDRS III when compared to the gold-standard scores.
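For concreteness, both metrics can be computed from paired arrays of estimated and gold-standard round scores. The following is a minimal Python sketch, not the authors' code; the example arrays are hypothetical:

import numpy as np
from scipy.stats import pearsonr

def evaluate_rounds(y_true, y_pred):
    # Pearson correlation (rho) with its p-value, plus the MAE,
    # between gold-standard and estimated UPDRS-III round scores.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rho, p_value = pearsonr(y_true, y_pred)
    mae = np.mean(np.abs(y_true - y_pred))
    return rho, p_value, mae

# Hypothetical scores for four rounds of ADL:
rho, p, mae = evaluate_rounds([12, 25, 40, 18], [15, 22, 35, 20])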
Table 1 reports the performance of each of the individual deep models in comparison with Gradient Tree Boosting, as well as the performance of the ensembles of two or three deep models. Among the single models, the CNN-LSTM using raw signals had the highest \(\rho\) of \(0.70 \, (\textit{p}< 0.001)\). Gradient Tree Boosting resulted in the lowest performance in terms of \(\rho\) and MAE. Note that transfer learning improved the performance of the model with the hand-crafted features from a \(\rho\) of 0.62 to 0.67 and an MAE of 7.50 to 6.85. Ensembles of two deep-learning models improved the single models' performance by increasing \(\rho\) and reducing MAE. The best performance was achieved by the ensemble of the three deep models, with \(\rho = 0.79\, (\textit{p}< 0.001)\) and MAE = 5.95.
Table 1 The LOOCV testing correlation (\(\rho\)) and MAE of the proposed deep models and Gradient Tree Boosting, reported for the single models and for the ensembles of two or three of the deep models
The estimated total UPDRS-III scores using the ensemble of the three deep models vs. the gold-standard total UPDRS-III scores
The estimated total UPDRS-III scores using the three deep models' ensemble vs. the gold-standard total UPDRS-III scores are shown in Fig. 1. Figure 2 shows the ensemble model's estimations of UPDRS III over time vs. the gold-standard UPDRS III for four PwP. The examples shown in A and B are from PwP with a steady improvement in PD symptoms after medication intake. The two examples in C and D are from PwP who experienced a reappearance of their symptoms before their next medication intake (i.e., motor fluctuations). In all the cases, the algorithm follows the change in UPDRS III with a good correlation. Additional file 1: Figures S1 and S2 show the ensemble model's estimations of UPDRS III over time vs. the gold-standard UPDRS-III scores for all the 24 PwP.
The ensemble model's estimations of UPDRS III over time vs. the gold-standard UPDRS III for four PwP. a and b PwP who experienced an improvement in their PD symptoms. c A patient who experienced the return of PD symptoms before taking the next dose of medication. d A similar behavior; however, it also shows a reduction in the symptoms after receiving the second dose. Note that the data used for UPDRS-III estimation were from either before or after the UPDRS-III assessment. As a result, the estimated and gold-standard time points do not coincide. Patient A performed only two UPDRS-III assessments. The red arrow indicates medication intake
As shown in Fig. 3a, a reduction in the gold-standard UPDRS-III score is expected up to 1 h after medication intake. We investigated whether the estimated scores from the ensemble model show similar behavior as the medication takes effect. The results are shown in Fig. 3b. Both the gold-standard and estimated UPDRS-III scores indicate a significant difference after patients take their PD medications, as confirmed by a paired t-test with \(\textit{p} < 0.001\). In addition, Additional file 1: Figure S3 shows the box plots of the total UPDRS-III scores from the single models before and 1 h after taking the PD medications. The estimated UPDRS-III scores from all models indicate a significant difference after patients take their PD medications, as confirmed by a paired t-test with \(\textit{p} < 0.01\).
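The before/after comparison reduces to a paired t-test on per-assessment scores. A sketch with SciPy, using hypothetical per-subject values:

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical UPDRS-III scores before and 1 h after medication intake,
# one pair per patient, so each patient serves as their own control.
before = np.array([32.0, 28.0, 41.0, 25.0, 30.0])
after = np.array([20.0, 18.0, 30.0, 15.0, 22.0])

t_stat, p_value = ttest_rel(before, after)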
The total UPDRS-III scores before and 1 h after taking the PD medications from gold-standard measurements (a) and the ensemble model estimations (b). Both the gold-standard and estimated UPDRS-III scores show a significant drop after PD medication intake \((\textit{p} < 0.001)\)
Unobtrusive estimation of UPDRS III
We hypothesized that advanced machine-learning algorithms could estimate UPDRS III from patients' free body movements as collected using two wearable sensors placed on the upper and lower extremities. Our analysis indicated such a possibility, with a high correlation of \(\rho = 0.79\, (\textit{p}< 0.001)\) and a low MAE = 5.95 when using an ensemble of three deep-learning models. Most of the existing work on UPDRS-III estimation requires PwP's active engagement to perform the specific tasks used in the UPDRS-III procedure [15, 17, 18, 25]. Unlike these approaches, our algorithm could estimate UPDRS III as the patients performed a variety of ADL without the need for performing constrained tasks. As a result, our system has the potential to be translated into unobtrusive home-based monitoring for continuous assessments of UPDRS III. It can track changes in motor fluctuations due to the medication wearing-off effect, as shown in Fig. 2, and track the response to medication, as shown in Fig. 3.
Another interesting observation from our analysis is our algorithm's ability to estimate UPDRS-III scores despite the following challenges. First, the UPDRS-III score is measured by assessing the face/head, neck, and all four extremities, but our system is based on only two sensors placed on the wrist and ankle of the most affected side of the body. Second, the total UPDRS III includes items representing symptom measures, such as rigidity, speech, and facial expression, that cannot be captured by wearable motion sensors. Nevertheless, our ensemble model captured the dependencies between these items [26, 27] and achieved a high correlation. These challenges did impact the estimation MAE, and thus our model was only comparable to the minimal clinically important difference in UPDRS III.
Comparison to related work
A review of the methods proposed for estimating the severity of PD is shown in Table 2. Comparing our algorithm to task-dependent approaches (i.e., obtrusive methods) [15, 17, 18, 25] indicates that our method provides comparable performance with the advantage of not constraining PwP's activities. For example, it has a better correlation than Ref. [15] (−0.56), is equal to or slightly lower than Refs. [17, 25], and is lower than Ref. [18] (0.88). However, it is worth mentioning that the work in Ref. [18] is based on performing a series of tasks using a smartphone application, while ours is solely based on movement-data patterns.
Comparing our algorithm to unobtrusive methods [8, 13, 28] shows that our model outperforms Ref. [8] (0.64), even though they only estimated bradykinesia. Our algorithm performs slightly lower than Refs. [13, 28]. Our careful analysis of the work in Ref. [13] indicates that the results reported by Pulliam et al. [13] were not based on LOOCV. The authors instead developed multiple linear regression models to estimate tremor, bradykinesia, and dyskinesia, and then designed a radar chart reporting the severity and duration of these symptoms. The correlation between the radar chart area and UPDRS III was 0.81 when the models fit all the data. The authors did not report their algorithm's performance on a held-out set or in a cross-validation fashion; thus, their model's generalizability is not comparable to ours. Another limitation of Pulliam et al.'s work [13] is the challenge involved in interpreting the estimated area, whose range differs from the clinically meaningful range of UPDRS III. A further limitation is that they included dyskinesia severity for estimating UPDRS III; however, dyskinesia is a side effect of taking levodopa, not a PD symptom, and is not included in the UPDRS-III assessments. Abrami et al. [28] developed an unsupervised algorithm based on clustering and Markov chains. They applied a multi-dimensional scaling algorithm to estimate each subject's daily UPDRS-III score as the sum of the tremor, bradykinesia, and gait items for each day. They reported a high \(\rho ^2\) of 0.64 in clinic, but a significantly lower \(\rho ^2\) of about 0.43 at home. Our algorithm performed better (\(\rho ^2\) = 0.58) than their method at home (\(\rho ^2\) = 0.43) but slightly lower than theirs in clinic (\(\rho ^2\) = 0.64). However, their estimation does not include UPDRS-III items such as rigidity, voice, and facial expression. Their method also performed better when patients performed more tasks, which was the case in the clinic, where they performed more than nine scripted tasks. At home, people performed fewer tasks in a short time, which could be the reason for the lower performance at home. In addition, there is no information about the ability of their method to estimate UPDRS III hourly.
Table 2 Proposed methods in the literature for estimating the severity of PD represented by UPDRS III
The advantage of deep learning
The dual-channel LSTM developed in our preliminary work [24] provides only slightly higher performance than Gradient Tree Boosting with a 0.62 correlation vs. 0.61. However, transfer learning from the activity recognition dataset improves performance by providing a 10% higher correlation and 13% lower MAE when compared to Gradient Tree Boosting. This behavior indicates that temporal dependencies captured by the first two LSTM layers using hand-crafted features extracted from healthy subjects are beneficial to UPDRS-III estimation.
Another observation is that both the 1D and 2D CNN-LSTM networks outperform Gradient Tree Boosting, with correlations of 0.70 and 0.67, respectively (an increase of more than 10%), and MAEs of 6.93 and 7.11, respectively (a decrease of more than 9%). These networks achieve comparable performance to the dual-channel LSTM with hand-crafted features, which means the CNN could extract relevant data-driven features.
We also observe that the ensemble of the models based on hand-crafted and data-driven features improves the performance. The ensemble of multiple models is known to improve the regression results if the models solve different aspects of the given problem [29]. Hence, we can conclude that the trained deep models are diverse and learn different views of the motion signals (i.e., hand-crafted features, data-driven features from raw signals and from the time–frequency data), and therefore, are necessary for successful UPDRS-III estimation.
Limitations and future work
Our algorithm provides overall high performance for UPDRS-III estimation using patients' free body-movement data. However, we notice that the model underestimates high UPDRS-III scores, as shown in Fig. 1. This is because of the imbalanced data distribution: there are only nine rounds of ADL with a UPDRS-III score higher than 40, and only one above 50 (see Fig. 4b). Parisi et al. [17] reported a similar limitation due to the imbalanced distribution of their training data toward the mean UPDRS-III score. Collecting more data in a home setting with a uniform data distribution is expected to further improve our algorithm's performance and constitutes the main aspect of our future work.
We developed a novel algorithm to provide a continuous and unobtrusive estimation of the UPDRS-III score using free-body motion data recorded from two wearable sensors. The novel aspect of our approach is combining expert knowledge in the field, captured by hand-crafted features, with data-driven knowledge obtained by deep learning from raw temporal and time–frequency signals. To the best of our knowledge, this is the first ensemble algorithm based on three deep models for this task. In addition, we utilized transfer learning from an activity recognition dataset for the model using the hand-crafted features and a two-stage training for the models operating on the raw data. The models were evaluated and compared using the sensor data of 24 PD subjects. Subject-based LOOCV demonstrated that the ensemble of the three deep models provided a high correlation of \(\rho =0.79\, (\textit{p}<0.0001)\) and a low MAE of 5.95, indicating that each model learns different aspects of the PD motor complications from the movement data. Compared with the existing work, our algorithm offers several advantages: relatively high performance while estimating UPDRS III unobtrusively from ADL; direct estimation of UPDRS III instead of estimating individual symptoms such as tremor or bradykinesia and reporting them as UPDRS III; estimation of the total UPDRS III without removing items such as rigidity or facial expression; and estimation on the clinically known range of UPDRS III rather than a new metric that requires interpretation. Our future work includes evaluating more training data collected in an at-home setting to further increase the performance of our algorithm.
In this section, we first describe the PD dataset [13, 31] that was used for evaluating the developed models. We also provide a brief description of the Physical Activity Monitoring Dataset (PAMAP2) [32] that was used for transfer learning of the deep model with hand-crafted features. Next, we describe signal segmentation and extraction of the hand-crafted features. Finally, we describe the proposed deep models.
Collection of PD data
A protocol was designed to record the motion data of 24 PwP with idiopathic PD as they performed a variety of ADL [13, 31]. A summary of patient characteristics is shown in Table 3. The average age was 58.9 years (range 42–77 years). Fourteen of the PwP were female and ten were male. The average disease duration was 9.9 years (range 4–17 years). The average UPDRS III was 29.7 before taking PD medications and 17.3 one hour after taking PD medications. The institutional review board approved the study, and all patients provided written informed consent.
Table 3 Subject demographics. LEDD stands for Levodopa Equivalent Daily Dose. Values are presented as number or mean ± standard deviation
Two wearable sensors (Great Lakes NeuroTechnologies Inc., Cleveland, OH), each consisting of a triaxial gyroscope and accelerometer, were mounted on the most affected wrist and ankle to collect the motion data at a sampling rate of 64 Hz. The participants stopped their PD medication the night before the experiment and started the experiments in their medication OFF states. Fifteen of the subjects performed various ADL in four rounds spanning about 4 h. The ADL were cutting food, unpacking groceries, grooming, resting, drinking, walking, and dressing. The duration of each activity trial ranged between 15 and 60 s, and each round was about 2–4 min. The subjects were asked to perform the ADL at a self-selected pace, and no training was provided. After the first round, the subjects resumed their routine PD medications. Twenty trials of activities were missing due to unsuccessful data collection. In addition, two subjects performed three rounds since they started the experiment in their medication ON states. The total duration of each round for all the 15 subjects is shown in Fig. 4a.
The structure of the processed data from the 24 subjects. a The rounds' duration and their UPDRS-III scores are shown for each subject. The color of each bar represents a round of data, and the height of the bar indicates the duration of the round. Each bar's number shows the UPDRS-III score as determined by the nearest UPDRS assessment to the round. b The rounds' distribution is displayed based on their UPDRS-III scores
The other nine subjects cycled through multiple stations (such as laundry room, entertainment station, snack, and desk work) in a home-like setting while engaging in unconstrained activities. Next, the subjects resumed their routine PD medications. Later, when the medicine kicked in (as confirmed by a neurologist), the subjects repeated the same ADL or cycled through the stations in their medication ON states. For these nine subjects, the recording was continuous for about 2 h. Later, rounds of 10 min were segmented close to UPDRS-III assessments as shown in Fig. 4a.
Concurrently, the clinical examinations were performed by a neurologist to measure and record the subjects' UPDRS-III scores. Four rounds of UPDRS-III assessment were performed for 15 subjects at the beginning of every hour of the experiment. Two rounds of UPDRS-III assessment were performed at the beginning and end of the experiment for the other nine participants. In each assessment, 27 signs of PD were scored on a 0–4 scale for different body parts and both sides; thus, the range of UPDRS III was 0–108, the sum of scores from the 27 signs.
Physical activity monitoring dataset
PAMAP2 is a public dataset of motion signals recorded using two wearable sensors while nine healthy subjects performed various ADL. The subjects were 27.22 ± 3.31 years old, with eight males and one female. The wearable sensors contained triaxial gyroscopes and accelerometers with a 100 Hz sampling rate and were mounted on the dominant side's arm and ankle. The recorded ADL included 12 protocol activities such as lying, sitting, standing, walking, watching TV, and working on a computer. We used this dataset for transfer learning of the deep-learning models. The reason for selecting this dataset was the availability of the gyroscope signals and the similarity in the sensor placement locations with our PD dataset.
For both datasets, we used only the angular velocity signals generated by the gyroscopes. We found experimentally that the gyroscope performs better than the accelerometer for estimating UPDRS III, which is in agreement with the finding of Dia et al. [11]. In addition, using one sensor type decreased the computation power and time required to train and test the models because of the reduction in data dimensionality. The energy consumption of gyroscopes is higher than that of accelerometers, which can constrain long-term recording [33]. However, the availability of devices with long battery life can avoid this issue. The collected signals were filtered to eliminate low- and high-frequency noise using a bandpass FIR filter with a 3 dB cutoff frequency range of 0.5–15 Hz.
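A minimal sketch of such a filtering step with SciPy is shown below. The filter order (numtaps) is an assumption, since the paper specifies only the 0.5–15 Hz passband:

import numpy as np
from scipy.signal import firwin, filtfilt

FS = 64  # gyroscope sampling rate in Hz

def bandpass(signal, low=0.5, high=15.0, numtaps=129):
    # Linear-phase FIR band-pass; filtfilt applies the filter forward
    # and backward so the output has zero phase distortion.
    taps = firwin(numtaps, [low, high], pass_zero=False, fs=FS)
    return filtfilt(taps, [1.0], signal, axis=0)

filtered = bandpass(np.random.randn(64 * 60, 6))  # a hypothetical 1-min round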
For the PD dataset, we excluded the data recorded during the UPDRS-III examination from our analysis to ensure that the developed model will not benefit from the UPDRS III-specific tasks that elicit PD symptoms. Next, 2–4 rounds of data with a maximum duration of 10 min (i.e., maximum \(N_S\) samples) were selected from each subject's recordings. Fig. 4a demonstrates the number and duration of rounds as well as the corresponding UPDRS-III score for all the subjects. A total of 91 rounds (\({N_R}\)) were selected to form the set \(\mathcal {D}=\{ (X^{(r)},y^{(r)}) \}_r^{N_R}\) \((X^{(r)} \in \mathbb {R}^{N_S^{(r)} \times 6}\), \(y^{(r)} \in \mathbb {R})\) where \(X^{(r)}\) denotes the motion time-series data in round r with \(N_S^{(r)}\) as the number of samples in this round, and \(y^{(r)}\) denotes the UPDRS-III score for round r. The set was used to train and test the developed algorithm using LOOCV. The distribution of these rounds based on the assessed UPDRS III is shown in Fig. 4b. Similarly for PAMAP2 dataset, 1-min rounds of data were selected from each subject's recordings after down-sampling the signals to 64 Hz. Each round included one activity. A total of 455 rounds were selected to form the set \(\mathcal {D}\) for PAMAP2 dataset.
The PD symptoms have both short- and long-term representations in the body movements. Therefore, features need to be extracted from both short and long durations of the motion signals [34, 35]. Hence, we used 5-s windows to segment the signals for short-term features and 1-min windows for long-term features. The segmentation process is shown in Fig. 5a.
The architectures of the proposed deep models to estimate the UPDRS-III score. a Dual-channel LSTM network to estimate UPDRS III from hand-crafted features. b 1D CNN-LSTM network to estimate UPDRS III from raw signals. Each convolutional layer is followed by a ReLU activation layer. Convolutional Block-2 was repeated to increase the depth of the CNN network. c 2D CNN-LSTM network to estimate UPDRS III from time–frequency representations. The spectrogram of each 1-min window is the input to the CNN network. d The overall architecture of the proposed ensemble model
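As a sketch of the windowing described above, assuming the 64 Hz signals are stored as a (samples × channels) NumPy array:

import numpy as np

FS = 64  # Hz

def segment(signal, win_seconds):
    # Split a (samples x channels) array into non-overlapping windows
    # of win_seconds, dropping the incomplete tail.
    win = int(win_seconds * FS)
    n_full = (signal.shape[0] // win) * win
    return signal[:n_full].reshape(-1, win, signal.shape[1])

x_round = np.random.randn(64 * 240, 6)   # hypothetical 4-min round
short_windows = segment(x_round, 5)      # for short-term features
long_windows = segment(x_round, 60)      # for long-term features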
We extracted \(N_{SF}=\)26 short- and \(N_{LF}=\)32 long-term features from each segment of the data. First, 39 short-term features were extracted from the three (x, y, z) axes' signals of the wrist and 39 from the ankle sensor (i.e., segmented X). The short-term features were selected to capture high-frequency symptoms such as tremor. They consisted of 4–6 Hz signal power (3 features = x3), percentage power of frequencies > 4 Hz (x3), 0.5–15 Hz signal power (x3), amplitude and lag of the first auto-correlation peak (x6), number and sum of auto-correlation peaks (x6), spectral entropy (x3), dominant and secondary frequencies and their powers (x12), cross-correlation (x3) between x and y, x and z and y and z axes. The details of these features were provided in our previous work [36]. This step provided a total of 78 features from the three axes of the wrist and ankle sensors. Next, the features were averaged across the three axes to get \(N_{SF}=\) 26. To conclude, a feature vector (\(\vec {fv} \in \mathbb {R}^{N_{SF}}\)) was extracted from each 5-s window and provided a set of \(\mathcal {D}_{S}=\{ (S^{(r)},y^{(r)}) \}_r^{N_R}\) \((S^{(r)} \in \mathbb {R}^{{N_{Ws}^{(r)}} \times N_{SF}}\), \(y^{(r)} \in \mathbb {R})\) where \(S^{(r)}=[\vec {fv}_1\vec {fv}_2...\vec {fv}_{N_{Ws}^{(r)}}]\), and \(N_{Ws}^{(r)}\) was the number of 5-s windows in round r.
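As an illustration of two of these short-term features, the sketch below computes band power and spectral entropy for a single-axis 5-s segment via the Welch periodogram; the nperseg choice is an assumption, and the exact estimators used in [36] may differ:

import numpy as np
from scipy.signal import welch

FS = 64  # Hz

def band_power(x, f_lo, f_hi):
    # Integrated power spectral density within [f_lo, f_hi] Hz;
    # the 4-6 Hz band targets Parkinsonian rest tremor.
    f, pxx = welch(x, fs=FS, nperseg=min(len(x), 256))
    mask = (f >= f_lo) & (f <= f_hi)
    return np.trapz(pxx[mask], f[mask])

def spectral_entropy(x):
    # Shannon entropy of the normalized power spectrum.
    _, pxx = welch(x, fs=FS, nperseg=min(len(x), 256))
    p = pxx / np.sum(pxx)
    return -np.sum(p * np.log2(p + 1e-12))

segment_axis = np.random.randn(5 * FS)   # hypothetical 5-s axis segment
tremor_power = band_power(segment_axis, 4.0, 6.0)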
Similarly, 48 long-term features were extracted from the three (x, y, z) axes' signals of the wrist and 48 from the ankle sensor (i.e., segmented X). The long-term features were selected to capture low-frequency symptoms such as bradykinesia. These features were average jerk (x3), velocity peak-to-peak (x3), 1–4 Hz signal power (x3), 0.5–15 Hz signal power (x3), Shannon entropy (x3), standard deviation (x3), number and sum of auto-correlation peaks (x6), Gini index (x3), sample entropy (x3), mean (x3), skewness (x3), kurtosis (x3), spectral entropy (x3), dominant frequency and its power [36] (x6). Next, the features were averaged across the three axes to get \(N_{LF}=32\). In summary, a feature vector (\(\vec {fv} \in \mathbb {R}^{N_{LF}}\)) was extracted from each 1-min window and provided a set of \(\mathcal {D}_{L}=\{ (L^{(r)},y^{(r)}) \}_r^{N_R}\) \((L^{(r)} \in \mathbb {R}^{N_{Wl}^{(r)} \times N_{LF}}\), \(y^{(r)} \in \mathbb {R})\), where \(L^{(r)}=[\vec {fv}_1\vec {fv}_2...\vec {fv}_{N_{Wl}^{(r)}}]\), and \(N_{Wl}^{(r)}\) was the number of 1-min windows in round r.
Regression models for UPDRS-III estimation
In our preliminary work, we explored two different architectures based on a single-channel and a dual-channel LSTM of hand-crafted features and showed that the latter provides superior performance [24]. In this section, we first describe an extension to that model by applying transfer learning using the PAMAP2 dataset. Next, we develop new 1D and 2D CNN-LSTM models using raw motion signals and their time–frequency representations, respectively. The proposed ensemble model is described next. Lastly, Gradient Tree Boosting is described as a traditional machine-learning method used for comparison purposes.
Dual-channel LSTM network with transfer learning
LSTM is a special type of Recurrent Neural Network designed to overcome the vanishing gradient problem when training using gradient descent with backpropagation through time. LSTM can efficiently learn temporal dependencies and has been successfully used in applications involving signals with temporal memory. In this work, the LSTM architecture proposed in [37] is used.
An LSTM unit consists of an input gate (i), input modulation gate (g), forget gate (f), output gate (o), and memory cell (\(c_t\) at time step t). Before applying the operations in these gates, the current feature vector (\(\vec {fv}^{(r)}_t\)) at time t in round r is linearly transformed using the following equation:
$$\begin{aligned} \vec {x}^{(r)}_t=W_{fx} \vec {fv}^{(r)}_t +b_{fx} \end{aligned}$$
where \(\vec {x}^{(r)}_t \in \mathbb {R}^{N_H}\), \({N_H}\) is the number of hidden states and \(W_{fx}\) and \(\vec {b}_{fx}\) are the weight matrix and bias vector, respectively. The operations in these gates are performed on \(\vec {x}^{(r)}_t\) using \({N_H}\) hidden states (\(h_{t-1} \in \mathbb {R}^{N_H}\)) and internal states (\(c_{t-1} \in \mathbb {R}^{N_H}\)) from the previous time step as defined below:
$$\begin{aligned} i_t&= \sigma \left( W_{xi} \vec {x}^{(r)}_t + W_{hi} h_{t-1} + b_i\right) \\ g_t&= \phi \left( W_{xg} \vec {x}^{(r)}_t + W_{hg} h_{t-1} + b_g\right) \\ f_t&= \sigma \left( W_{xf} \vec {x}^{(r)}_t + W_{hf} h_{t-1} + b_f\right) \\ o_t&= \sigma \left( W_{xo} \vec {x}^{(r)}_t + W_{ho} h_{t-1} + b_o\right) \\ c_t&= f_t c_{t-1} + i_t g_t \\ h_t&= o_t \phi \left( c_t\right) \end{aligned}$$
where \(W_{ab}\) is a weight matrix (\(a=\{x,h\}\) and \(b=\{i,g,f,o\}\)), and \(\sigma\) and \(\phi\) are the logistic sigmoid and tanh activation functions, respectively. The output (\(\hat{y}^{(r)}\)) in many-to-one LSTM network is calculated based on \(h_{t}\) of the last LSTM layer and last \(\vec {x}^{(r)}\) in round r using the following linear transformation:
$$\begin{aligned} \hat{y}^{(r)}=W_{hy} h_{t} +b_y \end{aligned}$$
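To make the gate operations concrete, the following NumPy sketch implements one LSTM time step exactly as defined above, with the four gate pre-activations stacked in a single weight matrix (a common implementation choice, not prescribed by the paper):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # One LSTM step: W maps the concatenated [x_t, h_prev] to the
    # stacked pre-activations of the i, g, f, and o gates.
    z = W @ np.concatenate([x_t, h_prev]) + b
    n = h_prev.size
    i = sigmoid(z[:n])           # input gate
    g = np.tanh(z[n:2 * n])      # input modulation gate
    f = sigmoid(z[2 * n:3 * n])  # forget gate
    o = sigmoid(z[3 * n:])       # output gate
    c = f * c_prev + i * g       # memory cell update
    h = o * np.tanh(c)           # hidden state
    return h, c

n_h = 4  # toy hidden size
h, c = lstm_step(np.random.randn(n_h), np.zeros(n_h), np.zeros(n_h),
                 np.random.randn(4 * n_h, 2 * n_h), np.zeros(4 * n_h))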
After segmentation and feature extraction (refer to the segmentation and feature extraction sections), there was only one long-term feature vector for each 1-min window, while there were 12 short-term feature vectors. Therefore, we developed a dual-channel LSTM network to combine the two sets of feature vectors as a strategy to appropriately handle the difference in the numbers of short-term feature vectors (\(S^{(r)}=[\vec {fv}_1\vec {fv}_2...\vec {fv}_{N_{Ws}^{(r)}}]\)) and long-term feature vectors (\(L^{(r)}=[\vec {fv}_1\vec {fv}_2...\vec {fv}_{N_{Wl}^{(r)}}]\)). This method was based on building a separate LSTM channel for the short-term and long-term sets (\(\mathcal {D}_{S}\) and \(\mathcal {D}_{L}\), respectively) and then integrating the outcome of the two channels into one UPDRS-III score estimation using a fully connected layer. The feature vectors in both sets were linearly transformed using a fully connected layer to have a depth of \(N_{H}\) hidden states in both channels (Eq. 1). The transformed feature vectors \(\vec {x}^{(r)}\) were then passed to a many-to-one LSTM network in both channels, as shown in Fig. 5a. The hidden states \(h_{t}\) from the last feature vector in both channels were then concatenated to create a fusion feature that was passed through a fully connected layer to estimate UPDRS III (Eq. 8).
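A minimal Keras sketch of this dual-channel structure follows; the hidden size and the loss are assumptions (the paper searched 16–224 hidden states and does not name its loss here):

import tensorflow as tf
from tensorflow.keras import layers, Model

N_H = 64  # number of hidden states (an assumption)

# Channel inputs: variable-length sequences of short-term (26-dim)
# and long-term (32-dim) hand-crafted feature vectors.
short_in = layers.Input(shape=(None, 26))
long_in = layers.Input(shape=(None, 32))

# Linear transformation to N_H dims, then a many-to-one LSTM per channel.
short_h = layers.LSTM(N_H)(layers.Dense(N_H)(short_in))
long_h = layers.LSTM(N_H)(layers.Dense(N_H)(long_in))

# Concatenate the last hidden states and regress the UPDRS-III score.
fused = layers.Concatenate()([short_h, long_h])
updrs = layers.Dense(1)(fused)

model = Model([short_in, long_in], updrs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mae")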
Transfer learning: Due to the limited number of data rounds in the PD dataset used to train the LSTM network, we applied transfer learning to improve the LSTM performance. The LSTM network's weights to estimate UPDRS III were not randomly initialized; instead, they were transferred from an LSTM network trained to perform activity classification. Next, only the last layer of the LSTM network and the fully connected layers were fine-tuned for estimating UPDRS III. PAMAP2 dataset was used to train the LSTM network for activity classification initially. Note that transfer learning could only be used in the case of the hand-crafted features. Although the sensors in PD and PAMAP2 were placed on the same extremity, the axes' orientations and the placement on the same extremity were different. Therefore, the learned deep model's weights on PAMAP2 were not transferable to the PD dataset when the raw signals were used. However, extracting features and averaging them across axes eliminated the effect of having different sensors' orientation in the PAMAP2 dataset and PD dataset.
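A sketch of the freeze-and-fine-tune step, assuming a Keras model whose final layers are the ones to be adapted; the number of trainable layers is a hypothetical parameter that depends on how the network was built:

import tensorflow as tf

def fine_tune(model, n_trainable=3, lr=1e-3):
    # Keep the weights transferred from the activity-recognition task
    # frozen and retrain only the last n_trainable layers (here meant
    # to be the last LSTM layer plus the fully connected layers).
    for layer in model.layers[:-n_trainable]:
        layer.trainable = False
    for layer in model.layers[-n_trainable:]:
        layer.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mae")
    return model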
1D CNN-LSTM network
We used CNN as a data-driven feature-extraction method to explore the raw signals. We fed the feature maps of the CNN into an LSTM network to model the feature maps' temporal dependencies and estimate UPDRS III. Our proposed 1D CNN-LSTM is shown in Fig. 5b. It consisted of three convolutional blocks. The first block consisted of two convolutional layers with 32 filters of width 8, followed by a max-pooling layer. The second block had the same structure but was deeper, with 64 filters. The third block had one convolutional filter and a global average pooling layer, representing the bottleneck that extracts short-term, data-driven features. These features were fed to a many-to-one LSTM network followed by two fully connected layers (96 nodes and one output node) to estimate UPDRS III. Increasing the number of convolutional layers was done by repeating Conv Block-2 multiple times.
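The following Keras sketch mirrors this two-stage structure; the bottleneck width and padding are assumptions where the paper does not specify them:

import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters, width=8):
    # Two 1D convolutions (each followed by ReLU) and max-pooling.
    x = layers.Conv1D(filters, width, activation="relu", padding="same")(x)
    x = layers.Conv1D(filters, width, activation="relu", padding="same")(x)
    return layers.MaxPooling1D(2)(x)

# Stage 1: CNN over 5-s raw windows (320 samples x 6 gyroscope axes).
inp = layers.Input(shape=(320, 6))
x = conv_block(inp, 32)                    # Conv Block-1
x = conv_block(x, 64)                      # Conv Block-2 (repeatable)
x = layers.Conv1D(64, 8, activation="relu", padding="same")(x)
feat = layers.GlobalAveragePooling1D()(x)  # short-term features
cnn = models.Model(inp, feat)

# Stage 2: sequence of per-window CNN features -> LSTM -> UPDRS III.
seq_in = layers.Input(shape=(None, feat.shape[-1]))
h = layers.LSTM(96)(seq_in)
h = layers.Dropout(0.5)(h)
out = layers.Dense(1)(layers.Dense(96, activation="relu")(h))
head = models.Model(seq_in, out)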
Training a good-performing CNN-LSTM model on a relatively limited number of training rounds could be challenging. We applied data augmentation by allowing for a random start for each round of ADL and a 0.5-dropout layer to overcome this challenge. Besides, we proposed a novel two-stage training. In the first stage, a CNN network with a fully connected layer was trained on 5-s windows to estimate UPDRS III while extracting short-term features. The best CNN's weights selected based on validation data were saved. In the second stage, the fully connected layer of the pre-trained CNN was discarded since they are not extracting new features. Next, the extracted features using the CNN model (i.e., from the global averaging layer) were fed to the LSTM network to estimate UPDRS III for each ADL round.
2D CNN-LSTM network
Many PD symptoms have spectral signatures: tremor manifests at 4–6 Hz and bradykinesia at low frequencies. Therefore, the CNN network can learn new temporal and spectral features if trained on the time–frequency representations of the raw signals. For this purpose, we generated spectrograms by applying a short-time Fourier transform to the 1-min windows and then taking the magnitude. We used a 5-s Kaiser window with 90% overlap. The spectrograms of the windows from each axis were stacked to construct a time \(\times\) frequency \(\times\) axes tensor and were fed to a 2D CNN-LSTM network, as shown in Fig. 5c. The 2D CNN-LSTM consisted of three convolutional blocks. The first block was two convolutional layers with 32 filters of size five by five, followed by a max-pooling layer. The rest of the architecture of the 2D CNN-LSTM was similar to the 1D CNN-LSTM described before, except for using filters of size 5 \(\times\) 5. In addition, the same two-stage training strategy described before was used to address the limited training data.
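A sketch of the spectrogram construction with SciPy; the Kaiser beta parameter is an assumption, since the paper specifies only the 5-s window and the 90% overlap:

import numpy as np
from scipy.signal import stft, windows

FS = 64  # Hz

def spectrogram_tensor(window_1min):
    # Magnitude short-time Fourier transform per axis, stacked into a
    # time x frequency x axes tensor for the 2D CNN-LSTM.
    nper = 5 * FS                        # 5-s analysis window
    nover = int(0.9 * nper)              # 90% overlap
    win = windows.kaiser(nper, beta=14)  # beta is an assumption
    specs = []
    for axis in range(window_1min.shape[1]):
        _, _, z = stft(window_1min[:, axis], fs=FS, window=win,
                       nperseg=nper, noverlap=nover)
        specs.append(np.abs(z).T)        # time x frequency
    return np.stack(specs, axis=-1)

tensor = spectrogram_tensor(np.random.randn(60 * FS, 6))  # hypothetical window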
The ensemble model
We explored the accuracy of UPDRS III estimation by considering the ensemble of the three models we developed. As shown in Fig. 5d, the ensemble of the previous models was performed by averaging the UPDRS-III scores from each model to get one estimation for each round of ADL.
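The fusion itself is a plain average of the three per-round estimates; a one-function sketch, where each model is treated as a callable mapping a round of data to one score:

import numpy as np

def ensemble_estimate(round_data, models):
    # Average the UPDRS-III scores produced by the individual deep
    # models to obtain a single estimate per round of ADL.
    return float(np.mean([m(round_data) for m in models]))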
Gradient Tree Boosting
Gradient Tree Boosting is a traditional machine-learning method widely used in practice for solving regression problems [38]. It is based on an ensemble of \(N_t\) weak regression trees (\(\{f_i\}_{i=1}^{N_t}\)) that estimate the output \(\hat{y}\), i.e., the UPDRS-III score, as follows:
$$\begin{aligned} \hat{y}\left( \vec {fv}_t\right) =\sum _{i=1}^{N_t} {f_i\left( \vec {fv}_t\right) } \end{aligned}$$
where \(f_i(\vec {fv}_t)=w_{q(\vec {fv}_t)}\) is the space of regression tree i with L leaves, \(q(\vec {fv}_t)\) is the structure of the tree that maps \(\vec {fv}_t\) to an index representing the corresponding tree leaf, and \(w \in \mathbb {R}^L\) denotes the leaf weights. Learning the regression trees is performed with an additive training strategy, learning one tree at each iteration to optimize the objective function, which includes the first- and second-order gradient statistics of the loss function.
The short- and long-term feature vectors (refer to the feature extraction section) were combined into one feature vector and fed into the Gradient Tree Boosting model. For every 5-s segment in a 1-min interval, the long-term feature vector \(\vec {fv}\) was repeated and concatenated with the corresponding short-term feature vector \(\vec {fv}\) to form a matrix of \(N_{Ws}\) feature vectors with (\(N_{SF}+N_{LF}\)) features (\(SL^{(r)} \in \mathbb {R}^{N_{Ws}^{(r)} \times (N_{SF}+N_{LF})}\)). The combined set \(\mathcal {D}_{TB}=\{ (SL^{(r)},y^{(r)}) \}_r^{N_R}\) was used to train and test the model. To estimate the UPDRS-III score \(\hat{y}^{(r)}\) of round r during testing, the model first estimates \(\hat{y}\) for each of the feature vectors in \(SL^{(r)}\); these estimates are then averaged to obtain the score for that round.
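A sketch of this baseline with the XGBoost scikit-learn wrapper; the matrices below are hypothetical stand-ins for the concatenated 58-dimensional (26 short-term + 32 long-term) feature vectors:

import numpy as np
import xgboost as xgb

# One row per 5-s segment: 26 short-term features concatenated with the
# repeated 32 long-term features of the enclosing 1-min window.
X_train = np.random.randn(500, 58)
y_train = np.random.uniform(0, 60, 500)  # per-segment UPDRS-III labels

reg = xgb.XGBRegressor(n_estimators=100, max_depth=5, learning_rate=0.1)
reg.fit(X_train, y_train)

# Test time: average the per-segment estimates within a round.
X_round = np.random.randn(48, 58)
updrs_round = float(reg.predict(X_round).mean())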
The UPDRS-III estimation methods were evaluated and compared using the data of the 24 PD subjects described in the dataset section using LOOCV. In addition, an inner split was applied to the training data to select a random 20% for validation. The mean and standard deviation of the training data in each cross-validation iteration were calculated and used to normalize the entire data. The developed dual-channel LSTM and CNN-LSTM networks were implemented in TensorFlow [39]. In each cross-validation iteration, the networks were trained for 200 epochs using the Adam optimizer [40]. During the training, the depth of the CNN and LSTM networks and the filter sizes were optimized by selecting the best performing model on the validation data (i.e., maximum validation \(\rho\)) and then evaluating it on the held-out test data. The depth of the CNNs was increased by repeating Conv Block-2 up to four times. The LSTM hyper-parameter space (number of layers: 1–3; number of hidden states: 16–224) was searched. Mini-batches of size 2 and a learning rate of 1e-3 were used during training. In each mini-batch, the signals of all the rounds were repeated to have a length equal to that of the longest round. In addition, before feeding the hand-crafted or data-driven features of each round to the network in each epoch, a random start point was initialized and data prior to the start point were excluded. This augmentation approach was applied to prevent the LSTM network from memorizing the training sequence.
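A schematic of this evaluation loop might look as follows; the helper functions and the use of scikit-learn utilities are our own assumptions, not the authors' implementation.

# Sketch of subject-based LOOCV with an inner 20% validation split and
# normalization from training-fold statistics (assumptions noted above).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, train_test_split

def subject_loocv(X, y, subjects, build_and_train, evaluate):
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        X_tr, y_tr = X[train_idx], y[train_idx]
        # inner split: a random 20% of the training data for validation
        X_fit, X_val, y_fit, y_val = train_test_split(X_tr, y_tr, test_size=0.2)
        mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)  # training-fold statistics
        model = build_and_train((X_fit - mu) / sd, y_fit,
                                (X_val - mu) / sd, y_val)
        scores.append(evaluate(model, (X[test_idx] - mu) / sd, y[test_idx]))
    return float(np.mean(scores))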
The Gradient Tree Boosting algorithm was implemented using the XGBoost library [38]. The learning rate was 0.1. A grid search was applied to find the optimal number of regression trees in the range of 10–200 with a step of 20. The tree depth was in the range of 3–10 with a step of 2. The percentage of features used per tree was in the range of 10–50% with a step of 10%.
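A sketch of this search using the XGBoost scikit-learn wrapper is shown below; the cross-validation object and the mapping of the stated ranges onto parameter names are our assumptions.

from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

param_grid = {
    "n_estimators": list(range(10, 201, 20)),       # number of regression trees
    "max_depth": list(range(3, 11, 2)),             # tree depth
    "colsample_bytree": [0.1, 0.2, 0.3, 0.4, 0.5],  # share of features per tree
}
model = XGBRegressor(learning_rate=0.1, objective="reg:squarederror")
search = GridSearchCV(model, param_grid, scoring="neg_mean_squared_error", cv=5)
# search.fit(SL_train, y_train)  # combined feature matrix, UPDRS-III targets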
The PAMAP2 activity dataset is publicly available, and the PD dataset used and analyzed during the current work are available from the corresponding author on reasonable request.
PD::
Parkinson's disease
PwP::
People with Parkinson's disease
UPDRS III::
Unified Parkinson's Disease Rating Scale-part III
LSTM::
Long Short-Term Memory
CNN::
Convolutional Neural Network
ANN::
Artificial Neural Network
SVR::
Support Vector Regression
ADL::
Activities of daily living
LOOCV::
Subject-based, leave-one-out cross-validation
PAMAP2::
Public dataset of motion signals
Jankovic J. Parkinson's disease: clinical features and diagnosis. J Neurol Neurosurg Psychiatry. 2008;79(4):368–76.
Perez-Lloret S, Negre-Pages L, Damier P, Delval A, Derkinderen P, Destée A, Meissner WG, Tison F, Rascol O, Group CS, et al. L-dopa-induced dyskinesias, motor fluctuations and health-related quality of life: the copark survey. Eur J Neurol 2017;24(12):1532–1538
Parkinson Study Group. Levodopa and the progression of Parkinson's disease. N Engl J Med. 2004;351(24):2498–508.
Goetz CG, Nutt JG, Stebbins GT. The unified dyskinesia rating scale: presentation and clinimetric profile. Mov Disord. 2008;23(16):2398–403.
Cova I, Priori A. Diagnostic biomarkers for Parkinson's disease at a glance: where are we? J Neural Transm. 2018;125(10):1417–32.
Espay AJ, Hausdorff JM, Sánchez-Ferro Á, Klucken J, Merola A, Bonato P, Paul SS, Horak FB, Vizcarra JA, Mestre TA, et al. A roadmap for implementation of patient-centered digital outcome measures in Parkinson's disease obtained using mobile health technologies. Mov Disord. 2019;34(5):657–63.
Rochester L, Mazzà C, Mueller A, Caulfield B, McCarthy M, Becker C, Miller R, Piraino P, Viceconti M, Dartee WP, et al. A roadmap to inform development, validation and approval of digital mobility outcomes: the mobilise-d approach. Digit Biomarkers. 2020;4(1):13–27.
Griffiths R, Kotschet K, Arfon S, Xu Z, Johnson W, Drago J, Evans A, Kempster P, Raghav S, Horne M. Automated assessment of bradykinesia and dyskinesia in Parkinson's disease. J Parkinson Dis. 2012;2(1):47–55.
Samà A, Pérez-López C, Rodríguez-Martín D, Català A, Moreno-Aróstegui JM, Cabestany J, de Mingo E, Rodríguez-Molinero A. Estimating bradykinesia severity in Parkinson's disease by analysing gait through a waist-worn sensor. Comput Biol Med. 2017;84:114–23.
Pan D, Dhall R, Lieberman A, Petitti DB. A mobile cloud-based Parkinson's disease assessment system for home-based monitoring. JMIR mHealth and uHealth. 2015;3(1):29.
Dai H, Zhang P, Lueth TC. Quantitative assessment of Parkinsonian tremor based on an inertial measurement unit. Sensors. 2015;15(10):25055–71.
Pulliam C, Eichenseer S, Goetz C, Waln O, Hunter C, Jankovic J, Vaillancourt D, Giuffrida J, Heldman D. Continuous in-home monitoring of essential tremor. Parkinsonism Relat Disord. 2014;20(1):37–40.
Pulliam C, et al. Continuous assessment of Levodopa response in Parkinson's disease using wearable motion sensors. IEEE TBME. 2018;65(1):159–64.
Giuberti M, Ferrari G, Contin L, Cimolin V, Azzaro C, Albani G, Mauro A. Automatic updrs evaluation in the sit-to-stand task of Parkinsonians: kinematic analysis and comparative outlook on the leg agility task. IEEE J Biomed Health Inform. 2015;19(3):803–14.
Rodríguez-Molinero A, Samà A, Pérez-López C, Rodríguez-Martín D, Alcaine S, Mestre B, Quispe P, Giuliani B, Vainstein G, Browne P, et al. Analysis of correlation between an accelerometer-based algorithm for detecting Parkinsonian gait and updrs subscales. Front Neurol. 2017;8:431.
Zhao A, et al. A hybrid spatio-temporal model for detection and severity rating of Parkinson's disease from gait data. Neurocomputing. 2018;315:1-8.
Parisi F, et al. Body-sensor-network-based kinematic characterization and comparative outlook of UPDRS scoring in leg agility, sit-to-stand, and gait tasks in Parkinson's disease. IEEE J BHI. 2015;19(6):1777–93.
Zhan A, Mohan S, Tarolli C, Schneider RB, Adams JL, Sharma S, Elson MJ, Spear KL, Glidden AM, Little MA, et al. Using smartphones and machine learning to quantify Parkinson disease severity: the mobile Parkinson disease score. JAMA Neurol. 2018;75(7):876–80.
Pissadaki E, et al. Decomposition of complex movements into primitives for Parkinson's disease assessment. IBM J Res Dev. 2018;62(1):5–1.
Hammerla N, Andras P, Rochester L, Ploetz T. PD disease state assessment in naturalistic environments using deep learning. AAAI Conference on Artificial Intelligence. 2015;1742–1748.
Grover S, Bhartia S, Yadav A, Seeja K, et al. Predicting severity of Parkinson's disease using deep learning. Proc Comput Sci. 2018;132:1788–94.
Wan S, et al. Deep multi-layer perceptron classifier for behavior analysis to estimate Parkinson's disease severity using smartphones. IEEE Access. 2018;6:36825–33.
Hssayeni MD, Adams JL, Ghoraani B. Deep learning for medication assessment of individuals with Parkinson's disease using wearable sensors. In: 2018 40th IEEE EMBC, 2018;1–4. IEEE.
Hssayeni MD, Jimenez-Shahed J, Burack MA, Ghoraani B. Symptom-based, dual-channel lstm network for the estimation of unified Parkinson's disease rating scale iii. In: 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), 2019;1–4. IEEE.
Butt AH, Rovini E, Fujita H, Maremmani C, Cavallo F. Data-driven models for objective grading improvement of parkinson's disease. Ann Biomed Eng. 2020;48(12):2976–87.
Stochl J, Boomsma A, Ruzicka E, Brozova H, Blahus P. On the structure of motor symptoms of Parkinson's disease. Mov Disord. 2008;23(9):1307–12.
Vassar SD, et al. Confirmatory factor analysis of the motor unified Parkinson's disease rating scale. Parkinson's Dis. 2012;2012:719167.
Abrami A, Heisig S, Ramos V, Thomas KC, Ho BK, Caggiano V. Using an unbiased symbolic movement representation to characterize parkinson's disease states. Sci Rep. 2020;10(1):1–12.
Sagi O, Rokach L. Ensemble learning: A survey. Wiley Interdiscipl Rev. 2018;8(4):1249.
Dyagilev K, Saria S. Learning (predictive) risk scores in the presence of censoring due to interventions. Mach Learn. 2016;102(3):323–48.
Mera TO, et al. Objective motion sensor assessment highly correlated with scores of global Levodopa-induced dyskinesia in Parkinson's disease. J Parkinsons Dis. 2013;3(3):399.
Reiss A, Stricker D. Introducing a new benchmarked dataset for activity monitoring. In: 2012 16th International Symposium on Wearable Computers, 2012;108–109. IEEE.
Ramdhani RA, Khojandi A, Shylo O, Kopell BH. Optimizing clinical assessments in Parkinson's disease through the use of wearable sensors and data driven modeling. Front Computat Neurosci. 2018;12:72.
Salarian A. Ambulatory monitoring of motor functions in patients with Parkinson's disease using kinematic sensors. PhD thesis 2006.
Patel S, et al. Monitoring motor fluctuations in patients with Parkinson's disease using wearable sensors. IEEE Trans on Inf Tech Biomed. 2009;13(6):864–73.
Hssayeni MD, Burack MA, Jimenez-Shahed J, Ghoraani B. Assessment of response to medication in individuals with Parkinson's disease. Med Eng Phys. 2019;67:33-43.
Zaremba W, Sutskever I, Vinyals O. Recurrent neural network regularization. arXiv preprint 2014. arXiv:1409.2329.
Chen T, Guestrin C. XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016;785–794. ACM.
Abadi M, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Software available from tensorflow.org 2015. http://tensorflow.org/
Kingma D, Ba J. Adam: A method for stochastic optimization 2014. arXiv preprint arXiv:1412.6980.
The dataset was supported by Small Business Innovation Research grant offered by NIH to Cleveland Medical Devices (1R43NS071882-01A1; T. Mera, PI) and the National Institute on Aging to Great Lakes NeuroTechnologies Inc. (5R44AG044293). The authors would like to acknowledge the use of the GPU services provided by Research Computing at the Florida Atlantic University.
This study was supported by National Science Foundation with Grant Numbers 1936586 and 1942669.
Department of Computer and Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL, 33431, USA
Murtadha D. Hssayeni & Behnaz Ghoraani
Icahn School of Medicine at Mount Sinai, New York, NY, USA
Joohi Jimenez-Shahed
Department of Neurology, University of Rochester Medical Center, Rochester, NY, USA
Michelle A. Burack
Murtadha D. Hssayeni
Behnaz Ghoraani
All the authors performed the conceptualization and methodology; data curation, software, and implementation was performed by MDH; validation and formal analysis were performed by MDH, JJ-S., and BG.; writing—original draft preparation by MDH and BG; writing—review and editing by all the authors. All authors read and approved the final manuscript.
Correspondence to Behnaz Ghoraani.
The protocol was approved by the institutional review boards of the University of Rochester and Great Lakes NeuroTechnologies. All the participants provided written informed consent.
contains the ensemble model estimations of UPDRS III over time for all 24 PwPs, and the total UPDRS-III scores before and one hour after taking the PD medications as estimated by the developed single models.
Hssayeni, M.D., Jimenez-Shahed, J., Burack, M.A. et al. Ensemble deep model for continuous estimation of Unified Parkinson's Disease Rating Scale III. BioMed Eng OnLine 20, 32 (2021). https://doi.org/10.1186/s12938-021-00872-w
Deep models
Wearable sensors
How do I find the bias of an estimator?
My notes say
$$B(\hat\theta) = E(\hat\theta) - \theta $$
And I understand that the bias is the difference between a parameter and the expectation of its estimator. What I don't understand is how to calculate the bias given only an estimator? My notes lack ANY examples of calculating the bias, so if anyone could please give me an example, I could understand it better!
H4-math
You do it by calculating the expectation.
– Michael Hardy
The concept becomes clearer with examples. Wikipedia has a few. You could also try Google.
– T.J. Gaffney
So if I was given the estimator $\hat p = X/n$ (p hat, haven't quite figured out the editing for this yet, sorry) and I want to find the bias of that, do I start by finding the expectation of $\hat p$? @MichaelHardy
– H4-math
Yes.
The concept of bias is related to the sampling distribution of the statistic. Consider, for example, a random sample $X_{1},X_{2},\cdots X_{n}$ from $N(\mu, \sigma^{2})$. Then it is easy to observe that the sampling distribution of the sample mean $\bar{X}$ is $N(\mu,\frac{1}{n}\sigma^{2})$. We note that $E(\bar{X})=\mu$; that is, the center of the sampling distribution of $\bar{X}$ is also $\mu$. Now consider the statistics \begin{equation*} S_{1}^{2}=\frac{1}{n-1}\sum_{i=1}^{n}(X_{i}-\bar{X})^{2},\qquad\qquad S_{2}^{2}=\frac{1}{n}\sum_{i=1}^{n}(X_{i}-\bar{X})^{2} \end{equation*} as estimators of the parameter $\sigma^{2}$. It can be shown that \begin{equation*} E(S_{1}^{2})=\sigma^{2} \mbox{ and } E(S_{2}^{2})=\frac{n-1}{n} \sigma^{2} \end{equation*} The sampling distribution of $S_{1}^{2}$ is centered at $\sigma^{2}$, whereas that of $S_{2}^{2}$ is not. We say that the estimator $S_{2}^{2}$ is a biased estimator for $\sigma^{2}$. Now, using the definition of bias, we get the amount of bias in $S_{2}^{2}$ in estimating $\sigma^{2}$.
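For completeness, $E(S_{2}^{2})$ can be computed directly: $$E\left[\sum_{i=1}^{n}(X_{i}-\bar{X})^{2}\right] = E\left[\sum_{i=1}^{n}X_{i}^{2}\right] - nE[\bar{X}^{2}] = n(\sigma^{2}+\mu^{2}) - n\left(\frac{\sigma^{2}}{n}+\mu^{2}\right) = (n-1)\sigma^{2},$$ so $E(S_{2}^{2})=\frac{n-1}{n}\sigma^{2}$ and the bias is $B(S_{2}^{2})=E(S_{2}^{2})-\sigma^{2}=-\frac{\sigma^{2}}{n}$.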
L.V.Rao
Nice simple example (+1).
– BruceET
Roughly speaking there are two favorable attributes for an estimator $T$ of a parameter $\tau$, accuracy and precision. Accuracy is lack of bias and precision is small variance. If an estimator is unbiased, then we just look at its variance. If it is biased we sometimes look at 'mean squared error', which is $$MSE_\tau = E[(T - \tau)^2] = B^2(T) + Var(T).$$
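For reference, this decomposition follows by adding and subtracting $E(T)$ inside the square: $$E[(T-\tau)^2] = E\big[\big((T-E(T)) + (E(T)-\tau)\big)^2\big] = \underbrace{E[(T-E(T))^2]}_{Var(T)} + \underbrace{(E(T)-\tau)^2}_{B^2(T)},$$ since the cross term $2(E(T)-\tau)\,E[T-E(T)]$ vanishes.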
As an example, consider data $X_1, X_2, \dots, X_n \stackrel{iid}{\sim} UNIF(0, \tau).$ The estimator $T_1 = 2\bar X$ is unbiased, and the estimator $T_2 = X_{(n)} = \max(X_i)$ is biased because $E(T_2) = \frac{n}{n+1}\tau.$
As a substitute for a (fairly easy) analytical proof, here is a simulation to show that $T_2$ is 'better' in the sense that its MSE is smaller. We look at a million samples of size $n = 5$ from $UNIF(0, \tau = 1).$
m = 10^6; n = 5; tau = 1
x = runif(m*n, 0, 1)
DTA = matrix(x, nrow=m) # each row a sample of n
t1 = 2*rowMeans(DTA); t2 = apply(DTA, 1, max)
mean(t1); mean(t2)
## 0.9997444 # aprx E(T1) = 1 unbiased
## 0.8332033 # aprx E(T2) = 5/6 biased
n/(n+1)
## 0.8333333
var(t1); var(t2)
## 0.06665655 # aprx Var(T1)
## 0.01983109 # aprx Var(T2) < Var(T1)
mse.t1 = mean((t1-tau)^2); mse.t2 = mean((t2-tau)^2)
mse.t1; mse.t2
## 0.06665655 # aprx MSE(T1)
## 0.04765219 # aprx MSE(T2) < MSE(T1)
We see that the smaller variance of $T_2$ is enough to overcome its bias to give it the smaller MSE. It is possible to 'unbias' $T_2$ by multiplying by $(n+1)/n$ to get $T_3 = \frac{6}{5}T_2,$ which is unbiased and still has smaller variance than $T_1:$ $Var(T_3) \approx 0.029 < Var(T_1) \approx 0.067.$ The simulated distributions of the three estimators are shown in the figure below.
BruceET
How to find the $LU$ factorization of a matrix $A$ when elimination breaks down
Let's say I have the following matrix $A \in \mathbb{R}^{3\times3}$
$$A = \begin{bmatrix} x & x & x \\ x & x & x \\ x & x & x \\ \end{bmatrix}$$
How can I find the $LU$ factorization of this matrix $A$, if elimination breaks down, as each row is just a scalar multiple of each other?
The method I've used in the past to factorize $A$ into $LU$, has been:
To reduce $A$ to $U$ by multiplying $A$ by elimination matrices $E_{ij}E_{kl}...E_{mn}$
Then finding $L$ as the inverse of the product of those elimination matrices i.e. $L = E_{mn}^{-1}...E_{kl}^{-1}E_{ij}^{-1}$, but this method only works if elimination doesn't break down (i.e. no zero entries in pivot positions)
Is there a more general way to factorize matrices that allows elimination to break down and still find the $LU$ factorization?
linear-algebra matrices numerical-methods matrix-decomposition
Perturbative
So $\begin{bmatrix} x & x & x \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ is what you get after you try to clear the first column below its pivot. This is already an upper triangular matrix which is row-equivalent to $A$, even though you "expected" to need to do more row operations to clear out the second column. So this matrix can serve as your $U$.
What did we do to make this happen? We subtracted row $1$ from row $2$ and we subtracted row $1$ from row $3$. Denote $E_{ij}(y)$ as the matrix with $1$s in the diagonal, $y$ in the $(i,j)$ position, and zeros elsewhere. (This is not standard notation, I just made it up.) Then $U=E_{31}(-1) E_{21}(-1) A$. $E_{31}$ and $E_{21}$ are invertible, so you can invert them to get $L$. You'll find $L=\begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$.
So in this example elimination actually didn't break down, you just didn't need to use a second pivot to clear out the second column. By contrast, there are other cases where elimination does break down, in the sense that you have a zero pivot with nonzero entries below it. For example, $A=\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 2 \\ 3 & 1 & 1 \end{bmatrix}$. With this matrix, removing the $1$ in the $(2,1)$ position also removes the $1$ in the $(2,2)$ position, so the second column does not have a pivot in the correct position.
In this case a row exchange is required. A factorization which tracks this row exchange is commonly referred to as a $PA=LU$ decomposition, where $P$ is a permutation matrix. In Matlab we can write
[L,U,P]=lu(A).
Another way to write it would be $A=(P^T L)U$; Matlab calls this aggregate $P^T L$ a "psychologically lower triangular matrix". This is what you get in Matlab if you make the obvious call to lu, i.e.
[L,U]=lu(A).
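For concreteness (our own worked computation, not part of the original answer), the $PA=LU$ factors of the example above are: $$P=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix},\qquad PA=\begin{bmatrix} 1 & 1 & 1 \\ 3 & 1 & 1 \\ 1 & 1 & 2 \end{bmatrix}=\underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}}_{L}\underbrace{\begin{bmatrix} 1 & 1 & 1 \\ 0 & -2 & -2 \\ 0 & 0 & 1 \end{bmatrix}}_{U}.$$ Multiplying $LU$ row by row confirms $PA=LU$.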
Ian
One obvious LU decomposition would be $$ \begin{pmatrix} x&x&x \\ x&x&x \\ x&x&x \end{pmatrix} = \begin{pmatrix} x&& \\ x&0& \\ x&0&0 \end{pmatrix} \begin{pmatrix} 1&1&1 \\ &0&0 \\ &&0 \end{pmatrix} $$
hmakholm left over Monica
The matrix having at least one singular value = 0 (or too close to 0 numerically) is a prerequisite for row elimination breaking down. One could do an SVD and then do operations on the rows/columns corresponding to nonzero singular values. One could interpret Henning's factorization as such an SVD with an invisible $\left[\begin{array}{ccc}1&0&0\\0&0&0\\0&0&0\end{array}\right]$ matrix in between, and then we can move the x into the singular position to get: $$\left[\begin{array}{ccc}1&0&0\\1&0&0\\1&0&0\end{array}\right]\left[\begin{array}{ccc}x&0&0\\0&0&0\\0&0&0\end{array}\right]\left[\begin{array}{ccc}1&1&1\\0&0&0\\0&0&0\end{array}\right]$$
Not exactly normalized yet, but you get the idea.
mathreadler
I think that depends on what you mean by "breaking down". If you mean "requiring a row interchange", then this is incorrect. In rank deficient problems this is correct, though. – Ian Aug 6 '16 at 23:21
Economic Opportunities, Emigration and Exit Prisoners
Published online by Cambridge University Press: 25 August 2020
Carlo M. Horz and Moritz Marbach
Carlo M. Horz
Department of Political Science, Texas A&M University
Moritz Marbach*
Department of Humanities, Social and Political Sciences, ETH Zurich
*Corresponding author. Email: [email protected].
How do economic opportunities abroad affect citizens' ability to exit an authoritarian regime? This article theorizes the conditions under which authoritarian leaders will perceive emigration as a threat and use imprisonment instead of other types of anti-emigration measures to prevent mass emigration. Using data from communist East Germany's secret prisoner database that we reassembled based on archival material, the authors show that as economic opportunities in West Germany increased, the number of East German exit prisoners – political prisoners arrested for attempting to cross the border illegally – also rose. The study's causal identification strategy exploits occupation-specific differences in the changing economic opportunities between East and West Germany. Using differential access to West German television, it also sheds light on the informational mechanism underlying the main finding; cross-national data are leveraged to present evidence of the external validity of the estimates. The results highlight how global economic disparities affect politics within authoritarian regimes.
Keywords: authoritarian politics; immigration; emigration; refugees; repression; imprisonment; human rights violations
British Journal of Political Science, Volume 52, Issue 1, January 2022, pp. 21–40
According to the CIRI Human Rights Project (Cingranelli, Richards and Clay 2014), the vast majority of authoritarian regimes restrict citizens' right to move abroad. Figure 1 illustrates the prevalence of emigration restrictions worldwide for authoritarian regimes, for the earliest and latest years for which CIRI data are available. While the absolute number of authoritarian regimes declined considerably between 1981 and 2011 (from seventy-seven to twenty-one), the proportion restricting the freedom to move abroad increased (from 78 to 81 per cent).Footnote 1 Authoritarian regimes can restrict the freedom to move abroad in many ways. Burmese citizens, for example, are required to obtain clearance from the Ministry of Finance and Revenue and a departure permit from the Ministry of Immigration and Population. Moreover, Burmese passports are typically valid for no more than three years, and the cost of the accompanying paperwork is roughly equal to a year's salary (Cingranelli, Richards and Clay 2014). Another example is Eritrea, which has an indefinite military conscription that makes it impossible for young male citizens to leave the country legally (Human Rights Watch 2019). Lastly, North Korea rarely allows its citizens to move or even travel abroad, and imprisonments due to attempted emigration are common. The records of the Database Center for North Korean Human Rights document 5,413 violations of the right to movement and residence. While these violations constitute only 12 per cent of all documented human rights violations, this category is the second-most frequently documented type of violation (after violations of the right to integrity of the person).Footnote 2 Similarly, the Castro regime in Cuba at times used the threat of prison to control emigration flows. Historically, one of the most notorious examples of an authoritarian regime using harsh anti-emigration measures was the German Democratic Republic (GDR or East Germany). Emigration was effectively criminalized, and in 1961 the regime constructed the Berlin Wall to stop emigration altogether.Footnote 3 While the wall slowed down mass emigration considerably, many GDR citizens still tried to leave. While some succeeded, many failed. Our data, described in detail below, show that on average, 1,900 individuals were charged with either an unlawful emigration attempt or support of such an action each year.
Figure 1. Freedom of foreign movement restrictions in authoritarian polities around the world
Note: countries not classified as authoritarian polities (Polity IV score of strictly less than −5) are in light gray.
Source: CIRI project supplemented with information from Hannum (1987) for the German Democratic Republic and the People's Democratic Republic of Yemen.
In this article, we analyze the conditions under which authoritarian regimes employ harsh anti-emigration measures such as imprisonment, focusing on the role of improvements in economic conditions in other countries. Our theoretical discussion builds on the insight that citizens seek to emigrate in order to improve their economic well-being. When authoritarian rulers perceive emigration as a threat – that is, rulers would be better off if they were to prevent citizens from leaving the country – they have an incentive to employ measures that decrease the likelihood of emigration. We employ a simple theoretical framework to discuss the use of economic reforms, border walls, and more targeted measures such as social programs, passport costs or imprisonment. We theorize about the conditions under which imprisonment will be used, analyzing among other factors the role of budget constraints, the loyalty of security forces and the quality of information at the regime's disposal. We derive two hypotheses: if the regime imprisons people who attempt to emigrate, we expect first that the number of exit prisoners (defined as political prisoners who were arrested for attempting to leave the country) will increase when economic opportunities abroad improve. Secondly, we expect there to be more exit prisoners when citizens are better informed about economic opportunities abroad.
Testing these two hypotheses is challenging for two reasons. First, authoritarian regimes are rarely transparent about what is happening within their borders (Hollyer, Rosendorff and Vreeland 2011), and this is especially true of their political prisoners. Data from public sources such as newspaper reports are likely biased by selection effects – more successful attempts to capture political prisoners are less likely to be reported. Secondly, identifying the causal effect of changing economic opportunities abroad on the number of exit prisoners is challenging because of common causes that affect both the economic conditions abroad and the number of restrictions on the freedom to move abroad.
To overcome these empirical challenges, we make use of formerly secret administrative records of the GDR in combination with data on economic growth in the neighboring Federal Republic of Germany (FRG or West Germany) during the same time period. Specifically, we reconstruct the entire database of political and ordinary prisoners in use by the GDR between 1979 and 1988. Since the database, which contains about half a million entries, was designed to manage the prison system in the GDR – and was never intended for release to the public – it contains the universe of exit prisoners in East Germany during this period. Using this database and administrative data on the number of available jobs by occupation group in West Germany, we build a quarterly panel of occupation cohorts. We then use the variance within occupation groups and between quarters to identify and estimate the causal effect of the number of open positions on the number of exit prisoners.
We show that improvements in economic conditions in West Germany increased the number of people arrested for trying to emigrate. Our estimates imply that an increase of about 1,000 open positions in West Germany resulted in one additional exit prisoner in East Germany. We also demonstrate that, consistent with our second hypothesis, counties that had access to West German television (TV), which reported on economic conditions in West Germany, experienced significantly more imprisonments than those without access to West German TV.
While we empirically focus on the case of East and West Germany, our argument is general: economic disparities between countries affect the manner in which many authoritarian regimes restrict emigration. In the last section of this article, we return to the general argument in which restrictions on emigration can be more subtle (high passport costs, costly exit visas and severe taxes on emigration). Using cross-country panel data, we show that emigration restrictions are systematically related to neighboring countries' economic opportunities. This finding strengthens our confidence in the external validity of our estimates for East Germany.
An extensive literature examines coercive activities by authoritarian regimes, typically using indices of repressive behavior at the country-year level, and domestic political and economic factors as explanatory variables (see, for example, Hill and Jones 2014; Svolik 2012). The theoretical link between these variables is usually protest behavior (for example, Shadmehr 2014; Siegel 2011; Svolik 2013). For example, a country's high level of inequality may increase citizens' level of discontent, which in turn motivates the regime to employ repression (Acemoglu and Robinson 2000). We take a different approach by (1) measuring a specific kind of repression, namely imprisonment at the individual level (similar to Ritter and Conrad 2016; Truex 2019), (2) using economic conditions abroad as our main explanatory variable and (3) using emigration decisions as a link between economic opportunities abroad and regime behavior (as in the formal literature on exit-voice-loyalty; see, for example, Clark, Golder and Golder 2017; Gehlbach 2006). The most closely related articles are Danneman and Ritter (2014), who analyze how authoritarian repression is affected by civil wars abroad (by contrast, we analyze the effect of economic opportunities abroad on repression), and Miller and Peters (2020), who study the effect of migration and remittances on a country's regime status (by contrast, we consider migration decisions to be a link between our main independent and dependent variables). Finally, our primary empirical case, East Germany, has been the subject of several studies in political science (Crabtree, Darmofal and Kern 2015; Kern and Hainmueller 2009; Lohmann 1994) as well as in sociology and history (Fulbrook 1995; Pfaff 2006). However, these studies typically focus on the exit and protest decisions that ultimately brought down the GDR in 1989. We are the first to quantitatively examine how the number of exit prisoners changed in the decade before the fall of the regime.
Our results imply that while the recorded number of international migrants is large (244 million in 2015 (UN 2016)), this number is only a lower bound: some would-be emigrants might have ended up in prison instead. Consequently, without restrictions and authoritarian coercion, the number of international migrants would be higher. Conversely, some of the variation in restrictions and repression is due to anticipated cross-national migration, which is an important link that has a profound impact on politics, creating interdependence between countries (Keohane and Nye 1972). Indirectly, we thus also contribute to the literature on the effect of immigration on political outcomes (Leblang 2010; Salehyan 2008; Salehyan and Gleditsch 2006).
Why Citizens Leave
In any country, but especially in those ruled by an authoritarian regime, at least some proportion of the population is considering emigration. For this group of citizens, various factors determine whether or not they will actually attempt to leave the country. The literature on immigration distinguishes between push factors (domestic variables) and pull factors (international variables) (Lee 1966). While it is generally acknowledged that both types of factors influence citizens' decisions about whether to leave, the importance of economic opportunities abroad as a pull factor has been widely recognized ever since Ravenstein's influential essay on 'The Laws of Migration':
Bad or oppressive laws, heavy taxation, an unattractive climate, uncongenial social surroundings, and even compulsion (slave trade, transportation), all have produced and are still producing currents of migration, but none of these currents can compare in volume with that which arises from the desire inherent in most men to 'better' themselves in material respects (Ravenstein 1889).
Today's micro-economic models of immigration incorporate Ravenstein's insight by modeling the decision to emigrate as a function of wage differentials and (expected) lifetime earnings (Borjas 2014; Harris and Todaro 1970; Todaro 1969). Formal versions of Hirschman's (1970) exit-voice-loyalty framework that typically leave the motives of citizens abstract (Clark, Golder, and Golder 2017; Gehlbach 2006) are consistent with the notion that economic opportunities are an important pull factor.
Of course, citizens do not always have complete information about the economic opportunities in potential destination countries; the exact level of opportunities abroad typically remains unknown. Yet they likely have beliefs about the state of the economy abroad based on a variety of information they receive about other countries' economies from the media and their social networks. These beliefs are an important determinant in their emigration decisions.
When citizens learn about changes in economic opportunities abroad, the beliefs of some citizens about the attractiveness of other countries change. Whether or not these changes in beliefs affect citizens' behavior will depend on the relative strength of other push and pull factors, such as the cost of leaving and the extent to which economic interests matter relative to political and social interests. Importantly, it is plausible to assume that citizens are familiar with at least some of the measures taken by the regime to prevent emigration (discussed in detail below). Thus they have to weigh the potential benefits of better economic opportunities abroad against the increased risk of potentially harsher enforcement of anti-emigration measures. If the increase in risk is strong enough, citizens may choose to stay despite an increase in economic opportunities abroad. However, for a citizen previously indifferent about leaving or staying prior to a change in (beliefs about) opportunities, it is sometimes still worth attempting to emigrate despite the chance of being caught.Footnote 4
Emigration – Threat of Opportunity?
For an authoritarian regime, emigration can be a blessing or a curse. On the one hand, emigration can threaten a regime's survival if it causes a labor shortage. For example, according to the 'brain drain' hypothesis, countries undergoing mass emigration will face dire economic consequences because fewer and fewer (skilled) workers will produce goods and services (Docquier and Rapoport 2012). Consequently, to the extent that emigration causes a shortage of labor that ultimately hampers economic growth, emigration becomes a threat to an authoritarian leader. More generally, having fewer workers (of any level of skill) reduces the size of the pie available for redistribution or private goods allocation – which can lead to more conflict and rebellion (see Haggard and Kaufman 2018; Smith 2008). If emigrants take their savings with them, emigration can also lead to capital flight, which in turn causes further economic problems (see Pepinsky 2009). Non-economic considerations may play a role, too. If the political opposition has an easier time organizing itself abroad – for example, because their destination country is a democracy with guaranteed rights of assembly and expression – or if emigrants return and bring democratic norms with them, the likelihood of regime change increases with emigration (see Miller and Peters 2020). Emigrants can also transmit insider knowledge about the regime, such as information about atrocities. For authoritarian leaders, this may pose a security risk as other states might be compelled to impose sanctions on the regime.
On the other hand, emigration may not always be a threat to an authoritarian regime. Remittances by emigrants can stabilize and stimulate the economy, thereby benefiting the regime (Miller and Peters 2020). A regime might also encourage its opponents to emigrate, so these citizens no longer work to overthrow the regime. In this way, emigration provides a safety valve (Hirschman 1993). Similarly, if it is too costly to repress the opposition physically, forced exile might be optimal from the regime's perspective. For example, Esberg (2018) presents evidence from Chile under Pinochet that more prominent opposition figures were more likely to experience forced exile rather than the physical repression experienced by less prominent opposition figures. Regimes might also use emigrants as a bargaining chip to destabilize or pressure destination countries (Adelman 1998; Greenhill 2010; Tsourapas 2018; Zolberg, Suhrke and Aguayo 1989), as in the case of Cuba and the United States, or to extract financial concessions (such as inexpensive loans) from a destination country, as in the case of East and West Germany (Judt 2007).
In short, there is a wide variety of reasons why authoritarian regimes may view emigration positively or negatively. In general, the level of emigration, emigrants' destination preference (specifically, whether the destination country is a democracy), their skills in relation to the needs of the economy, and their political attitudes will determine whether a regime views emigration positively or negatively (see also Miller and Peters 2020). Empirically, it seems that most authoritarian regimes perceive emigration as a threat, as a large majority of regimes do restrict emigration at least to some degree (cf. Cingranelli, Richards and Clay 2014).
Imprisonment as a Measure Against Emigration
If an authoritarian regime perceives emigration as a threat, how will it react? Why do some authoritarian regimes imprison potential emigrants while others reform the economy or design welfare programs (for example, housing or pension programs) to encourage citizens to stay?
Focusing on anti-emigration measures that can be implemented within a regime's political structure (that is, excluding democratization), existing work on authoritarian politics and emigration suggests a simple framework to unify a diverse set of empirically observable anti-emigration measures. We distinguish authoritarian regimes' anti-emigration measures along two dimensions. On the one hand, a regime can either increase citizens' expected utility of staying (reward) or decrease their expected utility in attempting to leave (punish).Footnote 5 On the other hand, a measure can either be more targeted toward citizens who actually consider emigrating, or less targeted, and thus affect all citizens independently of their emigration intentions. Table 1 provides examples of measures that authoritarian regimes use.Footnote 6
Table 1. Distinguishing authoritarian regimes' anti-emigration measures along two dimensions with empirical examples in the cells
In general, the chosen strategy depends on the measure's effectiveness in preventing emigration and its feasibility. We first discuss when the regime will punish rather than reward before turning to the issue of targeting.
When regimes punish rather than reward
One important factor determining the effectiveness of punishment is the loyalty of the security apparatus. Punishment is a viable strategy to prevent emigration only when the police and army are willing to follow the leadership's orders and coerce citizens who seek to emigrate (see Dragu and Polborn 2013; Tyson 2018). By contrast, when the police and army tend to be less loyal, for example because of weak ties to the regime or because they are badly paid (Albrecht and Ohl 2016; Nepstad 2013), rewarding citizens for staying will be more effective. Of course, providing rewards is feasible only when the regime has sufficient resources. When the regime's pockets are full, for example because of a natural resources boom, it will likely distribute rewards that not only prevent emigration but also garner political goodwill (Wright, Frantz and Geddes 2015). By contrast, tight budgets favor repression because the agents tasked with such coercion are already paid, and so fewer additional resources are needed (for example, Escribà-Folch 2012). Another important concern is the by-products of the use of a particular measure – particularly the possibility of a backlash. In the context of preventing protests, repressive measures are frequently mentioned as creating a backlash. For example, citizens learn that the regime does not care about their welfare when they observe coercive measures (Bueno de Mesquita and Dickson 2007).
But a form of backlash with rewards is also possible. Consider the prospects for an economic reform or a new welfare program that will improve the incomes of groups that are likely to leave, thereby reducing their emigration rates. Four factors may stand in the way of such a reform. First, existing policy choices as well as ideological commitments that are inconsistent with the reform will make its implementation difficult, because citizens might infer that the regime is incompetent or weak, which in turn will endanger its survival (Ginkel and Smith 1999; Majumdar and Mukand 2004). Secondly, economic reforms create winners as well as losers (Haggard and Webb 1993). If the group that would be worse off as a result of the reform is sufficiently important to the regime's survival, the reform is unlikely to pass (see Acemoglu and Robinson 2006). Thirdly, the international context matters. Some economic reforms might be efficient, but the reaction of allies can damage their prospects, as the reforms in Prague and the subsequent invasion by members of the Warsaw Pact illustrate. Finally, by-products need not be negative. Coercive measures such as passport or visa costs or a long military service prevent emigration and provide an additional benefit to the regime in the form of revenue or military power. For example, in 2012, the Cuban state charged $150 for an exit permit – more than seven times the average monthly salary (Rainsford 2012). Eritrea's indefinite military conscription stops emigration and increases the country's military power, which is important given its rivalry with its larger neighbor, Ethiopia (see Stevis and Parkinson 2016). Finally, the sheer size of the potential emigrant population matters. As more and more citizens choose not to emigrate, measures that reward staying in the polity become increasingly expensive while measures that punish attempts to leave become increasingly cheap. Thus, the likely use of positive or negative measures depends on the prevalence of potential emigrants (Alexander 2017).
Targeted measures to prevent emigration
The most important factor that determines whether targeted measures are feasible to begin with is the amount of information a regime has about its citizens. More targeted measures are likely to be more effective and cheaper, but their implementation requires the regime to have detailed information about which (groups of) citizens are likely to leave (cf. Dimitrov and Sassoon 2014). This information is difficult to come by: when the regime punishes attempts to emigrate, citizens have an incentive to engage in preference falsification (Kuran 1997). This suggests that only authoritarian regimes with an extensive and capable surveillance apparatus will be able to use targeted measures. Furthermore, the expected number of potential emigrants (prior to the regime's choice) is an important determinant of the regime's strategy to use more or less targeted measures: the smaller the expected number of citizens attempting to exit, the more likely the regime is to employ targeted measures.
The expected number of potential emigrants, in turn, is affected by at least two factors: (1) existing geographic, cultural or political hurdles to emigration and (2) the skill distribution of the labor force. Concerning the first point, whenever there are severe hurdles to emigration – such as a different language, large geographic obstacles such as rivers or mountains, or political barriers such as border walls – we would anticipate the expected number of emigrants to go down (even if the regime is not actively trying to stop them). As a consequence, targeted measures are sufficient to deal with the remaining set of potential emigrants. By contrast, if the hurdles are less severe, less targeted measures are more likely to be chosen.Footnote 7
Concerning the second point, note that in almost all countries, there are likely to be more low-skilled workers than high-skilled workers. Everything else being equal, we would expect that if low-skilled workers are more likely to leave than high-skilled workers, targeted measures are unlikely to be chosen, precisely because there are so many potential emigrants. By contrast, targeted measures are more likely to be chosen when high-skilled workers are more likely to leave than low-skilled workers. Whether high- or low-skilled workers have greater incentives to leave depends on the structure of the home and destination economies, and specifically on the difference in expected lifetime earnings for both types of workers. We would expect a regime to choose targeted measures to prevent emigration if the difference in expected lifetime earnings is positive only for a small subset of workers; otherwise, we should observe less targeted measures.
Testable Implications
The preceding discussion suggests we should expect imprisonment as a strategy to restrict emigration in authoritarian regimes that (1) are well informed about their citizens' proclivity to leave, (2) have a loyal security service or military, (3) have longstanding economic policies and corresponding ideological commitments, making economic reforms costly, (4) have to deal with a relatively small set of potential emigrants at this point and (5) are budget constrained.
In these cases, we first expect that as economic opportunities in a destination country improve, the total number of exit prisoners – defined as political prisoners who were arrested for attempting to leave the country – will increase (Hypothesis 1). The reason is that, as more citizens are trying to leave the country, an authoritarian regime making the same effort to enforce its anti-emigration policies arrests more citizens. Moreover, the regime might increase its efforts, in which case even more citizens will be arrested.Footnote 8
Empirical evidence favoring this hypothesis also implicitly provides information regarding the extent to which the regime perceives emigration as a threat. If the regime views emigration positively, we would not expect to find a positive relationship between economic opportunities abroad and the number of exit prisoners.
Secondly, even if economic opportunities remain objectively the same in the destination country, citizens might receive better information about them. This will make them more confident about their expected well-being abroad and will provide a substitute for objectively better economic opportunities. Therefore, we expect that as more information about economic opportunities abroad becomes available, the number of exit prisoners in an authoritarian regime will increase (Hypothesis 2).
One of the most notorious examples of an authoritarian regime using harsh anti-emigration measures was the GDR. Our data from the central prisoner database of the GDR Minister of the Interior suggest that between 1979 and the regime's downfall an average of 1,900 individuals were charged with either an unlawful emigration attempt or support of such an action each year. We use these data to test our previously derived hypotheses. But first we explain why, according to our theory, it is not surprising that the regime in East Germany used imprisonment to stop emigration.
Since its founding in 1949, the GDR struggled with large-scale emigration to West Germany. Figure 2 shows the monthly number of GDR emigrants that were officially processed by the West German authorities. Initially, the East German regime sought to reduce the wave of emigration by increasing the legal barriers.Footnote 9 However, after an average of 18,200 people left the GDR each month in 1961, the authorities began constructing the Berlin Wall, which reduced emigration to less than one-tenth of the average monthly rate before its construction.
Figure 2. Monthly number of emigrants from East to West Germany
Note: the dashed line highlights November 1961 when the construction of the Berlin Wall was completed.
Source: Government of West Germany, collected by the authors (see Appendix for the sources). Data from November 1961 to December 1964 is missing in the archival materials.
The inner German border featured a restricted zone, fences and/or a wall, mines, armed border guards as well as automatic firing systems. In addition to these fortifications, the regime also used a variety of other measures to stop citizens from leaving. One important strategy was propaganda (for an overview, see Gibas 2000). Using biased reporting in newspapers or TV broadcasts (for example, the East German program 'The Black Channel' in which selected parts of West German TV were exploited for pro-regime propaganda), the regime attempted to shape citizens' views about West Germany. Indoctrination was also widespread in the education system, with the ruling party aiming to make each generation less willing to emigrate than the previous one. Another strategy was surveillance. The regime employed a dense network of informants to detect and prevent emigration attempts (see below).
Despite all these efforts, even after the construction of the Berlin Wall and the further fortification of the entire border between East and West Germany, many citizens tried to leave, often using ingenious ideas including building hot-air balloons, digging tunnels and crossing by boat. For example, two citizens armored a firm-owned truck and broke through the border – despite heavy fire by the GDR border guards. Another successful exit attempt involved two citizens who dived into the Elbe River in severe weather conditions that made it impossible for the boats operated by the border guards to patrol. Other successful attempts were less spectacular: some citizens simply did not return after regime-sanctioned travel. And of course, some attempts failed: two citizens spent three years building a hot-air balloon, only to be arrested after an informant alerted the police. Many times, citizens used vehicles to attempt to break through the border, only to be stopped by physical barriers or border guards (all of these examples are from Mayer 2002).
As the regime had a longstanding economic policy, corresponding strict ideological commitments and tight budgets, economic reform or more targeted programs to incentivize citizens to stay were difficult to implement. From a theoretical perspective, we thus expect what has historically happened: the regime expanded its security apparatus to provide information about citizens who were planning to leave the country (Bruce 2010). The staff of the Central Coordination Group (Zentrale Koordinierungsgruppe, ZKG), which was responsible for co-ordinating the efforts of all other departments to prevent citizens from leaving, quadrupled in size during the period 1976–1988, from about 100 staff members to almost 450 (Eisenfeld 1996, 49). Even compared to other East European security agencies, the German security apparatus had both an exceptionally dense informant network and very high numbers of staffers (Thomson 2018).
Every bureaucrat in the security apparatus had to remain dedicated to his or her work to prevent citizens from leaving the GDR. For example, in Order 1/75 from December 1975 Erich Mielke, the head of the Ministry of State Security (Ministerium für Staatssicherheit, MfS), wrote:
The enemy seeks to lure away in particular skilled workers in order to discredit the GDR internationally, to weaken its economy, to obstruct and shatter the socialistic economic integration, to find leverage for additional subversive actions and to simultaneously compensate its own lack in skilled labor in some sectors and with that to strengthen the capitalist economy.Footnote 10
Mielke's fear of losing additional workers was not unfounded. Before and after the construction of the Berlin Wall, a typical emigrant was at least partially motivated by economic opportunities abroad:
Although both the East and West German governments tended to present the causes of the mass emigration in political terms – either as a vote for freedom or a betrayal of socialism – material factors were undoubtedly paramount. The West German 'economic miracle' of the 1950s exerted a powerful 'pull' on many East Germans, especially the young and relatively mobile, and in particular skilled workers, engineers and technicians who were in great demand … At the same time, there was a whole range of economic 'push' factors within the GDR, foremost among them the aggravating shortage of consumer goods and housing we have already encountered (Ross 2004, 30).
Throughout the lifetime of the GDR, West Germany's economic opportunities continued to be an important motive for leaving the GDR. In a survey of arriving GDR emigrants from 1984, Ronge and Köhler (1984) asked about the main reasons for leaving East Germany. About 46 per cent of the respondents mentioned the limited availability of goods as a motive, 45 per cent an unfavorable future outlook, and about 21 per cent reported unfavorable career opportunities as a motive. In a similar survey conducted a month before the fall of the Berlin Wall, 76 per cent of the respondents said that the low standard of living was a motive for their emigration, and about 54 per cent referenced poor working conditions (Voigt, Belitz-Demiriz and Meck 1990).
Since all Germans were entitled to citizenship in West Germany, citizens of the GDR who reached West Germany could live and work there with minimal hurdles. As West German citizens, GDR emigrants were allowed to work without obtaining a separate work permit first. This also explains why, for example, Schmidt (1994) finds that on average, GDR emigrants performed equally well in the West German labor market relative to natives in the mid- and late 1980s (see also Bauer and Zimmermann 1997; Hofbauer, Billmeier and Warnhagen 1985).
The GDR Prisoner Database
At the end of the 1970s, East Germany embarked on an ambitious project: to create a secret digital database of all of its prisoners. Our archival research suggests that one important function of the database was to identify trends in criminal offenses that the East German government deemed politically relevant. For example, on 13 January 1984, the head of the East German prison regime, Major General Lustik, requested a list of all prisoners charged with illegally attempting to emigrate, broken down by gender, punishment and type of imprisonment.Footnote 11 Since the database was kept secret, and the regime actively used it to identify citizen transgressions, it likely provides accurate data on the number of exit prisoners.
We obtained an anonymized copy of the raw, unmaintained and partially corrupted prisoner database. As we explain in more detail in Appendix B, using a combination of archival research and data forensics, we were able to reconstruct the data for all individuals imprisoned between 1979 and 1982 and from 1984 to 1988. We then filtered the database to include exit prisoners; that is, political prisoners arrested for illegally attempting to cross the border.Footnote 12 Table 2 shows the total number of (newly admitted) prisoners and the total number of (newly admitted) exit prisoners per year.
Table 2. Total number of prisoners per year and number of exit prisoners charged with either an unlawful emigration attempt or support of such an action
Source: GDR Prisoner Database (1979–1988)
Comparing Cohorts of Exit Prisoners
We used the database to construct a cohort panel in which the number of newly admitted exit prisoners per occupation group and quarter-year is the unit of analysis. The panel consists of thirty-six quarters across thirty-eight occupation groups. Using the information about each prisoner's occupation contained in the dataset and the occupation group classifications constructed by West Germany's Federal Employment Agency,Footnote 13 we classify each prisoner into one of thirty-eight occupation groups. We discuss the details of this coding in Appendix B. To measure economic opportunities for each occupation group, we combine this dataset with detailed quarterly records from the West German Federal Employment Agency about the number of open positions in an occupation group (Bundesanstalt für Arbeit 1989).Footnote 14 In our occupation-quarter panel, the median number of open positions is 2,500 (mean: 4,500) and the standard deviation is about 5,469.
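To make the construction concrete, the occupation-quarter panel amounts to a count aggregation of the prisoner-level records. Below is a minimal sketch in Python; the file and column names are hypothetical placeholders, not the authors' replication code.

```python
import pandas as pd

# Hypothetical input: one row per newly admitted exit prisoner,
# with an occupation-group code and an admission quarter.
pris = pd.read_csv("exit_prisoners.csv")  # columns: occ_group, quarter

counts = pris.groupby(["occ_group", "quarter"]).size()

# 38 occupation groups x 36 quarters = 1,368 cells; quarters with no
# arrests in a group must appear as explicit zeros, not missing rows.
idx = pd.MultiIndex.from_product(
    [sorted(pris["occ_group"].unique()), sorted(pris["quarter"].unique())],
    names=["occ_group", "quarter"])
panel = counts.reindex(idx, fill_value=0).rename("n_exit_prisoners").reset_index()
```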
In the Appendix, we show the average number of arrests per occupation group. With an average number of newly admitted exit prisoners of about 80, the occupation category 'locksmiths, mechanics and related professions' is the largest, followed by those related to construction work (60) and farming (25). Engineers, chemists, physicists, mathematicians and technicians (10) follow, while housekeeping and authors, interpreters and librarians rank lowest. While this ranking of occupation groups is indicative of the regime's strategy to target certain groups, it is also a function of the prevalence of the occupation groups in the general population.
A bivariate correlation between the number of exit prisoners in the GDR and the number of available jobs in West Germany will typically be confounded. The most likely sources of confounding are general global economic trends that increase the number of available jobs in the FRG and simultaneously (through their effect on the economy in the GDR) the amount of resources the GDR regime has available to prevent citizens from leaving. Our empirical strategy addresses this problem by identifying the effect from the variation within occupation groups and allowing for unobserved common trends such as economic growth. We implement this empirical strategy in our baseline specification, in which we regress the number of exit prisoners on the number of open positions in the FRG using OLS with two-way fixed effects for the units (thirty-eight occupation groups) and time (thirty-six quarters).
However, any correlation might be spurious if there are still differential trends across occupation groups. One conceivable source of such differential trends is the notion that, over time, some occupation groups increase or decrease in size because of the global restructuring of the economy; that is, the number of farmers decreases, but service occupations increase. Such differential trends within occupation groups may generate a spurious correlation between the number of open positions in the FRG and the number of arrests in the GDR. Thus, our preferred specification includes an occupation-specific time trend. Denoting yct as the number of exit prisoners in the cth occupation group in quarter-year t, we use the following specification of the OLS estimator:
(1) $$y_{ct} = \alpha_0 + \beta\,\mathrm{jobs}_{c(t-1)} + \alpha_t + \alpha_c + \gamma_c\,\mathrm{time}_t + \varepsilon_{ct}.$$
The β parameter is of primary interest: it measures the change in the expected number of newly admitted exit prisoners in a quarter associated with 1,000 additional jobs in the previous quarter. The parameters αc and αt denote the unmodeled (fixed) effects for occupation group c and quarter-year t.
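As an illustration of how Equation 1 could be estimated, here is a minimal sketch in Python using statsmodels; the data file, variable names and lag construction are assumptions for the example, not the authors' replication code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per occupation group (occ) and quarter,
# with the prisoner count and lagged open positions (in thousands).
df = pd.read_csv("panel.csv")              # columns: occ, quarter, prisoners, jobs_lag
df["time"] = df.groupby("occ").cumcount()  # running quarter index within occupation

# Two-way fixed effects plus occupation-specific linear time trends,
# with standard errors clustered by occupation group.
fit = smf.ols("prisoners ~ jobs_lag + C(occ) + C(quarter) + C(occ):time",
              data=df).fit(cov_type="cluster", cov_kwds={"groups": df["occ"]})
print(fit.params["jobs_lag"])  # beta: extra prisoners per 1,000 extra open positions
```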
This specification relies on the assumption that a linear time trend can capture any differential trend within occupation groups. To evaluate the plausibility of this assumption, we also estimate a demanding specification that permits occupation-specific effects to vary across years. This allows for a wide range of potential occupation-specific trends and amounts to treating each occupation group in a year as a separate unit, identifying the effect of available jobs in the FRG only from the variation within a year.Footnote 15
Before proceeding to the results, we discuss the implications of our identification strategy with respect to our theoretical argument. As discussed, the fixed-effects specification implies that we are using the variation within occupation groups to estimate the effect of economic opportunities abroad on the number of political prisoners. In order for our theoretical argument to be valid, citizens' beliefs about changes in economic opportunities in West Germany must be positively correlated with actual changes in economic opportunities in West Germany. This prerequisite may seem demanding, but it is likely to be satisfied for the case of the GDR and the FRG, as fairly fine-grained information flows between these two countries have been documented. One source of information is West German TV. Even seemingly coarse information from the TV (for example, 'Economic growth is driven by the manufacturing sector') provides relevant information for an occupation group (here, 'locksmith'). We return to the role of West German TV below. Another, perhaps more important, source of information is social networks. Letters and phone calls were frequently exchanged between the two German states. In our study period, 62–95 million letters were sent each year in both directions, and in 1987 (the only year for which data are available) 35 million phone calls were made from the West to the East (Plück 1995). While the regime's surveillance apparatus kept close tabs on all communication channels, news about economic opportunities was transmitted in a seemingly innocent fashion. For example, a letter from West Germany could include a statement about a relative who got a better job and bought a new car. The receiver of the letter in East Germany could then infer that economic opportunities in West Germany had improved. Moreover, the inference would be strengthened if the relative in West Germany and the receiver of the letter worked in a similar occupation.
Economic Opportunities and the Number of Exit Prisoners
The estimates for the fitted bivariate linear regression are shown in the first column of Table 3. The unconditional estimates suggest that an increase of about 1,000 open positions in the FRG is associated with roughly one additional exit prisoner. The table also reports the estimates for our baseline (Column 2), preferred (Column 3) and demanding specifications (Column 4), as described previously.
Table 3. Estimates of the effect of the number of open positions (per 1,000) in the FRG (jobs$_{t-1}$) on the number of GDR exit prisoners in a quarter-year
Note: OLS estimates with standard errors clustered by thirty-eight occupations in brackets (n = 1,368). ***p < 0.001, **p < 0.01, *p < 0.05
The baseline specification (Column 2) removes all time-constant confounders and common quarter-year shocks but relies on the assumption that there are no differential trends across occupation groups. Our preferred specification, shown in Column 3, allows for a linear time trend. A comparison of the two columns suggests that our concern with such trends is not unfounded; however, they tend to bias our estimates downward. The estimate from our preferred specification suggests that 1,000 additional jobs in an occupation group led to about one additional prisoner in the same occupation group. An alternative way to understand the substantive effect size is that a one-standard-deviation increase in the number of open positions led to, on average, about 5.7 more arrests. This effect corresponds to an increase of about 1.8 per cent relative to the average of 326 arrests per occupation-quarter in the data.
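As a quick arithmetic check on these magnitudes (the coefficient of roughly 1.04 is backed out from the reported numbers rather than read from Table 3): $$\underbrace{1.04}_{\hat{\beta}} \times \underbrace{5.469}_{\text{SD, in thousands}} \approx 5.7, \qquad \frac{5.7}{326} \approx 1.8 \text{ per cent}.$$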
Column 4 in Table 3 presents the demanding specification, which allows the occupation-specific fixed effects to vary across years. In this specification, we essentially use only the within-year variance for an occupation group to estimate the effect of jobs on the number of exit prisoners. Despite using more local variation, the estimate essentially doubles in magnitude and remains significant with standard errors clustered by occupation group.
In Table 4 we compare the estimates from our preferred specification (M3) to those we obtain using a larger lag of two or three quarter-years (Columns 2 and 3). The estimates for the two- and three-quarter-years' lag are smaller than the one from the one-quarter lag, suggesting that the citizens and/or the GDR regime promptly reacted to positive economic changes in the FRG.
Table 4. Estimates of the effect of the number of open positions (per 1,000) in the FRG measured for different quarter-year lags on the number of GDR exit prisoners in a quarter-year
Note: OLS estimates with standard errors clustered by thirty-eight occupations in brackets. All specifications include occupation FE and trend as well as quarter-year FE (n = 1,368). ***p < 0.001, **p < 0.01, *p < 0.05
To examine how sensitive these results are, we conduct a number of robustness checks (reported in Appendix D). First, we show that we obtain similar estimates when using the first-difference or interactive fixed-effect estimator (Bai 2009). The latter estimator allows unit and time fixed effects to interact in a structured fashion. The interaction is represented by a low-dimensional factor structure (we use two factors). In our application, this amounts to allowing for some occupation-specific time-varying unobservables, that is, some occupation-specific trends. Secondly, when we estimate the same specifications using a binary version of the jobs variable (defined as more open positions than in 75 per cent of the rest of the sample), we find about five additional arrests in an occupation group after a disproportional increase in available jobs. This result is insensitive to the exact definition of the cut-point. Thirdly, we show that the significant positive effect persists after top-coding values larger than 95 per cent of the rest of the data or when estimating a negative binomial regression, suggesting that the positive skew of our variables does not affect our results. Overall, the estimates shown in Appendix D support our first hypothesis: increasing economic opportunities increases the number of exit prisoners. We estimate that 1,000 additional open positions in the FRG lead to about one additional exit prisoner in the GDR.
We have shown robust evidence that an increase in economic opportunities in West Germany led to a rise in the number of exit prisoners in East Germany. This is in line with our theoretical discussion, in which citizens emigrate in order to enhance their economic prospects but an authoritarian regime uses imprisonment to stop them. It also demonstrates that the East German regime perceived emigration as a threat, as we would otherwise not expect to see a positive relationship between economic opportunities abroad and the number of exit prisoners.
Two mechanisms could explain our main empirical result: (1) for a fixed pool of potential emigrants, there was an increase in the regime's efforts to imprison (enforcement effect) or (2) for a fixed level of imprisonment effort, the pool of citizens attempting to leave increased (pool effect).Footnote 16 As we do not have data on the regime's efforts to find potential emigrants, or on citizens' desire to emigrate, we cannot directly determine the relative importance of these two causal mechanisms. However, two pieces of evidence suggest that the regime increased its enforcement efforts. First, as described in the previous section, qualitative evidence suggests the regime adapted its strategies to current conditions, anticipating that citizens were more likely to emigrate. Secondly, we collected data on successful emigration attempts to calculate the ratio:
$$\frac{\#\,\text{exit prisoners}}{\#\,\text{exit prisoners} + \#\,\text{successful emigrants}}$$
on a quarterly basis and regress this ratio on our measure of economic opportunities. This is a test of the enforcement effect: a positive coefficient on the economic opportunities variable indicates that the regime is exerting more effort on enforcement. This is exactly what we find in our short time series and across different model specifications (for more details, see Appendix D). However, we caution against a causal interpretation of this finding. Future studies should gather data on choices made by both citizens and the regime to provide more conclusive evidence regarding the mechanism.
Information about Economic Opportunities
Our second hypothesis suggests that citizens with access to more information about West Germany should be more likely to be imprisoned. The intuition is that as citizens receive better information about economic opportunities, their confidence in their expected well-being abroad increases, which serves as a substitute for objectively better economic opportunities.
There are two conceivable channels through which East Germans received information about West Germany: their West German relatives and West German TV news. In the main text, we present the results of an analysis in which we estimate whether exit prisoners were living in counties where people could watch West German TV news. In Appendix F we report the results of a second analysis, which shows that exit prisoners were much more likely than ordinary prisoners to have relatives in the FRG. Yet, we are more cautious in interpreting these estimates since they rely on stronger assumptions.
We construct a county-level dataset of exit prisoners for the period 1984–88 based on prisoners' last place of residence and merge these data with information on the availability of West German TV from Crabtree, Darmofal and Kern (2015).Footnote 17 The empirical strategy for this analysis is based on comparing counties that happened to receive West German TV with those that did not. Excluding the capital district of Berlin, only four districts in East Germany exhibit variation with respect to the availability of West German TV: Neubrandenburg and Rostock in the north and Cottbus and Dresden in the south. Since there is no variation in the availability of West German TV over time, we analyze our data as pooled, repeated cross-sections and condition on a series of observable characteristics of the counties. For more details, see Appendix E.
In Table 5 we present OLS estimates with robust standard errors clustered at the county level. Column 1 presents the estimate for a specification that only includes the TV signal variable as well as a measure of the population size for each year (collected from the Statistical Yearbooks of the GDR), while our baseline (Column 2) and preferred specification (Column 3) include fixed effects for years and districts as well as three time-constant covariates from Crabtree, Darmofal and Kern (2015): the number of cities in a county, the size of the county (in km2), and the number of protests during the Uprising of 1953. The fourth specification includes the same covariates, but instead of including them as continuous predictors, we include them in a more flexible manner using a series of dummy variables. We conduct a number of robustness checks that we present in Appendix E.Footnote 18
Table 5. Estimates of the effect of the availability of West German TV in a county on the number of exit prisoners in a county-year
Note: OLS estimates with clustered standard errors at the county level (# of clusters: 61) in brackets. Sample includes four GDR districts: Neubrandenburg, Rostock, Cottbus, and Dresden (n = 305). ***p < 0.001, **p < 0.01, *p < 0.05
Across the four specifications, the effect remains quite stable and increases slightly when we adjust for the covariates in a more flexible manner. The estimates of our preferred specification (Column 3) suggest that the availability of West German TV is associated with at least one additional exit prisoner. These results are consistent with our theoretical expectation: East German citizens with more information about West German economic opportunities were more likely to be arrested. However, since the availability of the TV signal in a particular county in the districts we analyze is a function of geographic characteristics (Kern and Hainmueller 2009), it could be the case that GDR regime opponents were more likely to move to counties with West German TV reception. To the extent that the GDR regime targeted these regime opponents at a higher rate, an increased arrest rate in these counties could be due to regime opponents' location choice and not their information about economic opportunities in the FRG.
We believe this interpretation is less plausible than the one spelled out in our theoretical discussion for two main reasons. First, while intra-country mobility was very steady starting in 1970 (about 1.3 per cent of the total population migrated across county borders each year), it was about 5 per cent after the June 1953 uprising and steadily declined afterward (Statistisches Bundesamt 1993). Typically, larger cities were the main destinations of internal migration in the GDR (Burkhardt and Burkhardt-Osadnik 1974). In the Appendix we show that, even after excluding all urban counties such as Rostock and Dresden (Stadtkreis), we find a significant effect of West German TV availability. Secondly, previous scholarship has shown that the availability of West German TV is not associated with protest activity in 1989 (Crabtree, Darmofal and Kern 2015); in fact, people living in counties with West German TV were more likely to support the regime in 1988/89 (Kern and Hainmueller 2009). If political opponents had sorted into these counties, one would expect these counties to have had higher protest activity and lower levels of regime support.
We conclude with a brief analysis of time-series cross-sectional data that examines the extent to which variation in emigration restrictions is linked to the economic opportunities that citizens of authoritarian countries have in neighboring countries. To that end, we return to the CIRI Human Rights Dataset (Cingranelli, Richards and Clay 2014), which is one of the most widely used datasets to study repression cross-nationally. This dataset features a categorical variable that measures the degree of restrictions on the right of movement to foreign countries (2 = severely restricted, 1 = somewhat restricted, 0 = unrestricted). We match the CIRI data with information about each country's GDP per capita as well as the GDP per capita of its neighboring countries, focusing on the post-Cold War period. We define two countries as neighboring if they are separated by a land or river border. Our measure of citizens' economic opportunities abroad is coded 0 if a country systematically exceeds the GDP per capita of all its neighbors (in which case there are no economic opportunities for citizens abroad), and 1 otherwise.
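To illustrate, such an indicator could be coded as follows; the input files are hypothetical, and this per-year comparison is a simplification of the article's coding over the whole period.

```python
import pandas as pd

gdp = pd.read_csv("gdppc.csv")        # hypothetical columns: country, year, gdppc
borders = pd.read_csv("borders.csv")  # hypothetical columns: country, neighbor

# Attach each neighbor's GDP per capita by year.
nb = borders.merge(
    gdp.rename(columns={"country": "neighbor", "gdppc": "gdppc_nb"}),
    on="neighbor")
nb_max = (nb.groupby(["country", "year"])["gdppc_nb"]
            .max().rename("gdppc_nb_max").reset_index())

panel = gdp.merge(nb_max, on=["country", "year"])
# 0 if the country out-earns every neighbor (no opportunities abroad), 1 otherwise.
panel["opp_abroad"] = (panel["gdppc"] <= panel["gdppc_nb_max"]).astype(int)
```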
Table 6 shows that economic opportunities abroad are indeed systematically associated with more emigration restrictions even when we include country and year fixed effects. For countries where citizens have better economic opportunities abroad, the 3-point emigration restriction index is 0.72 points higher and the effect is statistically significant (p < 0.05). This effect is robust to including a fixed effect for the neighboring country or controlling for GDP and Polity IV scores. Even though East Germany was a relatively wealthy authoritarian regime (for a comparison with other authoritarian regimes in the 1980s, see Appendix G), these results strengthen our confidence in the external validity of our main findings in the previous sections.
Table 6. Estimates of economic opportunities abroad on an index of emigration restrictions
Note: OLS estimates with cluster-robust standard errors by country (# of clusters: 67) in brackets. Sample includes all authoritarian regimes, 1990–2011 (n = 633). Covariates: GDP and Polity IV score for the country and the neighboring country. ***p < 0.001, **p < 0.01, *p < 0.05
In this article, we analyze theoretically and empirically how improvements in economic conditions abroad affect the number of citizens arrested for attempting to emigrate. Our theoretical discussion builds on the insight that citizens seek to emigrate in order to improve their economic well-being. When authoritarian rulers perceive emigration as a threat, they impose and enforce limits on the freedom to move abroad. While emigration has some benefits for rulers – for instance, departing opposition supporters may no longer work to overthrow the regime, and emigrants may send substantial remittances – we emphasize the costs: the loss of human capital and the corresponding decrease in economic potential. Authoritarian rulers may use a variety of measures to curb the threat of emigration. We distinguish analytically between rewards and punishments as well as between more or less targeted measures. Imprisonment is a targeted punishment that is likely to be used when the regime can count on a loyal security and surveillance apparatus, has longstanding economic policies and ideological commitments, is budget constrained, and faces pressure from a relatively small set of workers who are considering leaving the polity. If the regime imprisons those who try to leave, we expect the number of exit prisoners to increase when economic opportunities abroad improve, and expect there to be more exit prisoners when citizens are better informed about these opportunities.
Our empirical analysis, focused on East Germany, demonstrates that the regime perceived emigration as a threat and that citizens were attracted by economic opportunities in West Germany. We estimate that when the number of open positions in West Germany increased by 1,000, the regime in East Germany arrested one additional individual for attempting to illegally emigrate. We also show that where citizens had better information about economic opportunities in West Germany, more citizens were arrested for illegal emigration attempts. Lastly, we also present evidence that supports the external validity of our analysis using cross-national data from the CIRI Human Rights Dataset. In general, our results point to the importance of global migration flows in analyses of cross-sectional variation in human rights violations. At least some proportion of this variation around the world could be the result of (anticipated) migration flows.
Our results have important policy implications. While many have highlighted the cost of liberal immigration policies for migrant-sending countries in terms of the risk of a 'brain drain' (Docquier and Rapoport 2012), we highlight the political consequences: there is a subtle risk that liberal immigration policies will encourage authoritarian leaders to tighten their grip on their citizens. Given the growing economic disparities across the world and an expected increase in international immigration, we expect such considerations to become more and more important in discussions about redesigning immigration policies in Europe and the United States.
Data replication sets are available in Harvard Dataverse at: https://doi.org/10.7910/DVN/R5YV1L and online appendices at: https://doi.org/10.1017/S0007123420000216.
We thank the Bundesarchiv for their valuable support of our archival work and Axel W. Salheiser for sharing his extensive list of occupations in the GDR. We also thank Dominik Hangartner, Holger L. Kern, David D. Laitin, Andrew T. Little, Carl Mueller-Crepon, Alastair Smith, Salif Jaiteh, the seminar participants at New York University, University of Zurich, ETH Zurich, University of Mannheim, Texas A&M University, and the MPSA 2017 panel 'Repression and other Strategies for Authoritarian Survival,' the editor René Lindstädt, and three anonymous reviewers for excellent comments. Carlo M. Horz acknowledges funding by the ANR – Labex IAST.
1 Of course, regimes that revoke citizens' right to movement also tend to disrespect their physical integrity. The non-parametric Kendall rank correlation coefficient for the CIRI physical integrity index with the freedom of movement index is 0.40 in 1981 and 0.37 in 2011.
2 http://www.nkdb.org/en/database/findings.php.
3 As we detail below, the regime sometimes considered emigration requests by citizens. In many cases, however, citizens feared repercussions when applying for an exit visa.
4 We scrutinize this issue further in Appendix A, where we analyze a game-theoretic model of emigration and imprisonment. We show that when the regime and the citizen do not observe the other actor's action, deterrence is limited and the citizen's proclivity to leave always increases. Another possibility, not present in the model, is that some citizens may underestimate the level of enforcement and therefore erroneously attempt to emigrate, i.e., they would not have done so with better information. Unfortunately, there is little research on how much emigrants know before their journey even when it comes to emigrants from countries not ruled by authoritarian regimes.
5 These labels are similar to 'co-optation' and 'repression', which are used to describe the strategies that authoritarian regimes can use to prevent large-scale citizen protests (e.g., Wintrobe 1998).
6 We leave the role of information implicit. The regime could also employ censorship or propaganda in order to convince citizens that particular rewards or punishments exist.
7 Thus, there is an important interaction effect between more and less targeted measures. If a regime chooses to build a border wall at some point (a less targeted measure), targeted measures are more profitable later on. This surely played a role in the GDR.
8 In the Appendix, we complement this informal statement with a simple game-theoretic model of emigration from which we formally derive the two hypotheses.
9 The regulation about the restitution of the personal passport when emigrating to West Germany or Berlin (25 January 1951) mandated that individuals seeking emigration to West Germany must return their passports or they would be sentenced to up to three months in jail. The law amending the GDR passport law (15 September 1954) required individuals to obtain authorization before leaving the country or face up to three years in jail. For details, see Schurig (2016). It is important to note that East German citizens could apply for an exit visa. However, many citizens feared repercussions when doing so (see, e.g., Göbel and Meisner 2019; Lochen and Meyer-Seitz 1992), and emigration without an exit visa remained illegal until the end of the regime.
10 Befehl Nr. 1/75 zur Vorbeugung, Aufklärung und Verhinderung des ungesetzlichen Verlassens der DDR und Bekämpfung des staatsfeindlichen Menschenhandels.
11 See Appendix B for a picture of the information request.
12 Section 213 of the criminal code (§ 213 'Ungesetzlicher Grenzübertritt', StGB) and any support of such actions, section 105 (§ 105 'Staatsfeindlicher Menschenhandel', StGB). Both of these sections are included in the rehabilitation and compensation act after the unification, which repealed all political sentences based on sections that criminalize the collection, transmission, and publication of information or unlawful emigration.
13 Similar, but not identical to, the International Standard Classification of Occupations (ISCO).
14 Ideally, we would be able to measure the baseline economic opportunities in the GDR and construct a measure of net economic opportunities, but the necessary data do not exist. Because of the centrally planned economy, however, it is conceivable that most citizens' economic opportunities in the GDR were approximately constant during our study period. Empirically, we might nevertheless underestimate the effect of economic opportunities since our measure is a proxy variable.
15 The specification is as follows: $y_{c'q} = \alpha_0 + \beta\,\mathrm{jobs}_{c'(q-1)} + \alpha_q + \alpha_{c'} + \varepsilon_{c'q}$, where the index $c'$ denotes an occupation group in a particular year and $q$ the respective quarter.
16 See the Appendix for a formal derivation of these mechanisms.
17 In earlier years, the last place of residence of a prisoner was not recorded.
18 In Appendix D we present the estimates where we repeat the analysis from the previous section but estimate the effects of economic opportunities abroad on the number of exit prisoners separately for all prisoners who used to live in counties with and without TV access. We find that the number of exit prisoners tends to increase more in counties with access to West German TV (compared to those without such access) when economic conditions improve.
Acemoglu, D and Robinson, JA (2000) Democratization or repression? European Economic Review 44(4–6), 683–693.
Acemoglu, D and Robinson, JA (2006) Economic backwardness in political perspective. American Political Science Review 100(1), 115–131.
Adelman, H (1998) Why refugee warriors are threats. The Journal of Conflict Studies 18(1), 49–69.
Albrecht, H and Ohl, D (2016) Exit, resistance, loyalty: military behavior during unrest in authoritarian regimes. Perspectives on Politics 14(1), 38–52.
Alexander, D (2017) Incentives or disincentives? Working Paper. Chicago, IL: University of Chicago Harris School of Public Policy.
Bai, J (2009) Panel data models with interactive fixed effects. Econometrica 77(4), 1229–1279.
Bauer, T and Zimmermann, KF (1997) Unemployment and wages of ethnic Germans. The Quarterly Review of Economics and Finance 37(Suppl. 1), 361–377.
Borjas, GJ (2014) Immigration Economics. Cambridge, MA: Harvard University Press.
Bruce, G (2010) The Firm: The Inside Story of the Stasi. Oxford: Oxford University Press.
Bueno de Mesquita, E and Dickson, ES (2007) The propaganda of the deed: terrorism, counterterrorism, and mobilization. American Journal of Political Science 51(2), 364–381.
Bundesanstalt für Arbeit (1979–1989) Amtliche Nachrichten der Bundesanstalt für Arbeit. Arbeitsstatistik – Jahreszahlen [Official News from the Federal Labour Office. Labour Statistics – Annual Figures] (various years).
Burkhardt, F and Burkhardt-Osadnik, L (1974) Betrachtungen zur Binnenwanderung in der DDR [On Internal Migration in the GDR]. Jahrbuch für Wirtschaftsgeschichte [Yearbook for Economic History], 15. Berlin: Akademie Verlag, pp. 115–122.
Cingranelli, DL, Richards, DL and Clay, K (2014) The CIRI Human Rights Dataset. Version 2014.04.14.
Clark, WR, Golder, M and Golder, SN (2017) The British Academy Brian Barry Prize essay: an exit, voice and loyalty model of politics. British Journal of Political Science 47(4), 719–748.
Crabtree, C, Darmofal, D and Kern, HL (2015) A spatial analysis of the impact of West German television on protest mobilization during the East German revolution. Journal of Peace Research 52(3), 269–284.
Danneman, N and Ritter, EH (2014) Contagious rebellion and preemptive repression. Journal of Conflict Resolution 58(2), 254–279.
Dimitrov, MK and Sassoon, J (2014) State security, information, and repression: a comparison of communist Bulgaria and Ba'thist Iraq. Journal of Cold War Studies 16(2), 3–31.
Docquier, F and Rapoport, H (2012) Globalization, brain drain and development. Journal of Economic Literature 50(3), 681–730.
Dragu, T and Polborn, M (2013) The administrative foundation of the rule of law. The Journal of Politics 75(4), 1038–1050.
Eisenfeld, B (1996) Die Zentrale Koordinierungsgruppe. Bekämpfung von Flucht und Übersiedlung (MfS-Handbuch) [The Central Coordination Group. Combating Flight and Resettlement (MfS Handbook)]. Berlin: Bundesbeauftragter für die Unterlagen des Staatssicherheitsdienstes der ehemaligen DDR.
Esberg, J (2018) Anticipating Dissent: The Repression of Politicians in Pinochet's Chile. Working Paper.
Escribà-Folch, A (2012) Authoritarian responses to foreign pressure: spending, repression, and sanctions. Comparative Political Studies 45(6), 683–713.
Fulbrook, M (1995) Anatomy of a Dictatorship: Inside the GDR, 1949–89. Oxford: Oxford University Press.
Gehlbach, S (2006) A formal model of exit and voice. Rationality and Society 18(4), 395–418.
Gibas, M (2000) Propaganda in der DDR [Propaganda in the GDR]. Erfurt: LZT, Landeszentrale für Politische Bildung Thüringen.
Ginkel, J and Smith, A (1999) So you say you want a revolution: a game theoretic explanation of revolution in repressive regimes. Journal of Conflict Resolution 43(3), 291–316.
Göbel, J and Meisner, M (2019) Ständige Ausreise: Schwierige Wege aus der DDR [Permanent Departure: Difficult Ways out of the GDR]. Berlin: Ch. Links Verlag.
Greenhill, KM (2010) Weapons of Mass Migration: Forced Displacement, Coercion, and Foreign Policy. Ithaca, NY: Cornell University Press.
Haggard, S and Kaufman, RR (2018) The Political Economy of Democratic Transitions. Princeton, NJ: Princeton University Press.
Haggard, S and Webb, SB (1993) What do we know about the political economy of economic policy reform? The World Bank Research Observer 8(2), 143–168.
Hannum, H (1987) The Right to Leave and Return in International Law and Practice. Dordrecht: Martinus Nijhoff.
Harris, JR and Todaro, MP (1970) Migration, unemployment and development: a two-sector analysis. American Economic Review 60(1), 126–142.
Hill, DW and Jones, ZM (2014) An empirical evaluation of explanations for state repression. American Political Science Review 108(3), 661–687.
Hirschman, AO (1970) Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, MA: Harvard University Press.
Hirschman, AO (1993) Exit, voice, and the fate of the German Democratic Republic. World Politics 45(2), 173–202.
Hofbauer, H, Billmeier, M and Warnhagen, I (1985) Die berufliche Eingliederung von Übersiedlern aus der DDR und Berlin (Ost) [The Labor Market Integration of Immigrants from the GDR and Berlin (East)]. Mitteilungen aus der Arbeitsmarkt- und Berufsforschung [Communications from Labour Market and Occupational Research] 18(3), 340–355.
Hollyer, JR, Rosendorff, BP and Vreeland, JR (2011) Democracy and transparency. The Journal of Politics 73(4), 1191–1205.
Horz, CM and Marbach, M (2020) Replication Data for: Economic Opportunities, Emigration and Exit Prisoners. https://doi.org/10.7910/DVN/R5YV1L, Harvard Dataverse, V1.
Human Rights Watch (2019) World Report 2019. Available from https://www.hrw.org/world-report/2019.
Judt, M (2007) Häftlinge für Bananen? Der Freikauf politischer Gefangener aus der DDR und das "Honecker-Konto" [Prisoners for Bananas? The Release of Political Prisoners from the GDR and the Honecker Account]. Vierteljahrschrift für Sozial- und Wirtschaftsgeschichte [Quarterly Journal for Social and Economic History] 94(4), 417–439.
Keohane, RO and Nye, JS (1972) Transnational Relations and World Politics. Cambridge, MA: Harvard University Press.
Kern, HL and Hainmueller, J (2009) Opium for the masses: how foreign media can stabilize authoritarian regimes. Political Analysis 17(4), 377–399.
Kuran, T (1997) Private Truths, Public Lies: The Social Consequences of Preference Falsification. Cambridge, MA: Harvard University Press.
Leblang, D (2010) Familiarity breeds investment: diaspora networks and international investment. American Political Science Review 104(3), 584–600.
Lee, ES (1966) A theory of migration. Demography 3(1), 47–57.
Lochen, H-H and Meyer-Seitz, C (1992) Die geheimen Anweisungen zur Diskriminierung Ausreisewilliger: Dokumente der Stasi und des Ministeriums des Innern [The Secret Instructions to Discriminate against Those Willing to Leave the Country: Documents of the Stasi and the Ministry of the Interior]. Köln: Bundesanzeiger Verlag.
Lohmann, S (1994) The dynamics of informational cascades: the Monday demonstrations in Leipzig, East Germany, 1989–91. World Politics 47(1), 42–101.
Majumdar, S and Mukand, SW (2004) Policy gambles. American Economic Review 94(4), 1207–1222.
Mayer, W (2002) Flucht und Ausreise: Botschaftsbesetzungen als wirksame Form des Widerstands und Mittel gegen die politische Verfolgung in der DDR [Escape and Exit: Embassy Occupations as an Effective Form of Resistance and a Means against Political Persecution in the GDR]. Berlin: Anita Tykve Verlag.
Miller, MK and Peters, ME (2020) Restraining the huddled masses: migration policy and autocratic survival. British Journal of Political Science 50(2), 403–433.
Nepstad, SE (2013) Mutiny and nonviolence in the Arab Spring: exploring military defections and loyalty in Egypt, Bahrain, and Syria. Journal of Peace Research 50(3), 337–349.
Pepinsky, TB (2009) Economic Crises and the Breakdown of Authoritarian Regimes: Indonesia and Malaysia in Comparative Perspective. Cambridge: Cambridge University Press.
Pfaff, S (2006) Exit-Voice Dynamics and the Collapse of East Germany: The Crisis of Leninism and the Revolution of 1989. Durham, NC: Duke University Press.
Plück, K (1995) Innerdeutsche Beziehungen auf kommunaler und Verwaltungsebene, in Wissenschaft, Kultur und Sport und ihre Rückwirkungen auf die Menschen im geteilten Deutschland [Intra-German Relations at the Municipal and Administrative Level, in Science, Cultural Affairs and Sports, and Their Repercussions on the People in a Divided Germany]. In Materialien der Enquete-Kommission: Aufarbeitung von Geschichte und Folgen der SED-Diktatur in Deutschland [Materials of the Enquete Commission: Coming to Terms with the History and Consequences of the SED Dictatorship in Germany], vol. 5/3. Baden-Baden: Deutscher Bundestag, pp. 2015–2064.
Rainsford, S (2012) Leaving Cuba: the difficult task of exiting the island. British Broadcasting Corporation, 21 July.
Ravenstein, EG (1889) The laws of migration. Journal of the Royal Statistical Society 52(2), 241–305.
Ritter, EH and Conrad, CR (2016) Preventing and responding to dissent: the observational challenges of explaining strategic repression. American Political Science Review 110(1), 85–99.
Ronge, V and Köhler, A (1984) 'Einmal BRD – einfach': Die DDR-Ausreisewelle im Frühjahr 1984 [A Single Ticket to the BRD: The Wave of Emigration from the GDR in Spring 1984]. Deutschland Archiv 17(12), 1280–1286.
Ross, C (2004) East Germans and the Berlin Wall: popular opinion and social change before and after the border closure of August 1961. Journal of Contemporary History 39(1), 25–43.
Salehyan, I (2008) The externalities of civil strife: refugees as a source of international conflict. American Journal of Political Science 52(4), 787–801.
Salehyan, I and Gleditsch, KS (2006) Refugees and the spread of civil war. International Organization 60(2), 335–366.
Schmidt, CM (1994) The economic performance of Germany's East European immigrants. Discussion Papers 963, CEPR. Available from http://cepr.org/active/publications/discussion_papers/dp.php?dpno=963.
Schurig, A (2016) Republikflucht (§§ 213, 214 StGB/DDR): Gesetzgeberische Entwicklung, Einfluss des MfS und Gerichtspraxis am Beispiel von Sachsen [Desertion from the Republic (§§ 213, 214 StGB/DDR): Legislative Development, Influence of the MfS and Court Practice Using the Example of Saxony]. Berlin: de Gruyter.
Shadmehr, M (2014) Mobilization, repression, and revolution: grievances and opportunities in contentious politics. Journal of Politics 76(3), 621–635.
Siegel, DA (2011) When does repression work? Collective action and social networks. Journal of Politics 73(4), 993–1010.
Smith, A (2008) The perils of unearned income. Journal of Politics 70(3), 780–793.
Statistisches Bundesamt (1993) Sonderreihe mit Beiträgen für das Gebiet der ehemaligen DDR. Heft 3, Bevölkerungsstatistische Übersichten 1946 bis 1989 [Special Series with Contributions on the Former Territory of the GDR. Volume 3, Population Statistics 1946 to 1989]. Wiesbaden: Statistisches Bundesamt.
Stevis, M and Parkinson, J (2016) African dictatorship fuels migrant crisis. The Wall Street Journal, 20 October.
Svolik, MW (2012) The Politics of Authoritarian Rule. Cambridge: Cambridge University Press.
Svolik, MW (2013) Contracting on violence: moral hazard in authoritarian repression and military intervention in politics. Journal of Conflict Resolution 57(5), 765–794.
Thomson, H (2018) Coercive institutions and repression under authoritarian regimes: potential insights from archives in Central and Eastern Europe. Comparative Politics Newsletter 28(1), 79–86.
Todaro, MP (1969) A model of labor migration and urban unemployment in less developed countries. American Economic Review 59(1), 138–148.
Truex, R (2019) Focal points, dissident calendars, and preemptive repression. Journal of Conflict Resolution 63(4), 1032–1052.
Tsourapas, G (2018) Labor migrants as political leverage: migration interdependence and coercion in the Mediterranean. International Studies Quarterly 62(2), 383–395.
Tyson, SA (2018) The agency problem underlying repression. The Journal of Politics 80(4), 1297–1310.
UN (2016) International Migration Report 2015: Highlights. Document ST/ESA/SER.A/375, Department of Economic and Social Affairs.
Voigt, D, Belitz-Demiriz, H and Meck, S (1990) Die innerdeutsche Wanderung und der Vereinigungsprozeß: Soziodemographische Struktur und Einstellungen von Flüchtlingen/Übersiedlern aus der DDR vor und nach der Grenzöffnung [Intra-German Migration and the Unification Process: Socio-demographic Structure and Attitudes of Immigrants from the GDR before and after the Opening of the Border]. Deutschland Archiv 23(5), 732–746.
Wintrobe, R (1998) The Political Economy of Dictatorship. Cambridge: Cambridge University Press.
Wright, J, Frantz, E and Geddes, B (2015) Oil and autocratic regime survival. British Journal of Political Science 45(2), 287–306.
Zolberg, AR, Suhrke, A and Aguayo, S (1989) Escape from Violence: Conflict and the Refugee Crisis in the Developing World. New York: Oxford University Press.
1 Differentiation over addition and constant multiple: the linearity
2 Differentiation over compositions: the Chain Rule
3 Differentiation over multiplication and division
4 The rate of change of the rate of change
5 Repeated differentiation
6 Change of variables and the derivative
7 Implicit differentiation and related rates
8 Radar gun: the math
9 The derivative of the inverse function
10 Reversing differentiation
Differentiation over addition and constant multiple: the linearity
In this chapter, we will be taking a broader look at how we compute the rate of change.
If a function is defined at the nodes of a partition, it is simply a sequence of numbers. And so is its difference quotient. What this means is that this procedure is a special kind of function, a function of functions: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccc} f & \mapsto & \begin{array}{|c|}\hline\quad \frac{\Delta }{\Delta x} \quad \\ \hline\end{array} & \mapsto & u=\frac{\Delta f}{\Delta x} . \end{array}$$ Furthermore, the derivative is defined as a limit. Unlike the limits we saw prior to derivatives, this one has a parameter, the location $x$. That is why, with a differentiable function $f$ as the input, the output of this limit is another function $f'$. What this means is that this process is a special kind of function too, a function of functions: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccc} f & \mapsto & \begin{array}{|c|}\hline\quad \frac{d}{dx} \quad \\ \hline\end{array} & \mapsto & f' . \end{array}$$ We need to understand how these two functions operate. We would like to develop shortcuts and algebraic rules for evaluating both difference quotients and derivatives. The latter will be found without resorting to limits!
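To make the "function of functions" point concrete, here is a minimal sketch in Python; the step size and the sample function are arbitrary choices for illustration.

```python
def diff_quotient(f, dx=1e-6):
    """Return the function x -> (f(x + dx) - f(x)) / dx."""
    return lambda x: (f(x + dx) - f(x)) / dx

u = diff_quotient(lambda x: x**2)  # input: a function; output: another function
print(u(3.0))                      # approximately 6, the derivative of x^2 at 3
```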
What happens to the output function of differentiation as we perform algebraic operations with the input functions?
The idea of addition of the change is illustrated below:
Here, the bars that represent the change of the output variable are stacked on top of each other, then the heights are added to each other and so are the height differences. The algebra behind this geometry is very simple: $$(A+B)-(a+b)=(A-a)+(B-b).$$ The idea leads to the Sum Rule for Differences from Chapter 1: the difference of the sum of two sequences is the sum of their differences. Below is its analog.
Theorem (Sum Rule). (A) The difference quotient of the sum of two functions is the sum of their difference quotients; i.e., for any two functions $f,g$ defined at the adjacent nodes $x$ and $x+\Delta x$ of a partition, the difference quotients (defined at the corresponding secondary node) satisfy: $$\frac{\Delta(f+g)}{\Delta x}=\frac{\Delta f}{\Delta x}+\frac{\Delta g}{\Delta x}.$$ (B) The sum of two functions differentiable at a point is differentiable at that point and its derivative is equal to the sum of their derivatives; i.e., for any two functions $f,g$ differentiable at $x$, we have at $x$: $$\frac{d(f+g)}{d x}=\frac{d f}{d x}+\frac{d g}{d x}.$$
Proof. Applying the definition to the function $f+g$, we have: $$\begin{array}{lll} \Delta(f+g)(c)&=(f+g)(x+\Delta x)-(f+g)(x)\\ &=f(x+\Delta x)+g(x+\Delta x)-f(x)-g(x)\\ &=\big( f(x+\Delta x)-f(x) \big) +\big(g(x+\Delta x)-g(x) \big)\\ &=\Delta f(c)+\Delta g(c). \end{array}$$ Now, the limit with $c=x$: $$\begin{array}{lll} \frac{\Delta(f+g)}{\Delta x}(x)&=\frac{\Delta f}{\Delta x}(x)+\frac{\Delta g}{\Delta x}(x)&\text{ ...by SR...}\\ &\to\frac{d f}{d x}+\frac{d g}{d x} &\text{ as } \Delta x\to 0.\\ \end{array}$$ $\blacksquare$
In terms of motion, if two runners are running away from each other starting from a common location, then the distance between them is the sum of the distances they have covered.
The formula in the Lagrange notation is as follows: $$(f + g)'(x)= f'(x) + g'(x).$$
The same proof applies to subtraction of the change.
Exercise. State the Difference Rule.
In terms of motion, if two runners are running along with each other starting from a common location, then the distance between them is the difference of the distances they have covered.
The idea of proportionality of the change is illustrated below:
Here, if the heights triple then so do the height differences. The algebra behind this geometry is very simple: $$kA-ka=k(A-a).$$ The idea leads to the Constant Multiple Rule for Differences from Chapter 1: the difference of a multiple of a sequence is the multiple of the sequence's difference. Below is its analog.
Theorem (Constant Multiple Rule). (A) The difference quotient of a multiple of a function is the multiple of the function's difference quotient; i.e., for any function $f$ defined at the adjacent nodes $x$ and $x+\Delta x$ of a partition and any real $k$, the difference quotients (defined at the corresponding secondary node) satisfy: $$\frac{\Delta(kf)}{\Delta x}=k\frac{\Delta f}{\Delta x}.$$ (B) A multiple of a function differentiable at a point is differentiable at that point and its derivative is equal to the multiple of the function's derivative; i.e., for any function $f$ differentiable at $x$ and any real $k$, we have at $x$: $$\frac{d(kf)}{dx}=k\frac{d f}{dx}.$$
Proof. Applying the definition to the function $k\,f$, we have: $$\begin{array}{lll} \Delta(k\cdot f)(c)&=(k\cdot f)(x+\Delta x)-(k\cdot f)(x)\\ &=k\cdot f(x+\Delta x)-k\cdot f(x)\\ &=k\cdot \big( f(x+\Delta x)-f(x) \big)\\ &=k\cdot \Delta f\, (c). \end{array}$$ Now, the limit with $c=x$: $$\begin{array}{lll} \frac{\Delta(kf)}{\Delta x}(x)&=\frac{k\Delta f}{\Delta x}(x)\\ &=k\frac{\Delta f}{\Delta x}(x)&\text{ ...by CMR...}\\ &\to k\frac{d f}{d x}(x)&\text{ as } \Delta x\to 0.\\ \end{array}$$ $\blacksquare$
In terms of motion, if the distance is re-scaled, such as from miles to kilometers, then so is the velocity -- at the same proportion.
The formula in the Lagrange notation is as follows: $$(k\cdot f)'(x) = k\cdot f'(x).$$ Here is another way to write these formulas in the Leibniz notation. This is the Sum Rule: $$\frac{d}{dx}\big( u+v \big) = \frac{du}{dx} + \frac{dv}{dx},$$ and the Constant Multiple Rule: $$\frac{d}{dx}\big( cu \big) = c\frac{du}{dx}.$$
The two theorems can be combined into one. It relies on the following idea: given two functions $f,g$, their linear combination is a new function $pf+qg$, where $p,q$ are two constant numbers.
Theorem (Linearity of Differentiation). (A) The difference quotient of a linear combination of two functions is the linear combination of their difference quotients; i.e., for any two functions $f,g$ defined at the adjacent nodes $x$ and $x+\Delta x$ of a partition, the difference quotients (defined at the corresponding secondary node) satisfy: $$\frac{\Delta(pf+qg)}{\Delta x}=p\frac{\Delta f}{\Delta x}+q\frac{\Delta g}{\Delta x}.$$ (B) A linear combination of two functions differentiable at a point is differentiable at that point and its derivative is equal to the linear combination of their derivatives; i.e., for any two functions $f,g$ differentiable at $x$, we have at $x$: $$\frac{d(pf+qg)}{d x}=p\frac{d f}{d x}+q\frac{d g}{d x}.$$
In other words, our "function of functions" has the same property as a linear polynomial: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccc} pf+qg & \mapsto & \begin{array}{|c|}\hline\quad \frac{d}{dx} \quad \\ \hline\end{array} & \mapsto & pf' +qg'. \end{array}$$
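The linearity property is easy to test with a computer algebra system. Below is a minimal sketch using the sympy library; the functions and constants are arbitrary choices.

```python
import sympy as sp

x = sp.symbols('x')
f, g = x**2, sp.sin(x)  # two sample differentiable functions
p, q = 3, -5            # the constants of the linear combination

lhs = sp.diff(p*f + q*g, x)              # derivative of the linear combination
rhs = p*sp.diff(f, x) + q*sp.diff(g, x)  # linear combination of the derivatives
print(sp.simplify(lhs - rhs))            # prints 0: the two sides agree
```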
The hierarchy of polynomials and their derivatives was used in Chapter 7 to model free fall.
The derivative of a constant polynomial is zero:
$$(c)'=0.$$
The derivative of a linear polynomial is constant:
$$(mx+b)'=(mx)'+(b)'=m(x)'+0=m\cdot 1=m.$$
The derivative of a quadratic polynomial is linear:
$$(ax^2+bx+c)'=(ax^2)'+(bx)'+(c)'=a(x^2)'+b(x)'+0=a\cdot 2x+b\cdot 1=2ax+b.$$ And so on: combined with the Power Formula, the two rules above allow us to differentiate all polynomials. Every time, the degree goes down by $1$! The general result is as follows.
Theorem. The derivative of a polynomial of degree $n>0$, $$f(x)=a_nx^n+a_{n-1}x^{n-1}+...+a_{2}x^2+a_{1}x+a_0,\ a_n\ne 0,$$ is a polynomial of degree $n-1$, $$f'(x)=na_nx^{n-1}+(n-1)a_{n-1}x^{n-2}+...+2a_{2}x+a_{1},\ a_n\ne 0.$$
Exercise. Prove the theorem.
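A quick computer-algebra illustration of the theorem, with an arbitrary cubic (the sympy library is assumed to be available):

```python
import sympy as sp

x = sp.symbols('x')
p = 4*x**3 - 5*x**2 + 7*x - 2  # a polynomial of degree 3
dp = sp.diff(p, x)
print(dp)                      # 12*x**2 - 10*x + 7
print(sp.degree(dp, x))        # 2: the degree drops by 1
```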
Differentiation over compositions: the Chain Rule
How does one express the derivative of the composition of two functions in terms of their derivatives?
Example. Treating functions as transformations suggests an easy answer.
If the first transformation is a stretch by a factor of $2$, i.e., the derivative is $2$, and
the second transformation is a stretch by a factor of $3$, i.e., the derivative is $3$, then
the composition of the two transformations is a stretch by a factor of $3\cdot 2=6$, i.e., the derivative is $6$:
We multiply the derivatives. $\square$
Example. Let's confirm this idea with a very simple example. Consider two linear polynomials: $$\begin{array}{llllll} x&=qt&\Longrightarrow & \frac{\Delta x}{\Delta t}=\frac{dx}{dt}&=q\\ \quad\quad\circ&&&&\ \ \times\\ y&=mx&\Longrightarrow& \frac{\Delta y}{\Delta x}=\frac{dy}{dx}&=m\\ \hline y&=m(qt)=mqt&\Longrightarrow& \frac{\Delta y}{\Delta t}=\frac{dy}{dt}&=m\cdot q&=\frac{\Delta x}{\Delta t}\cdot\frac{\Delta y}{\Delta x}=\frac{dx}{dt}\cdot\frac{dy}{dx} \end{array}$$ We see their difference quotients and their derivatives, which are the same thing for linear polynomials. In either case, we see how the intermediate variable, whether it is the difference $\Delta x$ or the differential $dx$, is "cancelled": $$\frac{\tiny{\Delta x}}{\Delta t}\cdot\frac{\Delta y}{\tiny{\Delta x}}=\frac{\Delta y}{\Delta t},\quad \frac{\tiny{dx}}{dt}\cdot\frac{dy}{\tiny{dx}}=\frac{dy}{dt}.$$
$\square$
Example. We pose the following problem. Suppose a car is driven through a mountain terrain. Its location and its speed, as seen on a map, are known. The grade of the road is also known. How fast is the car climbing?
We set up two functions, for the location and the altitude. Then their composition is what we are interested in:
The graph of the second function is literally the profile of the road.
We already know that if the location, $f$, depends on time continuously and the altitude, $g$, depends continuously on location, then the altitude depends on time continuously as well, $g\circ f$. We shall also see that the differentiability of both functions implies the differentiability of the composition.
However, let's first dispose of the "Naive Composition Rule": $$(f \circ g)' \neq f'\circ g'.$$ We carry out, again, a "unit analysis" to show that such a formula simply cannot be true. Suppose
$t$ is time measured in $\text{hr}$,
$x=f(t)$ is the location of the car as a function of time -- measured in $\text{mi}$,
$y=g(x)$ is the altitude of the road as a function of (horizontal) location -- measured in $\text{ft}$, and
$y=h(t)=g(f(t))$ is the altitude of the road as a function of time -- measured in $\text{ft}$.
$f'(t)$ is the (horizontal) velocity of the car on the road -- measured in $\frac{\text{mi}}{\text{hr}}$, and
$g'(x)$ is the rate of incline (slope) of the road -- measured in $\frac{\text{ft}}{\text{mi}}$, with the input still measured in $\text{mi}$.
It doesn't even matter now what $h'$ is measured in; just try to compose these two functions... It is impossible because the units of the output of the former and the input of the latter don't match! However, this is possible:
$f'(t)\cdot g'(x)$ is their product -- measured in $\frac{\text{mi}}{\text{hr}}\cdot \frac{\text{ft}}{\text{mi}}=\frac{\text{ft}}{\text{hr}}$; compare to
$h'(t)$ is the altitude of the road as a function of time -- measured in $\frac{\text{ft}}{\text{hr}}$.
Why does this make sense?
1. How fast you are climbing is proportional to your horizontal speed.
2. How fast you are climbing is proportional to the slope of the road.
Thus, the derivative of the composition of two linear functions is the product of the two derivatives! Considering the fact that, as far as derivatives at a fixed point are concerned, all functions are linear, we have strong evidence in support of this conjecture.
Unfortunately, derivatives aren't fractions! But difference quotients are: $$\frac{\Delta y}{\Delta x}\cdot\frac{\Delta x}{\Delta t}=\frac{\Delta y}{\Delta t}.$$ The only difference from the other rules we have considered is that there are two partitions and $f$ must map the partition for $t$ to the partition of $x$:
Theorem (Chain Rule). (A) The difference quotient of the composition of two functions is found as the product of the two difference quotients; i.e., for any function $x=f(t)$ defined at two adjacent nodes $t$ and $t+\Delta t$ of a partition and any function $y=g(x)$ defined at the two adjacent nodes $x=f(t)$ and $x+\Delta x=f(t+\Delta t)$ of a partition, the difference quotients (defined at the secondary nodes $c$ and $q=f(c)$ within these edges of the two partitions, respectively) satisfy, provided $\Delta x\ne 0$: $$\frac{\Delta (g\circ f)}{\Delta t}(c)= \frac{\Delta g}{\Delta x}(q) \cdot \frac{\Delta f}{\Delta t}(c).$$ (B) The composition of a function differentiable at a point and a function differentiable at the image of that point is differentiable at that point and its derivative is found as a product of the two derivatives; specifically, if $x=f(t)$ is differentiable at $t=c$ and $y=g(x)$ is differentiable at $x=q=f(c)$, then we have: $$\frac{d (g\circ f)}{dt}(c)= \frac{dg}{dx}(q) \cdot \frac{df}{dt}(c).$$
Proof. The formula for difference quotients is deduced as follows: $$\begin{array}{lll} \frac{\Delta (g\circ f)}{\Delta t}(c)&=\frac{(g\circ f)(t+\Delta t)-(g\circ f)(t)}{\Delta t}\\ &=\frac{g(f(t+\Delta t))-g(f(t))}{f(t+\Delta t)-f(t)}\frac{f(t+\Delta t)-f(t)}{\Delta t}\\ &=\frac{g(x+\Delta x)-g(x)}{\Delta x}\frac{f(t+\Delta t)-f(t)}{\Delta t}\\ &=\frac{\Delta g}{\Delta x}(q) \cdot \frac{\Delta f}{\Delta t}(c). \end{array}$$ Now we take the limit of this formula, with $c=t$, as $$\Delta t \to 0.$$ Since $x=f(t)$ is continuous, we conclude that we also have: $\Delta x \to 0$. Therefore, we have: $$\begin{array}{lll} \ \frac{\Delta (g\circ f)}{\Delta t} &=&\ \frac{\Delta g}{\Delta x}(f(t))&\cdot&\ \ \frac{\Delta f}{\Delta t}(t)\\ \quad \downarrow&&\quad \downarrow&&\quad \downarrow\\ \ \frac{d(g\circ f)}{dt} & = &\ \frac{dg}{dx}(f(t))&\cdot&\ \ \frac{df}{dt}(t) \end{array} $$ The idea seems to have worked out... The trouble is that we assumed $\Delta x \neq 0$! What if $x=f(t)$ is constant in the vicinity of $t$? A complete proof will be provided later. $\blacksquare$
Exercise. Find another, non-constant, example of a function $x=f(t)$ such that $\Delta f$ may be zero even for small values of $\Delta t$.
The formula in the Lagrange notation is as follows: $$(g\circ f)'(t) = g'(f(t))\cdot f'(t).$$
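Before the examples, a numerical sanity check of the formula (a sketch in Python using only the standard library; the functions $f$ and $g$ are our own choices):

```python
# Compare the difference quotient of g(f(t)) with the Chain Rule product g'(f(t))*f'(t).
import math

f  = lambda t: t**2 + 1        # x = f(t)
g  = lambda x: math.sin(x)     # y = g(x)
df = lambda t: 2*t             # f'(t)
dg = lambda x: math.cos(x)     # g'(x)

t, h = 0.7, 1e-6
quotient = (g(f(t + h)) - g(f(t))) / h
product  = dg(f(t)) * df(t)
print(quotient, product)       # the two numbers agree to about 6 digits
```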
Example. Find the derivative of: $$y = (1 + 2x)^{2}.$$ The function is computed in two consecutive steps (that's how we know this is a composition):
step 1: from $x$ we compute $1+2x$, and then
step 2: we square the outcome of the first step.
We then introduce an additional, disposable, variable in order to store the outcome of step 1: $$u=1+2x.$$ Then step 2 becomes: $$y=u^2.$$ This is our decomposition: $x \mapsto u \mapsto y$. Now the derivatives: $$\begin{array}{llll} u & = 1 + 2x &\Longrightarrow&\frac{du}{dx} &= 2 \\ y & = u^{2} &\Longrightarrow&\frac{dy}{du} &= 2u \\ \text{CR } & &\Longrightarrow&\frac{dy}{dx} & = \frac{dy}{du}\cdot\frac{du}{dx} = 2u\cdot 2 = 4u. \end{array} $$ Done. But the answer must be in terms of $x$! Last step: substitute $u = 1 + 2x$. Then the answer is $4(1+2x)$. To verify, expand: $(1+2x)^{2}=1 + 4x + 4x^{2}$; then PF gives $4+8x=4(1+2x)$. $\square$
Example. Now a very simple example that doesn't allow us to circumvent CR. Let $$y=\sqrt{3x+1}.$$ This is the abbreviated computation (decomposition, the derivatives, CR): $$\begin{array}{llll} x \mapsto u=3x+1 \mapsto y=\sqrt{u}\\ \underbrace{x \mapsto u=3x+1} \\ \qquad \frac{du}{dx} = 3 \\ \qquad\qquad\qquad\underbrace{u \mapsto y=\sqrt{u}}\\ \underbrace{ \qquad\qquad\qquad \frac{dy}{du}= \frac{1}{2\sqrt{u}} } \\ \frac{dy}{dx} = \frac{du}{dx}\cdot\frac{dy}{du} = 3\cdot \frac{1}{2\sqrt{u}}= 3\cdot \frac{1}{2\sqrt{3x+1}}. \end{array} $$ $\square$
Example. Find the derivative of: $$z = e^{\sqrt{3x+1}}$$ Three functions this time: $$ x \mapsto u = 3x+1 \ \mapsto y = \sqrt{u} \ \mapsto z = e^{y}.$$ Fortunately, we already know the derivative of the exponent from the last example. We just append that solution with one extra step: $$\begin{array}{llll} x \mapsto u=3x+1 \mapsto y=\sqrt{u} \mapsto z = e^{y}\\ \underbrace{x \mapsto u=3x+1} \\ \qquad \frac{du}{dx} = 3 \\ \qquad\qquad\qquad\underbrace{u \mapsto y=\sqrt{u}}\\ \underbrace{ \qquad\qquad\qquad \frac{dy}{du}= \frac{1}{2\sqrt{u}} } \\ \frac{dy}{dx} = \frac{du}{dx}\cdot\frac{dy}{du} = 3\cdot \frac{1}{2\sqrt{u}} \\ \qquad\qquad\qquad\qquad\qquad\qquad \underbrace{ y \mapsto z = e^{y} }\\ \underbrace{ \qquad\qquad\qquad\qquad\qquad\qquad \frac{dz}{dy}=e^y }\\ \frac{dz}{dx} = \left( \frac{du}{dx}\cdot\frac{dy}{du} \right) \cdot\frac{dz}{dy} =3\cdot \frac{1}{2\sqrt{u}}\cdot e^y=3\frac{1}{2\sqrt{3x+1}} e^{\sqrt{3x+1}}. \end{array} $$ We have applied CR twice! $\square$
The lesson we have learned is: three functions -- three derivatives -- multiply them: $$\begin{array}{rrr} &x &\mapsto u&\mapsto y&\mapsto z \\ \frac{dz}{dx} & = \frac{du}{dx} &\cdot \frac{dy}{du} &\cdot \frac{dz}{dy} \end{array}$$ These "fractions" appear to cancel again... $$\frac{dz}{dx} = \frac{\not{du}}{dx} \cdot \frac{\not{dy}}{\not{du}} \cdot \frac{dz}{\not{dy}}.$$ This is the Generalized Chain Rule about the derivative of the composition (a "chain"!) of $n$ functions.
The short version of the Chain Rule says:
the derivative of the composition is the product of the derivatives,
as functions.
Example. However, if we fix the location $x=a$, we can make sense of the derivative of the composition as the composition of the derivatives, after all. Indeed, suppose at point $a$ we have the derivative $$\frac{dy}{dx}=m.$$ What if we, again, think of the differentials $dx$ and $dy$ as two new variables -- related to each other by the above equation?
Then we think of the derivative, $m$, not as a number but as a linear function: $$dy=m\cdot dx.$$ If now there is another variable with $$\frac{dx}{dt}=q,$$ we think of $q$ as a linear function: $$dx=q\cdot dt.$$ Then, we substitute $dx=q\cdot dt$: $$\begin{array}{lllll} x=x(t)&=qt&\Longrightarrow& dx&=q\cdot dt\\ \quad\quad\circ&\quad\circ&&&\quad\quad\circ\\ y=y(x)&=mx&\Longrightarrow& dy&=m\cdot dx\\ \hline y=y(x(t))&=m(qt)&\Longleftrightarrow& dy&=m\cdot (q\cdot dt) \end{array}$$ We have the composition! $\square$
We can use the Chain Rule to find formulas for other important functions.
Theorem. For any $a>0$, we have: $$\left( a^x\right)'=a^x\ln a.$$
Proof. We represent this exponential function in terms of the natural exponential function: $$a^x=e^{\ln a^x}=e^{x\ln a}.$$ Then, $$\left( a^x\right)'=\left( e^{x\ln a} \right)'\ \overset{\text{CR}}{=\! =\! =}\ e^{x\ln a} \cdot (x\ln a)'=a^x\cdot \ln a.$$ $\blacksquare$
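A numerical illustration of the theorem (a Python sketch; the choice $a=2$ and the sample point are ours):

```python
# The difference quotient of 2^x should approach 2^x * ln(2).
import math

a, x, h = 2.0, 1.5, 1e-6
quotient = (a**(x + h) - a**x) / h
formula  = a**x * math.log(a)
print(quotient, formula)   # nearly equal
```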
Exercise. Use the idea from the proof above to find the derivative of $x^x$.
Differentiation over multiplication and division
What happens to the output function of differentiation when we multiply the input functions?
We already know that if the width and the height ($f$ and $g$) of a rectangle are changing continuously then so is its area ($f\cdot g$):
We shall also see that the differentiability of both dimensions implies the differentiability of the area.
However, let's first make sure we avoid the so-called "Naive Product Rule": $$(f\cdot g)' \neq f'\cdot g'.$$ The formula is extrapolated from the Sum Rule but it simply cannot be true. Let's recast the problem in the terms of motion and take a good look at the units. Suppose
$x$ is time measured in $\text{sec}$,
$y=f(x)$ is the location of the first person -- measured in $\text{ft}$, and
$y=g(x)$ is the location of the second person -- measured in $\text{ft}$.
$f'(x)$ is the velocity of the first person -- measured in $\frac{\text{ft}}{\text{sec}}$, and
$g'(x)$ is the velocity of the second person -- measured in $\frac{\text{ft}}{\text{sec}}$.
Suppose they are running in two perpendicular directions (east and north), then
$y=f(x)\cdot g(x)$ is the area of the rectangle enclosed by the two persons -- measured in $\text{ft}^2$.
$\left( f(x)\cdot g(x) \right)'$ is the rate of change of the area -- measured in $\frac{\text{ft}^2}{\text{sec}}$.
Meanwhile,
$f(x)'\cdot g(x)'$ is an unknown quantity -- measured in $\frac{\text{ft}}{\text{sec}}\cdot \frac{\text{ft}}{\text{sec}}=\frac{\text{ft}^2}{\text{sec}^2}$!
We do notice now that the product of the location and velocity gives the right units: $$f'f,\ g'g \text{ and also } f'g,\ g'f.$$ Which one(s)?
The correct idea -- cross-multiplication -- is illustrated below:
As the width and the height are increasing, so is the area of the rectangle. But the increase of the area cannot be expressed entirely in terms of the increases of the width and height! This increase is split into two parts corresponding to the two terms in the right-hand side of the formula below. It is based on the Product Rule for Differences from Chapter 1: $$\Delta (f \cdot g)(c)=f(x+\Delta x) \cdot \Delta g(c) + \Delta f(c) \cdot g(x).$$
Theorem (Product Rule). (A) The difference quotient of the product of two functions is found as a combination of these functions and their difference quotients. In other words, for any two functions $f,g$ defined at the adjacent nodes $x$ and $x+\Delta x$ of a partition, the difference quotients (defined at the corresponding secondary node $c$) satisfy: $$\frac{\Delta (f\cdot g)}{\Delta x}(c)=f(x+\Delta x) \cdot \frac{\Delta g}{\Delta x}(c) + \frac{\Delta f}{\Delta x}(c) \cdot g(x).$$ (B) The product of two functions differentiable at a point is differentiable at that point and its derivative is found as a combination of these functions and their derivatives; specifically, given two functions $f,g$ differentiable at $x$, we have: $$\frac{d (f\cdot g)}{dx}(x)=f(x) \cdot \frac{dg}{dx}(x) + \frac{df}{dx}(x) \cdot g(x).$$
Proof. $$\begin{array}{lll} \Delta (f \cdot g)(c)&=(f \cdot g)(x+\Delta x)- (f \cdot g)(x)\\ &=f(x+\Delta x) \cdot g(x+\Delta x)- f(x) \cdot g(x)\\ &=f(x+\Delta x) \cdot g(x+\Delta x)- f(x+\Delta x) \cdot g(x) +f(x+\Delta x) \cdot g(x)- f(x) \cdot g(x)\\ &=f(x+\Delta x) \cdot (g(x+\Delta x)- g(x)) +(f(x+\Delta x) - f(x)) \cdot g(x)\\ &=f(x+\Delta x) \cdot \Delta g(c) + \Delta f(c) \cdot g(x). \end{array}$$ Now, the limit with $c=x$: $$\begin{array}{lll} \frac{\Delta (f \cdot g)(x)}{\Delta x}&=f(x+\Delta x) \cdot \frac{\Delta g}{\Delta x} (c)&+ \frac{\Delta f}{\Delta x}(c) \cdot g(x)\\ &\quad\quad \downarrow\quad\quad \quad\ \downarrow&\quad\ \downarrow\quad \quad \quad \\ &\quad\ f(x)\quad \quad \cdot\frac{d g}{d x}(x)&+\ \frac{d f}{d x}(x)\ \cdot g(x)&\text{ as } \Delta x\to 0.\\ \end{array}$$ The first limit is justified by the fact that $f$, as a differentiable function, is continuous. $\blacksquare$
In terms of motion, it is as if two runners are unfurling a flag while running east and north respectively.
The formula in the Lagrange notation is as follows: $$(f \cdot g)'(x) = f(x)\cdot g'(x) + f'(x)\cdot g(x).$$
Example. Let $$y = xe^{x}. $$ Then, $$ \begin{array}{lllll} u & = x & \Longrightarrow &\frac{du}{dx} &= (x)' = 1, \\ v & = e^{x} & \Longrightarrow &\frac{dv}{dx} &= (e^{x})' = e^{x}. \end{array} $$ Apply PR via "cross-multiplication", the idea of which comes from the picture above: $$\frac{dy}{dx} = x\cdot e^{x} + e^{x}\cdot 1 = e^{x}(x + 1).$$ $\square$
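We can confirm this computation numerically (a Python sketch; the sample point is arbitrary):

```python
# The difference quotient of x*e^x should approach e^x * (x + 1).
import math

x, h = 0.5, 1e-6
quotient = ((x + h) * math.exp(x + h) - x * math.exp(x)) / h
formula  = math.exp(x) * (x + 1)
print(quotient, formula)
```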
Next, the derivatives under division? We already know that if the width and the height ($f$ and $g$) of a triangle are changing continuously then so is the tangent of its base angle ($f/g$):
We shall also see that the differentiability of both dimensions implies the differentiability of the tangent.
However, let's first make sure we avoid the so-called "Naive Quotient Rule": $$(f/ g)' \neq f'/ g'.$$ We can repeat the "unit analysis" to show that such a formula simply cannot be true. The runners still are running in two perpendicular directions, and we have:
$y=f(x)/ g(x)$ is unitless, and then
$\left( f(x)/ g(x) \right)'$ is measured in $\frac{1}{\text{sec}}$, while
$f(x)'/ g(x)'$ is unitless!
The following is based on the Quotient Rule for Differences from Chapter 1: $$\Delta (f / g)(c)=\frac{\Delta f(c) \cdot g(x) - f(x) \cdot \Delta g(c)}{g(x)g(x+\Delta x)}.$$
Theorem (Quotient Rule). (A) The difference quotient of the quotient of two functions is found as a combination of these functions and their difference quotients. In other words, for any two functions $f,g$ defined at the adjacent nodes $x$ and $x+\Delta x$ of a partition, the difference quotients (defined at the corresponding secondary node $c$) satisfy: $$\frac{\Delta (f/ g)}{\Delta x}(c)=\frac{\frac{\Delta f}{\Delta x}(c) \cdot g(x) - f(x) \cdot \frac{\Delta g}{\Delta x}(c)}{g(x)g(x+\Delta x)},$$ provided $g(x),g(x+\Delta x) \ne 0$. (B) The quotient of two functions differentiable at a point is differentiable at that point and its derivative is found as a combination of these functions and their derivatives; specifically, given two functions $f,g$ differentiable at $x$, we have: $$\frac{d (f/ g)}{dx}(x)=\frac{\frac{df}{dx}(x) \cdot g(x) - f(x) \cdot \frac{dg}{dx}(x)}{g(x)^2},$$ provided $g(x) \ne 0$.
Proof. We start with the case $f=1$. Then we have: $$\begin{array}{lll} \frac{\Delta (1/g)(x)}{\Delta x}&=\frac{\frac{1}{g(x+\Delta x)}- \frac{1}{g(x)}}{\Delta x}\\ &=\frac{g(x)- g(x+\Delta x)}{\Delta x g(x+\Delta x)g(x)} \\ &=-\frac{g(x+\Delta x)- g(x)}{\Delta x}\cdot \frac{1}{g(x+\Delta x)\cdot g(x)} \\ &=-\frac{\Delta g}{\Delta x}(c)\cdot \frac{1}{g(x+\Delta x)\cdot g(x)} &\text{ with }c=x\\ &\to -\frac{dg}{dx}(x)\cdot\frac{1}{g(x) \cdot g(x)}&\text{ as } \Delta x\to 0. \end{array}$$ The limit of the second fraction is justified by the fact that $g$, as a differentiable function, is continuous. Alternatively, we represent the reciprocal of $g$ as a composition: $$z=\frac{1}{g(x)}\ \Longrightarrow\ z=\frac{1}{y},\ y=g(x)\ \Longrightarrow\ \frac{dz}{dy}=-\frac{1}{y^2},\ \frac{dy}{dx}=g'(x)\ \Longrightarrow\ \frac{dz}{dx}=-\frac{1}{g(x)^2}g'(x),$$ by the Chain Rule. Now the general formula follows from the Product Rule. $\blacksquare$
The formula is similar to the Product Rule in the sense that it also involves cross-multiplication.
The formula in the Lagrange notation is as follows: $$\left( \frac{f(x)}{g(x)} \right)' = \frac{f'(x)\cdot g(x) - f(x)\cdot g'(x)}{g(x)^2}.$$
Example. The tangent: $$\begin{aligned} (\tan x)' & = \left( \frac{\sin x}{\cos x} \right)'\\ & \ \overset{\text{QR}}{=\! =\! =} \frac{(\sin x)' \cos x - \sin x (\cos x)'}{(\cos x)^{2}} \\ & = \frac{\cos x \cos x - \sin x (-\sin x)}{\cos^{2} x} \\ & = \frac{\cos^{2}x + \sin^{2}x}{\cos^{2}x} \quad \text{...use the Pythagorean Theorem...} \\ & = \sec^{2}x. \end{aligned} $$ $\square$
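Again, a numerical check (a Python sketch; the sample point avoids the zeros of $\cos x$):

```python
# The difference quotient of tan(x) should approach sec^2(x) = 1/cos^2(x).
import math

x, h = 0.4, 1e-6
quotient = (math.tan(x + h) - math.tan(x)) / h
formula  = 1 / math.cos(x)**2
print(quotient, formula)
```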
In the Leibniz notation, this is the form of the Product Rule: $$\frac{d}{dx} \left(uv \right) = \dfrac{du}{dx}\cdot v + \dfrac{dv}{dx}\cdot u,$$ and the Quotient Rule: $$\frac{d}{dx} \left(\frac{u}{v}\right) = \dfrac{\dfrac{du}{dx}\cdot v - \dfrac{dv}{dx}\cdot u}{v^{2}}.$$
More examples of differentiation...
Example. Find $$(x^{2} + x^{3})' = \lim_{h \to 0} \frac{(x + h)^{2} +(x + h)^{3} - x^{2} - x^{3}}{h}=...$$ Seems like a lot of work... Instead use SR and PF: $$\begin{array}{lllll} (x^{2} + x^{3})' & = (x^{2})' + (x^{3})' \\ & = 2x +3x^{2}. \end{array} $$ $\square$
Example. We can differentiate any polynomial easily now: $$\begin{array}{lllll} (x^{77} + & 5x^{18} + 6x^{3} - x^{2} + 88)'& \text{ ...try to expand } (x+h)^{77} !\\ & \ \overset{\text{SR}}{=\! =\! =} (x^{77})' + (5x^{18})' + (6x^{3})' - (x^{2})' + (88)' \\ & \ \overset{\text{CMR}}{=\! =\! =}(x^{77})' + (5x^{18})' + (6x^{3})' - (x^{2})' + 0 \\ & \ \overset{\text{PF}}{=\! =\! =} 77x^{77 - 1} + 5\cdot 18x^{18 - 1} + 6\cdot 3x^{3 - 1} - 2x^{2 - 1} \\ & = 77x^{76} + 90x^{17} + 18x^{2} - 2x. \end{array}$$ $\square$
Example. Find $$ \left( \frac{\sqrt{x}}{x^{2} + 1} \right)'.$$ Consider: $$ \begin{array}{lllll} u & = \sqrt{x} &\Longrightarrow &\frac{du}{dx} &= \frac{1}{2\sqrt{x}}, \\ v & = x^{2} + 1 &\Longrightarrow &\frac{dv}{dx} &= 2x. \end{array} $$ Then, $$\frac{d}{dx} \left( \frac{u}{v} \right) = \frac{ \dfrac{1}{2\sqrt{x}} (x^2 + 1) - \sqrt{x}\cdot 2x}{( x^2 + 1)^2}. $$ No need to simplify. $\square$
Example. This is a different kind of example. Evaluate: $$\lim_{x \to 5} \frac{2^{x} - 32}{x - 5}. $$ It's just a limit. But we recognize that this is the derivative of some function. We compare the expression to the formula in the definition: $$ f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a}, $$ and match. So, we have here: $$a = 5 ,\ f(x) = 2^{x}, \ f(5) = 2^{5} = 32.$$ Therefore, our limit is equal to $f'(5)$ for $f(x) = 2^{x}$. Compute: $$f'(x) = (2^{x})' = 2^{x} \ln 2, $$ so $$f'(5) = 2^{5} \ln 2 = 32 \ln 2.$$ $\square$
This is another interpretation of the formulas. Let's represent the Sum Rule, the Constant Multiple Rule, and the Chain Rule as diagrams: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{ccc} f,g&\ra{\frac{d}{dx}}&f',g'\\ \ \da{+}&SR &\ \da{+}\\ f+g & \ra{\frac{d}{dx}}&(f+g)'=f'+g' \end{array}\qquad \begin{array}{ccc} f&\ra{\frac{d}{dx}}&f'\\ \ \da{\cdot c}& CMR &\ \da{\cdot c}\\ cf & \ra{\frac{d}{dx}}&(cf)'=cf' \end{array}\qquad \begin{array}{ccc} f,g&\ra{\frac{d}{dx}}&f',g'\\ \ \da{\circ}& CR &\ \da{\circ }\\ f\circ g & \ra{\frac{d}{dx}}&(f\circ g)'=f'\circ g' \end{array} $$ In the first diagram, we start with a pair of functions at the top left and then we proceed in two ways:
right: differentiate, then down: add the results; or
down: add them, then right: differentiate the result.
The result is the same! The third diagram, however, has to be understood in the sense of the last example: the equality $(f\circ g)'=f'\circ g'$ holds only when the derivatives are treated as linear functions of the differentials, not as functions of $x$. (Neither the Product Rule nor the Quotient Rule has such an interpretation.)
The rate of change of the rate of change
If a function is known at the nodes of a partition, its difference quotient is also a function -- known at the secondary nodes. Can we treat the latter as a function too? What is the partition then? We saw in Chapter 7 how this idea is implemented in order to derive the acceleration from the velocity.
What can we say about the rate of change of this change? If we know only three values of a function (first line) at three consecutive nodes, we compute the difference quotients along the two intervals (second line) and place the results at the corresponding edges: $$\begin{array}{ccccccc} -&f(x_1)&---&f(x_2)&---&f(x_3)&-&\\ -&-\bullet-&\frac{\Delta f}{\Delta x_2}&-\bullet-&\frac{\Delta f}{\Delta x_3}&-\bullet-&-\\ -&-\bullet-&---&\frac{\frac{\Delta f}{\Delta x_3} -\frac{\Delta f}{\Delta x_2}}{c_3-c_2}&---&-\bullet-&-&\\ &x_1&c_2&x_2&c_3&x_3&\\ \end{array}$$ To find the change of this new function, we carry out the same operation and place the result in the middle (third line).
Let's review the construction of the difference quotient.
First, we have an augmented partition of an interval $[a,b]$. We partition it into $n$ intervals with the help of the nodes (the end-points of the intervals): $$a=x_{0},\ x_{1},\ x_{2},\ ... ,\ x_{n-1},\ x_{n}=b;$$ and also provide secondary nodes: $$ c_{1} \text{ in } [x_{0},x_{1}], \ c_{2} \text{ in } [x_{1},x_{2}],\ ... ,\ c_{n} \text{ in } [x_{n-1},x_{n}].$$
If a function $y=f(x)$ is defined at the nodes $x_k,\ k=0,1,2,...,n$, the difference quotient of $f$ is defined at the secondary nodes of the partition by: $$\frac{\Delta f}{\Delta x}(c_{k})=\frac{f(x_{k})-f(x_{k-1})}{x_{k}-x_{k-1}},\ k=1,2,...,n.$$
The function represents the slopes of the secant lines over the edges of the partition. In particular, when the location is represented by a function known only at the nodes of the partition, the velocity is then found in this manner. It is now especially important that we have utilized the secondary nodes as the inputs of the new function. Indeed, we can now carry out a similar construction with this function and find the acceleration!
We have now a new augmented partition, of what? The interval is $$[p,q],\ \text{ with } p=c_1 \text{ and } q=c_n.$$ We partition it into $n-1$ intervals with the help of the nodes that used to be the secondary nodes in the last partition: $$p=c_{1},\ c_{2},\ c_{3},\ ... ,\ c_{n-1},\ c_{n}=q.$$ Then the increments are: $$\Delta c_k=c_{k+1}-c_k.$$ Now, what are the secondary nodes? The primary nodes of the last partition of course! Indeed, we have: $$ x_{1} \text{ in } [c_{1},c_{2}], \ x_{2} \text{ in } [c_{2},c_{3}],\ ... ,\ x_{n-1} \text{ in } [c_{n-1},c_{n}].$$
We apply the same construction, over this new partition, to the function $g=\frac{\Delta f}{\Delta x}$. The difference quotient function of $g$ is defined at the secondary nodes of the new partition by: $$\frac{\Delta g}{\Delta x}(x_{k})=\frac{g(c_{k+1})-g(c_k)}{c_{k+1}-c_k},\ k=1,2,...,n-1.$$
Definition. The second difference quotient of $f$ is defined at the nodes of the partition (denoted) by: $$\frac{\Delta^2 f}{\Delta x^2}(x_{k})=\frac{\frac{\Delta f}{\Delta x}(c_{k+1})-\frac{\Delta f}{\Delta x}(c_k)}{c_{k+1}-c_k},\ k=1,2,...,n-1.$$
Note that there are:
$n+1$ values of $f$ (at the nodes),
$n$ values of $\frac{\Delta f}{\Delta x}$ (at the secondary nodes), and
$n-1$ values of $\frac{\Delta^2 f}{\Delta x^2}$ (at the nodes except $a$ and $b$).
We will often omit the subscripts for the simplified notation: $$\frac{\Delta^2 f}{\Delta x^2}(x)=\frac{\frac{\Delta f}{\Delta x}(c+\Delta c)-\frac{\Delta f}{\Delta x}(c)}{\Delta c}.$$
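The construction is easy to carry out numerically (a sketch in Python, assuming the numpy library; we use a uniform partition with midpoints as the secondary nodes):

```python
# From values of f at the nodes: one pass of differences gives the difference
# quotient at the secondary nodes; a second pass gives the second difference
# quotient back at the interior nodes.
import numpy as np

x = np.linspace(0, 1, 11)              # nodes of the partition
f = np.sin(x)                          # f sampled at the nodes
c = (x[:-1] + x[1:]) / 2               # secondary nodes (midpoints)
dq  = np.diff(f) / np.diff(x)          # Delta f / Delta x, at c
ddq = np.diff(dq) / np.diff(c)         # second difference quotient, at x[1:-1]
print(np.max(np.abs(ddq + np.sin(x[1:-1]))))   # small: consistent with (sin)'' = -sin
```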
Notice that a higher value of the second difference quotient means a higher value of the curvature of the graph of $y=f(x)$. As another way to see this, imagine yourself driving along a straight part of the road: the trees ahead remain still in your field of vision (no curvature); then, as you start to turn, the trees start to pass your field of vision from right to left (curvature):
This construction will be repeatedly used for approximations and simulations. It will be followed, when necessary, by taking its limit.
Let's differentiate $\sin x$ for the second time. In Chapter 7, we found its difference quotient over a mid-point partition with a single interval. This time we will need at least two intervals:
three nodes $x$: $a-h$, $a$, and $a+h$, and
two secondary nodes $c$: $a-h/2$ and $a+h/2$.
We use the two formulas for the difference quotients of $\sin x$ and $\cos x$ from Chapter 7. We write the former for the two secondary nodes, but we re-write the latter for the partition with two nodes $a-h/2,\ a+h/2$ and a single secondary node $x=a$: $$\begin{array}{lllll} \frac{\Delta}{\Delta x}(\sin x)&=\frac{ \sin (h/2)}{h/2}\cdot\cos c,& \frac{\Delta }{\Delta x}(\cos x)=-\frac{ \sin (h/2)}{h/2}\cdot\sin a,\\ \end{array}$$ Therefore, we have at $a$: $$\begin{array}{lllll} \frac{\Delta^2}{\Delta x^2}(\sin x)&=\frac{\Delta }{\Delta x}\left(\frac{\Delta}{\Delta x}( \sin x)\right)(a)\\ &=\frac{\Delta}{\Delta x}\left(\frac{ \sin (h/2)}{h/2}\cdot\cos c\right)&\text{ ...by the first formula... }\\ &=\frac{ \sin (h/2)}{h/2}\frac{\Delta \cos}{\Delta x}(a)&\text{ ...by CMR... }\\ &=\frac{ \sin (h/2)}{h/2}\left(-\frac{ \sin (h/2)}{h/2}\cdot\sin a\right)&\text{ ...by the second formula }\\ &=-\left(\frac{ \sin (h/2)}{h/2}\right)^2\cdot\sin a. \end{array}$$
Similarly, we find: $$\frac{\Delta }{\Delta x}(\cos x)=-\frac{ \sin (h/2)}{h/2}\cdot\sin c\ \Longrightarrow\ \frac{\Delta^2}{\Delta x^2}(\cos x)=-\left(\frac{ \sin (h/2)}{h/2}\right)^2\cdot\cos a.$$
For the exponential function, we need a left-end partition with two intervals:
three nodes $x$: $a-h$, $a$, and $a+h$, and
two secondary nodes $c$: $a-h$ and $a$.
Then, we find at $a$: $$\frac{\Delta }{\Delta x}(e^x)=\frac{ e^h-1}{h}\cdot e^{c}\ \Longrightarrow\ \frac{\Delta^2}{\Delta x^2}(e^x)=\left(\frac{ e^h-1}{h}\right)^2\cdot e^{a-h}.$$
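The last identity is exact and easy to confirm (a Python sketch; the values of $a$ and $h$ are arbitrary):

```python
# The raw second difference of e^x over the nodes a-h, a, a+h equals the formula above.
import math

a, h = 0.3, 0.01
raw     = (math.exp(a + h) - 2*math.exp(a) + math.exp(a - h)) / h**2
formula = ((math.exp(h) - 1) / h)**2 * math.exp(a - h)
print(raw, formula)   # identical up to round-off
```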
Repeated differentiation
Example. Let's continue to differentiate the sine: $$\begin{array}{lll} (\sin x)' & = \cos x &\\ (\cos x)' & = -\sin x & \Longrightarrow &(\sin x)' ' &=-\sin x\\ (-\sin x)' & = -\cos x & \Longrightarrow &(\sin x)' ' ' &=-\cos x\\ (-\cos x)' & = \sin x & \Longrightarrow &(\sin x )' ' ' ' &= \sin x. \end{array} $$ And we are back where we started, i.e., the differentiation process for this particular function is cyclic! $\square$
We use the following terminology and notation for the consecutive derivatives of function $f$: $$\begin{array}{|l|l|l|l|} \hline \text{function } & f & f^{(0)}&\\ \text{first derivative } & f' & f^{(1)}&\frac{df}{dx}\\ \text{second derivative } & f' '=(f')' & f^{(2)}=\left(f^{(1)}\right)'&\frac{d^2f}{dx^2}=\frac{d}{dx}\left( \frac{df}{dx} \right)\\ \text{third derivative } & f' ' '=(f' ')'& f^{(3)}=\left(f^{(2)}\right)'&\frac{d^3f}{dx^3}=\frac{d}{dx}\left( \frac{d^2f}{dx^2} \right)\\ ...&&...&...\\ n\text{th derivative } & & f^{(n)}=\left(f^{(n-1)}\right)'&\frac{d^nf}{dx^n}=\frac{d}{dx}\left( \frac{d^{n-1}f}{dx^{n-1}} \right)\\ ...&&...&...\\ \hline \end{array}$$
Thus, a given differentiable function may produce a sequence of functions: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccc} f & \mapsto & \begin{array}{|c|}\hline\quad \tfrac{d}{dx} \quad \\ \hline\end{array} & \mapsto & f' & \mapsto & \begin{array}{|c|}\hline\quad \tfrac{d}{dx} \quad \\ \hline\end{array} & \mapsto & f' '& \mapsto & ...& \mapsto & f^{(n)}& \mapsto &... \end{array}$$ provided the outcome of each step is differentiable as well. In the abbreviated form the sequence is: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}\begin{array}{ccc} f &\ra{\frac{d}{dx}} &f' &\ra{\frac{d}{dx}} & f' ' &\ra{\frac{d}{dx}} &...&\ra{\frac{d}{dx}} & f^{(n)} &\ra{\frac{d}{dx}} & ... \end{array}$$
Note that, for a fixed $x$, the sequence of numbers: $$f(x),\ f'(x),\ f' '(x),\ ...,\ f^{(n)}(x),\ ...$$ is just that, a sequence, a concept familiar from Chapter 7. However, as the example of $\sin x$ shows, this sequence doesn't have to converge: $$\left( \sin x \right)^{(n)}\Big|_{x=0},\ n=0,1,2,3,...\ \leadsto\ 0,\ 1,\ 0,\ -1,\ 0,\ ...$$ We will see in Chapter 15 that certain "linear combinations" of the derivatives produce a sequence convergent to the function...
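We can generate this sequence symbolically (a sketch assuming the sympy library):

```python
# The values of the consecutive derivatives of sin(x) at x = 0.
from sympy import sin, symbols, diff

x = symbols('x')
print([diff(sin(x), x, n).subs(x, 0) for n in range(8)])
# [0, 1, 0, -1, 0, 1, 0, -1] -- periodic, not convergent
```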
Let's try to compute as many consecutive derivatives as possible, or even all of them, for the functions below.
Example. The positive integer powers. The PF applies: $$(x^{n})' = nx^{n-1}.$$ The power decreases by $1$ every time. Therefore, $$(x^{n})^{ (n+1)} = 0.$$ Then, it stays $0$: $$(x^{n})^{ (n+1)} = (x^{n})^{ (n+2)}=...=0.$$ The powers in the sequence of derivatives decrease to $0$ and then remain constant. $\square$
Example. The exponent. Since $$(e^{x})' = e^{x},$$ we have: $$(e^{x})^{(n)} = e^{x}.$$ The function remains the same! The sequence of derivatives is constant. $\square$
Example. The trig functions. Same for both sine and cosine: $$\begin{aligned} (\sin x)^{(4n)} & = \sin x \\ (\cos x)^{(4n)} & = \cos x \end{aligned}$$ The sequence of derivatives is cyclic for both functions. $\square$
Example. The negative integer powers. We apply PF again: $$\begin{aligned} (x^{-1})' & = -1x^{-2}, \\ (-x^{-2})' & = 2x^{-3},\\ ... \end{aligned}$$ The power goes down by $1$ every time and, as a result, tends to $-\infty$. The sequence doesn't stop. $\square$
Exercise. Show that the same happens with all non-integer powers.
Differentiation creates a certain dynamic among the functions: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\lra}[1]{\xleftarrow{\quad\quad#1\quad}\!\to} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}\begin{array}{|ccccccccc|} \hline &&&&&&&& \ \curvearrowleft^{\frac{d}{dx}}\\ x^n &\ra{\frac{d}{dx}} &nx^{n-1} &\ra{\frac{d}{dx}} &...&\ra{\frac{d}{dx}} & \text{constant} &\ra{\frac{d}{dx}} & 0\\ \hline \frac{1}{x} &\ra{\frac{d}{dx}} &-\frac{1}{x^2} &\ra{\frac{d}{dx}} & \frac{2}{x^3} &\ra{\frac{d}{dx}} & ...\\ \hline \sin x&\ra{\frac{d}{dx}}&\cos x\\ \ \ua{\frac{d}{dx}}& &\ \da{\frac{d}{dx}}\\ -\cos x & \la{\frac{d}{dx}}& -\sin x\\ \hline e^{-x}&\lra{\frac{d}{dx}}&-e^{-x}\\ \hline \ \curvearrowleft^{\frac{d}{dx}}\\ e^x \\ \hline \end{array}$$
Warning: Starting in Chapter 17, we will see that the function and its derivative are two animals of very different breeds. As a result, the dynamics discussed above will disappear in higher dimensions.
The repeated differentiation process may fail to continue when the $k$th derivative does not exist, i.e., when the following limit does not exist: $$f^{(k)}(a)=\lim_{h\to 0} \frac{f^{(k-1)}(a+h)-f^{(k-1)}(a)}{h}.$$
Definition. A function $f$ is called twice, thrice, ..., $n$ times differentiable when $f',f' ',f' ' ',..., f^{(n)}$ exist. When the derivatives of all orders exist, we call the function smooth.
The functions that we have treated above are smooth inside their domains.
Example. This function is differentiable but not twice differentiable: $$f(x)=\begin{cases} -x^2&\text{ if } x<0;\\ x^2&\text{ if } x\ge 0. \end{cases}$$ Its graph looks smooth:
There is no doubt in which direction a beam of light would bounce off such a surface. However, let's compute the derivatives. It is easy for $x\ne 0$ because there is only one formula at a time: $$f'(x)=\begin{cases} -2x&\text{ if } x<0;\\ 2x&\text{ if } x> 0. \end{cases}$$ For the case of $x=0$, we consider the two one-sided limits: $$\lim_{h\to 0^-}\frac{f(0+h)-f(0)}{h}=\lim_{h\to 0^-}\frac{f(h)}{h}=\lim_{h\to 0^-}\frac{-h^2}{h}=\lim_{h\to 0^-}(-h)=0;$$ $$\lim_{h\to 0^+}\frac{f(0+h)-f(0)}{h}=\lim_{h\to 0^+}\frac{f(h)}{h}=\lim_{h\to 0^+}\frac{h^2}{h}=\lim_{h\to 0^+}h=0.$$ They match! Therefore, $$f'(0)=0.$$ We have discovered that $f'(x)=2|x|$. It's not differentiable at $0$! $\square$
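The failure at $0$ can be seen numerically (a Python sketch): the two one-sided difference quotients of $f'$ at $0$ do not match.

```python
# One-sided difference quotients of f'(x) = 2|x| at 0: +2 from the right, -2 from the left.
def fp(x): return 2 * abs(x)   # the derivative computed above

h = 1e-6
print((fp(h) - fp(0)) / h)        #  2.0
print((fp(-h) - fp(0)) / (-h))    # -2.0: the limit does not exist
```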
Example. More examples of this kind:
$\sin\frac{1}{x}$ is discontinuous at $x=0$;
$x\sin\frac{1}{x}$ is continuous at $x=0$ but not differentiable;
$x^2\sin\frac{1}{x}$ is differentiable at $x=0$ but not twice differentiable.
Exercise. Prove the above statements.
Below we visualize the relation between these classes of functions:
What is the geometric meaning of these higher derivatives for a given function?
Let's consider the first derivative. It represents the slopes of the function. Then the second derivative represents the rate of change of these slopes. Notice how changing slopes are seen as rotating tangents:
Specifically, we see:
decreasing slopes = tangents rotate clockwise;
increasing slopes = tangents rotate counter-clockwise.
This matches our convention from trigonometry that counter-clockwise is the positive direction for rotations.
Even though we typically have functions with the $n$th derivative for each positive integer $n$, only the first two reveal something visible about the graph of the original function.
Above we compare
the shapes of the patches of the graph of the function $f$ to the sign of the values of the first derivative $f'$; and
the shapes of the patches of the graph of the function $f$ to the sign of the values of the second derivative $f' '$.
There are three main levels of analysis of a function:
Analysis at level $0$: the values of $f$. We ask, how large? The findings are about the values, $x$- and $y$-intercepts, asymptotes and other large-scale behavior, periodicity, etc.
Analysis at level $1$: the slopes of $f$. We ask, up or down? The findings are about the angles, increasing/decreasing behavior, critical points, etc.
Analysis at level $2$: the rate of change of the slopes of $f$. We ask, concave up or down? The findings are about the change of steepness, concavity, telling a maximum from a minimum, etc.
We can go on and continue to discover more and more subtle but less and less significant properties of the function...
This three-level analysis also applies to our study of motion, below.
The derivative of the velocity and, therefore, the second derivative of the position, is called the acceleration. The concept allows one to add another level of analysis of motion:
Analysis at level $0$: the location, where?
Analysis at level $1$: the velocity, how fast? forward or back?
Analysis at level $2$: the acceleration, how large is the force?
Suppose $t$ is time and $y$ is the vertical dimension, the height. Now the specific case of free fall... These are the initial conditions:
$y_0$ is the initial height, $y_0=y\Big|_{t=0}$, and
$v_y$ is the initial vertical component of velocity, $\frac{dy}{dt}\Big|_{t=0}$.
Then, we have: $$\begin{array}{lll} y&=y_0+v_yt-\tfrac{1}{2}gt^2&\Longrightarrow& \frac{dy}{dt}&=v_y&-gt&\Longrightarrow&\frac{d^2y}{dt^2}&=-g. \end{array}$$ Now, from the point of view of the physics of the situation, the derivation should go in the opposite direction:
when there is no force, the velocity is constant;
when the force is constant, the velocity is linear in time, etc.
However, at this point we are still unable to answer these questions:
How do we know that only the derivatives of constant functions and none others are zero?
How do we know that only the derivatives of linear functions and none others are constant?
How do we know that only the derivatives of quadratic functions and none others are linear?
This reversed process is called antidifferentiation. So far, we cannot justify even this, simplest conclusion: $$f'=0 \Longrightarrow f=c,\ \text{ for some real number }c.$$ We will study these and related questions in Chapter 9.
Change of variables and the derivative
If the distance is measured in miles and time in hours, the velocity is measured in miles per hour. If the distance is measured in kilometers and time in minutes, the velocity is measured in kilometers per minute. In either case, we are dealing with the same functions just measured in different units. If the two distance functions match, do the velocity functions too?
Let's recall that we can interpret every composition as a change of variables. We are especially interested in a change of units because we often measure quantities in multiple ways:
length and distance: inches, miles, kilometers, light years;
time: minutes, seconds, hours, years;
weight: pounds, kilograms, karats;
temperature: degrees Celsius, degrees Fahrenheit, etc.
How does such a change affect calculus as we know it?
If $$y=f(x)$$ is a relation between two quantities $x$ and $y$, then either one may be replaced with a new variable. Let's call them $t$ and $z$ respectively and suppose these replacements are given by some functions:
case 1: $x=g(t)$;
case 2: $z=h(y)$.
These substitutions create new relations:
case 1: $y=k(t)=f(g(t))$;
case 2: $z=k(x)=h(f(x))$.
The Chain Rule gives us the rate of change for each pair:
$$\frac{dk}{dt}=\frac{df}{dx}\frac{dg}{dt};$$
$$\frac{dk}{dx}=\frac{dh}{dy}\frac{df}{dx}.$$
Most often, the conversion formula of a change of units is linear.
This is for Case 1.
Theorem (Linear Chain Rule I). If $$g(t)=mt+b$$ and $y=f(x)$ is differentiable, then the derivative of $y=k(t)=f(g(t))$ is given by: $$k'(t)=mf'(mt+b).$$
Example. What if $x$ is time and we change the moment from which we start measuring time, e.g., the "daylight savings time"? We have: $$g(t)=t+t_0\ \Longrightarrow\ k'(t)=f'(t+t_0).$$ $\square$
Example. Suppose $x$ is time and $y$ is the location, then function $g$ may represent the change of units of time, such as to seconds, $x$, from minutes, $t$: $$x=g(t)=60t.$$ Then, the change of the units won't change a lot about our calculus:
if $f$ is the location as a function of seconds, $k$ is the location as a function of minutes, and $k(t)=f(60t)$;
also $f'$ is the velocity as a function of seconds, $k'$ is the velocity as a function of minutes, and $k'(t)=60f'(60t)$;
also $f' '$ is the acceleration as a function of seconds, then $k' '$ is the acceleration as a function of minutes, and $k' '(t)=60^2f' '(60t)$.
Thus, the graphs of the new quantities describing motion are simply re-scaled versions of the graphs of the old ones. $\square$
Theorem (Linear Chain Rule II). If $$h(y)=my+b,$$ and $y=f(x)$ is differentiable, then the derivative of $z=k(x)=h(f(x))$ is given by: $$k'(x)=mf'(x).$$
Example. What if $y$ is the location and we change the place from which we start measuring, e.g., the Greenwich meridian? We have: $$h(y)=y+y_0\ \Longrightarrow\ k'(x)=f'(x).$$ We can also change the direction of the $y$-axis: $$h(y)=-y\ \Longrightarrow\ k'(x)=-f'(x).$$ $\square$
Example. Suppose $x$ is time and $y$ is the location, then function $h$ may represent the change of units of length, such as from miles, $y$, to kilometers, $z$: $$z=h(y)=1.6y.$$ Then, the change of the units will change very little about the calculus that we have developed; the coefficient, $m=1.6$, is the only adjustment necessary. Furthermore,
if $f$ is the location in miles, then $k$ is the location in kilometers: $k(x)=1.6f(x)$;
also $f'$ is the velocity with respect to miles, $k'$ is the velocity with respect to kilometers, and $k'(x)=1.6f'(x)$;
also $f' '$ is the acceleration with respect to miles, $k' '$ is the acceleration with respect to kilometers, and $k' '(x)=1.6f' '(x)$.
Thus, the quantities describing motion are simply replaced with their multiples. The new graphs are the vertically stretched versions of the old ones. $\square$
Example. Recall the example where a function $f$ records the temperature -- in Fahrenheit -- as a function of time -- in minutes -- and is replaced with another function, $k$, that records the temperature in Celsius as a function of time in seconds:
$s$ time in seconds;
$m$ time in minutes;
$F$ temperature in Fahrenheit;
$C$ temperature in Celsius.
The conversion formulas are: $$m=s/60,$$ and $$C=(F-32)/1.8.$$
These are the relations between the four quantities: $$k:\quad s \xrightarrow{\quad s/60 \quad} m \xrightarrow{\quad f\quad} F \xrightarrow{\quad (F-32)/1.8\quad} C.$$ And this is the new function: $$C=k(s)=(f(s/60)-32)/1.8.$$ Then, by the Chain Rule, we have: $$\frac{dC}{ds}=\frac{dC}{dF}\frac{dF}{dm}\frac{dm}{ds}=\frac{1}{1.8}\cdot f'(m)\cdot \frac{1}{60}.$$ $\square$
Exercise. Provide a similar analysis for the sizes of shoes and clothing.
Example. The conversion of the number of degrees $y$ to the number of radians $x$ is: $$x=\frac{\pi}{180}y.$$ Then, $$\frac{dx}{dy}=\frac{\pi}{180}.$$ Therefore, the trigonometric differentiation formulas, such as $\left( \sin x \right)'=\cos x$, don't hold anymore! Indeed, let's denote sine and cosine for degrees by $\sin_dy$ and $\cos_dy$ respectively: $$\sin_dy=\sin \left( \frac{\pi}{180}y \right) \text{ and } \cos_dy=\cos \left( \frac{\pi}{180}y \right).$$ Then, $$\begin{array}{lll} \frac{d}{dy}\sin_d y&=\frac{d}{dy}\sin \left( \frac{\pi}{180}y \right)\\ &=\frac{\pi}{180}\cos \left( \frac{\pi}{180}y \right)\\ &=\frac{\pi}{180}\cos_dy. \end{array}$$ $\square$
Example. What if we are to change our unit to a logarithmic scale? For example, $$x=10^t.$$ Then, for any function $y=f(x)$, we have by the Chain Rule: $$\frac{dy}{dt}=\frac{dy}{dx}\Bigg|_{x=10^t}\cdot \left( 10^t \right)'=\frac{dy}{dx}\Bigg|_{x=10^t}10^t\ln 10.$$ The effect on the derivative is not proportional! $\square$
This is the summary of the Chain Rule: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{ccccccccc} & f(g(x)) &\ra{\frac{d}{dx}} &f'(g(x))g'(x) \\ \small\text{substitution }&\quad \da{u=g(x)} & &\ \ \ua{CR} \\ & f(u) &\ra{\frac{d}{du}} &f'(u) \end{array}$$ The method allows us to get from left to right at the top (differentiation with respect to $x$) by taking a detour. We follow the path around the square: substitution, differentiation with respect to $u$, the Chain Rule formula with back-substitution.
Implicit differentiation and related rates
We differentiate functions; can we differentiate relations?
Recall from Chapter 2 that relations are represented by equations, but not the kind we are used to: $$\underbrace{x^{2}}_{\text{a number}} - \underbrace{1}_{\text{a number}}=0\quad \leadsto \text{ find a particular number } x.$$ After the substitution, the equation should be true. The equations we are interested in are equations of functions, such as the familiar equation of the circle: $$\underbrace{x^2}_{\text{a function}}+\underbrace{y^{2}}_{\text{a composition of two functions}} =1 \quad \leadsto \text{ find a particular function } y=y(x).$$ After the substitution, the equation should be true for all $x$.
The equation implicitly defines this function. As we have done in the past, we can make the function $y=y(x)$ explicit by solving the equation for $y$: $$y = \sqrt{1 - x^{2}} \text{ or } y=-\sqrt{1 - x^{2}}.$$
However, what if we want only the rate of change of this, unknown, function?
We will rely on the following fact: if two functions are equal at all nodes $x$ of a partition, then so are their difference quotients at all secondary nodes $c$: $$f(x)=g(x) \text{ for all } x\ \Longrightarrow\ \frac{\Delta f}{\Delta x}(c)=\frac{\Delta g}{\Delta x}(c) \text{ for all } c.$$
Example (circle). Find the secant line through the two points on the circle of radius $1$ centered at $0$: $$(0,1) \text{ and } \left( \tfrac{\sqrt{2}}{2},\tfrac{\sqrt{2}}{2} \right).$$
Typically, a curve has been the graph of a function $y = x^{2}$, $y = \sin x$, etc., given explicitly. This time the equation is: $$x^{2} + y^{2} = 1.$$ To find the slope of the secant line, we need the difference quotient of the function but there is no, explicit, function!
The idea is to consider the above equation as a relation between the two variables. In fact, we think of $y=y(x)$ as a function of $x$, i.e.: $$ x^{2} + y(x)^{2} = 1.$$ We will also assume that
the two $x$-values $x_0 =0$ and $x_1=\frac{\sqrt{2}}{2}$ are nodes of a partition of the $x$-axis, and
the two $y$-values $y_0 =1$ and $y_1= \frac{\sqrt{2}}{2}$ are nodes of a partition of the $y$-axis.
We apply the Chain Rule to both sides of the equation: $$\begin{array}{rll} \frac{\Delta }{\Delta x} \left( x^{2} + y^{2} \right) & = \frac{\Delta }{\Delta x} (1) &\Longrightarrow\\ \frac{\Delta }{\Delta x} x^{2} + \frac{\Delta }{\Delta x}y^{2} &= 0 &\Longrightarrow\\ (x_0+x_1) + (y_0+y_1) \frac{\Delta y}{\Delta x} &= 0 &\Longrightarrow\\ \frac{\Delta y}{\Delta x} &= -\frac{x_0+x_1}{y_0+y_1} &\text{ for } y_0+y_1\ne 0. \end{array}$$ We have found a formula for the difference quotient but it is still implicit -- because we don't have a formula for $y=y(x)$. Fortunately, we don't need the whole function, just those two points on its graph. We substitute these into the formula above to find: $$\frac{\Delta y}{\Delta x}= -\frac{0+\frac{\sqrt{2}}{2}}{1+\frac{\sqrt{2}}{2}}= -\frac{\sqrt{2}}{2+\sqrt{2}}= -\frac{1}{1+\sqrt{2}}.$$ Finally, from the point-slope formula we obtain the answer: $$y - \frac{\sqrt{2}}{2} = -\frac{1}{1+\sqrt{2}}\left( x - \frac{\sqrt{2}}{2}\right). $$ We can automate this formula and find more secant lines:
What about the derivative? We will rely on the following fact: if the values of two functions are equal for all $x$ then so are the values of their derivatives: $$f(x)=g(x) \text{ for all } x\ \Longrightarrow\ f'(x)=g'(x) \text{ for all } x.$$ We can put it simply as: if two functions are equal then so are their derivatives; i.e., $$\begin{array}{|c|}\hline\quad f=g \ \Longrightarrow\ f'=g' \quad \\ \hline\end{array}$$
Differentiating an equation of functions and finding the derivative of a function defined by this equation is called implicit differentiation.
Let's consider two examples of how this idea may help us with finding tangents to implicit curves.
Example (circle). Find the tangent line for the circle of radius $1$ centered at $0$ at the point $\left( \tfrac{\sqrt{2}}{2},\tfrac{\sqrt{2}}{2} \right)$.
Typically, a curve has been the graph of a function $y = x^{2}$, $y = \sin x$, etc., given explicitly. This time the equation is: $$x^{2} + y^{2} = 1.$$ To find the slope of the tangent line, we need the derivative, but there is no function to differentiate!
Our approach is to differentiate the equation above as a relation between the two variables. As we differentiate, we think of $y=y(x)$ as a function of $x$, i.e.: $$ x^{2} + y(x)^{2} = 1.$$ This is the result, via the Chain Rule: $$\begin{array}{rll} \frac{d}{dx} \left( x^{2} + y^{2} \right) & = \frac{d}{dx} (1) &\Longrightarrow\\ \frac{d}{dx} x^{2} + \frac{d}{dx}y^{2} &= 0 &\Longrightarrow\\ 2x + 2y \frac{dy}{dx} &= 0 &\Longrightarrow\\ \frac{dy}{dx} &= -\frac{x}{y} &\text{ for } y\ne 0. \end{array}$$
We have found a formula for the derivative, but it is still implicit -- because we don't have a formula for $y=y(x)$. Fortunately, we don't need the whole function, just a single point on its graph: $$ x = \frac{\sqrt{2}}{2},\ y = \frac{\sqrt{2}}{2} $$ We substitute these into the formula above to find: $$\frac{dy}{dx}\Bigg|_{x = \frac{\sqrt{2}}{2},\ y = \frac{\sqrt{2}}{2}}= -\frac{x}{y}\Bigg|_{x = \frac{\sqrt{2}}{2},\ y = \frac{\sqrt{2}}{2}}= -1.$$ Finally, from the point-slope formula we obtain the answer: $$y - \frac{\sqrt{2}}{2} = -1\left( x - \frac{\sqrt{2}}{2}\right). $$
Note that we could use the explicit formula $y = \sqrt{1 - x^{2}}$ with the same result: $$\frac{dy}{dx} \overset{\text{CR}}{=} \frac{-2x}{2\sqrt{1 - x^{2}}} = -\frac{x}{\sqrt{1 - x^{2}}},$$ after we substitute $x = \frac{\sqrt{2}}{2}$. However, it's only explicit for the upper half of the circle. For a point below the $x$-axis, we'd need to start over and use the other formula, $y = -\sqrt{1 - x^{2}}$.
Observe also that the derivative $\frac{dy}{dx}$ is undefined at $x= \pm 1$ (implicit or explicit) because the denominator is $0$. How do we find the tangent? From the formula we can proceed in two directions: $$x^{2} + y^{2} = 1 \leadsto \begin{cases} y \text{ depends on } x,\\ x \text{ depends on } y. \end{cases} $$ Then, we can try implicit differentiation of the same equation -- but with respect to $y$ this time. The computation is very similar, and the result is: $$\frac{dx}{dy} = -\frac{y}{x}.$$ The formula is defined for $y = 0$, at the points $(-1,0),\ (1, 0)$. Then, $\frac{dx}{dy} = 0$ at these points. Therefore, the tangent line is $x - 1 = 0\cdot (y-0)$, or $x = 1$. $\square$
Example (Folium of Descartes). This curve is given by: $$x^{3} + y^{3} = 6xy.$$
We differentiate the equation as before: $$\frac{d}{dx} \left( x^{3} + y^{3} \right) = \frac{d}{dx} (6xy). $$ Using CR we notice that every time we see $y$, the factor $\frac{dy}{dx}$ also appears: $$\begin{array}{rll} \frac{d}{dx} (x^{3}) + \frac{d}{dx} (y^{3}) & = 6\frac{d}{dx} (xy) \\ 3x^{2} + 3y^{2}\cdot \frac{dy}{dx} &= 6 \left(y + x\frac{dy}{dx} \right) . \end{array}$$ Solve for $\frac{dy}{dx}$: $$\begin{array}{rll} 3x^{2} + 3y^{2} \frac{dy}{dx} & = 6y + 6x \frac{dy}{dx} \\ (3y^{2} - 6x) \frac{dy}{dx} & = 6y - 3x^{2} \\ \frac{dy}{dx} & = \underbrace{\frac{6y - 3x^{2}}{3y^{2} - 6x}}_{\text{Fails at } (0,0)!} \end{array}$$ The end result is: if we know the location $(x, y)$, we know the slope of the tangent at that point. For example, at the tip of the curve we have $x=y$. Therefore, the slope is $\frac{dy}{dx}=-1$. $\square$
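A computer algebra system can carry out this implicit differentiation for us (a sketch assuming the sympy library, whose idiff function solves for $\frac{dy}{dx}$ from an equation set to zero):

```python
# Implicit derivative dy/dx for x^3 + y^3 - 6xy = 0, evaluated at the tip (3,3).
from sympy import symbols, idiff

x, y = symbols('x y')
dydx = idiff(x**3 + y**3 - 6*x*y, y, x)
print(dydx)                      # a formula equivalent to (6y - 3x^2)/(3y^2 - 6x)
print(dydx.subs({x: 3, y: 3}))   # -1
```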
Note that in either example, we can cut the curve into pieces each of which is the graph of a function:
Now, implicit differentiation also helps with situations when several quantities depend on each other implicitly as well as on time. If we differentiate this dependence equation, we get a dependence between their derivatives. The result is related rates.
Example (air balloon). Suppose we have an air balloon, spherical in shape. Air is pumped into it at the rate of $5\ \text{in}^3/\text{sec}$. What is the rate of growth of the radius at different radii?
Step one in word problems: introduce variables; let
$t$ be time,
$V$ be the volume, and
$r$ be the radius.
Next, $V$ depends on $t$ and at that moment we have $$\frac{dV}{dt} = 5,$$ according to the condition. Furthermore, this is a sphere, so $$V = \frac{4}{3}\pi r^{3}.$$ Here we see that $V$ also depends on $r$; altogether, these are the dependencies we face: $$\begin{array}{cccc} t &\to &r\\ &\searrow&\downarrow\\ &&V \end{array}$$ We could reverse the last arrow by finding the inverse: $r = \sqrt[3]{\frac{3}{4\pi}V}$. Instead, we differentiate the equation itself. Thus, if two variables are related (via an equation), then so are their derivatives, i.e., the rates of change (hence, "related rates").
Keeping in mind that both $V$ and $r$ are functions of time, we differentiate the relation with respect to $t$: $$V= \frac{4}{3} \pi r^{3}.$$ The left-hand side is very simple: $$\frac{d}{dt}V=\frac{dV}{dt},$$ but in the right-hand side $r(t)^{3}$ is a composition: $$\frac{d}{dt}\left( \frac{4}{3} \pi r^{3} \right) = \frac{4}{3} \pi \cdot 3r^{2} \frac{dr}{dt}.$$ Thus, we have: $$\frac{dV}{dt} = \frac{4}{3} \pi \cdot 3r^{2} \frac{dr}{dt}.$$
Recall that the rate of change of $V$ is $5$ (at a given moment), so: $$5 = 4\pi r^{2} \frac{dr}{dt},$$ or $$\frac{dr}{dt} = \frac{5}{4\pi r^{2}}.$$
Next, what's the rate of growth of $r$ when $r = 1,\ r = 2,\ r = 3$? $$\begin{array}{lll} r = 1: & \frac{dr}{dt} = \frac{5}{4\pi}; \\ r = 2: & \frac{dr}{dt} = \frac{5}{16\pi}; \\ r = 3: & \frac{dr}{dt} = \frac{5}{36\pi}. \end{array} $$ Indeed, we see a slow-down. $\square$
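The slow-down can be double-checked without calculus (a Python sketch; we invert the volume formula and difference it numerically):

```python
# Compare a numerical dr/dt with the formula 5/(4*pi*r^2).
import math

def r_of_V(V):                      # radius from volume: V = (4/3)*pi*r^3
    return (3*V / (4*math.pi))**(1/3)

V, h = 50.0, 1e-6
drdt_numeric = (r_of_V(V + 5*h) - r_of_V(V)) / h   # volume grows 5 in^3 per sec
r = r_of_V(V)
print(drdt_numeric, 5 / (4*math.pi*r**2))
```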
Example (sliding ladder). Suppose a $10$ ft. ladder stands against the wall and its bottom is sliding at $2$ ft/sec. How fast is the top moving when it is $6$ ft from the floor?
Introduce variables:
$x$ the distance of the bottom from the wall,
$y$ the distance of the top from the floor, both functions of
$t$ the time.
Translate the information, as well as the question, about the variables into equations: $$\begin{array}{ll|l} &\text{quantities:}&\text{functions:}\\ \hline \text{always}& x^{2} + y^{2} = 10^{2}&(x(t))^{2} + (y(t))^{2} = 10^{2}\\ \text{now}&\frac{dx}{dt} = 2&x'(t_0)=2\\ \text{now}&y = 6&y(t_0)=6 \\ \text{now}&\frac{dy}{dt} = ?& y'(t_0)=? \end{array}$$ That's the purely mathematical problem to be solved.
We differentiate the equation with respect to the independent variable, $t$: $$\begin{array}{rlll} \frac{d}{dt}\left( x^{2} + y^{2} \right) & = \frac{d}{dt}\left(100\right) \\ 2x\frac{dx}{dt} + 2y\frac{dy}{dt} & = 0,& \text{ solve for } \frac{dy}{dt} \\ \frac{dy}{dt} &= - \frac{x}{y}\frac{dx}{dt},& \text{ substitute } \\ &= -\frac{x}{6} 2, & \text{ now } x=8 \text{ comes from } x^{2} + y^{2} = 100, \\ &= -\frac{8}{6} 2 \\ & = -\frac{8}{3}. \end{array} $$ It is going down! $\square$
Exercise. Solve the problem for the moment when the ladder hits the floor.
Radar gun: the math
Problem. Suppose you are driving at a speed of $80$ mph when you see a police car positioned $40$ feet off the road. What is the radar gun's reading?
How does the radar gun work? In fact, how does a radar work? A signal is sent, it bounces off an object, and, when it comes back, the time lapse is recorded. Then, the distance to the object is computed as: $$S = \underbrace{\text{ signal's speed }}_{\text{ known }} \cdot \underbrace{\text{ time passed }}_{\text{ measured }}. $$
A radar gun does this twice.
A signal is sent, it comes back, the time is measured. Then the second time:
$S_{1} =$ speed $\cdot$ time, at time $t=t_1$,
$S_{2} =$ speed $\cdot$ time, at time $t=t_2$.
Then, the reading is computed as: $$\text{ estimated speed }= \frac{\text{ change of distance }}{\text{ time between signals }}. $$ No radar gun can do better than that!
To summarize: $$\frac{dS}{dt}\approx \frac{\Delta S}{\Delta t}, $$ where
$\Delta S=S_{2} - S_{1}$ is the change of distance between the two cars,
$\Delta t=t_2-t_1$ is the time between signals.
Now the question: is the reading of the radar gun $80$ mph?
To get an idea of what can happen, consider this extreme example: what if you are just passing in front of the police car, like this?
It is conceivable that at time $t_1$ your car is the same distance from the intersection as it is past the intersection at time $t_2$. Then $\Delta S=0$! So, the reading can be off by a lot...
These are the variables:
$S$, the distance from the police car to yours.
$P$, the distance from your car to the intersection.
$t$, the time, the independent variable, also
$D=40$, distance from the police car to the road.
Since $80$ mph is your speed, $\frac{dP}{dt} = 80$. That's what the radar gun is meant to detect. But what the radar measures in reality is $\frac{dS}{dt}$!
How good an approximation of the real velocity $\frac{dP}{dt}$ is the perceived velocity $\frac{dS}{dt}$? The spreadsheet contains a column of locations $P$ of your car (distances to the intersection), the next one is for the distance $S$ to the police car (plotted first), and finally the average rate of change of $S$.
As we can see, the approximation is best away from the intersection. But, within $75$ feet of the intersection, the reading falls below $71$ mph!
Next, we establish a functional relation between the two via the Pythagorean Theorem: $$P^{2} + D^{2} = S^{2} \gets \text{These aren't numbers, but variables, i.e., functions.} $$ This connects $P$ and $S$, but not $\frac{dP}{dt}$ and $\frac{dS}{dt}$ yet. We differentiate the equation with respect to $t$ to get there: $$\begin{aligned} \frac{d}{dt}\left(P^{2} + D^{2}\right) & = \frac{d}{dt}\left(S^{2}\right)\ \Longrightarrow \\ 2P\cdot \frac{dP}{dt} + 2D\underbrace{\frac{dD}{dt}}_{=0} &= 2S\cdot \frac{dS}{dt}\ \Longrightarrow \\ P\cdot\frac{dP}{dt} &= S\cdot \frac{dS}{dt}\ \Longrightarrow \\ \frac{dS}{dt} &= \frac{P}{S}\frac{dP}{dt}. \end{aligned}$$ Thus, we finally have a relation between these functions. In fact, this is what the radar gun shows: $$\frac{dS}{dt} = \frac{P}{\sqrt{P^2+D^2}}\cdot 80. $$ We plot this relation below, to confirm our earlier conclusions:
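The spreadsheet computation mentioned above is easy to reproduce (a Python sketch; $D=40$ feet, and the sample locations $P$ are our own choices, with negative values meaning past the intersection):

```python
# The radar's reading dS/dt = P/sqrt(P^2 + D^2) * 80 at several locations P.
import math

D, speed = 40.0, 80.0
for P in [300.0, 150.0, 75.0, 30.0, 0.0, -75.0]:
    reading = P / math.sqrt(P**2 + D**2) * speed
    print(P, round(reading, 1))
# 300 -> 79.3, 75 -> 70.6, 0 -> 0.0, -75 -> -70.6: always under 80 in magnitude
```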
Furthermore, we can simplify this relation: $$\cos \alpha = \frac{P}{S}, $$ where $\alpha$ is the angle between the road ahead of you and the direction to the police car.
How does $\alpha$ change as you drive?
Early: $\alpha$ is close to $0$, so $\cos \alpha$ close to $1$, and, therefore, $\frac{dS}{dt}$ is close to $80$.
Then, as $\alpha$ increases, $\cos \alpha$ decreases toward $0$, and so does $\frac{dS}{dt}$.
In the middle, we have $\alpha = \frac{\pi}{2}$, $\cos \alpha = 0$, $\frac{dS}{dt} = 0$.
As $\alpha$ passes $\frac{\pi}{2}$, $\cos \alpha$ decreases to negative values, and so does $\frac{dS}{dt}$.
Late: $\alpha$ approaches $\pi$, and $\cos \alpha$ approaches $-1$, and, therefore, $\frac{dS}{dt}$ approaches $-80$.
Conclusion: The radar gun always underestimates your speed: $$\left| \frac{dS}{dt} \right| < 80, $$ unless the police car is on the road! In that case, what can you do to "improve" the reading? What do you want $\alpha$ to be? As close to $\frac{\pi}{2}$ as possible!
The derivative of the inverse function
Let's recall from Chapter 3 that for a given one-to-one and onto function $y=f(x)$, its inverse is the function, $x=f^{-1}(y)$, that satisfies $$f^{-1}(y)=x \text{ if and only if }f(x)=y.$$ The idea is that a function and its inverse represent the same relation:
$x$ and $y$ are related when $y=f(x)$, or
$x$ and $y$ are related when $x=f^{-1}(y)$.
For example, these are pairs of functions inverse to each other: $$\begin{array}{rcl} y=x+2& \text{ vs. } & x=y-2,\\ y=3x&\text{ vs. } & x=\frac{1}{3}y,\\ y=x^2&\text{ vs. } & x=\sqrt{y} \quad\text{ for }x,y\ge 0,\\ y=e^x&\text{ vs. } & x=\ln y \quad\text{ for }y > 0. \end{array}$$
Can we express the derivative of the inverse of a function in terms of the derivative of the function?
Let's recall that the inverse "undoes" the effect of the function. The idea applies especially well to the interpretation of functions as transformations. Indeed, what is the meaning of the derivative of a transformation? It is its stretch ratio. Now, if $f$ stretches the $x$-axis at the rate of $k$ (at $x=a$), then $f^{-1}$ shrinks the $y$-axis at the rate of $k$ (at $b=f(a)$); i.e., $$\frac{df^{-1}}{dy}(b)=\frac{1}{\frac{df}{dx}(a)}.$$ It's the reciprocal!
We can also guess this relation from the following simple picture:
As the $xy$-plane is flipped about the diagonal, this is what happens: $$\text{slope of }f=\frac{\text{ change of }y}{\text{ change of } x}=\frac{A}{B}=\frac{1}{B/A}=\frac{1}{\text{slope of }f^{-1}}.$$
Now the algebra.
We will need the following, algebraic, property of the inverse presented in Chapter 3: $$\begin{array}{lll} f^{-1}(f(x))=x\quad\text{for all }x;\\ f\left(f^{-1}(y) \right) = y \quad\text{for all }y. \end{array}$$ Here is a flowchart representation of this idea: $$\begin{array}{ccccccccccccccc} x & \mapsto & \begin{array}{|c|}\hline\quad f \quad \\ \hline\end{array} & \mapsto & y & \mapsto & \begin{array}{|c|}\hline\quad f^{-1} \quad \\ \hline\end{array} & \mapsto & x &(\text{same }x);\\ y & \mapsto & \begin{array}{|c|}\hline\quad f^{-1} \quad \\ \hline\end{array} & \mapsto & x & \mapsto & \begin{array}{|c|}\hline\quad f \quad \\ \hline\end{array} & \mapsto & y &(\text{same }y). \end{array}$$
Example. Suppose we want to find the derivative of the logarithm. We'll use only its definition via the exponential function, as follows. We differentiate this equation of functions (which amounts to the definition of the logarithm): $$e^{\ln x} = x.$$ The flow chart below shows the dependencies: $$ x \mapsto u = \ln x \mapsto y =e^{u}. $$ By CR, the left-hand side gives: $$(e^{\ln x})' = (\ln x)'\cdot e^{u} = (\ln x)'\, e^{\ln x} = (\ln x)'\, x, $$ while the right-hand side gives $(x)'=1$. Therefore, $(\ln x)'\, x = 1$, and $$(\ln x)' = \frac{1}{x},$$ whenever $x > 0$. $\square$
Similarly, we can find the derivatives of $\sin^{-1} x$, $\cos^{-1} x$, etc. Let's find the general formula instead.
We differentiate the equation: $$f^{-1}(f(x))=x.$$ Then, by CR, we have $$\frac{\Delta f^{-1}}{\Delta y}\frac{\Delta f}{\Delta x}=1 \text{ and } \frac{d f^{-1}}{dy}\frac{df}{dx} = 1.$$ The other equation produces the same result!
We have proven the theorem below. We just need to be careful with the variables, as follows.
Theorem (Inverse Rule). (A) The difference quotient of the inverse of a function is found as the reciprocal of its difference quotient; i.e., for any function $f$ defined at two adjacent nodes $x$ and $x+\Delta x$ of a partition with $f(x)\ne f(x+\Delta x)$, so that its inverse function $f^{-1}$ is defined at the two adjacent nodes $f(x)$ and $f(x+\Delta x)$ of a partition, the difference quotients (defined at the secondary nodes $c$ and $q$ within these edges of the two partitions, respectively) satisfy: $$\frac{\Delta f^{-1}}{\Delta y}(q)= \frac{1}{\frac{\Delta f}{\Delta x}(c)}.$$ (B) For any one-to-one function $y=f(x)$ differentiable at $x=a$ with $\frac{df}{dx}(a)\ne 0$, its inverse $x=f^{-1}(y)$ is differentiable at $b=f(a)$, and we have: $$\frac{df^{-1}}{dy}(f(a))= \frac{1}{\frac{df}{dx}(a)},$$ or $$\frac{df^{-1}}{dy}(b)= \frac{1}{\frac{df}{dx}(f^{-1}(b))}.$$
The formulas in the Lagrange notation are as follows: $$\left( f^{-1} \right)' (f(a)) =\frac{1}{f'(a)}$$ and $$\left( f^{-1} \right)' (b) =\frac{1}{f'(f^{-1}(b))}.$$
Example. Find $(\sin^{-1} y)'$. There is no formula for this function, but its meaning is (for $-\pi/2\le x\le \pi/2$) as follows: $$y = \sin x, \text{ or } x = \sin^{-1}y.$$ Since $(\sin x)'=\cos x$, we conclude: $$(\sin^{-1} y)' =\frac{1}{\cos x}= \frac{1}{\cos(\sin^{-1}y)}.$$ That may be the answer, but it's too cumbersome and should be simplified. We need to express $\cos x$ in terms of $\sin x$, which is $y$. By the Pythagorean Theorem $$\sin^{2} x + \cos^{2} x = 1,$$ we have (since $\cos x\ge 0$ on this interval) $$\begin{aligned} \cos x & = \sqrt{1 - \sin^{2} x} \\ & = \sqrt{ 1 - y^{2}}. \end{aligned} $$ Therefore, $$(\sin^{-1} y)'=\frac{1}{\sqrt{ 1 - y^{2}}}.$$ $\square$
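We can let the computer confirm this instance of the Inverse Rule; a minimal SymPy sketch (SymPy simplifies $\cos(\sin^{-1}y)$ to $\sqrt{1-y^2}$ automatically):

```python
import sympy as sp

y = sp.symbols('y')

# Inverse Rule: (sin^{-1})'(y) = 1 / sin'(x) evaluated at x = sin^{-1}(y)
via_rule = 1 / sp.cos(sp.asin(y))   # cos(asin(y)) simplifies to sqrt(1 - y^2)

print(via_rule)                     # 1/sqrt(1 - y**2)
print(sp.diff(sp.asin(y), y))       # direct differentiation agrees
```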
We can apply the theorem to other trigonometric functions. These are the results: $$\begin{aligned} (\sin^{-1}x)' & = \frac{1}{\sqrt{1 - x^{2}}}; \\ (\cos^{-1}x)' & = -\frac{1}{\sqrt{1 - x^{2}}}; \\ (\tan^{-1}x)' & = \frac{1}{1+x^{2}}. \end{aligned}$$
Exercise. Prove the formulas.
Exercise. Since $(\sin^{-1}x)' = -(\cos^{-1}x)'$, does it mean that $\sin^{-1}x = -\cos^{-1}x$?
We can re-write the Inverse Rule in the Leibniz notation: $$\frac{dx}{dy}=\frac{1}{\frac{dy}{dx}}\text{ or }\frac{dx}{dy}\frac{dy}{dx}=1.$$ Then the derivatives of inverses are the reciprocals of each other. Even though the derivatives aren't fractions, the difference quotients, i.e., the slopes of the secant lines, are:
Then the formula follows from these limits: $$\begin{array}{lll} \frac{\Delta x}{\Delta y}&\cdot& \frac{\Delta y}{\Delta x} &=1\\ \quad \downarrow&&\quad \downarrow\\ \ \frac{dx}{dy}&\cdot&\ \frac{dy}{dx}&=1& \text{ as } \Delta x\to 0,\ \Delta y\to 0. \end{array}$$ The fact that $$\Delta x\to 0 \Longrightarrow\ \Delta y\to 0$$ follows from the continuity of $f$.
Furthermore, if we concentrate on a single point $(a,b)$, where $b=f(a)$, on the graph of $y=f(x)$ and its tangent line, the derivatives $$\frac{dy}{dx}\Bigg|_{x=a} \text{ and } \frac{dx}{dy}\Bigg|_{y=b}$$ are indeed fractions and the reciprocals of each other:
Theorem (Reciprocal Powers). For any positive integer $n$ we have: $$\frac{ d y^{\frac{1}{n}} }{dy}=\frac{1}{n} y^{ \frac{1}{n}-1 }.$$
Proof. The inverse of $x=y^{\frac{1}{n}}$ is $y=x^n$. Therefore, $$\begin{array}{lll} \frac{ d y^{\frac{1}{n}} }{dy}&=\frac{1}{\frac{d x^{n}}{dx}} &=\frac{1}{nx^{n-1}} &=\frac{1}{n\left( y^{\frac{1}{n}} \right)^{n-1}}&=\frac{1}{n y^{\frac{n-1}{n}} } &=\frac{1}{n}y^{\frac{1}{n}-1}. \end{array}$$ $\blacksquare$
Theorem (Rational Powers). For any positive integers $n$ and $m$ we have: $$\frac{ d y^{\frac{m}{n}} }{dy}=\frac{m}{n} y^{ \frac{m}{n}-1 }.$$
Reversing differentiation
We have encountered the following question several times by now: when we know the velocity at every moment of time, how do we find the location? The question applies equally to the velocity acquired from the location as its difference quotient or as its derivative. We need to "reverse" the effect of differentiation on a function.
But let's start with an even simpler problem: if we know the displacements during each of the time periods, can we find our location? Just add them together to find the total displacement! This is about the difference. Suppose we have a function $y=p(x)$ defined at the secondary nodes, $c$, of a partition. How do we find a function $y=f(x)$ defined at the nodes, $x$, of the partition so that $p$ is its difference: $$\Delta f(c)=p(c)?$$ In other words, we need to solve this equation for $f$: $$\Delta f=p.$$ Suppose this function $p$ is known but $f$ isn't, except for one, initial, value: $y_0=f(x_0)$. Then the above equation becomes: $$\Delta f(c_1)=f(x_0+\Delta x_1)-f(x_0)=p(c_1),$$ and we can solve it: $$f(x_1)= f(x_0+\Delta x_1)=f(x_0)+p(c_1).$$ We continue in this manner and find the rest of the values of $f$: $$f(x_{k+1})= f(x_k+\Delta x_{k+1})=f(x_k)+p(c_{k+1}).$$ This formula is recursive: we need to know the last value of $f$ in order to find the next. Though not an explicit formula, the solution is very simple!
Now, the difference quotient. Suppose we have a function $y=g(x)$ defined at the secondary nodes, $c$, of a partition. How do we find a function $y=f(x)$ defined at the nodes, $x$, of the partition so that $g$ is its difference quotient: $$\frac{\Delta f}{\Delta x}(c)=g(c)?$$ In other words, we need to solve this equation for $f$: $$\frac{\Delta f}{\Delta x}=g.$$ We follow exactly the process above. Suppose this function $g$ is known but $f$ isn't, except for one, initial, value: $y_0=f(x_0)$. Then the above equation becomes: $$\frac{\Delta f}{\Delta x}(c_1)=\frac{f(x_0+\Delta x_1)-f(x_0)}{\Delta x_1}=g(c_1),$$ and we can solve it: $$f(x_1)= f(x_0+\Delta x_1)=f(x_0)+g(c_1)\Delta x_1.$$ We continue in this manner and find the rest of the values of $f$: $$f(x_{k+1})= f(x_k+\Delta x_{k+1})=f(x_k)+g(c_{k+1})\Delta x_{k+1}.$$ This formula is also recursive, but, within this limitation, the problem of reversing the effect of the difference quotient is solved!
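The recursion is easy to carry out by machine. Here is a minimal Python sketch (it assumes a partition given by its nodes, with the secondary nodes taken to be the midpoints of the edges):

```python
import numpy as np

def reverse_difference_quotient(g, nodes, f0):
    """Recover f at the nodes from its difference quotient g, sampled at
    the secondary nodes c, given the initial value f(x_0) = f0:
    f(x_{k+1}) = f(x_k) + g(c) * Delta x."""
    f = np.empty(len(nodes))
    f[0] = f0
    for k in range(len(nodes) - 1):
        dx = nodes[k + 1] - nodes[k]
        c = 0.5 * (nodes[k] + nodes[k + 1])   # secondary node of this edge
        f[k + 1] = f[k] + g(c) * dx
    return f

# Example: g = cos should (approximately) give back f = sin
nodes = np.linspace(0.0, np.pi, 101)
f = reverse_difference_quotient(np.cos, nodes, f0=0.0)
print(np.abs(f - np.sin(nodes)).max())   # small reconstruction error
```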
These two problems are similar to the one of finding the inverse of a function. This is how inverse functions appear in algebra; they come from solving equations for $x$: $$\begin{array}{llllll} x^{2} & = 4 & \Longrightarrow& x = 2 & \text{ via } \sqrt{\ \cdot\ }; \\ 2^{x} & = 8 & \Longrightarrow& x = 3 & \text{ via } \log_{2}(\cdot ); \\ \sin x & = 0 & \Longrightarrow& x = 0 & \text{ via } \sin^{-1}(\cdot ), \text{ etc.} \end{array}$$ Now, what if we know the result of differentiation and want to know where it came from? We have just discovered that the inverse of the difference is the sum, no surprise! There may also be some explicit formulas. For example, we can solve this equation for $f$: $$\Delta f=(e^h-1)\cdot e^c.$$ This is the solution: $$f=e^x.$$ Similarly, we solve the equation $$\frac{\Delta f}{\Delta x}=\frac{ \sin (h/2)}{h/2}\cdot\cos c$$ with $$f=\sin x.$$
This is called anti-differentiation.
What about the derivative? Because it is not a fraction but a limit of fractions, there is no such formula, not even a recursive one. Some particular cases are considered in Chapter 9.
The above approach still applies. We solve equations with respect to a variable function; for example: $$\begin{array}{llllll} f' & = 2x & \Longrightarrow& f &= x^{2}; \\ f' & = \cos x & \Longrightarrow& f &= \sin x ;\\ f' & =e^x & \Longrightarrow& f &= e^x. \end{array}$$
Example. The importance of this "inverse" problem stems from the need to find location from velocity or velocity from acceleration. For example, this is what we derive from our experience with differentiation. For the motion of a free fall, we have the following for the horizontal component:
the acceleration is zero $\Longrightarrow$ the velocity is constant $\Longrightarrow$ the location is a linear function.
And for the vertical component, we have:
the acceleration is constant $\Longrightarrow$ the velocity is a linear function $\Longrightarrow$ the location is a quadratic function.
We illustrate the idea with a diagram: $$\begin{array}{ccccccccccccccc} x^2 & \mapsto & \begin{array}{|c|}\hline\quad \tfrac{d}{dx} \quad \\ \hline\end{array} & \mapsto & 2x; \\ 2x & \mapsto & \begin{array}{|c|}\hline\quad \left( \tfrac{d}{dx}\right)^{-1} \quad \\ \hline\end{array} & \mapsto & x^2 ... \end{array}$$ ... are there any others? Yes, $(x^{2} + 1)' = 2x $. And more: $$\begin{array}{cclc} &&x^2+1\\ &\nearrow&\\ 2x&\to&x^2\\ &\searrow\\ &&x^2-1 \end{array}$$ As a function -- a function of functions -- $\tfrac{d}{dx}$ isn't one-to-one!
Exercise. We can make any function one-to-one by restricting its domain. How?
Asian-Australasian Journal of Animal Sciences
Asian-Australasian Association of Animal Production Societies
Effects of Replacing Nonfiber Carbohydrates with Nonforage Detergent Fiber from Cassava Residues on Performance of Dairy Cows in the Tropics
Kanjanapruthipong, J. (Department of Animal Sciences, Kasetsart University) ;
Buatong, N. (Department of Animal Sciences, Kasetsart University)
https://doi.org/10.5713/ajas.2004.967
Four Holstein$\times$Indigenous cows with ruminal cannulas were used in a 4$\times$4 Latin square design with 28-d periods to determine the effect of replacing non-fiber carbohydrates (NFC) with a nonforage fiber source (NFFS) from cassava residues on ruminal fermentation characteristics and milk production. Dietary treatments contained 17% forage neutral detergent fiber (FNDF) from corn silage, and 0, 3, 6 and 9% nonforage NDF from cassava residues plus 11% nonforage NDF from other NFFS, so that levels of nonforage NDF were 11, 14, 17 and 20% of dry matter (DM). Intakes of DM and net energy for lactation, average daily gain and milk fat percentage were not different (p>0.05). Ruminal pH, ammonia concentrations, acetate to propionate ratios, and 24 h in sacco fiber digestibility significantly increased with increasing contents of nonforage NDF from cassava residues. Concentrations of VFA, urinary excretion of purine derivatives, milk protein percentage, and production of milk and 4% FCM significantly decreased. These results suggest that NFC in the diet is one of the limiting factors affecting productivity of dairy cows in the tropics, and thus NFFS is better used as a partial replacement for FNDF.
Nonforage Fiber;Cassava Residues;Dairy Cows;Tropics
Research funding institution : Kasetsart University Research and Development Institute, Kasetsart University
General treatment of essential boundary conditions in reduced order models for non-linear problems
Alejandro Cosimo1,
Alberto Cardona1 &
Sergio Idelsohn1,2
Advanced Modeling and Simulation in Engineering Sciences, volume 3, Article number: 7 (2016)
Inhomogeneous essential boundary conditions must be carefully treated in the formulation of Reduced Order Models (ROMs) for non-linear problems. In order to investigate this issue, two methods are analysed: one in which the boundary conditions are imposed in a strong way, and a second one in which a weak imposition of boundary conditions is made. The ideas presented in this work apply to the big realm of a posteriori ROMs. Nevertheless, an a posteriori hyper-reduction method is specifically considered in order to deal with the cost associated to the non-linearity of the problems. Applications to nonlinear transient heat conduction problems with temperature dependent thermophysical properties and time dependent essential boundary conditions are studied. However, the strategies introduced in this work are of general application.
Currently, many engineering problems of practical importance are suffering from the so-called "curse of dimensionality" [1]. In this context, the need of optimising non-linear multiphysics problems makes it necessary to develop numerical techniques which can efficiently deal with the high computational cost characterising such applications. A wide-spread strategy is to consider the formulation of Reduced Order Models, which can be implemented by adopting either the Proper Orthogonal Decomposition (POD) method [2, 3] or the Proper Generalised Decomposition (PGD) technique [4, 5]. The discussion in this paper only considers POD-based ROMs, from now on referred to as ROMs. The ideas presented here apply to the big realm of a posteriori ROMs, despite the fact that an a posteriori hyper-reduction method, referred to as Hyper Reduced Order Model (HROM), is specifically considered in order to deal with the cost associated to the non-linearity of the problems.
In what follows, let \(\mathcal {S}^h \subset \mathcal {S}\) and \(\mathcal {V}^h \subset \mathcal {V}\) be the trial and test finite dimension subspaces of the functional spaces \(\mathcal {S}\) and \(\mathcal {V}\) used in the definition of a variational problem. Generally, in the formulation of ROMs, an approximate solution \(\widehat{T}^h\) to \(T^h \in \mathcal {S}^h\) is sought in a subspace of \(\mathcal {S}^h\) by defining a new basis \({\varvec{X}} \in \mathbb {R}^{N \times k}\), where N is the number of degrees of freedom (DOFs) of the High Fidelity (HF) model and k is the dimension of the basis spanning the subspace of \(\mathcal {S}^h\). If a Bubnov-Galerkin projection is used, approximate versions \(\widehat{w}^h \in {span}\{ {\varvec{X}} \}\) of the test functions \(w^h\) are built, and functions \(T^h \in \mathcal {S}^h\) are approximated by affine translations of the test functions \(\widehat{w}^h\). In POD-based ROMs, the new basis \({\varvec{X}}\) is built by computing the singular value decomposition (SVD) [6] of a set of snapshots that are given by time instances of the spatial distribution of the solution of a training problem [7]. It is well-known that the vectors comprising this basis inherit the behaviour of the snapshots [8], hindering the possibility of reproducing non-admissible test functions. That is why careful attention must be paid on how the snapshots for building \({\varvec{X}}\) are collected. This issue is studied in detail in this work.
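For reference, the offline construction of the basis \({\varvec{X}}\) from a snapshot matrix can be summarised by the following Python sketch (the function and variable names, and the fixed truncation to \(k\) modes, are illustrative assumptions; the paper does not prescribe a specific implementation):

```python
import numpy as np

def pod_basis(snapshots, k):
    """POD basis of dimension k from a snapshot matrix whose columns are
    time instances of the solution vector (N DOFs x n_t snapshots)."""
    # Thin SVD: snapshots = U @ diag(s) @ Vt; the leading left singular
    # vectors, ordered by decreasing singular value, are the POD modes
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k], s

# Hypothetical usage, once the columns T_1, ..., T_nt have been stacked:
# X, s = pod_basis(snapshot_matrix, k=10)
```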
The concept of consistent snapshot collection procedures for nonlinear problems was first introduced by Carlberg et al. [9, 10]. As they pointed out in [10], "most nonlinear model reduction techniques reported in the literature employ a POD basis computed using as snapshots \(\{{\varvec{T}}_n | n=1, \ldots , n_t\}\), which do not lead to a consistent projection" (see Footnote 1). In the last expression \(n_t\) is given by the number of time steps comprising the training problem and \({\varvec{T}}_n\) are the parameters such that \(T^h_n={\varvec{N}}^T{\varvec{T}}_n\), with \({\varvec{N}}\) given by the shape functions used for interpolation. The lack of consistency of these formulations stems from the fact that, when computing the POD basis with time instances of \(T^h\), that is with \(\{{\varvec{T}}_n | n=1, \ldots , n_t\}\), if non-zero essential boundary conditions are present, then \({span}\{ {\varvec{X}} \} \not \subset \mathcal {V}^h \), because some elements \({\varvec{v}} \in {span}\{ {\varvec{X}} \}\) are not identically zero on the portion of the boundary with non-homogeneous essential conditions.
Carlberg et al. [10] proposed two alternative procedures to collect snapshots for \(T^h\), for which the following comments apply when considering the general case of time dependent essential boundary conditions:
Snapshots of the form \(\{{\varvec{T}}_n - {\varvec{T}}_{n-1} | n=1, \ldots , n_t\}\). The problem with this strategy is that the set of snapshots is characterised by a high frequency content, giving a less compressible SVD spectrum [11] than when using a collection procedure based on the snapshots of the solution. Another problem of this strategy is the handling of time dependent essential boundary conditions. In this case, it cannot be guaranteed that the snapshots will be identically zero at the boundary with essential boundary conditions.
Snapshots of the form \(\{{\varvec{T}}_n - {\varvec{T}}_0 | n=1, \ldots , n_t \}\), where \({\varvec{T}}_0\) is the initial condition. With this strategy it cannot be guaranteed that functions in \({span}\{ {\varvec{X}} \}\) will be test functions, for instance, when essential boundary conditions are different from \(T^h_0\). Amsallem et al. [12] have observed that this strategy leads to more accurate ROMs than the previous strategy. As it is discussed in that work, using a different initial condition \({\varvec{T}}^*_0\) in the online stage requires in principle recomputing the snapshots for reconstructing the POD modes for projection. Several fast alternatives to solve this problem are proposed in [12].
Gunzburger et al. [13] presented two schemes for handling inhomogeneous essential boundary conditions in the context of ROMs, without performing any additional treatment to reduce the cost associated to non-linearities. They assumed that the Dirichlet boundary is divided into a set of P non-overlapping portions where the involved field is imposed as \(\beta _p(t)g_p({\varvec{x}})\), for \(p=1, \ldots , P\), where \(g_p\) are given functions and \(\beta _p\) are time dependent parameters. In a first approach, the solution is written in terms of a linear combination of test functions vanishing in the portion of the boundary with essential boundary conditions, and in terms of a linear combination of particular solutions of the steady state version of the problem to be solved. In a second approach, they proposed to express the solution in terms of a set of POD basis functions not vanishing on the Dirichlet boundary, and adding a set of equations describing the essential boundary condition. Then, they use the QR decomposition on the resulting system in order to obtain a set of test functions vanishing on that boundary. These techniques proved to work well in the context of ROMs [13]. However, in their work Gunzburger et al. did not consider any particular treatment for reducing the cost associated to non-linearities. Besides, they did not propose any methodology for dealing with the inherent computational cost of the strong imposition of essential boundary conditions.
In the work of González et al. [1], the problem of imposing non-homogeneous essential boundary conditions in the context of a priori model order reduction methodologies (PGD) is tackled. They imposed the Dirichlet conditions by constructing a global function that verifies the essential boundary conditions, using the technique of transfinite interpolation [14]. A good example of interpolation functions is given by the inverse distance function, and as exposed by Rvachev et al. [14], different interpolation functions can be built based on the theory of R-functions. Although the methodology presented by González et al. is really appealing and shows particular advantages in the PGD context, its use requires large symbolic algebra computations, leading to very complex algebraic expressions even in the case of quite simple academic problems, which could hinder its application to practical problems. In the current study, we are looking to develop physically-based techniques that can be easily applied to domains coming from three-dimensional industrial problems.
In this work we analyse the treatment of time dependent inhomogeneous essential boundary conditions from a general point of view, taking into consideration the costs associated to non-linear problems and to the strong imposition of the essential boundary conditions. Alternatives based on the weak imposition of the boundary conditions are evaluated, combined with a reduction of the number of degrees of freedom at the boundary. The presented ideas are applied to nonlinear transient heat conduction problems with temperature dependent thermophysical properties; however, the introduced strategies are of general application.
This section describes first the problem statement and two variational formulations, one weakly imposing Dirichlet boundary conditions and the other strongly imposing those conditions. Then, an HROM that considers strong enforcement of boundary conditions is presented. Finally, the formulation of two alternative HROMs that weakly impose Dirichlet conditions is introduced.
Problem statement, variational formulation and finite element discretisation
The physical problem under consideration is a nonlinear transient heat conduction problem, with temperature dependent thermophysical properties. The problem is described by the equation
$$\begin{aligned} \rho c \dot{T} = Q + \nabla \cdot (k \nabla T) \qquad \forall \ (\mathbf {x},t)\in \Omega \times (t_0, \infty ) \end{aligned}$$
where \(\rho \) is the density, k is the thermal conductivity, c the heat capacity, T is the temperature, Q is the external heat source per unit volume, and \(\Omega \) is the space domain. The temperature field should verify the initial condition \(T({\varvec{x}},t=0) = T_0 \ \forall \ {\varvec{x}}\in \Omega \), where \(T_0\) is the given initial temperature field. Additionally, the following set of conditions must be verified at the disjoint portions \(\Gamma _d, \Gamma _q, \Gamma _c\) of the external boundary: \(T|_{\Gamma _d}=T_d\), \(k\nabla T\cdot \mathbf {n}|_{\Gamma _q}=q_w\) and \(k\nabla T\cdot \mathbf {n}|_{\Gamma _c}=h_f(T_f-T)\), where \(\Gamma _d \cup \Gamma _q \cup \Gamma _c = \partial \Omega \), and where \(T_d\) is the imposed temperature at the boundary \(\Gamma _d\), \(q_w\) is the external heat flow at the boundary \(\Gamma _q\), \(h_f\) is the heat convection coefficient, \(T_f\) is the external fluid temperature at the portion the boundary \(\Gamma _c\) and \(\mathbf {n}\) is the outward normal to the boundary under consideration.
In what follows, we briefly present the variational formulation of the problem and its finite element discretisation. Essential boundary conditions can be enforced strongly or weakly. In order to strongly enforce Dirichlet boundary conditions, let \(\mathcal {S} = \{ T \in \mathcal {H}^1(\Omega )\ / \ T|_{\Gamma _d} = T_d \}\) be the space of trial solutions and \(\mathcal {V} = \{ v \in \mathcal {H}^1(\Omega )\ / \ v|_{\Gamma _d} = 0 \}\) be the space of weighting or test functions, where \(\mathcal {H}^1\) is the first order Sobolev space. Then, the variational formulation is given as follows: Find \(T \in \mathcal {S}\) such that \(\forall w \in \mathcal {V}\) $$\begin{aligned}&\int _{\Omega } w \left[ \rho c\frac{\partial T}{\partial t} - Q \right] \;d\Omega + \int _{\Omega } \nabla w \cdot (k \nabla T)\;d\Omega + \int _{\Gamma _c} w h_f(T-T_f) \;d\Gamma - \int _{\Gamma _q} w q_w \;d\Gamma = 0, \quad \text{ for } t >0;\\&\int _{\Omega } w T \;d\Omega = \int _{\Omega } w T_0 \;d\Omega , \quad \text{ for } t = 0 . \end{aligned}$$
Let \(\mathcal {S}^h \subset \mathcal {S}\) and \(\mathcal {V}^h \subset \mathcal {V}\) be subspaces of the trial and test functional spaces. Then, in matrix notation, \(T^h \in \mathcal {S}^h\) is given by \(T^h({\varvec{x}},t_n) = {\varvec{N}}^T {\varvec{T}}_n\), where \({\varvec{N}}\) denotes the finite element basis and \({\varvec{T}}_n \in \mathbb {R}^N\) denotes the FEM degrees of freedom, with N the dimension of the FEM space. Then, using a Bubnov-Galerkin projection and a modified Backward-Euler scheme for time integration, the residual of the nonlinear thermal problem in its discrete expression reads [11]
$$\begin{aligned} \varvec{\varPi }_n = \frac{{\varvec{H}}^c_n - {\varvec{H}}^c_{n-1}}{\Delta t} + {\varvec{G}}^k_n + {\varvec{F}}_n - {\varvec{Q}}_n = {\varvec{0}}, \end{aligned}$$
$$\begin{aligned} {\varvec{G}}_n^k&= \left( \int _{\Omega } \nabla {\varvec{N}} k_{n} \nabla {\varvec{N}}^T \;d\Omega + \int _{\Gamma _c} h_{f_{n}} {\varvec{N}} {\varvec{N}}^T \;d\Gamma \right) \; {\varvec{T}}_n,\end{aligned}$$
$$\begin{aligned} \displaystyle {\varvec{F}}_n&= -\int _{\Gamma _q} {\varvec{N}} q_{w_{n}} \;d\Gamma - \int _{\Gamma _c} h_{f_{n}} {\varvec{N}} T_{f_{n}} \;d\Gamma ,\end{aligned}$$
$$\begin{aligned} \displaystyle {\varvec{Q}}_n&= \int _{\Omega } {\varvec{N}} Q_{n} \;d\Omega ,\end{aligned}$$
$$\begin{aligned} \displaystyle {\varvec{H}}_n^c&= \int _{\Omega } \rho c_{n} {\varvec{N}} {\varvec{N}}^T \;d\Omega \; {\varvec{T}}_n. \end{aligned}$$
In order to weakly impose Dirichlet boundary conditions, the use of Lagrange multipliers is adopted. The idea is to remove from the trial and test function spaces, the constraint over the portion of the boundary corresponding to essential boundary conditions. Accordingly, let \(\mathcal {V} = \{ v \in \mathcal {H}^1(\Omega )\}\) be the space of trial and test functions for the temperature, and let \(\mathcal {Q} = \{ q \in \mathrm {L}_2(\Gamma )\}\) be the space of trial and test functions for the Lagrange multipliers. Then, the variational formulation is given as follows: Find \((T,\lambda ) \in \mathcal {V} \times \mathcal {Q}\) such that \(\forall (w,q) \in \mathcal {V} \times \mathcal {Q}\)
$$\begin{aligned}&\int _{\Omega } w \left[ \rho c\frac{\partial T}{\partial t} - Q \right] \;d\Omega + \int _{\Omega } \nabla w \cdot (k \nabla T)\;d\Omega + \int _{\Gamma _c} w h_f(T-T_f) \;d\Gamma \nonumber \\&\qquad - \int _{\Gamma _q} w q_w \;d\Gamma + \int _{\Gamma _d} w \lambda \;d\Gamma + \int _{\Gamma _d} q (T-T_d) \;d\Gamma = 0, \quad \text{ for } t >0; \end{aligned}$$
$$\begin{aligned}&\int _{\Omega } w T \;d\Omega = \int _{\Omega } w T_0 \;d\Omega , \quad \text{ for } t = 0 . \end{aligned}$$
As before, let \(\mathcal {V}^h \subset \mathcal {V}\) and \(\mathcal {Q}^h \subset \mathcal {Q}\). Therefore, in matrix notation, \(T^h \in \mathcal {V}^h\) and \(\lambda ^h \in \mathcal {Q}^h\) are given by \(T^h({\varvec{x}},t_n) = {\varvec{N}}^T {\varvec{T}}_n\) and \(\lambda ^h({\varvec{x}},t_n) = \bar{{\varvec{N}}}^T \varvec{\lambda }_n\), where \({\varvec{N}}\) denotes the finite element basis for the temperature field, and \({\varvec{T}}_n \in \mathbb {R}^N\) denotes the FEM nodal degrees of freedom. Similarly, \(\bar{{\varvec{N}}}\) denotes the finite element basis for the Lagrange multipliers, and \(\varvec{\lambda }_n \in \mathbb {R}^{N_\lambda }\) denotes the parameters corresponding to the Lagrange multipliers. Then, the residual characterising the FEM discretisation can be written as
$$\begin{aligned} \varvec{\varPi }_n = \begin{bmatrix} \varvec{\varPi }_{{\varvec{T}}_n}\\ \varvec{\varPi }_{\varvec{\lambda }_n} \end{bmatrix} = \begin{bmatrix} {\displaystyle \frac{{\varvec{H}}^c_n - {\varvec{H}}^c_{n-1}}{\Delta t}} + {\varvec{G}}^k_n + {\varvec{F}}_n + {\varvec{B}}^\lambda _n - {\varvec{Q}}_n \\ {\varvec{B}}^{T_d}_n \end{bmatrix} = {\varvec{0}}, \end{aligned}$$
where the new terms with respect to the previous formulation are given by
$$\begin{aligned} {\varvec{B}}^\lambda _n&= {\varvec{B}}^\lambda (\varvec{\lambda }_n, t_n) = \int _{\Gamma _d} {\varvec{N}} \bar{{\varvec{N}}}^T \;d\Gamma \; \varvec{\lambda }_n\end{aligned}$$
$$\begin{aligned} \displaystyle {\varvec{B}}^{T_d}_n&= {\varvec{B}}^{T_d}({\varvec{T}}_n, t_n) = \int _{\Gamma _d}\bar{{\varvec{N}}}{\varvec{N}}^T \;d\Gamma \; {\varvec{T}}_n - \int _{\Gamma _d}\bar{{\varvec{N}}}T_d \;d\Gamma . \end{aligned}$$
HROM formulation by strongly enforcing boundary conditions
The HROM associated to the formulation given by Eq. (3) is here introduced. Each non-linear contribution to \(\varvec{\varPi }_n\) is hyper-reduced separately, as done by Cosimo et al. [11]. Therefore, each of these terms has associated a particular POD basis \(\varvec{\varPhi }_i\) for its gappy data reconstruction [15–17]. In what follows, suffixes \(i \in \{ c,k,f,q\}\) are used to identify each term. We emphasise that the sampling is performed independently for each term, but the number of sampling points \(n_s\) and the number of gappy modes \(n_g\) are always the same for all of them. In what follows, \(\widehat{\cdot }\) denotes the vector of \(n_s\) components sampled from the associated complete term. To compute the POD modes \(\varvec{\varPhi }_i\), snapshots are taken for each individual contribution at each time step, after convergence of the Newton-Raphson scheme.
To obtain the hyper-reduced residual \(\varvec{\varPi }^p_n\) we project the gappy approximation to \(\varvec{\varPi }_n\) with the basis \({\varvec{X}}\), which leads to
$$\begin{aligned} \varvec{\varPi }^p_n = {\varvec{A}}_c\frac{\widehat{{\varvec{H}}}_n^c - \widehat{{\varvec{H}}}_{n-1}^c}{\Delta t} + {\varvec{A}}_k\widehat{{\varvec{G}}}^k_n + {\varvec{A}}_f\widehat{{\varvec{F}}}_n - {\varvec{A}}_q\widehat{{\varvec{Q}}}_n, \end{aligned}$$
where \({\varvec{A}}_i = {\varvec{X}}^T\varvec{\varPhi }_i(\widehat{\varvec{\varPhi }}_i^T\widehat{\varvec{\varPhi }}_i)^{-1}\widehat{\varvec{\varPhi }}_i^T\), with \(i \in \{c,k,f,q\}\). Note that matrices \({\varvec{A}}_i\) are computed in the off-line stage.
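A possible offline computation of the matrices \({\varvec{A}}_i\) is sketched below (Python with NumPy; the sampled DOF indices and all names are assumptions made for illustration, not the paper's implementation):

```python
import numpy as np

def gappy_projection_matrix(X, Phi, sample_idx):
    """A = X^T Phi (Phi_hat^T Phi_hat)^{-1} Phi_hat^T, where Phi_hat keeps
    only the rows of the gappy basis Phi at the n_s sampled DOFs."""
    Phi_hat = Phi[sample_idx, :]                          # (n_s, n_g)
    # Least-squares reconstruction operator from the sampled entries
    G = np.linalg.solve(Phi_hat.T @ Phi_hat, Phi_hat.T)   # (n_g, n_s)
    return (X.T @ Phi) @ G                                # (k, n_s)

# Online, each hyper-reduced term is then A_i @ term_hat, where term_hat
# holds the n_s sampled entries of the corresponding nonlinear term.
```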
In what follows, a consistent snapshot collection strategy for \({\varvec{X}}\) taking into account general essential boundary conditions is introduced. When solving the variational problem given by Eq. (2) in a finite dimensional space, an approximate solution \(T^h \in \mathcal {S}^h\) is described as \(T^h = T_d^h + v^h\), where \(v^h \in \mathcal {V}^h\) and \(T_d^h\) is the finite dimensional version of \(T_d\). Then, the trial solutions \(T^h\) and the test functions \(w^h\) are given by
$$\begin{aligned} T^h&= T^h_d + v^h = {\varvec{N}}^{I,T}{\varvec{T}}^I_ n + {\varvec{N}}^{B,T}{\varvec{T}}^B_n, \end{aligned}$$
$$\begin{aligned} w^h&= {\varvec{N}}^{I,T} {\varvec{\eta }}, \end{aligned}$$
where \({\varvec{\eta }}\) are the parameters associated to the test functions and the DOFs \({\varvec{T}}_n\) are discriminated in terms of parameters describing the boundary with Dirichlet boundary conditions, \({\varvec{T}}^B\), and the DOFs \({\varvec{T}}^I\) that are not part of that boundary. Functions \({\varvec{N}}^I\) and \({\varvec{N}}^B\) are the FEM shape functions associated to the internal and boundary DOFs, respectively.
Functions with global support are used in the context of ROMs, in contrast to FEM basis functions whose support is local. Therefore, the notion of internal/boundary degrees of freedom is lost in ROMs, making it necessary to express \(T^h\) and \(w^h\) as
$$\begin{aligned} T^h&\simeq \widehat{T}^h = T_d^h + {\varvec{N}}^T {\varvec{X}} {\varvec{a}}_n,\end{aligned}$$
$$\begin{aligned} w^h&\simeq \widehat{w}^h = {\varvec{N}}^T {\varvec{X}} {\varvec{w}}_n, \end{aligned}$$
where \({\varvec{a}}_n\) and \({\varvec{w}}_n\) are the amplitudes associated to the modes \({\varvec{X}}\).
In order to get admissible test functions \(\widehat{w}^h\), the restriction \(\widehat{w}^h|_{\Gamma _d}=0\) must be satisfied. That is why, for the design of a consistent snapshot collection strategy, the snapshots must be of the form \({\varvec{T}} - {\varvec{T}}_d\). Then, the problem resides in the correct description of \(T_d^h\). A possible solution is to describe it as in standard FEM, i.e., \(T_d^h = {\varvec{N}}^{B,T}{\varvec{T}}^B_n\), but this could lead to a snapshots set with a very high frequency content, decreasing the compressibility of the signal [11].
In order to avoid this inconvenience, we propose to compute a set of static modes that describe the behaviour of the portion of the boundary with essential boundary conditions. The procedure is similar to that followed by the Craig–Bampton or by the Guyan–Irons methods [18–21]. Since we want to build a set of static modes to describe the boundary, we consider only the term \({\varvec{G}}^k\) in Eq. (3). Simplifying notation, this term at time instant \(t_n\) is given by \({\varvec{G}}^k = {\varvec{K}} {\varvec{T}}\), where \({\varvec{K}}\) is any linearisation of the stiffness matrix and \({\varvec{T}} \equiv {\varvec{T}}_n\). We can neglect non-linearities at this point because we are only interested in finding a basis for expressing the essential boundary conditions. Then, by partitioning in internal and boundary DOFs, we write:
$$\begin{aligned} {\varvec{G}}^k = {\varvec{K}} {\varvec{T}} = \left[ \begin{array}{c@{\quad }c} {\varvec{K}}_{II} &{} {\varvec{K}}_{IB} \\ {\varvec{K}}_{BI} &{} {\varvec{K}}_{BB} \end{array} \right] \left[ \begin{array}{c} {\varvec{T}}^I \\ {\varvec{T}}^B \end{array} \right] = {\varvec{0}}, \end{aligned}$$
from which we get by static condensation \({\varvec{T}}^I = - {\varvec{K}}_{II}^{-1} {\varvec{K}}_{IB} {\varvec{T}}^B\). In this case \({\varvec{T}}^I\) can be regarded as the response to an imposed temperature \({\varvec{T}}^B\) in the portion of the boundary where only the term \({\varvec{G}}^k\) is considered. Then, the static modes, which describe the response to unit temperatures imposed at \(\Gamma _d\), are given by
$$\begin{aligned} {\varvec{\varPhi }}_B = \left[ \begin{array}{c} - {\varvec{K}}_{II}^{-1} {\varvec{K}}_{IB}\\ {\varvec{I}} \end{array} \right] , \end{aligned}$$
and the function \(T_d^h\) used to denote the essential boundary conditions is assumed to lie in \({span}\{ {\varvec{\varPhi }}_B \}\). We remark that this procedure is similar to the method proposed by Gunzburger et al. [13] that considers particular solutions derived from the steady state version of the system of equations, but interpreted from a different perspective.
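A direct computation of the static modes \({\varvec{\varPhi }}_B\) is sketched below (Python; a dense solve is used for brevity, whereas a practical implementation would exploit the sparsity of \({\varvec{K}}\)):

```python
import numpy as np

def static_boundary_modes(K, interior, boundary):
    """Phi_B = [ -K_II^{-1} K_IB ; I ]: one static mode per node on Gamma_d,
    computed from a linearisation K of the stiffness matrix, partitioned
    into interior (I) and Dirichlet-boundary (B) degrees of freedom."""
    K_II = K[np.ix_(interior, interior)]
    K_IB = K[np.ix_(interior, boundary)]
    Phi_B = np.zeros((K.shape[0], len(boundary)))
    Phi_B[interior, :] = -np.linalg.solve(K_II, K_IB)  # interior response
    Phi_B[boundary, :] = np.eye(len(boundary))         # interpolatory on Gamma_d
    return Phi_B
```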
Then, the approximation \(\widehat{T}^h\) is given by \(\widehat{T}^h = {\varvec{N}}^T {\varvec{\varPhi }}_B {\varvec{T}}^B_n + {\varvec{N}}^T {\varvec{X}} {\varvec{a}}_n \simeq T^h\). Note that the static modes \({\varvec{\varPhi }}_B\) have the property to be interpolatory at the boundary \(\Gamma _d\), thus \({\varvec{T}}^B_n\) has the physical interpretation to be the value of the field at the nodes lying on \(\Gamma _d\).
From this equation the following snapshot collection procedure arises: once the static modes have been computed, take snapshots of the form \(S_p=\{{\varvec{T}}_n - {\varvec{\varPhi }}_B {\varvec{T}}^B_n | n=1, \ldots , n_t \}\) (a sketch is given after the list below). This strategy has the following advantages:
The snapshots given by \(S_p\) tend to preserve the compressibility posed by the field \(T^h\).
General essential boundary conditions can be represented by \({\varvec{\varPhi }}_B\), while keeping simple the process of imposing essential boundary conditions because of the interpolatory property of \({\varvec{\varPhi }}_B\) at \(\Gamma _d\).
Using different initial conditions in the online stage does not require recomputing the snapshots for \({\varvec{X}}\) or considering another alternative.
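The collection procedure itself then reduces to subtracting the boundary lift from every stored snapshot, as in the following sketch (it reuses static_boundary_modes and pod_basis from the previous sketches; variable names are illustrative):

```python
import numpy as np

def consistent_snapshots(T_snap, Phi_B, boundary):
    """S_p = { T_n - Phi_B T_n^B }: since Phi_B is interpolatory on Gamma_d,
    every column of the result vanishes on the Dirichlet boundary."""
    return T_snap - Phi_B @ T_snap[boundary, :]

# The projection basis X is the POD of the consistent snapshots:
# X, _ = pod_basis(consistent_snapshots(T_snap, Phi_B, boundary), k=n_p)
```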
It should be observed that the computational cost increases with the number of static modes. This is because, on each Newton iteration, the temperature field must be computed at least on the nodes involved by the gappy data procedure. That is, the cost of the operation \({\varvec{\varPhi }}_B {\varvec{T}}^B_n\) can be very high if a large number of static modes is used. In the worst case scenario, the number of static modes is given by the number of DOFs at the portion of the boundary with essential boundary conditions. In some cases, additional assumptions can be adopted to reduce the number of static modes. For instance, if the shape of the essential boundary condition does not change in time in some portion \(\Gamma _d^\theta \) of \(\Gamma _d\), a new static mode \({\varvec{\Theta }}\) can be built by summing up all the static modes with support on \(\Gamma _d\) times the considered shape factor. A more general alternative is to describe the behaviour of the boundary by additionally approximating the boundary parameters \({\varvec{T}}^B_n\) by \({\varvec{T}}^B_n = \varvec{\Psi }_B {\varvec{d}}_n^\psi \), where \(\varvec{\Psi }_B\) are POD modes computed from a set of snapshots representative of the behaviour of the boundary, and \({\varvec{d}}_n^\psi \) are the associated parameters. It should be noted that this kind of idea was already applied in substructuring of linear problems [22, 23].
In the examples section, all the static modes associated to the portion of the boundary with essential boundary conditions are retained, and no other approximation is applied to the boundary DOFs.
HROM formulation by weakly enforcing boundary conditions
Two alternative HROMs associated to the formulation given by Eq. (10) are now introduced, aimed at reducing the temperature DOFs \({\varvec{T}}_n\) and Lagrange multipliers \(\varvec{\lambda }_n\). In the first one, \([{\varvec{T}}_n; \varvec{\lambda }_n]\) is reduced as a unit like \([{\varvec{T}}_n; \varvec{\lambda }_n] = {\varvec{X}}^{\varvec{c}} {\varvec{c}}_n\), where the POD modes \({\varvec{X}}^{\varvec{c}}\) are built from a set of snapshots composed by the temperature field and the Lagrange multipliers, and \({\varvec{c}}_n\) denote the associated parameters. A second alternative is to reduce each physical quantity separately like \({\varvec{T}}_n = {\varvec{X}} {\varvec{a}}_n\) and \(\varvec{\lambda }_n = {\varvec{Y}} {\varvec{b}}_n\), where \({\varvec{X}}\) and \({\varvec{a}}_n\) are the POD modes and the parameters associated to the temperature field, and \({\varvec{Y}}\) and \({\varvec{b}}_n\) are the POD modes and the parameters associated to the Lagrange multipliers. From a general point of view, weakly enforcing boundary conditions has the following advantages with respect to the use of static modes to represent essential boundary conditions:
Test functions for the temperature field are not required to meet the constraint \(T|_{\Gamma _d} = 0\).
As previously introduced, the cost of computing the product \({\varvec{\varPhi }}_B {\varvec{T}}^B_n\) can be a penalising factor when a large number of static modes is required. By using Lagrange multipliers, this problem can be avoided.
When adopting the first option, the residual \(\varvec{\varPi }_n\) given by Eq. (10) is projected to the space spanned by \({\varvec{X}}^{\varvec{c}}\) and each term is separately hyper-reduced as done in [11], and the expression is quite similar to the one given by Eq. (13) but taking into account the terms involving the restriction over the Dirichlet boundary.
In the second approach proposed in this section, each term of the residual \(\varvec{\varPi }_n\) from Eq. (10) is projected separately according to
$$\begin{aligned} \begin{bmatrix} \varvec{\varPi }_{{\varvec{T}}_n}^p\\ \varvec{\varPi }_{\varvec{\lambda }_n}^p \end{bmatrix} = \begin{bmatrix} {\varvec{X}}^T\varvec{\varPi }_{{\varvec{T}}_n}\\ {\varvec{Y}}^T \varvec{\varPi }_{\varvec{\lambda }_n} \end{bmatrix} \end{aligned}$$
Then, again, each contribution is separately hyper-reduced following the work of Cosimo et al. [11]. It should be observed that this option is more difficult to implement than the process of reducing \([{\varvec{T}}_n; \varvec{\lambda }_n]\) as a unit: the DOFs must be partitioned into temperature DOFs and Lagrange multipliers, which complicates the implementation of the gappy data procedure as these two different unknowns are represented by two different vectors.
The techniques presented here apply to higher order problems as well. For instance, let us consider a fourth order one-dimensional problem in which Hermite polynomials are used in the FEM discretisation. In the case of imposing the essential boundary conditions strongly, the procedure for computing static modes is applied exactly in the same way as described before, with static modes obtained by imposing unit displacements and unit rotations at the boundary. In the case of imposing the essential boundary conditions weakly, the only difference with the thermal case is that we will have an independent Lagrange multiplier field for each degree of freedom. We note finally that an extension to fourth order problems of the techniques presented in [1], in the context of the a priori reduced order method PGD, was proposed by Quesada et al. [24].
We will show the application of the proposed snapshot collection strategies to two non-linear transient heat conduction problems with time dependent essential boundary conditions. To assess the performance and robustness of the proposed methods, we study the relative error introduced by the HROM. The relative error \(\epsilon \) characterising the HROM as a function of time is measured as \(\frac{\Vert T_R - T_H\Vert }{\max \limits _{t}\Vert T_H\Vert }\), where \(T_R\) is the solution obtained with the HROM, \(T_H\) is the High Fidelity solution for the same problem and \(\Vert \cdot \Vert \) denotes the \(L_2\) norm. Tri-linear hexahedral elements are used in the examples to interpolate the temperature field. The Lagrange multipliers field is interpolated with bi-linear quadrilateral elements. In what follows, \(n_p\) is used to denote the number of POD modes for \({\varvec{T}}_n\), and \(n_{\lambda }\) is used to denote the number of POD modes for \(\varvec{\lambda }_n\).
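In the post-processing scripts, this error measure can be evaluated as in the sketch below (the discrete Euclidean norm is used here in place of the \(L_2\) norm, an assumption that is a reasonable surrogate on a quasi-uniform mesh; a true \(L_2\) norm would weight with the mass matrix):

```python
import numpy as np

def relative_error(T_R, T_H):
    """epsilon(t_n) = ||T_R(t_n) - T_H(t_n)|| / max_t ||T_H(t)||,
    with one column per time step in both solution histories."""
    num = np.linalg.norm(T_R - T_H, axis=0)
    den = np.linalg.norm(T_H, axis=0).max()
    return num / den
```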
This example has been presented by Gunzburger et al. [13]. It consists of a transient heat conduction problem with constant properties \(\rho =k=c=1\) and a non-linear heat source \(Q(T)=-T^2\). The domain to be analysed is a \(1 \times 1 \times 0.1428\) cuboid. It is discretised using tri-linear hexahedral elements with a total of 675 degrees of freedom. A time step \(\Delta t = 0.01\) is used for the time interval [0, 1]. The body is initially at temperature \(T_0=0\). A time dependent essential boundary condition, \(T_d({\varvec{x}},t)\), is imposed, given by
$$\begin{aligned} T_d({\varvec{x}},t)\equiv T_d(x,y,z,t)= {\left\{ \begin{array}{ll} 2t \cdot 4x(1-x) &{}\text{ if } y=1 \wedge 0\le t < 0.5,\\ 2(1-t) \cdot 4x(1-x) &{}\text{ if } y=1 \wedge 0.5\le t \le 1,\\ 4(t-t^2)\cdot 4x(1-x) &{}\text{ if } y=0 \wedge 0\le t \le 1,\\ |\sin (2\pi t)|\cdot 4y(1-y) &{}\text{ if } x=0 \wedge 0\le t \le 1,\\ |\sin (4\pi t)|\cdot 4y(1-y) &{}\text{ if } x=1 \wedge 0\le t \le 1. \end{array}\right. } \end{aligned}$$
The sides \(z=0\) and \(z=0.1428\) of the domain are insulated. Different time instants of the High Fidelity solution of the problem can be observed in Fig. 1. A total of 20 gappy points and 20 gappy modes were used in all cases.
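For reproducibility, the imposed boundary temperature can be coded directly from its definition, as in the sketch below (the datum is independent of \(z\), consistently with the insulated sides; the zero return value away from the Dirichlet boundary is an assumption of this transcription):

```python
import numpy as np

def T_d(x, y, t):
    """Time dependent Dirichlet datum of Example 1, transcribed from Eq. (21)."""
    if y == 1.0:
        return (2*t if t < 0.5 else 2*(1 - t)) * 4*x*(1 - x)
    if y == 0.0:
        return 4*(t - t**2) * 4*x*(1 - x)
    if x == 0.0:
        return abs(np.sin(2*np.pi*t)) * 4*y*(1 - y)
    if x == 1.0:
        return abs(np.sin(4*np.pi*t)) * 4*y*(1 - y)
    return 0.0  # not on the Dirichlet boundary
```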
High fidelity solution of Example 1
Relative error for \({\varvec{T}}_n\) when using static boundary modes, for various numbers of POD modes
The error obtained using static modes to represent the essential boundary condition can be observed in Fig. 2, where different numbers of projection modes were considered. As it can be seen, good results are obtained. Additionally, the results are comparable to the ones obtained by Gunzburger et al.
We have two alternatives for weakly imposing the essential boundary conditions. When reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit, the error behaves as shown in Fig. 3. Although the obtained results seem to be quite good, we remark that convergence is not achieved for the cases \(n_p<7\), \(n_p=10\) and \(n_p=11\). Monotone convergence, for any number of projection modes, is achieved only when using 12 or more modes. This behaviour is related to the fact that the temperature field must have enough freedom to be able to meet the restrictions imposed by the Lagrange multipliers.
Relative error for \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) multipliers when reducing them as a unit
The error obtained when reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) separately can be observed in Fig. 4. In these tests, we took \(n_\lambda =4\). It should be kept in mind that \(n_p\) should be greater than \(n_\lambda \), otherwise \({\varvec{T}}_n\) will not have enough freedom to satisfy the restrictions. In this case, convergence can be achieved for \(n_p \ge 4\), but a good approximation error to the temperature field is observed for \(n_p \ge 7\). We remark that when \(n_\lambda > 4\) in this numerical experiment, a bad conditioning of the reduced iteration matrix was obtained. A pivoting strategy was used to get convergence, with elimination of the equations associated to zero pivots, and it was observed that most of the time the constraint equations corresponding to modes higher than four were eliminated.
Relative error for \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) multipliers when reducing them separately
When comparing the three alternatives, it is observed that the lowest errors are obtained when using static modes. Nevertheless, the cost is higher than in the strategies that impose the Dirichlet boundary conditions weakly. Concerning the latter two alternatives, it is observed that the strategy of reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit leads to the lowest errors for the same number of reduced DOFs. For example, when using that alternative with \(n_p=12\) the error for the temperature field is \(O(10^{-4})\), but when reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) separately with \(n_p=8\) and \(n_\lambda =4\), the error is \(O(10^{-3})\). The approximation error to \(\varvec{\lambda }_n\) is always lower when reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit. However, as seen in the numerical experiments, the number of POD modes needed to describe \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as a unit must be fairly large in order to provide enough freedom to the temperature field to satisfy the restrictions imposed by the Lagrange multipliers.
We consider next a non-linear transient heat conduction problem, where the heat capacity is \(c=0.1792 ~ T + 495.20\) and the thermal conductivity is \(k=0.25~ T + 70\). The material density is \(\rho = 1\). The domain to be analysed is a \(\pi \times \pi \times 0.4487\) cuboid. It is discretised using tri-linear hexahedral elements with a total of 675 degrees of freedom. A time step \(\Delta t = 1\) is used for the time interval [0, 600]. The body is initially at temperature \(T_0=1200\). A time dependent essential boundary condition is imposed on side \(x=0\). The other sides of the domain are insulated. The essential boundary condition \(T_d({\varvec{x}},t)\) is given by
$$\begin{aligned} T_d({\varvec{x}},t)&= T_0\frac{e^{-t/600}-1}{e^{-1}-1}\cos \left( \frac{\gamma _{\tau }}{2} + \frac{\pi t}{300} \right) \cos (\pi \gamma _{\tau }) + T_0, \quad \gamma _{\tau }=\frac{26 \pi t}{600}+\frac{y}{2}. \end{aligned}$$
Different time instants of the High Fidelity solution of the problem can be observed in Fig. 5. A total of 60 gappy points and 60 gappy modes were used in all cases, except for the equations corresponding to the Lagrange multipliers when reducing \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) separately, where five gappy points and modes were used.
The results obtained for the different schemes can be observed in Figs. 6, 7 and 8. Similar comments as in the previous example apply in this case. We remark that the scheme that weakly imposes the Dirichlet boundary conditions and that reduces \({\varvec{T}}_n\) and \(\varvec{\lambda }_n\) as unit, begins to converge for \(n_p \ge 13\).
Relative error for \({\varvec{T}}_n\) when using static modes
Several alternatives for building Hyper-Reduced Order Models to solve nonlinear thermal problems with time dependent inhomogeneous essential boundary conditions were analysed and compared.
One strategy considers the use of static modes for strongly imposing the boundary conditions. This approach is similar to the method presented by Gunzburger et al. [13] who proposed to use particular solutions instead of static modes. A good behaviour was obtained by using static modes and the results were comparable to the ones obtained by Gunzburger et al. Even though this method proved to be a robust technique for describing essential boundary conditions, the associated computational cost is high for models that require a large number of static modes.
In order to overcome the disadvantages of the static modes approach, two other alternatives, based on a weak imposition of the essential boundary conditions, were studied. One alternative consists in reducing the primal and the secondary fields as a unit, while the other consists in reducing them separately. It was observed that, for the same number of reduced DOFs, the former approach led to the lowest errors for the primal (temperature) field. The performed numerical experiments also made evident that the number of POD modes used for describing the primal and the secondary fields as a unit must be large enough to provide the primal (temperature) field with enough freedom to satisfy the restrictions imposed by the Lagrange multipliers.
In a future work, the case with time dependent variation of the support of the essential boundary conditions will be studied.
For the sake of conciseness, in this work we do not consider the objective function \(T^h\) to depend on a set of analysis parameters \({\varvec{\mu }}\). If this were the case, the snapshot collection strategies introduced herein would apply directly, simply by applying them to each of the training parameters \({\varvec{\mu }}_i\).
González D, Ammar A, Chinesta F, Cueto E. Recent advances on the use of separated representations. Int J Numer Methods Eng. 2010;81(5):637–59.
Kunisch K, Volkwein S. Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. SIAM J Numer Anal. 2002;40(2):492–515.
Bergmann M, Bruneau C-H, Iollo A. Enablers for robust POD models. J Comput Phys. 2009;228(2):516–38.
Néron D, Ladevèze P. Proper generalized decomposition for multiscale and multiphysics problems. Arch Comput Methods Eng. 2010;17(4):351–72.
Strang G. The fundamental theorem of linear algebra. Am Math Mon. 1993;100(9):848–55.
Sirovich L. Turbulence and the dynamics of coherent structures. I—coherent structures. II—symmetries and transformations. III—dynamics and scaling. Q Appl Math. 1987;45:561–71.
Chatterjee A. An introduction to the proper orthogonal decomposition. Curr Sci. 2000;78(7):808–17.
Carlberg K, Bou-Mosleh C, Farhat C. Efficient non-linear model reduction via a least-squares Petrov–Galerkin projection and compressive tensor approximations. Int J Numer Methods Eng. 2011;86(2):155–81.
Carlberg K, Farhat C, Cortial J, Amsallem D. The GNAT method for nonlinear model reduction: effective implementation and application to computational fluid dynamics and turbulent flows. J Comput Phys. 2013;242:623–47.
Cosimo A, Cardona A, Idelsohn S. Improving the k-compressibility of hyper reduced order models with moving sources: applications to welding and phase change problems. Comput Methods Appl Mech Eng. 2014;274:237–63.
Amsallem D, Zahr MJ, Farhat C. Nonlinear model order reduction based on local reduced-order bases. Int J Numer Methods Eng. 2012;92(10):891–916.
Gunzburger MD, Peterson JS, Shadid JN. Reduced-order modeling of time-dependent PDEs with multiple parameters in the boundary data. Comput Methods Appl Mech Eng. 2007;196(4–6):1030–47.
Rvachev VL, Sheiko TI, Shapiro V, Tsukanov I. Transfinite interpolation over implicitly defined sets. Comput Aided Geom Des. 2001;18(3):195–220.
Everson R, Sirovich L. Karhunen–Loeve procedure for gappy data. J Opt Soc Am A. 1995;12:1657–64.
Ryckelynck D. Hyper-reduction of mechanical models involving internal variables. Int J Numer Methods Eng. 2009;77(1):75–89.
Hernández JA, Oliver J, Huespe AE, Caicedo MA, Cante JC. High-performance model reduction techniques in computational multiscale homogenization. Comput Methods Appl Mech Eng. 2014;276:149–89.
Guyan RJ. Reduction of stiffness and mass matrices. AIAA J. 1965;3(2):380.
Irons B. Structural eigenvalue problems-elimination of unwanted variables. AIAA J. 1965;3(5):961–2.
Craig R, Bampton M. Coupling of substructures for dynamic analysis. AIAA J. 1968;6:1313–9.
Géradin M, Cardona A. Flexible multibody dynamics: a finite element approach. London: Wiley; 2001.
Craig R, Chang CJ. Substructure coupling for dynamic analysis and testing. Technical Report CR-2781, NASA. 1977.
Rixen DJ. Force modes for reducing the interface between substructures. In: SPIE proceedings series. Society of Photo-Optical Instrumentation Engineers; 2002.
Quesada C, Xu G, González D, Alfaro I, Leygue A, Visonneau M, Cueto E, Chinesta F. Un método de descomposición propia generalizada para operadores diferenciales de alto orden. Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería. 2014;31:188–97.
All authors contributed to the development of the theory. The computer code for the numerical simulations was developed by AC. All authors read and approved the final manuscript.
CIMEC-Centro de Investigación de Métodos Computacionales (UNL / Conicet), ruta 168 s/n, Predio Conicet "Dr A. Cassano", 3000, Santa Fe, Argentina
Alejandro Cosimo, Alberto Cardona & Sergio Idelsohn
International Center for Numerical Methods in Engineering (CIMNE) and Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
Sergio Idelsohn
Correspondence to Sergio Idelsohn.
This work received financial support from CONICET Consejo Nacional de Investigaciones Científicas y Técnicas (PIP 1105), Agencia Nacional de Promoción Científica y Tecnológica (PICT 2013-2894), and Universidad Nacional del Litoral (CAI+D2011) from Argentina, and from the European Research Council under the Advanced Grant: ERC-2009-AdG "Real Time Computational Mechanics Techniques for Multi-Fluid Problems".
Cosimo, A., Cardona, A. & Idelsohn, S. General treatment of essential boundary conditions in reduced order models for non-linear problems. Adv. Model. and Simul. in Eng. Sci. 3, 7 (2016). https://doi.org/10.1186/s40323-016-0058-8
Accepted: 30 January 2016
HROM
Reduced Order Models
Essential boundary conditions
Model order reduction: POD, PGD and reduced bases
Chemopreventive effect of Betulinic acid via mTOR-Caspases/Bcl2/Bax apoptotic signaling in pancreatic cancer
Yangyang Guo, Hengyue Zhu, Min Weng, Cheng Wang (ORCID: 0000-0003-4823-1484) & Linxiao Sun
BMC Complementary Medicine and Therapies volume 20, Article number: 178 (2020)
A Correction to this article was published on 26 April 2021
Pancreatic cancer is aggressive, typically causing no symptoms until an advanced stage is reached. Its increasing resistance to chemotherapy poses a dilemma in the clinic. Hence, it is a matter of great urgency to develop an effective drug to treat patients with pancreatic cancer. Betulinic acid is a major triterpene isolated from spina date seed. Several studies have suggested that it has low toxicity and few side effects in patients with malaria and inflammation. However, studies of the cancer-inhibiting activity of betulinic acid are insufficient, and its molecular mechanism remains unclear. This study aimed to systematically explore the potential anti-cancer functions of betulinic acid in pancreatic cancer and to investigate its underlying molecular mechanism.
The Cell Counting Kit-8 (CCK-8) assay, colony formation assay, Transwell invasion assay, wound healing assay, flow cytometry, and a xenograft nude mouse model were used to evaluate the effect of betulinic acid on the proliferation, invasion, and migration of pancreatic cancer cells.
Our results showed that betulinic acid markedly suppressed pancreatic cancer both in vitro and in vivo in a dose-dependent manner. We also determined that betulinic acid inhibits pancreatic cancer by specifically targeting mTOR signaling rather than Nrf2 or JAK2.
These findings clarify that betulinic acid is a potential and valuable anticancer agent for pancreatic cancer, and indicate the specific molecular target of betulinic acid.
Pancreatic cancer is one of the most lethal malignancies in the world. The Global Cancer Observatory (GCO, http://gco.iarc.fr) shows that approximately 400,000 people die from pancreatic cancer each year, making it the seventh leading cause of cancer death [1]. The overall five-year survival rate of pancreatic cancer is far below 10%, the lowest of almost all types of cancer [2]. Surgery is considered the only potentially curative treatment, followed by adjuvant chemotherapy. However, pancreatic cancer is not sensitive to most current chemotherapeutic drugs [3]. Over 80% of patients with pancreatic cancer are diagnosed when the lesion is no longer suitable for surgery [2]. Therefore, it is urgent to develop an effective drug with fewer toxic side effects to treat patients with pancreatic cancer.
Spina date seed has long served as a food therapy against insomnia in China. Betulinic acid, a major natural product extracted from spina date seed, exhibits multiple biological activities, including anti-malarial, anti-inflammatory, and anti-HIV effects [4]. Steele et al. have suggested that betulinic acid can act as an anti-malarial natural product in both in vitro and in vivo experiments [5]. Jingbo et al. have demonstrated that betulinic acid can regulate the expression of inflammatory cytokines to ameliorate inflammation [6]. In addition, betulinic acid can interfere with HIV-1 maturation and inhibit its fusion [7]. Broad biological activities of betulinic acid against different types of cancer have been reported recently. However, the underlying molecular mechanism and the specific intracellular targets of betulinic acid are unclear. The purpose of this study was to investigate the effects of betulinic acid on pancreatic cancer cells and to explore its molecular mechanism. This study provides a new perspective for the diagnosis and treatment of pancreatic cancer and a deeper understanding of the anticancer mechanism of betulinic acid.
Drugs and antibodies
Betulinic acid was purchased from YuanYe Biotechnology (Shanghai, China) and dissolved in DMSO as a 100 mM stock. The Cell Counting Kit-8 (CCK-8) assay and Annexin V-FITC Apoptosis kit were obtained from BestBio Company (Shanghai, China). mTOR antibody (ab2732), Caspase-3 antibody (ab2302), and p62 antibody (ab155686) were provided by Abcam. S6K1 antibody (CST 9202), p-S6K1 antibody (CST 9204S), AMPK antibody (CST 2532S), p-AMPKα1 antibody (CST 2537), p-mTOR antibody (CST 5536S), Caspase8 antibody (CST 4790), Bax antibody (CST 5023S), and LC3A/B antibody (CST 12741) were purchased from Cell Signaling Technology. Bcl2 antibody (12789–1-AP) and GAPDH antibody (AP0063) were acquired from Proteintech and Bioworld Technology, respectively.
Cells and cell culture
The American Type Culture Collection (ATCC, Manassas, VA, USA) provided the human pancreatic cancer cell lines PANC-1 and SW1990. Dulbecco's modified Eagle's medium (DMEM; GENOM, Hangzhou, China) supplemented with 10% fetal bovine serum (FBS; Thermo Fisher Scientific, Waltham, MA, USA) and 1% penicillin-streptomycin (Gibco/Thermo Fisher Scientific) was used to maintain the cells at 37 °C in a 5% CO2 humidified atmosphere. Cells were sub-cultured every 2–3 days.
The proliferation of PANC-1 and SW1990 cells was measured using the CCK-8 assay according to the manufacturer's instructions [8]. Cells were cultured in 96-well plates (5 × 10³/well) for 24 h and treated with the indicated concentrations of betulinic acid. The cells were then treated with 100 μl of CCK-8 solution and incubated in the dark for another 2 h at 37 °C. Cell viability was quantified using a Multiskan Spectrum spectrophotometer (Thermo Fisher Scientific, Inc.) with the optical density (OD) read at 450 nm. The following formula was used to calculate % cell viability:
$$ \%\ \mathrm{cell\ viability}=\frac{\mathrm{OD}_{450}\left(\mathrm{treated\ cells}\right)-\mathrm{OD}_{450}\left(\mathrm{blank\ cells}\right)}{\mathrm{OD}_{450}\left(\mathrm{control\ cells}\right)-\mathrm{OD}_{450}\left(\mathrm{blank\ cells}\right)}\times 100 $$
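For concreteness, the formula translates directly into R; the absorbance readings below are made-up example values, not data from the study:

    # Percent viability from OD450 readings, per the formula above
    cell_viability <- function(od_treated, od_control, od_blank) {
      100 * (od_treated - od_blank) / (od_control - od_blank)
    }
    cell_viability(od_treated = 0.62, od_control = 1.10, od_blank = 0.08)  # ~52.9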
Real time cellular analysis
Proliferation was also measured using cell culture E16-Plates (ACEA Biosciences, San Diego, USA) seeded at 2 × 10⁵ cells/well. A label-free Real-Time Cellular Analysis system (RTCA; Roche, Penzberg, Germany) was used to automatically record the cell growth index, normalized at every time point following treatment.
Colony formation assay
The cells were plated in 6-well plates at a density of 500–1000 cells/well and treated with betulinic acid once colony growth was visible to the naked eye. After 24 h of treatment, cell colonies were fixed with formaldehyde and stained with crystal violet for counting [9].
PANC-1 and SW1990 human pancreatic cancer cells were grown on glass coverslips at a density of 5 × 10³ cells/well and fixed in 4% formaldehyde for 15 min. Cells were then permeabilized with 0.1% Triton X-100 and blocked in 4% normal goat serum in PBS for 1 h. Immunofluorescence staining was performed using a primary antibody against Ki67 (1:100; Cell Signaling). Appropriate secondary antibodies were obtained from Santa Cruz.
Migration assay
Exponentially growing cells were seeded in 6-well plates at a density of 5 × 10⁵ cells/well and incubated at 37 °C for 48 h. A pipette tip was then used to scratch the culture area, creating a linear gap in the confluent cell monolayer. Detached cells were washed away with PBS, and medium containing betulinic acid was added. An inverted microscope was used to capture images of the wounded area every 24 h [10].
Transwell assay
The in vitro invasion capacity of PANC-1 and SW1990 cells was assessed by Transwell (Costar, New York, NY, USA) assay. For the invasion assay, the upper chambers were coated with growth factor-reduced Matrigel®, and 1 × 10⁵ cells in 500 μl of serum-free medium containing betulinic acid were seeded into the upper chamber. Medium containing 10% FBS was added to the lower chamber as a chemoattractant. After incubation, a cotton swab was used to remove cells remaining on the upper surface of the membrane. Formaldehyde and 0.5% crystal violet (Sigma) were then used to fix and stain the invaded cells, in that order. To ensure counting accuracy, five random fields under a microscope were selected to count the invaded cells [9].
Flow cytometry analysis
The cells were treated with betulinic acid in 6-well plates (5 × 10⁵/ml, 2 ml/well) and washed with PBS. When the cells reached 85% confluence, they were harvested and resuspended in binding buffer at a density of 5 × 10⁵ cells/ml. Cells were incubated with 5 μl of Annexin V-FITC at room temperature for 15 min, after which 5 μl of propidium iodide (PI) was added for another 5 min. All incubation steps were performed in the dark. Finally, flow cytometry was carried out on a FACS C6 instrument, and data were analyzed using FlowJo 7.6 (USA).
Protein extraction and western blotting
Following treatment with different concentrations of betulinic acid, the cells were lysed in ice-cold RIPA lysis buffer (Beyotime, Shanghai, China) supplemented with 10% PhosSTOP (Roche, Basel, Switzerland), 1% PMSF (Beyotime, Shanghai, China), and 1% DTT. After incubation on ice for 30 min, the lysates were cleared by centrifugation for 10 min (12,000×g, 4 °C). Protein concentration in the supernatant was determined using a Pierce BCA protein assay kit (Beyotime, Shanghai, China). Total protein was subjected to 12% SDS-PAGE before being transferred onto PVDF membranes (Bio-Rad Laboratories, Inc.). Membranes were blocked with 5% non-fat milk in TBST for 1 h at room temperature, followed by incubation with primary antibodies at 4 °C overnight. After three 7–10 min washes in TBST, membranes were incubated with secondary antibodies for another 1 h at room temperature. After three further 5-min washes with TBST, protein bands were visualized by chemiluminescence detection on autoradiographic film. Image-Pro Plus was used to quantify signal intensities, normalized to GAPDH.
Nude mouse tumorigenicity assay
Male nude mice (BALB/c) were obtained from the Experimental Animal Centre of Wenzhou Medical University (Wenzhou, China). The inclusion criteria were mice aged 6–8 weeks and weighing 18–22 g. Mice were fed standard chow and water in an environment with controlled temperature, humidity, and light, followed by overnight fasting the day before the experiment. A total of 5 × 10⁶ PANC-1 cells in 100 μl of PBS were injected subcutaneously into the left neck of experimental mice (n = 5), followed by intragastric administration of betulinic acid (40 mg/kg·d) for 30 days. Another 5 model mice received an injection of 5 × 10⁶ PANC-1 cells without treatment (control group). Tumor size was monitored daily until tumors became bulky or necrotic. Tumor volume was assessed as V = (length × width²)/2, where length is the longest dimension [11]. After the monitoring period, the mice were killed with a lethal dose of carbon dioxide and examined for tumor formation. The animal experiments, including euthanasia, were performed in compliance with all regulatory institutional guidelines for animal welfare (National Institutes of Health Publications, NIH Publications No. 80–23). The protocols were approved by the Institutional Review Board of Wenzhou Key Laboratory of Surgery and the Institutional Animal Care and Use Committee of Wenzhou Medical University, China.
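The volume formula applies directly to caliper readings; in R, with illustrative (invented) measurements:

    # Tumor volume per the formula in the text: V = (length x width^2) / 2,
    # where length is the longest dimension
    tumor_volume <- function(length_mm, width_mm) length_mm * width_mm^2 / 2
    tumor_volume(length_mm = 9.2, width_mm = 6.5)   # volume in mm^3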
Histopathological examination
After fixation in formalin, the tumor specimens were embedded in paraffin and cut into 4-μm sections. Hematoxylin and eosin (HE, Yuanye Biotechnology, Shanghai, China) were used to stain the sections. A DM4000 B LED microscope system (Leica Microsystems, Germany) and a DFC420C 5 M digital microscope camera (Leica Microsystems) were used to examine and photograph the slides, respectively.
SPSS 18.0 (IBM, Armonk, USA) and GraphPad Prism 6.0 (GraphPad Software Inc., San Diego, CA, USA) were used for statistical analysis (mean ± standard deviation). Group means were compared by one-way ANOVA followed by Student-Newman-Keuls tests. P < 0.05 was considered statistically significant. The LSD method was used for intergroup comparison when the analysis of variance was statistically significant.
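The analyses above were run in SPSS and GraphPad Prism; an equivalent one-way ANOVA with pairwise comparisons can be sketched in R. Tukey's HSD is used below as a stand-in for the SNK test, and the data frame is simulated purely for illustration:

    set.seed(2)
    df <- data.frame(
      value = c(rnorm(3, 100, 5), rnorm(3, 70, 5), rnorm(3, 40, 5)),
      group = factor(rep(c("control", "BA_20uM", "BA_60uM"), each = 3))
    )
    fit <- aov(value ~ group, data = df)
    summary(fit)    # overall F test
    TukeyHSD(fit)   # pairwise group comparisons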
Inhibitory effect of betulinic acid on pancreatic cancer cell proliferation
CCK-8 assay, RTCA, colony formation assay, and Ki67 immunofluorescence were used to assess the antiproliferative effect of betulinic acid on PANC-1 and SW1990 cells. As shown in Fig. 1a and c, cell viability in betulinic acid-treated cells was significantly decreased compared with control cells (P < 0.05). The half-maximal inhibitory concentration (IC50) values of betulinic acid for PANC-1 and SW1990 cells at 24 h were 47 and 38 μM, respectively. The antitumor effect of betulinic acid on both PANC-1 and SW1990 cells was subsequently monitored (Fig. 1b and d). RTCA showed that the proliferation of PANC-1 and SW1990 cells was markedly reduced after treatment with 20 and 60 μM betulinic acid compared with DMSO. Plate colony formation assays (Fig. 2a) revealed that betulinic acid potently inhibited the proliferation and colony formation of PANC-1 and SW1990 cells. These results indicate that betulinic acid inhibits the proliferation of PANC-1 and SW1990 cells in a dose-dependent manner.
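The paper does not state how the IC50 values were estimated; as a rough, assumption-laden sketch, a linear interpolation of the viability curve at 50% could be done in R as follows (doses and viabilities below are invented to roughly resemble the reported PANC-1 curve):

    # Crude IC50 by linear interpolation of % viability against dose
    ic50_interp <- function(dose_uM, viability_pct)
      approx(viability_pct, dose_uM, xout = 50)$y
    ic50_interp(dose_uM = c(10, 20, 30, 40, 50, 60),
                viability_pct = c(92, 80, 66, 55, 46, 35))  # ~46 uM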
Betulinic acid inhibits PANC-1 and SW1990 cell proliferation. CCK-8 assay of PANC-1 (a) and SW1990 (c) cells incubated with 5, 10, 20, 30, 40, 50, 60, 70, 80, or 90 μM betulinic acid or an equal volume of DMEM medium for 24 h. Label-free Real-Time Cellular Analysis (RTCA) of PANC-1 (b) and SW1990 (d) cells incubated with betulinic acid (20 μM, 60 μM) or an equal volume of DMEM medium for 24 h. (e) Ki67 immunofluorescence of PANC-1 and SW1990 cells incubated with betulinic acid (20 μM, 60 μM) or an equal volume of DMEM medium for 24 h
Betulinic acid inhibits PANC-1 and SW1990 cell colony formation and invasion. a Colony formation assay of PANC-1 and SW1990 cells incubated with betulinic acid (20 μM, 60 μM) or an equal volume of DMEM medium for 24 h. Data are presented as mean ± SD, N = 3; *, P < 0.05; **, P < 0.01; ***, P < 0.001; ****, P < 0.0001, compared with control. b Transwell assay of PANC-1 and SW1990 cells incubated with betulinic acid (20 μM, 60 μM) or an equal volume of DMEM medium for 24 h. Data are presented as mean ± SD, N = 3; *, P < 0.05; **, P < 0.01; ***, P < 0.001; ****, P < 0.0001, compared with control
Inhibition of invasion and migration by betulinic acid
In addition, we assessed cell invasion with the Transwell assay and cell migration in the presence of betulinic acid with the wound healing assay. The Transwell assay showed that PANC-1 and SW1990 cells treated with DMSO had strong invasive ability (Fig. 2b). Compared with the vehicle group, betulinic acid treatment significantly inhibited cell invasion. Consistently, cell migration decreased progressively after the addition of 20 and 60 μM betulinic acid: wound healing experiments showed that betulinic acid markedly suppressed the migration of PANC-1 and SW1990 cells (Fig. 3). These results suggest that betulinic acid inhibits the invasion and migration of PANC-1 and SW1990 cells in a dose-dependent manner.
Betulinic acid inhibits PANC-1 and SW1990 cell migration. Wound healing assay of PANC-1 and SW1990 cells incubated with betulinic acid (20 μM, 60 μM) or an equal volume of DMEM medium for 24 h
Betulinic acid induces apoptosis in pancreatic cancer cells
To investigate the effect of betulinic acid on apoptosis of pancreatic cancer cell lines, PANC-1 and SW1990 cells were treated with betulinic acid at concentrations of 0, 20, and 60 μM for 24 h. Apoptosis was identified by the Annexin V-FITC/PI method, which showed that the percentage of apoptotic cells increased from 3.29 to 29.5% in betulinic acid-treated PANC-1 cells and from 6.03 to 37.52% in SW1990 cells (Fig. 4). Apoptosis of PANC-1 and SW1990 cells increased with betulinic acid concentration. These findings show that betulinic acid dose-dependently promotes apoptosis of PANC-1 and SW1990 cells.
Betulinic acid promotes PANC-1 and SW1990 cell apoptosis. Flow cytometry for apoptosis [apoptosis ratio was calculated as (Q2 + Q3)/(Q1 + Q2 + Q3 + Q4)] of PANC-1 and SW1990 cells incubated with betulinic acid (20 μM, 60 μM) or an equal volume of DMEM medium for 24 h. Data are presented as mean ± SD, N = 3; **, P < 0.01; ***, P < 0.001; ****, P < 0.0001, compared with control
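The quadrant-based ratio in the caption is straightforward to reproduce; in R, with hypothetical event counts chosen only to illustrate the 29.5% figure reported for PANC-1:

    # Apoptosis ratio per the caption: (Q2 + Q3) / (Q1 + Q2 + Q3 + Q4)
    apoptosis_ratio <- function(q1, q2, q3, q4) (q2 + q3) / (q1 + q2 + q3 + q4)
    apoptosis_ratio(q1 = 400, q2 = 1800, q3 = 1150, q4 = 6650)  # 0.295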
Betulinic acid-induced apoptosis depends on mTOR signaling
To further determine the possible mechanism of apoptosis induced by betulinic acid, the expression of apoptosis-related and autophagy-related proteins was examined by western blotting. As shown in Fig. 5, betulinic acid treatment increased the expression of cleaved caspase 3, cleaved caspase 8, and Bax, while the anti-apoptotic protein Bcl-2 was down-regulated, further confirming the induction of apoptosis in PANC-1 and SW1990 cells by betulinic acid. There was no significant change in LC-3B or p62 in the betulinic acid treatment group, indicating that betulinic acid had no effect on autophagy in PANC-1 and SW1990 cells. In addition, in PANC-1 and SW1990 cells, p-mTOR was down-regulated while p-AMPK was up-regulated, showing that AMPK/mTOR signal transduction was involved in the apoptosis induced by betulinic acid. Furthermore, betulinic acid treatment down-regulated the expression of p-S6K in PANC-1 and SW1990 cells, suggesting that protein synthesis was also inhibited. We speculate that betulinic acid may inhibit the proliferation of pancreatic cancer cells and induce their apoptosis by promoting the activation of AMPK, inhibiting the activation of mTOR, and inhibiting protein synthesis.
Betulinic acid-induced apoptosis depends on mTOR/S6K1-Caspases/Bcl2/Bax apoptotic signaling. After 24 h of treatment with betulinic acid (20 μM, 60 μM), the expression of AMPK/mTOR signaling pathway, autophagy-related, and apoptosis-related proteins was detected by western blotting analysis
Betulinic acid inhibits tumor growth in xenograft nude mice model
As shown in Fig. 6a and b, a clear difference in tumor size was evident after 30 days. The mean tumor volume and weight differed significantly between groups (Fig. 6c and d). After treatment with betulinic acid at 40 mg/kg, the volume and weight of transplanted PANC-1 tumors decreased dramatically. In addition, few tumor cells were observed in the betulinic acid treatment group, whereas abundant tumor cells were observed in the control group (Fig. 6e). These results reveal that betulinic acid inhibits tumor growth in vivo.
Betulinic acid inhibits tumor growth of cell xenografts in nude mice. To further verify the effect of betulinic acid on PDAC cells, PANC-1 cell xenograft tumors were treated with betulinic acid. When the diameter of the tumors reached 1 mm, the mice were randomly divided into two groups with five mice in each group. After 30 days of treatment, the mice were killed (a) and the tumors were excised (b). Tumor volume (c) was measured every three days for 30 days. Tumor weight (d) was measured after excision. e HE staining showed that betulinic acid significantly inhibits tumor growth of cell xenografts in nude mice. One-way ANOVA with Tukey's multiple comparison tests was used to analyze subcutaneous tumor growth. All experiments were performed in triplicate and data are presented as mean ± SD. The t-test was used for data analysis. *P < 0.05, **P < 0.01
Pancreatic cancer is one of the most lethal malignancies in both sexes worldwide. Several first-line chemotherapy agents such as gemcitabine have successfully improved survival of patients with different cancers; however, these agents have achieved limited outcomes in pancreatic cancer, and effective drugs for its treatment remain lacking [12, 13]. Recent studies have shown that several natural products may be novel candidates for developing pancreatic cancer therapeutics [3]; for example, Bangladeshi medicinal plant extracts showed clear cytotoxicity toward three pancreatic cancer cell lines while exhibiting low toxicity [14]. Betulinic acid is a triterpene mainly derived from spina date seed, which in China is widely used for insomnia therapy [15]. Yogeeswari et al. have described a variety of biological activities of betulinic acid, including anti-inflammatory and antibacterial effects, human immunodeficiency virus (HIV) suppression, and cytotoxicity against several tumors [16]. However, studies on the effect of betulinic acid in pancreatic cancer are insufficient, and the molecular mechanism underlying its anti-tumor activities remains unknown. In this study, we showed that betulinic acid inhibits the proliferation, invasion, and migration of two pancreatic cancer cell lines. Moreover, we used a xenograft nude mouse model to confirm the anti-tumor bioactivity of betulinic acid in vivo.
The abnormal proliferation, invasion, and migration of cancer cells require characteristic alterations in several pivotal signaling pathways. The nuclear factor erythroid 2-related factor 2 (Nrf2) signaling pathway not only responds to cellular stress but also activates Nrf2-dependent transcriptional programs that promote cancer hallmark proteins [17]. Cancer cells upregulate Nrf2 signaling to circumvent the inhibition of autophagy, meaning that activation of Nrf2 signaling protects cancer cells [18]. The Janus kinase 2 (JAK2) signaling pathway participates in regulating the immune system and cell growth [19]. JAK2 is widely known for its high mutation rate in myeloproliferative neoplasms (~96% of patients carry the V617F mutation in exon 14 of JAK2) [20]. This has led to the development of several effective JAK2 inhibitors for clinical therapy, such as ruxolitinib [21]. However, in our study betulinic acid had no effect on Nrf2 or JAK2, but significantly inhibited the phosphorylation of mTOR in a dose-dependent manner in two pancreatic cancer cell lines (PANC-1 and SW1990) (Fig. 5). This indicates that betulinic acid inhibits pancreatic cancer cells through targeting of mTOR signaling, rather than Nrf2 or JAK2.
The mammalian target of rapamycin (mTOR) signaling pathway is crucial in cell growth and division, regulating autophagy, apoptosis, and other critical intracellular processes [22]. Studies based on prevailing cancer models have indicated that abnormal activation of mTOR signaling drives tumorigenesis in a p53-independent manner [23]. Owing to the important role of mTOR signaling in oncogenesis, mTOR inhibitors such as rapamycin have raised great expectations in clinical chemotherapy, but their toxicity and side effects on normal cells are difficult to predict [24]. Recently, low-toxicity natural products targeting mTOR, such as curcumin, have suggested a new therapeutic route via inhibition of the mTOR signaling pathway [25]. Our results also identified betulinic acid as a specific mTOR inhibitor (Fig. 5). Furthermore, the expression changes of phosphorylated AMPKα1 (an upstream inhibitor of mTOR) and S6K1 (a direct downstream substrate of mTOR) confirmed the specificity of betulinic acid for mTOR. These results collectively demonstrate that betulinic acid may be a potential and valuable low-toxicity mTOR inhibitor in pancreatic cancer.
As a valuable cancer therapy target, the mTOR signaling pathway mainly modulates cancer cell autophagy or induces apoptosis [26, 27]. The mTORC1 complex integrates nutritional status, growth factors, and other environmental stresses, and can inhibit the autophagy process in cancer cells [28]. The mTOR signaling pathway has also been reported to induce apoptosis in non-small cell lung cancer [26], esophageal cancer [29], and myeloid leukemia [30]. Nevertheless, the exact molecular mechanism by which betulinic acid regulates mTOR signaling in pancreatic cancer remains unclear. Our data showed that autophagy markers such as LC3-I, LC3-II, and p62 were not affected by betulinic acid, whereas apoptosis-related proteins including Bax, Bcl-2, cleaved caspase 8, and cleaved caspase 3 were regulated by betulinic acid in a dose-dependent manner (Fig. 5). Based on the above data, we speculate that betulinic acid may specifically inhibit mTOR signaling and thereby induce apoptosis in pancreatic cancer. Consequently, our study demonstrates that betulinic acid inhibits pancreatic cancer both in vitro and in vivo, possibly by targeting mTOR signaling to specifically activate Caspases/Bcl2/Bax apoptotic signaling. Further studies are required to explore the bioactive structure of betulinic acid and the responsive domain of mTOR.
Based on a variety of cell and mouse experiments, our results showed that betulinic acid markedly suppresses pancreatic cancer both in vitro and in vivo in a dose-dependent manner, which expands the known anticancer repertoire of betulinic acid. Furthermore, we explored the potential mechanism by which betulinic acid inhibits pancreatic cancer and found that it induces apoptosis by specifically targeting mTOR signaling rather than Nrf2 or JAK2. These findings indicate that betulinic acid is a potential and valuable anticancer agent for pancreatic cancer and identify its specific molecular target.
The datasets used and analyzed during this study are available from the corresponding author upon reasonable request.
A Correction to this paper has been published: https://doi.org/10.1186/s12906-021-03254-w
GCO:
Global Cancer Observatory
mTOR:
Mammalian target of rapamycin
Nrf2:
Nuclear factor erythroid 2-related factor 2
JAK2:
Janus kinase 2
S6K1:
Ribosomal protein S6 kinase 1
AMPK:
AMP-activated protein kinase
HIV:
Human immunodeficiency virus
CCK-8:
Counting Kit-8
PBS:
Phosphate buffer saline
Verma V, Li J, Lin C. Neoadjuvant therapy for pancreatic cancer: systematic review of postoperative morbidity, mortality, and complications. Am J Clin Oncol. 2016;39(3):302–13.
Perysinakis I, Avlonitis S, Georgiadou D, Tsipras H, Margaris I. Five-year actual survival after pancreatoduodenectomy for pancreatic head cancer. ANZ J Surg. 2015;85(3):183–6.
Torres MP, Rachagani S, Purohit V, Pandey P, Joshi S, Moore ED, et al. Graviola: a novel promising natural-derived drug that inhibits tumorigenicity and metastasis of pancreatic cancer cells in vitro and in vivo through altering cell metabolism. Cancer Lett. 2012;323(1):29–40.
Dutta D, Chakraborty B, Sarkar A, Chowdhury C, Das P. A potent betulinic acid analogue ascertains an antagonistic mechanism between autophagy and proteasomal degradation pathway in HT-29 cells. BMC Cancer. 2016;16(1):23.
Steele JC, Warhurst DC, Kirby GC, Simmonds MS. In vitro and in vivo evaluation of betulinic acid as an antimalarial. Phytother Res. 1999;13(2):115–9.
Jingbo W, Aimin C, Qi W, Xin L, Huaining L. Betulinic acid inhibits IL-1β-induced inflammation by activating PPAR-γ in human osteoarthritis chondrocytes. Int Immunopharmacol. 2015;29(2):687–92.
Aiken C, Chen CH. Betulinic acid derivatives as HIV-1 antivirals. Trends Mol Med. 2005;11(1):31–6.
Wang Z, Mudalal M, Sun Y, et al. The effects of leukocyte-platelet rich fibrin (L-PRF) on suppression of the expressions of the pro-inflammatory cytokines, and proliferation of Schwann cell, and neurotrophic factors. Sci Rep. 2020;10:2421.
Wang W, Wang Y, Liu M, et al. Betulinic acid induces apoptosis and suppresses metastasis in hepatocellular carcinoma cell lines in vitro and in vivo. J Cell Mol Med. 2019;23:586–95.
Yu S, Zhang Y, Li Q, et al. CLDN6 promotes tumor progression through the YAP1-snail1 axis in gastric cancer. Cell Death Dis. 2019;10:949.
Zhang G, Feng W, Wu J. Down-regulation of SEPT9 inhibits glioma progression through suppressing TGF-β-induced epithelial-mesenchymal transition (EMT). Biomed Pharmacother. 2020;125:109768.
Wolfgang CL, Herman JM, Laheru D, Klein AP, Erdek MA, Fishman EK, Hruban RH. Recent progress in pancreatic cancer. CA Cancer J Clin. 2013;63(5):318–48.
Heinemann V. Gemcitabine: progress in the treatment of pancreatic cancer. Oncology. 2001;60(1):8–18.
George S, Bhalerao SV, Lidstone EA, Ahmad IS, Abbasi A, Cunningham BT, Watkin KL. Cytotoxicity screening of Bangladeshi medicinal plant extracts on pancreatic cancer cells. BMC Complement Altern Med. 2010;10(1):52.
Wang J, Wang Z, Wang X, et al. Combination of Alprazolam and Bailemian capsule improves the sleep quality in patients with post-stroke insomnia: a retrospective study. Front Psychiatry. 2019;10:411.
Yogeeswari P, Sriram D. Betulinic acid and its derivatives: a review on their biological properties. Curr Med Chem. 2005;12(6):657–66.
Cloer EW, Goldfarb D, Schrank TP, Weissman BE, Major MB. NRF2 activation in cancer: from DNA to protein. Cancer Res. 2019;79(5):889–98.
Towers CG, Fitzwalter BE, Regan DP, Goodspeed A, Morgan MJ, Liu C, et al. Cancer cells upregulate NRF2 signaling to adapt to autophagy inhibition. Dev Cell. 2019;50(6):690–703.
Slattery ML, Lundgreen A, Kadlubar S, Bondurant KL, Wolff RK. JAK/STAT/SOCS-signaling pathway and colon and rectal cancer. Mol Carcinog. 2013;52(2):155–66.
Passamonti F, Maffioli M, Caramazza D, Cazzola M. Myeloproliferative neoplasms: from JAK2 mutations discovery to JAK2 inhibitor therapies. Oncotarget. 2011;2(6):485–90.
Thomas S, Snowden JA, Zeidler MP, Danson S. The role of JAK/STAT signalling in the pathogenesis, prognosis and treatment of solid tumours. Br J Cancer. 2015;113(3):365–71.
Guertin DA, Sabatini DM. Defining the role of mTOR in cancer. Cancer Cell. 2007;12(1):9–22.
Skeen J, Bhaskar PT, Chen CC, Chen WS, Peng XD, Nogueira V, et al. Akt deficiency impairs normal cell proliferation and suppresses oncogenesis in a p53-independent and mTORC1-dependent manner. Cancer Cell. 2006;10(4):269–80.
Easton J, Houghton PJ. mTOR and cancer therapy. Oncogene. 2006;25(48):6436–46.
Kuo CJ, Huang CC, Chou SY, Lo YC, Kao TJ, Huang NK, et al. Potential therapeutic effect of curcumin, a natural mTOR inhibitor, in tuberous sclerosis complex. Phytomedicine. 2019;54:132–9.
Liu X, Jiang Q, Liu H, Luo S. Vitexin induces apoptosis through mitochondrial pathway and PI3K/Akt/mTOR signaling in human non-small cell lung cancer A549 cells. Biol Res. 2019;52(1):1–7.
Paquette M, Elhoujeiri L, Pause A. mTOR pathways in cancer and autophagy. Cancers. 2018;10(1):18.
Morselli E, Galluzzi L, Kepp O, Vicencio J, Criollo A, Maiuri MC, Kroemer G. Anti- and pro-tumor functions of autophagy. Biochim Biophys Acta. 2009;1793(9):1524–32.
Jiang J, Pi J, Jin H, Cai J. Oridonin-induced mitochondria-dependent apoptosis in esophageal cancer cells by inhibiting PI3K/AKT/mTOR and Ras/Raf pathways. J Cell Biochem. 2019;120(3):3736–46.
Tian Y, Jia S, Shi J, Gong G, Yu J, Niu Y, et al. Polyphyllin I induces apoptosis and autophagy via modulating JNK and mTOR pathways in human acute myeloid leukemia cells. Chem Biol Interact. 2019;311:108793.
The study was supported by Natural Science Foundation of Zhejiang Province (Q20H030022). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Key Laboratory of Diagnosis and Treatment of Severe Hepato-Pancreatic Diseases of Zhejiang Province, Zhejiang Provincial Top Key Discipline in Surgery, Wenzhou Medical University First Affiliated Hospital, Wenzhou, Zhejiang, China
Yangyang Guo, Hengyue Zhu, Min Weng, Cheng Wang & Linxiao Sun
All authors have read and approved the manuscript. C.W. and L.S. designed the experiment and were responsible for writing the manuscript. Y.G. conducted most of the experiments and analyzed the results. H.Z. and M.W. participated in the experiments and helped to analyze the data.
Corresponding authors
Correspondence to Cheng Wang or Linxiao Sun.
The animal study protocols including the method involving animal's euthanasia were approved by the Institutional Animal Care and Use Committee of Wenzhou Medical University, China. The methods were also performed according to the guidelines approved by the Institutional Review Board of Wenzhou Key Laboratory of Surgery, China.
The authors declare that they have no competing interest.
The original online version of this article was revised: the authors reported a mistake in Fig. 1B and 1D, and that the groups were not marked in Fig. 6A and 6B.
Guo, Y., Zhu, H., Weng, M. et al. Chemopreventive effect of Betulinic acid via mTOR-Caspases/Bcl2/Bax apoptotic signaling in pancreatic cancer. BMC Complement Med Ther 20, 178 (2020). https://doi.org/10.1186/s12906-020-02976-7
Received: 16 December 2019
Betulinic acid
mTOR signaling
April 2017, Volume 54, Issue 2, pp 721–743
An Assessment and Extension of the Mechanism-Based Approach to the Identification of Age-Period-Cohort Models
Maarten J. Bijlsma
Rhian M. Daniel
Fanny Janssen
Bianca L. De Stavola
Many methods have been proposed to solve the age-period-cohort (APC) linear identification problem, but most are not theoretically informed and may lead to biased estimators of APC effects. One exception is the mechanism-based approach recently proposed and based on Pearl's front-door criterion; this approach ensures consistent APC effect estimators in the presence of a complete set of intermediate variables between one of age, period, cohort, and the outcome of interest, as long as the assumed parametric models for all the relevant causal pathways are correct. Through a simulation study mimicking APC data on cardiovascular mortality, we demonstrate possible pitfalls that users of the mechanism-based approach may encounter under realistic conditions: namely, when (1) the set of available intermediate variables is incomplete, (2) intermediate variables are affected by two or more of the APC variables (while this feature is not acknowledged in the analysis), and (3) unaccounted confounding is present between intermediate variables and the outcome. Furthermore, we show how the mechanism-based approach can be extended beyond the originally proposed linear and probit regression models to incorporate all generalized linear models, as well as nonlinearities in the predictors, using Monte Carlo simulation. Based on the observed biases resulting from departures from underlying assumptions, we formulate guidelines for the application of the mechanism-based approach (extended or not).
Age-period-cohort analysis · Identification · Causal inference · Mechanisms · Front-door criterion
Demographers, epidemiologists, sociologists, and others have attempted to break down outcomes of interest into constituent effects caused by, or associated with, age, calendar time, and time of birth—an approach known as age-period-cohort (APC) analysis. Age effects refer to changes in the outcome as the age of individuals in the study population progresses. For example, as individuals age, cardiovascular function declines, and hence older individuals tend to have worse cardiovascular health than younger individuals. Period effects refer to changes that occur in an outcome as calendar time progresses. They can represent sudden changes or temporary changes in an outcome, such as spikes in death rates due to war or famine, but may also represent gradual changes such as those produced by the accumulation of minor improvements in public health infrastructure over time that influence mortality rates in all age groups. Finally, birth cohort effects represent differences between generations that are not attributable to differences in age or calendar time. Conceptually, they commonly represent the effects of shared formative experiences of individuals in a birth cohort, either in utero or during other critical phases in the life course (Ben-Shlomo and Kuh 2002). The effect of these formative years would then remain largely constant in that cohort throughout the remaining life course and are therefore independent of age and calendar time. For example, the cohort that was in utero during the Dutch Hunger Winter in 1944–1945 had worse health even later in life (Ekamper et al. 2014) compared with other cohorts. Furthermore, birth cohort has been found to be strongly tied to smoking behavior in various Western countries (Preston and Wang 2006; Verlato et al. 2006).
Unfortunately, decomposing an outcome (Y) into the separate effects of age (A), period (P), and cohort (C) using, for example, a linear regression model (e.g., E(Y|A,P,C) = η + α · A + β · P + θ · C) imposes an identification problem. Because A = P – C, these three variables are linearly dependent, and consequently any linear model involving these three variables cannot have a unique solution. To circumvent this, researchers have introduced various techniques that constrain the model specification (e.g., Clayton and Schifflers 1987; Held and Riebler 2012; Holford 2006; Yang et al. 2008) so that a solution can be found. However, the technical constraints that have been proposed are arbitrary and do not lead to meaningful measures of effect (Bell and Jones 2014; Luo 2013). Estimation of the parameters may be unbiased but only under the constraints that have been imposed, and hence the estimates do not reflect the true effects of age, period, and cohort that we seek (Luo 2013). Luo and Fienberg both argued in favor of a paradigm shift in a recent discussion in Demography (Fienberg 2013; Luo 2013): APC analysis, they argued, needs to become more theoretically informed. Simply fitting a regression model to an outcome given age, period, and cohort, without any forethought or theoretical reasoning, cannot result in meaningful effect estimates for these variables.
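The identification problem is easy to reproduce numerically. In the R sketch below (our own, with arbitrary coefficients), the design matrix containing A, P, and C is rank deficient, so lm() arbitrarily drops one of the three terms rather than returning the "true" decomposition:

    set.seed(3)
    A <- sample(40:95, 1000, replace = TRUE)
    P <- sample(1990:2015, 1000, replace = TRUE)
    C <- P - A
    y <- 0.7 * A + 0.2 * P + 0.1 * C + rnorm(1000)
    coef(lm(y ~ A + P + C))       # coefficient for C is NA: no unique solution
    qr(cbind(1, A, P, C))$rank    # rank 3, not 4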
Few authors have explicated what they mean by true or meaningful effect estimates. Viewed from one perspective, because the relationship A = P – C always holds, a regression model with age, period, and cohort as covariates truly has infinitely many solutions, and thus there is no problem to be solved. However, because those who write about this problem talk of one special solution of those infinitely many solutions that is correct/true/valid/meaningful, it must be that they are (albeit implicitly) thinking of a hypothetical world, different from the actual world, in which age, period, and cohort can be manipulated such that the identity A = P – C is broken.
More formally, one could take an explicitly causal perspective using potential outcomes with age, period, and cohort as independent exposures. Let Y(a,p,c) be the potential outcome that would occur if A were set to a, P were set to p, and C were set to c, without necessarily abiding by the relationship a = p – c (Rubin 1974). Then the causal model,
$$ E\left(Y(a,p,c)\right)=\eta^{*}+\alpha^{*}\cdot a+\beta^{*}\cdot p+\theta^{*}\cdot c, $$
has one solution, and this is presumably the true solution to which the various authors on this topic refer.
In fact, imagining a hypothetical world in which time can be manipulated is difficult enough. Contemplating one in which three different aspects of time—namely, age, period, and cohort—can be independently manipulated requires an even wilder imagination and is therefore unlikely to be truly of interest. More realistically, we can view Eq. (1) as being shorthand for
$$ E\left(Y(c_a,c_p,c_c)\right)=\eta^{\prime}+\alpha^{\prime}\cdot c_a+\beta^{\prime}\cdot c_p+\theta^{\prime}\cdot c_c, $$
where c_a, c_p, and c_c are the set of all immediate consequences of age, period, and cohort, respectively. Thus, if being born in a particular cohort meant being born during a famine, it is this famine that we imagine we could manipulate—say, "prevent"—rather than the cohort of birth itself. But because we may not have all these consequences at our disposal, Eq. (2) is replaced (as a shorthand) by Eq. (1).
Given this reframing of the model of interest as a causal model, it makes sense to consider methods from causal inference (Pearl 2000) to analyze data from APC studies. Undertaken by Winship and Harding (2008), this was dubbed the "mechanism-based approach." In particular, their approach uses Pearl's front-door criterion to identify the APC causal parameters α*, β*, and θ* (Pearl 2000). In short, the mechanism-based approach uses intermediate variables on the path between one of the three APC variables and the outcome in order to estimate (1) the effect of one of these three variables on the outcome indirectly, and (2) the effect of the remaining two APC variables directly (with the method generalizable to modeling intermediate variables for two of the three APC variables). The approach naturally leads to drawing a directed acyclic graph (DAG) (Glymour 2006; VanderWeele et al. 2008) depicting the assumed relationships among A, P, C, the intermediate variables being considered, and the outcome. It thus motivates researchers to be explicit about their substantive assumptions.
The method requires that a complete set of intermediate variables can be found for at least one of the three APC variables. By a complete set of intermediate variables for A, for example, we mean a set of variables M1, M2, ..., MK that are affected by age and which themselves affect the outcome Y in such a way that all the effect of A on Y is via this set of intermediate variables.
However, in a realistic setting, finding a complete set of intermediate variables even for just one of the three APC variables is unlikely. Also, the partial set of intermediate variables that may be available could be dependent on more than one APC variable. Furthermore, there may be variables that affect both the intermediate variable(s) and the outcome. All these settings (if they cannot somehow be accounted for) threaten the mechanism-based approach with bias, and one of the aims of this article is to demonstrate these potential sources of bias and their magnitude in these realistic scenarios.
Another challenge also arises: the mechanism-based approach has been developed for use with linear and probit regression models for Y, and with linear and probit regression models for the intermediate variables M1, M2, ..., MK. Although some analytical solutions (e.g., Winship and Mare 1983) could be adopted to extend this approach to logistic regression models for the outcome and/or mediators, they are complex to implement. Moreover, only approximate methods are available for settings where the variables included in the outcome model interact or have other nonlinear effects, even when Y, M1, M2, ..., MK are all continuous and modeled using linear models (Jiang and VanderWeele 2015; Preacher and Hayes 2008; VanderWeele 2015).
In this article, in order to illustrate possible pitfalls one may encounter using mechanism-based APC models, we assess their performance under realistic settings: namely when (1) only a partial set of mediators is available, (2) some of the intermediate variables are affected by two or more of the APC variables (a feature that is not acknowledged in the analysis), and (3) unmeasured confounding affects the intermediate variables and the outcome. Furthermore, we extend the mechanism-based approach to settings with any fully parametric model for the outcome and intermediate variables by approximating the estimation of the APC parameters using Monte Carlo simulation. R code demonstrating the mechanism-based approach, and its extension, is available in Online Resource 1.
The Mechanism-Based Approach
The mechanism-based approach exploits the fact that age, period, and cohort affect the outcome through intermediate variables (Winship and Harding 2008). The key idea is that while age, period, and cohort are deterministically related, the intermediate variables along the paths from these to the outcome (hereafter, mediators) will be affected by other (APC-independent) causes and hence can be used (if measured) to circumvent the identification problem. We now discuss this in more detail.
Consider for simplicity the setting depicted in Fig. 1, which shows a causal directed acyclic graph (DAG). Causal DAGs are formal graphical representations of the assumed causal relationships between the variables under study (Glymour 2006). Here, the number of mediators K is equal to 2, and the mediators M1 and M2 being considered lie on causal pathways from P to Y. Note that there is no arrow from P to Y in the DAG, representing the assumption that all the effect of P on Y is via M1 and M2. Also note that there is no arrow from either A or C to the mediators, nor shared common causes of the mediators and any other variables in the DAG. Finally, the two paths from P to Y are separate in the sense that M1 and M2 do not affect each other, nor does any variable along either path affect a variable on the other path or share any common causes: M1 and M2 are assumed to be conditionally independent given P. These strong structural assumptions concerning the roles of M1 and M2 (some of which can be relaxed, which we discuss in Online Resource 2) allow the identification of the APC effects in a two-stage procedure.
Causal directed acyclic graph, showing the age effect (α*), cohort effect (θ*), and the period effect (the β*s). The bold arrows represent deterministic relationships, and the nonbold arrows represent stochastic relationships
In the first step, separate models for each mediator on P are fitted. In the second step, a model for the outcome Y on A, C, M1, and M2 is fitted. If Fig. 1 is correct, and the outcome and the mediators are continuous variables and modeled using linear regression, or the outcome and mediators are binary and modeled using probit regression models, and none of these models in truth includes product terms or other nonlinearities, then the effect of A and C on Y (α* and θ*) is equal to their regression coefficients in the outcome (second-step) model, while the effect of P on Y (β*) is equal to the sum of the effects along the two pathways involving M1 and M2. The effect along a pathway is equal to the product of two regression coefficients; for the P-M1-Y pathway, it is the product of the coefficient for P in the (first-step) regression of M1 on P and the coefficient for M1 in the (second-step) regression of Y (that also includes M2, A, and C as covariates). Similar calculations apply to the P-M2-Y pathway, and the effects along the two pathways are then summed to obtain the effect of P on Y. These calculations are an application of the path-tracing rules that are widely used in structural equation modeling (Mulaik 2009; Wright 1934), with standard errors for the estimated effect of P estimated using the delta method (in simple settings) or, more generally, the bootstrap (MacKinnon et al. 2004). See Online Resource 2 for an applied example of the path-tracing rule.
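A compact R illustration of this two-step procedure for the all-linear case of Fig. 1 follows. The data are simulated, the coefficients are our own (true values: α* = 0.7, β* = 0.5 × 0.2 − 0.3 × 0.4 = −0.02, θ* = 0.1), and this sketch is not the code of Online Resource 1:

    set.seed(4)
    n <- 5000
    A  <- runif(n, 40, 95); P <- runif(n, 1990, 2015); C <- P - A
    M1 <- 0.5 * P + rnorm(n)                     # step 1 models: mediators on P
    M2 <- -0.3 * P + rnorm(n)
    Y  <- 0.7 * A + 0.1 * C + 0.2 * M1 + 0.4 * M2 + rnorm(n)

    b1  <- coef(lm(M1 ~ P))["P"]
    b2  <- coef(lm(M2 ~ P))["P"]
    out <- coef(lm(Y ~ A + C + M1 + M2))         # step 2: outcome model, no P
    beta_hat <- b1 * out["M1"] + b2 * out["M2"]  # path tracing: sum of products
    c(alpha = unname(out["A"]), beta = unname(beta_hat), theta = unname(out["C"]))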
In a real-life setting, a number of situations may occur that make mechanism-based estimation of APC effects less straightforward. First, if a complete set of mediators is not available for the selected APC variable(s), then the effect estimators of the three APC variables described earlier will be biased for α*, β*, and θ*, because the required assumption that (at least) one of the three APC variables is fully mediated by a set of measured mediators would not be met. Second, a variable that we believe to be a mediator for one of the three APC variables may actually be a mediator for more than one. In this case, the regression coefficient for the APC variable that we did not believe to be mediated by any of the mediators for P (M1, M2, ..., MK) will capture only the component of its effect that is not mediated. Third, the relationship between mediators and outcome may be confounded: that is, a variable (either measured or unmeasured) may have a causal effect on both the mediator(s) and the outcome. If this confounding is not controlled for in the outcome model, the effect of the mediator on the outcome will be estimated with bias, and consequently so will the effect of the APC variable that is assumed to be mediated by it. Finally, the outcome and/or the mediators may not be of a type that can be modeled by linear or probit regression; and even if they can, the models may require product terms or other nonlinearities. In this situation, the path-tracing rule needed to derive the causal effect of the mediated APC variable cannot be used (Mulaik 2009).
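For this last, nonlinear case, the path-tracing product can be replaced by simulation from the fitted models. A minimal Monte Carlo sketch in R follows, assuming fitted logistic models m.med (a single binary mediator M given P) and m.out (Y given A, C, and M); the function and variable names are our own illustrative assumptions, and this is not the code of Online Resource 1:

    # Monte Carlo period effect under fitted glm() objects m.med and m.out:
    # draw mediators under two period values and compare implied outcome
    # probabilities on the probability scale.
    mc_period_effect <- function(m.med, m.out, data, p0, p1, n.sim = 1e4) {
      draw <- function(p) {
        d   <- data[sample(nrow(data), n.sim, replace = TRUE), ]
        d$P <- p                                   # set period to p
        d$M <- rbinom(n.sim, 1,
                      predict(m.med, newdata = d, type = "response"))
        mean(predict(m.out, newdata = d, type = "response"))
      }
      draw(p1) - draw(p0)
    }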
Simulation: Approach
We assess the mechanism-based approach through simulations. In our simulations, we attempt to re-create a realistic setting in which APC analyses are performed: namely, the study of cardiovascular mortality. However, to demonstrate particular pitfalls, we isolate sources of bias in the APC effect estimates and therefore simplify this real-world setting into three scenarios.
In each scenario, the study population and outcome are the same. The individuals in the study population are generated to be aged 40–95 years during the calendar years 1990–2015, and hence the whole data set comprises birth cohorts from birth years 1896–1975. Age and period for each record are generated according to uniform distributions but are then categorized into five-year groups (for A and P) and cohort is dependent on these categories (C = P – A). The outcome is mortality due to cardiovascular disease (CVD), coded as 1 (death due to CVD) or 0 (alive or dead from other causes). It was generated in all scenarios according to either a probit or logistic regression model, with the probability of CVD death generated as a function of age, cohort, and the period mediators. Age is set to account for 70 % of the effect of the APC variables on cardiovascular mortality; period (via its mediators) for 20 %; and birth cohort for 10 %. We believe that these percentages approximately correspond to realistic effects of age, period, and cohort in the period 1990–2015 in Western countries. The difference in incidence of CVD death between individuals aged 40 and those aged 95 is very large, whereas the difference in incidence of CVD death in these age groups between the year 1990 and 2015 is much smaller (Peeters et al. 2011). Because of the linear dependency phenomenon, it is unknown what part of these differences is truly attributable to each dimension.
Simulation: Mediators and Confounders
We simulate settings where P is the variable that has measured mediating variables. Results, however, easily generalize to the alternative scenarios where A or C play this role, with due numerical differences given their unequal assumed strength of effects. Four mediators on the path from P to Y are included in the simulation study: body mass index (BMI), smoking, statin therapy, and unmeasured. We choose the first three variables because they are commonly described variables that are believed to affect CVD mortality; many other variables that are also believed to affect CVD mortality (e.g., Blackmore and Ozanne 2015; Capewell et al. 2000) are represented by the unmeasured variable. Together, these four variables account for the entire period effect on the outcome. The unmeasured and BMI variables are continuous, whereas smoking and statin therapy are binary. We set each of the measured mediators to account for ~20 % of the period effect on cardiovascular mortality; we set the unmeasured mediator to account for ~40 %.
Table 1 shows the direction of the effects of period on the mediators as well as of each mediator on the outcome. The effect of period on the unmeasured mediator is linear, and its effects on the measured mediators are nonlinear but monotonically increasing or decreasing. Initially, these variables are made to act as mediators only on the path between P and CVD mortality; however, in some scenarios, they are also affected by A or C, in which case the total effects of these latter variables change. In the scenario that includes confounding (see the section, Simulation: Scenarios and Variants), the confounder is the presence or absence of a particular gene, randomly assigned to be present in 50 % of individuals and set to have a positive effect on both the mediators and the outcome (Sabol et al. 1999; Smith and Newton-Cheh 2015).
Table 1 Direction of the effect of period on each mediator, of each mediator on cardiovascular mortality, and of the entire path from period to cardiovascular mortality, for the four mediators (unmeasured, BMI, smoking, and statin therapy). Along the entire path, period affects cardiovascular mortality positively via BMI and negatively via the unmeasured mediator, smoking, and statin therapy.
Simulation: Scenarios and Variants
In all simulations, we generate data for 100,000 individuals, each measured once. We simulate three scenarios, with each scenario simulated 1,000 times. In the data-generating process for the first scenario (simple), A and C have a direct effect on Y, and only the effect of P is mediated. In the second scenario (more causes), A has an additional (negative) effect on the mediator BMI, which amounts to roughly 30 % of the total age effect, and C has an additional (positive) effect on smoking, which amounts to roughly 30 % of the total cohort effect. Finally, in the third scenario (confounding), genotype confounds the relationship between BMI and Y and between smoking and Y (Fig. 2): genotype has a positive effect on both BMI and smoking, and a positive effect on CVD mortality. Genotype accounts for roughly 33 % of the association between age and CVD death and 35 % of the association between cohort and CVD death.
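Continuing the sketch above, the confounding scenario's data-generating process might be coded as follows. All coefficients are invented for illustration; only the overall path signs (positive via BMI; negative via the unmeasured mediator, smoking, and statin therapy) and the positive genotype effects follow the description in the text:

```r
## Sketch of the confounding scenario (all coefficients invented; only the
## signs follow the text). Continues the population sketch above.
genotype <- rbinom(n, 1, 0.5)                 # confounder, present in 50 %
p_std    <- (period - 1990) / 25              # period rescaled to [0, 1]

bmi     <- 25 + 2 * p_std + 1.5 * genotype + rnorm(n)                 # path via BMI: +
smoking <- rbinom(n, 1, plogis(-0.5 - 1.0 * p_std + 0.8 * genotype))  # path via smoking: -
statin  <- rbinom(n, 1, plogis(-2.0 + 2.0 * p_std))                   # path via statins: -
unmeas  <- -1.0 * p_std + rnorm(n)                                    # path via unmeasured: -

## Probit model for CVD death: A and C act directly; P acts only via mediators
eta <- -3 + 0.04 * (age - 40) - 0.003 * (cohort5 - 1895) +
       0.05 * (bmi - 25) + 0.5 * smoking - 0.4 * statin + 0.3 * unmeas +
       0.5 * genotype
y   <- rbinom(n, 1, pnorm(eta))               # CVD death indicator
```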
Causal directed acyclic graph of the three scenarios investigated by simulation. Bold arrows between age (A), period (P), and cohort (C) represent a deterministic relationship, whereas the remaining arrows represent stochastic causal relationships. Circled variables represent variables that are omitted from the estimation model (in some simulation set-ups)
Each scenario has two variants. In the first variant, we generate Y and the binary mediators using probit regression models; in the second variant, logistic regression models are used instead. In both variants, linear regression models are used to generate continuous variables. Because probit and logistic regression models transform parameters into probabilities in a different way, the probabilities of cardiovascular mortality somewhat differ between the two variants. We varied the value for the intercept in each variant so that the age-specific probabilities of cardiovascular mortality were similar to those found in high-income countries. However, because of these differences in transformation, the extent of the bias found with the variants is not directly comparable.
Finally, to demonstrate how the unequally distributed strengths of the age, period, and cohort effects influence the bias, we perform two sets of simulations in which we vary these strengths. In the first set, we vary the size of the period effect from 0 % to 100 % in 20 % increments, while correspondingly reducing the size of the age effect and keeping the size of the cohort dimension constant at 10 % (excluding the last increment, where cohort is necessarily set to 0 %). The second set is identical, except that the cohort effect size is varied while the period effect is kept constant at 20 %. In both sets, bias is generated by removing the first mediator (unmeasured) from the estimation model in the simple scenario. These simulations were done with probit, logistic, and linear regression variants. The logistic and linear regression variants, plus a third set of simulations in which the age effect size is varied, are described in Online Resource 2.
Simulation: Estimation
Estimation of the APC effects according to the mechanism-based approach consists of fitting separate regression models for CVD death as a function of A, C, and the P mediators, and for each of the mediators as a function of P. In all estimation models, A, P, and C are treated as categorical variables through dummy coding (10 dummy variables for age, 4 dummy variables for period, and 14 dummy variables for cohort because cohort categories were forced to overlap in order to maintain the linear identity, C = P – A). By generating cohort in this way, we maintain the linear dependency among age, period, and cohort and therefore follow the equal interval width definition (Luo and Hodges 2016).
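Continuing the sketches above, the estimation stage might look as follows. The models use dummy-coded A, P, and C via factor(); the unmeasured mediator is, by construction, unavailable; and adding genotype to the outcome model would give the correctly specified variant of the confounding scenario:

```r
## Sketch of the estimation stage (probit variant), continuing from above.
dat <- data.frame(y, age5 = factor(age5), period5 = factor(period5),
                  cohort5 = factor(cohort5), bmi, smoking, statin, genotype)

## Outcome model: dummy-coded A and C plus the measured period mediators;
## the unmeasured mediator cannot be included (one source of bias studied here)
fit_y <- glm(y ~ age5 + cohort5 + bmi + smoking + statin,
             family = binomial(link = "probit"), data = dat)

## Mediator models: each mediator as a function of dummy-coded P
fit_bmi     <- lm(bmi ~ period5, data = dat)
fit_smoking <- glm(smoking ~ period5, family = binomial, data = dat)
fit_statin  <- glm(statin ~ period5, family = binomial, data = dat)
```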
Mirroring the data-generating models, we use probit regression models for CVD death in the probit variant and logistic regression models in the logistic variant. The effects of P (and of A and C when appropriate) on continuous mediators are estimated using linear regression, and on binary mediators using logistic or probit regression models. In all three scenarios, we first perform our estimation under entirely correct assumptions. That is, in the second scenario, we also model the additional paths from A and C to their mediators (as described in Online Resource 2); and in the third scenario, we also control for the confounder. Moreover, in all scenarios, the parametric forms used to fit these models are the same as those used to generate the data. Then, to investigate the effect of incorrect assumptions, in the second scenario, we omit the paths from A and C to the mediators from the estimation model; and in the third scenario, we do not control for confounding. Additionally, to explore the effect of including an incomplete set of mediators, in all three scenarios, we first remove the unmeasured mediator from the estimation model, then BMI, followed by smoking; finally, we remove all period mediators (i.e., we fit an age-cohort model). For completeness, we also report the results of adopting a more traditional APC approach in Online Resource 2.
Results obtained from each scenario, variant, and model specification are summarized as means of each parameter's estimates over the 1,000 simulations, which we compare with the estimated values obtained from the correctly specified models to estimate the bias. Because of the very large sample size, the estimates obtained from the correct models can be interpreted as the true values.
Extending the Mechanism-Based Approach Through Monte Carlo Integration
When a model with a general nonlinear link function is used for a mediator or for the outcome (or both)—for example, Poisson or logistic regression—or if the models include product terms or other nonlinearities, the path-tracing method cannot be used (Mulaik 2009). A different approach is then required.
The basic intuition of our approach is as follows. First, similar to the traditional approach, we estimate individual relations among age, period, cohort, and mediators, and then between mediators and outcome (Step 1 and Step 2, respectively). The difference now is that the statistical models used for these steps are allowed to have nonlinear functional forms. However, this enhanced flexibility comes at a price: the traditional multiplication of coefficients along pathways is no longer possible. Therefore, Monte Carlo integration is used instead (Robert and Casella 2004); the coefficients from Steps 1 and 2 are used in Steps 3 and 4 to generate a new data set that does not suffer from APC linear dependency (Pearl's front-door criterion makes this possible) but that reflects the original data structure. In Step 5, an APC model is then fitted to this newly created data set to provide estimates of the effects of age, period, and cohort. Our approach is, in many ways, analogous to the Monte Carlo estimation of the parametric g-formula (Hernán and Robins 2013; Keil et al. 2014). A similar approach has also been suggested in mediation analysis (VanderWeele 2015).
We treat age, period, and cohort as continuous variables to simplify the presentation. However, in our simulations, we model age, period, and cohort as categorical variables. Furthermore, we describe here only the case where there is one mediator for period and where the mediator has only one cause; see Online Resource 2 for a description of a more general setting. We proceed in six steps; a minimal code sketch implementing Steps 1–5 follows the step descriptions.
Mediator estimation: Fit a model for the mediator. If it is continuous, we can use linear regression; for example,
$$ M={\upgamma}_0+{\upgamma}_1 \cdot P+\nu, $$
where we assume \( \nu \sim N\left(0,{\upsigma}_M^2\right) \). Note that the assumption on the distribution of the error terms in the linear regression model is nontrivial if there are nonlinearities involving M in the model for Y. If instead a mediator is binary, we can use logistic regression; for example,
$$ \mathrm{logit}\left\{ E\left( M| P\right)\right\}={\upgamma}_0+{\upgamma}_1 \cdot P. $$
Let \( \left({\widehat{\upgamma}}_0,{\widehat{\upgamma}}_1\right) \) be the estimates of (γ0, γ1) from the appropriate model. If the mediator is continuous, also save the estimate of the error variance, \( {\widehat{\upsigma}}_M^2 \). These estimates will be used in Step 3.
Outcome estimation: Fit a model for the outcome. If the outcome is continuous, fit a model using linear regression; for example,
$$ Y={\updelta}_0 + {\updelta}_1\cdotp A+{\updelta}_2\cdotp C+{\updelta}_3\cdotp M+\upxi, $$
where \( \upxi \sim N\left(0,{\upsigma}_Y^2\right) \). If the outcome is binary, we use logistic regression; for example,
$$ \mathrm{logit}\left\{ E\left( Y| A, M, C\right)\right\}={\updelta}_0 + {\updelta}_1\cdotp A+{\updelta}_2\cdotp C+{\updelta}_3\cdotp M. $$
Let \( \left({\widehat{\updelta}}_0,{\widehat{\updelta}}_1,{\widehat{\updelta}}_2,{\widehat{\updelta}}_3\right) \) be the estimates of (δ0, δ1, δ2, δ3) from the appropriate model. If the outcome is continuous, also save the estimate of the error variance, \( {\widehat{\upsigma}}_Y^2 \). These estimates will be used in Step 4.
Mediator simulation: For each of a range of period values \( \tilde{p} \), simulate the mediator \( \tilde{M}\left(\tilde{p}\right) \). The values \( \tilde{p} \) could be randomly generated—for example, using a discrete uniform distribution—but their range should be equal to the range empirically observed in the data that were used for estimation. For example, if we have data ranging from 1990 to 2015, that would be the range of values for \( \tilde{p} \) that we use. If instead we categorize this into five-year periods (1990–1994, 1995–1999, and so on), then we generate values of \( \tilde{p} \) corresponding to these categories. If a mediator is continuous, use the estimates of the linear regression model in Step 1 to simulate
$$ \tilde{M}\left(\tilde{p}\right)={\widehat{\upgamma}}_0+{\widehat{\upgamma}}_1 \cdot \tilde{p}+\widehat{\nu}, $$
where \( \widehat{\nu} \) is randomly drawn from \( N\left(0,{\widehat{\upsigma}}_M^2\right) \). If instead the mediator is binary, use the estimates from the logistic regression model in Step 1 to simulate \( \tilde{M}\left(\tilde{p}\right) \) from a Bernoulli distribution with mean
$$ {\widehat{\upmu}}_{m,\tilde{p}}=\frac{ \exp \left({\widehat{\upgamma}}_0+{\widehat{\upgamma}}_1 \cdot \tilde{p}\right)\ }{1+ \exp \left({\widehat{\upgamma}}_0+{\widehat{\upgamma}}_1 \cdot \tilde{p}\right)\ }. $$
The number of \( \overset{\sim }{M}\left(\overset{\sim }{p}\right) \) values to be simulated need not be equal to the number of observations in the data as long as the entire empirical range is covered—but the more values we simulate, the less our final estimates will be affected by Monte Carlo error. The values of \( \overset{\sim }{M}\left(\overset{\sim }{p}\right) \) will be used in Step 4.
Outcome simulation: For each of a range of age, period, and cohort values \( \left(\tilde{a},\tilde{p},\tilde{c}\right) \), simulate the potential outcome \( \tilde{Y}\left(\tilde{a},\tilde{p},\tilde{c}\right) \). Because \( \tilde{p} \) is already generated in Step 3, those values can be reused rather than regenerated. As previously, the range of these values should be equal to the range empirically observed for age, period, and cohort in the data, respectively. However, we choose \( \tilde{a} \), \( \tilde{p} \), and \( \tilde{c} \) independently—that is, the identity \( \tilde{a}=\tilde{p} - \tilde{c} \) should not hold. If the outcome is continuous, then, using the linear regression estimates of Step 2 and the simulated mediator values of Step 3, simulate the following:
$$ \tilde{Y}\left(\tilde{a},\tilde{p},\tilde{c}\right)={\widehat{\updelta}}_0+{\widehat{\updelta}}_1 \cdot \tilde{a}+{\widehat{\updelta}}_2 \cdot \tilde{c}+{\widehat{\updelta}}_3 \cdot \tilde{M}\left(\tilde{p}\right)+\widehat{\upxi}, $$
where \( \widehat{\upxi} \) is randomly drawn from \( N\left(0,{\widehat{\upsigma}}_Y^2\right) \), and \( \tilde{M}\left(\tilde{p}\right) \) is as generated in Step 3. If instead the outcome is binary, use the estimates from the logistic regression model in Step 2 and the simulated mediator values of Step 3 to simulate \( \tilde{Y}\left(\tilde{a},\tilde{p},\tilde{c}\right) \) from a Bernoulli distribution with mean calculated as follows:
$$ {\widehat{\upmu}}_{y,\tilde{a},\tilde{p},\tilde{c}}=\frac{ \exp \left({\widehat{\updelta}}_0+{\widehat{\updelta}}_1 \cdot \tilde{a}+{\widehat{\updelta}}_2 \cdot \tilde{c}+{\widehat{\updelta}}_3 \cdot \tilde{M}\left(\tilde{p}\right)\right)\ }{1+ \exp \left({\widehat{\updelta}}_0+{\widehat{\updelta}}_1 \cdot \tilde{a}+{\widehat{\updelta}}_2 \cdot \tilde{c}+{\widehat{\updelta}}_3 \cdot \tilde{M}\left(\tilde{p}\right)\right)\ }. $$
APC effect estimation: Estimate age, period, and cohort effects using the simulated values \( \tilde{Y}\left(\tilde{a},\tilde{p},\tilde{c}\right) \). If \( \tilde{Y}\left(\tilde{a},\tilde{p},\tilde{c}\right) \) was generated as a continuous variable, use linear regression with \( \tilde{a} \), \( \tilde{p} \), and \( \tilde{c} \) as covariates. If instead \( \tilde{Y}\left(\tilde{a},\tilde{p},\tilde{c}\right) \) was generated as a binary variable, use logistic regression with \( \tilde{a} \), \( \tilde{p} \), and \( \tilde{c} \) as covariates. These models will be identifiable because \( \tilde{a} \), \( \tilde{p} \), and \( \tilde{c} \) should have been chosen independently. The estimated parameters can be interpreted as the age, period, and cohort effects from the causal model (Eq. (1)).
Standard error estimation: Use the nonparametric bootstrap to estimate the standard errors of the parameter estimates (Efron and Tibshirani 1994). This step consists of drawing, with replacement from the original data, a resample of equal size; repeating Steps 1–5 on it; and saving the parameter estimates of the effects of \( \tilde{a} \), \( \tilde{p} \), and \( \tilde{c} \) obtained at the end of Step 5. The standard deviations of the distributions of these effect estimates over bootstrap replicates can be used as estimates of the standard errors; alternatively, the empirical 2.5 % and 97.5 % quantiles of these distributions can be used to derive 95 % confidence intervals directly (with improvements such as bias-corrected and accelerated intervals to be recommended).
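As a minimal illustration of Steps 1–5 (Step 6 simply wraps them in a bootstrap loop), consider the following sketch for a single continuous mediator and a binary outcome. All variable names are hypothetical, and age, period, and cohort are treated as continuous for brevity:

```r
## Minimal sketch of Steps 1-5 (names hypothetical): one continuous mediator
## M, a binary outcome Y, and a data frame `dat` with columns A, P, C
## (satisfying C = P - A), M, and Y.
set.seed(1)

## Step 1 - mediator estimation: linear model for M given P
fit_m   <- lm(M ~ P, data = dat)
sigma_m <- summary(fit_m)$sigma

## Step 2 - outcome estimation: logistic model for Y given A, C, and M
## (P enters only through M, per the front-door assumption)
fit_y <- glm(Y ~ A + C + M, family = binomial, data = dat)

## Step 3 - mediator simulation over the empirical range of P
n_sim <- 1e5
p_sim <- runif(n_sim, min(dat$P), max(dat$P))
m_sim <- predict(fit_m, newdata = data.frame(P = p_sim)) +
         rnorm(n_sim, 0, sigma_m)

## Step 4 - outcome simulation with a, p, c drawn *independently*,
## so the identity a = p - c deliberately no longer holds
a_sim <- runif(n_sim, min(dat$A), max(dat$A))
c_sim <- runif(n_sim, min(dat$C), max(dat$C))
mu_y  <- plogis(predict(fit_y,
           newdata = data.frame(A = a_sim, C = c_sim, M = m_sim)))
y_sim <- rbinom(n_sim, 1, mu_y)

## Step 5 - APC effect estimation on the simulated, dependency-free data
fit_apc <- glm(y_sim ~ a_sim + p_sim + c_sim, family = binomial)
summary(fit_apc)
```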
Online Resource 1 includes R code demonstrating the application of this technique. In linear settings, this technique results in findings identical to the traditional path-tracing method, as long as a sufficiently large number of Monte Carlo simulations are carried out.
Linear Dependency and Expected Bias
Although age, period, and cohort could be pairwise-independent, the three variables together have a deterministic relationship: A = P – C. This linear dependency will determine the (direction of the) bias when the mechanism-based approach is used and not all the mediators are included in the estimation model. We demonstrate this here.
In a linear setting, the causal model described in Eq. (1) could be expressed as
$$ Y\left( a, p, c\right)={\upeta}^{\ast }+{\upalpha}^{\ast } \cdot a+{\upbeta}^{\ast } \cdot p+{\uptheta}^{\ast } \cdot c+{\upvarepsilon}^{\ast }, $$
where α*, β*, and θ* represent the effects of age, period, and cohort; η*, the intercept; and ε*, the mean-zero error terms. This model corresponds to an (unidentified) associational model:
$$ Y=\upeta +\upalpha \cdot A+\upbeta \cdot P+\uptheta \cdot C+\upvarepsilon . $$
Because P = A + C, we can rewrite Eq. (3) as
$$ Y=\upeta +\upalpha \cdot A+\upbeta \cdot \left( A+ C\right)+\uptheta \cdot C+\upvarepsilon =\upeta +\left(\upalpha +\upbeta \right) \cdot A+\left(\uptheta +\upbeta \right) \cdot C+\upvarepsilon . $$
Eq. (4) shows that if we fit an age-cohort model (omitting period) and interpret the coefficients of A and C as representing the age and cohort effect, respectively, the period effect is attributed to age and cohort in equal parts, thereby biasing the presumed age and cohort effects in the direction of the period effect. In the same way, because A = P – C, and C = P – A, we have
$$ Y=\upeta +\left(\upbeta +\upalpha \right) \cdot P+\left(\uptheta -\upalpha \right) \cdot C+\upvarepsilon $$
$$ Y=\upeta +\left(\upalpha -\uptheta \right) \cdot A+\left(\upbeta +\uptheta \right) \cdot P+\upvarepsilon $$
when fitting period-cohort and age-period models, respectively. As Eqs. (5) and (6) show, in the period-cohort model, the period parameter is biased in the direction of the age effect, while the cohort parameter is biased in the opposite direction. Similarly, in the age-period model, the period parameter is biased in the direction of the cohort effect, while the age parameter is biased in the opposite direction. Of course, in all three models, the effect of the omitted variable would also be biased (unless its effect is null) given that its effect estimate is effectively set to 0.
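Eqs. (4)–(6) can be checked numerically. A minimal sketch with invented coefficients (α = 1, β = 0.5, θ = −0.2) shows the age-cohort model recovering α + β and θ + β rather than α and θ:

```r
## Numerical check of Eq. (4) with invented coefficients.
set.seed(1)
n <- 1e5
a <- runif(n, 40, 95)       # age
c <- runif(n, 1896, 1975)   # cohort (birth year)
p <- a + c                  # period, deterministically P = A + C
y <- 1.0 * a + 0.5 * p - 0.2 * c + rnorm(n)

coef(lm(y ~ a + c))  # slope of a is ~1.5 (= alpha + beta); of c, ~0.3 (= theta + beta)
```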
The same logic applies when we use mediators to estimate the effects of age, period, or cohort while estimating the other two effects directly. Consider our earlier example in which we use two mediators on the period path but estimate the effects of age and cohort directly (Fig. 1). However, this time, we omit the variable M2 from our estimation model, perhaps because it was not measured (Fig. 3, left). When fitting a model for the outcome (Y) using A, C, and the measured period mediator (M1) as explanatory variables, there will be an association between A and the period mediators and between C and the period mediators because of the aforementioned linear dependency (Fig. 3, right). Because the model for Y includes M1 (i.e., we condition on M1), the paths from A or C to Y via M1 are blocked. However, because we do not condition on the unmeasured mediator (M2), the pathways from A and C to Y via M2 are not blocked, and hence their regression parameters will be biased by the contribution of the additional paths from, respectively, A and C via M2 (i.e., δ4 ∙ γ3 for both).
Causal directed acyclic graph when the age effect (δ1) and cohort effect (δ2) are estimated directly, and the period effect is estimated using mediators as per Pearl's front-door criterion, while one period mediator is unmeasured. Left: effect estimates if M2 is measured, and P is included in the estimation process. Right: relationships that form due to linear dependency when period is excluded from estimation
Scenario 1: Simple
Removing mediators from the estimation of the period path introduces bias (Fig. 4 for probit variant, see Fig. S3 in Online Resource 2 for logistic variant). The relative magnitude of the bias appears strongest for birth cohort, which had weak negative parameter estimates when the model was correctly specified, but these estimates become strongly negative when mediators are removed from the estimation model. The directions of the bias are as expected (see the earlier section, Linear Dependency and Expected Bias). On average, removing the unmeasured period mediators has a negative effect on the estimated age and cohort effects, while it introduces a positive bias on the period estimates; the estimated parameters for the age and cohort effects become less positive, while those for the period effect become less negative. The same occurs when smoking and statins are additionally removed. This is as expected because the paths via the unmeasured period mediators, smoking, and statins all have a negative effect on the outcome (Table 1). Removing BMI leads to the opposite: that is, it introduces a positive bias on the estimated effects of age and cohort, and a negative one on that of period. Again, this is as expected because the path from P to Y through BMI is positive (Table 1).
Average estimated parameters for the APC effects in Scenario 1 (simple) using the mechanism-based approach. Summary of 1,000 simulations
Scenario 2: More Causes
In the scenario in which age (in addition to period) is a cause of BMI, and cohort (in addition to period) is a cause of smoking, we find that not including these relationships in the estimation model also results in bias. As expected, this bias is largest for age, where the age effect is overestimated when the negative effect of age on BMI is not included in the estimation model (Fig. 5). A similar but much weaker bias is found for cohort when the effect of C on smoking is not included (Fig. 5 for the probit variant; Fig. S4 in Online Resource 2 for logistic variant). There is no bias in the estimation of the period effect (Fig. 5), except in the logistic variant (Fig. S4).
Average APC estimates in Scenario 2 (more causes) using the mechanism-based approach. Summary of 1,000 simulations
These results follow our expectation. The effect of age on the outcome via BMI is negative. Because age is not included as a cause of BMI in the estimation model, this negative effect is subtracted from the total age effect, thereby resulting in an overestimation of the age effect. The effect of not modeling birth cohort as a cause of smoking follows the same logic, but the bias is weaker because the effect size from birth cohort to the outcome via smoking is also smaller. In this scenario, the period effect estimates are not biased because period is correctly modeled as a cause of the four mediators. The logistic variant is nevertheless sensitive to this bias because of the nonlinearities in its estimation procedure. Finally, removing mediators from the estimation model results in biases in the same direction as found in the simple scenario (see Online Resource 2, Figs. S1 and S4), and the same explanations for these directions apply.
Scenario 3: Confounding
In the scenario of a variable (genotype) confounding the relationships between BMI and CVD death, and between smoking and CVD death (both mediators on the P–Y path), we find that failing to control for this confounder results in bias in all three age, period, and cohort effect estimates. The age parameter estimates become somewhat negatively biased when we do not control for genotype in our outcome model (Fig. 6 for the probit variant; Online Resource 2, Fig. S5 for the logistic variant). The same occurs in the cohort effect, while the period effect suffers from a small positive bias (Fig. 6).
Average APC estimates in Scenario 3 (confounding) using the mechanism-based approach. Summary of 1,000 simulations
The bias observed here is caused by collider stratification (Cole et al. 2010; Elwert and Winship 2014). Collider stratification bias—also known as endogenous selection—occurs when two variables both have causal effects on a third variable (the collider), and the collider is conditioned upon (Elwert and Winship 2014; Greenland 2003). Doing so creates an artificial association between the two causes of the collider that is of sign opposite to the product of the signs of the effects into the collider. In traditional APC analysis, collider stratification does not occur because APC effects on an outcome are estimated without adjusting for confounders or mediators.
In the scenario considered here, genotype has a positive causal effect on BMI, on smoking, and on CVD mortality. Therefore, BMI and smoking are colliders in the paths between P and genotype. By including BMI and smoking (together with the other two mediators) in our outcome model, we create an additional spurious association between A and Y via A – P – BMI – genotype – Y, and one between C and Y via the path C – P – smoking – genotype – Y. Both of these spurious associations are negative because those induced by collider stratification—namely, P-BMI-genotype and P-smoking-genotype—are both negative, while A – P, C – P, and genotype – CVD are all positive.
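The sign rule for collider stratification can likewise be demonstrated in a few lines. In the sketch below (all values invented), both effects into the collider (BMI) are positive, and conditioning on the collider induces a negative association between its causes:

```r
## Minimal illustration of collider stratification (invented coefficients).
## Genotype and period are independent causes of BMI; conditioning on BMI
## induces a negative association between them.
set.seed(3)
n <- 1e5
genotype <- rbinom(n, 1, 0.5)
period   <- runif(n)
bmi      <- period + genotype + rnorm(n)  # both effects into the collider are positive

coef(lm(genotype ~ period))["period"]         # ~0: marginally independent
coef(lm(genotype ~ period + bmi))["period"]   # < 0: induced by conditioning on BMI
```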
Varying Effect Sizes
Increasing the size of the period effect and keeping the size of the cohort effect constant resulted in increased bias when the unmeasured mediator was removed from the estimation model (Fig. 7 for the probit variant, Figs. S6 and S7 in Online Resource 2 for the logistic and continuous variants). Keeping the period effect constant and increasing the size of the cohort effect resulted in roughly equal amounts of bias in the cohort effect estimate (Fig. 7 and Figs. S6 and S7 in Online Resource 2); the bias was equal in the continuous variant because that estimation is done without link function transformations.
Varying the effect sizes of period from 0 % to 100 % of the total APC effect in 20 % increments while keeping cohort effect constant (upper), varying the effect sizes of cohort from 0 % to 100 % of the total APC effect in 20 % increments while keeping period effect constant (lower). When period accounts for 100 %, the correctly specified cohort trend is a horizontal line at y = 0. No bias when cohort accounts for 100 % because then the period effect (source of bias) accounts for 0 %. Only cohort figures (probit variant) are shown. Arrows in lower figure indicate the size and direction of the bias
Discussion
We assessed the performance of mechanism-based APC models in realistic settings and extended the method, using Monte Carlo simulation, to accommodate all generalized linear models as well as nonlinearities in the predictors. We found that in simple scenarios in which a single set of mediators for P is not affected by unmeasured confounding, the mechanism-based approach (extended or not) performed reasonably well for the estimation of all three age, period, and cohort effects, especially if the mediators that were not included in the estimation had opposite signs (e.g., unmeasured and BMI) so that their unmeasured contributions (partially) cancelled out. The bias we observed was in line with our expectations derived from the APC linear dependency. In the scenarios with additional complications—that is, when mediators were caused by more than one of the three APC components, or when there was unmeasured confounding of the mediator-outcome relationships—we found additional bias. This, again, was in line with our expectations. Findings were similar for the probit and logistic variants, although we did not directly compare these variants because of their differences in transforming effects into probabilities.
(Un)testable Assumptions
APC models solve the linear dependency problem by imposing modeling constraints by fiat (Fienberg 2013). The mechanism-based approach does not differ from this because it is (commonly, as explained in the next paragraph) not possible to identify from the data whether a variable is a mediator for age, period, cohort, a combination of two of these, or all three variables. This is, of course, the same problem that occurs in any conventional APC analysis when attempting to decompose some outcome into age, period, and cohort effects. Therefore, the untestable assumptions that are made in conventional APC analysis move to the mediator stage of the modeling procedure.
The addition of mediators to the APC model can be tested (Winship and Harding 2008). We omitted this test from our assessment because the test is conditional on having (at least) a full set of mediators for one of the three APC variables. We consider it likely that in the majority of real-life applications, it will not be possible to find a full set of mediators for even one of the APC variables, and hence we focused our assessment on possible biases that may be encountered, such as those due to missing mediators.
Simulating Bias
In our simulations, we considered four mediators on the period path and kept the more causes scenario separate from the confounding scenario in order to illustrate, separately, possible pitfalls that may be encountered when the mechanism-based approach is used. In a real application of the method, many more mediators may exist, which may also have more than one cause, and their relation with the outcome may be confounded. However, a more complicated scenario need not necessarily result in more biased estimates because biases of opposed sign may cancel each other out, such as when BMI was removed from the estimation model in our simulations (because BMI was the only positive mediator). Most importantly, when potential bias is assessed, it is the size of the omitted pathways that matters, relative to the size of the included pathways. This was demonstrated in our analysis varying the size of the period effect and omitting a period mediator from the estimation model: a larger period effect resulted in a larger bias, whereas keeping the size of the period effect constant and increasing the size of the cohort effect resulted in roughly the same magnitude of bias and smaller relative bias. Analogous simulations where instead age or cohort mediators had been removed would have yielded the same conclusions; only the sign of the bias would differ as shown in the earlier section on linear dependency and expected bias.
Confounding and Colliding
Because the mechanism-based approach uses a mediation approach to APC analysis, it is also subject to biases that are not present in traditional APC analysis: (1) confounding of the mediator-outcome relationship, and (2) collider stratification bias (Elwert and Winship 2014; Greenland 2003).
Arguably, traditional APC analysis is not affected by confounding because age, period, and cohort are time dimensions and therefore are not causally affected by other variables. In our confounding scenario, the effect of BMI (and smoking) on CVD mortality was confounded by genotype, which thereby also affected the estimation of the period effect. Ideally, such a confounder would be controlled. However, if the confounder is unmeasured—and therefore cannot be controlled—a difficult choice has to be made. Removing the mediator induces bias via an omitted pathway, whereas including the mediator induces confounding bias. Confounding of the mediator-outcome relationship (and hence collider stratification bias) resulted in more bias than omitting the relevant pathway in our simulations, but this is dependent on the relative strengths of mediation and confounding.
In our confounding simulations, BMI was affected by period and by genotype (as was smoking). Because of the linear dependency of period with age and cohort, collider stratification bias also affected the age and cohort estimates. In this scenario too, conditioning on genotype would have prevented this bias. If conditioning on both causes of the collider is not possible, a choice has to be made between removing and keeping the mediator in the analysis. Greenland's (2003) second rule of thumb regarding endogenous selection states that including mediators that are also colliders induces bias larger than the size of the mediator's dependence on its causes (Elwert and Winship 2014). This suggests removing such mediators from the analysis, despite inducing bias via omitted pathways.
Directed Acyclic Graphs (DAG)
We encourage investigators to draw causal DAGs of the relations among APC variables, mediators, and outcome. To represent the deterministic relationship among age, period, and cohort in the causal DAG, we used bold arrows, which is in line with conventions in causal inference (Spiegelhalter et al. 2002; Spirtes et al. 2001) but differs from the representation used by Winship and Harding (2008). We used these arrows because the relationships among age, period, and cohort are fundamentally different from those of other relationships in the DAG (which are stochastic and causal rather than deterministic).
By drawing the relations between APC and mediators, we are clear about the assumptions underlying our analyses. Such clarity sets the mechanism-based approach apart from other APC approaches, where transparency about constraints can be lacking (Luo 2013). Drawing causal DAGs and redrawing them after one of the three age, period, or cohort variables is removed (Fig. 3) helps explicate otherwise hidden assumptions about the relationships among age, period, cohort, and their mediators and can help identify possible biases, such as collider stratification bias.
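For investigators who wish to specify such DAGs in code, a sketch using the R package dagitty follows. Note that the deterministic A–P–C identity cannot be expressed in a standard DAG, so the bold deterministic arrows of our figures must be kept in mind separately:

```r
## Sketch of the confounding scenario's stochastic structure in dagitty.
## The deterministic identity C = P - A is not representable here and is
## therefore omitted.
library(dagitty)

g <- dagitty("dag {
  A -> Y
  C -> Y
  P -> BMI -> Y
  P -> Smoking -> Y
  P -> Statins -> Y
  P -> Unmeasured -> Y
  Genotype -> BMI
  Genotype -> Smoking
  Genotype -> Y
}")

plot(graphLayout(g))             # quick layout of the causal structure
paths(g, from = "P", to = "Y")   # enumerate P-to-Y paths and whether they are open
```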
Guidelines for Application of the Mechanism-Based Approach
Based on our assessment, when estimating age, period, and cohort effects by applying the mechanism-based approach (extended or not), we suggest considering the following questions:
1. Which APC variable is believed to have the weakest effect on the outcome in question?
2. Which APC variable has putative mediators available in the data?
3. How well measured are these mediators?
4. How exhaustive are these mediators of all the pathways linking the respective APC variable and the outcome?
5. Are there common causes of these mediators (e.g., is more than one APC variable affecting the mediators)?
6. Are mediator-outcome relationships potentially confounded, and how severely?
To minimize bias, select the APC variable for which the answers to questions 1–6 are most favorable. For this variable, model the mediated pathways; the other two effects can be modeled directly. The degree of bias that remains in the analysis is dependent on the answers to the preceding questions for the chosen APC variable and on ordinary modeling concerns, such as correct modeling of the functional form of the relationships among APC variables, mediators, and outcome.
Answering questions 1–6 will likely require making assertions that cannot be tested in the data, which is similar to setting constraints by fiat as in various other APC approaches. However, the difference is that the answers to these questions—particularly questions 1, 4, 5, and 6—can be motivated by substantive theoretical reasoning. As described in our Introduction, various authors have argued that age-period-cohort analysis needs to become more theoretically informed. A large suite of APC methods exists that solves the linear identification problem nontransparently and without substantive theoretical justification. A strength of the mechanism-based approach is that it motivates researchers to be explicit about their (substantive theoretical) assumptions.
Conclusions
We demonstrated the performance of the mechanism-based approach to APC modeling in nonideal circumstances. Biases occurred when the assumed causal relations did not coincide with the truth, such as when paths from causes to mediators were omitted from the estimation model, or when there was unmeasured confounding of the mediator-outcome relationship. The direction of the bias followed our expectations based on the APC linear dependency. The size of the bias depends on the size of the effects involving confounders, omitted intermediate variables, or pathways. Our extension of the mechanism-based approach increases its utility by making it easily usable for models with nonlinear link functions and parameterizations of any complexity. Our brief guidelines, aided by causal DAGs, offer a useful tool for researchers who wish to implement this approach.
Acknowledgments
Open access funding provided by Max Planck Society. The authors would like to thank Dr. Christopher Winship, Department of Sociology, Harvard University, for his comments on a draft version of this article. This work was partly financed by the Netherlands Organisation for Scientific Research (NWO) in relation to the research programme "Smoking, alcohol and obesity - Ingredients for improved and robust mortality projections." Rhian Daniel acknowledges support from a Career Development Award in Biostatistics funded by the Medical Research Council UK (Grant No. G1002283) and a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society (Grant No. 107617/Z/15/Z). Bianca De Stavola acknowledges support from the Economic and Social Research Council (Grants ES/I025561/1, ES/I025561/2, and ES/I025561/3).
References
Bell, A., & Jones, K. (2014). Another "futile quest"? A simulation study of Yang and Land's hierarchical age-period-cohort model. Demographic Research, 30(article 11), 333–360. doi:10.4054/DemRes.2014.30.11
Ben-Shlomo, Y., & Kuh, D. (2002). A life course approach to chronic disease epidemiology: Conceptual models, empirical challenges and interdisciplinary perspectives. International Journal of Epidemiology, 31, 285–293.
Blackmore, H. L., & Ozanne, S. E. (2015). Programming of cardiovascular disease across the life-course. Journal of Molecular and Cellular Cardiology, 83, 122–130.
Capewell, S., Beaglehole, R., Seddon, M., & McMurray, J. (2000). Explanation for the decline in coronary heart disease mortality rates in Auckland, New Zealand, between 1982 and 1993. Circulation, 102, 1511–1516.
Clayton, D., & Schifflers, E. (1987). Models for temporal variation in cancer rates. II: Age-period-cohort models. Statistics in Medicine, 6, 469–481.
Cole, S. R., Platt, R. W., Schisterman, E. F., Chu, H., Westreich, D., Richardson, D., & Poole, C. (2010). Illustrating bias due to conditioning on a collider. International Journal of Epidemiology, 39, 417–420.
Efron, B., & Tibshirani, R. J. (1994). An introduction to the bootstrap. Boca Raton, FL: CRC Press.
Ekamper, P., van Poppel, F., Stein, A. D., & Lumey, L. H. (2014). Independent and additive association of prenatal famine exposure and intermediary life conditions with adult mortality between age 18–63 years. Social Science & Medicine, 119, 232–239.
Elwert, F., & Winship, C. (2014). Endogenous selection bias: The problem of conditioning on a collider variable. Annual Review of Sociology, 40, 31–53.
Fienberg, S. E. (2013). Cohort analysis' unholy quest: A discussion. Demography, 50, 1981–1984.
Glymour, M. M. (2006). Chapter 16: Using causal diagrams to understand common problems in social epidemiology. In J. M. Oaks & J. S. Kaufman (Eds.), Methods in social epidemiology (pp. 387–422). San Francisco, CA: Jossey-Bass.
Greenland, S. (2003). Quantifying biases in causal models: Classical confounding versus collider-stratification bias. Epidemiology, 14, 300–306.
Held, L., & Riebler, A. (2012). A conditional approach for inference in multivariate age-period-cohort models. Statistical Methods in Medical Research, 21, 311–329.
Hernán, M. A., & Robins, J. M. (2013). Causal inference. Boca Raton, FL: Chapman & Hall/CRC.
Holford, T. R. (2006). Approaches to fitting age-period-cohort models with unequal intervals. Statistics in Medicine, 25, 977–993.
Jiang, Z., & VanderWeele, T. J. (2015). Jiang and VanderWeele respond to "Bounding natural direct and indirect effects." American Journal of Epidemiology, 182, 115–117.
Keil, A. P., Edwards, J. K., Richardson, D. B., Naimi, A. I., & Cole, S. R. (2014). The parametric g-formula for time-to-event data: Intuition and a worked example. Epidemiology, 25, 889–897.
Luo, L. (2013). Assessing validity and application scope of the intrinsic estimator approach to the age-period-cohort problem. Demography, 50, 1945–1967.
Luo, L., & Hodges, J. S. (2016). Block constraints in age-period-cohort models with unequal-width intervals. Sociological Methods & Research, 45, 700–726.
MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99–128.
Mulaik, S. A. (2009). Structural equation models. In S. A. Mulaik (Ed.), Linear causal modeling with structural equations (pp. 119–138). Boca Raton, FL: CRC Press.
Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge, UK: Cambridge University Press.
Peeters, A., Nusselder, W. J., Stevenson, C., Boyko, E. J., Moon, L., & Tonkin, A. (2011). Age-specific trends in cardiovascular mortality rates in the Netherlands between 1980 and 2009. European Journal of Epidemiology, 26, 369–373.
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40, 879–891.
Preston, S., & Wang, H. (2006). Sex mortality differences in the United States: The role of cohort smoking patterns. Demography, 43, 631–646.
Robert, C., & Casella, G. (2004). Monte Carlo integration. In C. Robert (Ed.), Monte Carlo statistical methods (pp. 79–122). New York, NY: Springer.
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701.
Sabol, S. Z., Nelson, M. L., Fisher, C., Gunzerath, L., Brody, C. L., Hu, S., . . . Hamer, D. H. (1999). A genetic association for cigarette smoking behavior. Health Psychology, 18, 7–13.
Smith, J. G., & Newton-Cheh, C. (2015). Genome-wide association studies of late-onset cardiovascular disease. Journal of Molecular and Cellular Cardiology, 83, 131–141.
Spiegelhalter, D. J., Thomas, A., Best, N. G., & Lunn, D. (2002). WinBUGS user manual (Version 1.4). Cambridge, UK: MRC Biostatistics Unit.
Spirtes, P., Glymour, C., & Scheines, R. (2001). Causation, prediction, and search. Cambridge, MA: MIT Press.
VanderWeele, T. J. (2015). Explanation in causal inference: Methods for mediation and interaction. New York, NY: Oxford University Press.
VanderWeele, T. J., Hernán, M. A., & Robins, J. M. (2008). Causal directed acyclic graphs and the direction of unmeasured confounding bias. Epidemiology, 19, 720–728.
Verlato, G., Melotti, R., Corsico, A. G., Bugiani, M., Carrozzi, L., Marinoni, A., & ISAYA Study Group. (2006). Time trends in smoking habits among Italian young adults. Respiratory Medicine, 100, 2197–2206.
Winship, C., & Harding, D. J. (2008). A mechanism-based approach to the identification of age–period–cohort models. Sociological Methods & Research, 36, 362–401.
Winship, C., & Mare, R. D. (1983). Structural equations and path analysis for discrete data. American Journal of Sociology, 89, 54–110.
Wright, S. (1934). The method of path coefficients. Annals of Mathematical Statistics, 5, 161–215.
Yang, Y., Schulhofer-Wohl, S., Fu, W. J., & Land, K. C. (2008). The intrinsic estimator for age-period-cohort analysis: What it is and how to use it. American Journal of Sociology, 113, 1697–1736.
1. Unit PharmacoEpidemiology & PharmacoEconomics (PE2), Department of Pharmacy, University of Groningen, Groningen, The Netherlands
2. Max Planck Institute for Demographic Research, Rostock, Germany
3. Centre for Statistical Methodology and Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, UK
4. Population Research Centre (PRC), Faculty of Spatial Sciences, University of Groningen, Groningen, The Netherlands
5. Netherlands Interdisciplinary Demographic Institute, University of Groningen, The Hague, The Netherlands
Bijlsma, M.J., Daniel, R.M., Janssen, F. et al. Demography (2017) 54: 721. https://doi.org/10.1007/s13524-017-0562-6 | CommonCrawl |
Effects of acute ingestion of caffeinated chewing gum on performance in elite judo athletes
Aleksandra Filip-Stachnik1,
Robert Krawczyk1,
Michal Krzysztofik1,
Agata Rzeszutko-Belzowska2,
Marcin Dornowski3,
Adam Zajac1,
Juan Del Coso4 &
Michal Wilk1
Previous investigations have found positive effects of acute ingestion of capsules containing 4-to-9 mg of caffeine per kg of body mass on several aspects of judo performance. However, no previous investigation has tested the effectiveness of caffeinated chewing gum as the form of caffeine administration for judoists. The main goal of this study was to assess the effect of acute ingestion of a caffeinated chewing gum on the results of the special judo fitness test (SJFT).
Nine male elite judo athletes of the Polish national team (23.7 ± 4.4 years, body mass: 73.5 ± 7.4 kg) participated in a randomized, crossover, placebo-controlled and double-blind experiment. Participants were moderate caffeine consumers (3.1 mg/kg/day). Each athlete performed three identical experimental sessions after: (a) ingestion of two non-caffeinated chewing gums (P + P); (b) a caffeinated chewing gum and a placebo chewing gum (C + P; ~2.7 mg/kg); (c) two caffeinated chewing gums (C + C; ~5.4 mg/kg). Each gum was ingested 15 min before one of two special judo fitness tests (SJFTs), which were separated by 4 min of combat activity.
The total number of throws was not different between P + P, C + P, and C + C (59.66 ± 4.15, 62.22 ± 4.32, 60.22 ± 4.08 throws, respectively; p = 0.41). A two-way repeated measures ANOVA indicated no significant substance × time interaction effect and no main effect of caffeine on SJFT performance, the SJFT index, blood lactate concentration, heart rate, or rating of perceived exertion.
The results of the current study indicate that the use of caffeinated chewing gum in a dose up to 5.4 mg/kg of caffeine did not increase performance during repeated SJFTs.
Caffeine is recognized as the most commonly used psychoactive substance in the world [1] and is widely utilized by elite athletes as an ergogenic aid to increase physical performance during training and competition [2]. Indeed, recent scientific reviews and meta-analyses confirm the benefits of this substance on various types of exercise including aerobic-based [3], anaerobic-based [4], and strength/power exercise activities [5]. Moreover, the ergogenic effects of caffeine have been observed in intermittent sport disciplines, such as team sports [6] and combat sports [7], which require a substantial contribution from both oxidative and non-oxidative metabolism in addition to sport-specific technical and tactical skills.
In most previous investigations confirming the ergogenic effects of caffeine on sports performance, this stimulant was provided in doses from 3 to 9 mg per kg of body mass (i.e., mg/kg) in the form of anhydrous caffeine administered in gelatin capsules. However, in the sport setting, caffeine is generally consumed in the form of caffeinated beverages such as coffee or tea, pre-workout supplements, or capsules/pills, although there are several other sources of caffeine [8]. Interestingly, an alternative method of caffeine delivery via chewing gum may provide an advantage over traditional forms of caffeine administration. Caffeinated chewing gum offers a different pharmacokinetic profile from caffeine ingested in capsules, producing an earlier increase in blood plasma caffeine concentration, usually between 5 and 15 min from intake [9]. Moreover, chewing gum allows caffeine to be absorbed directly into the bloodstream through the buccal mucosa, thereby bypassing hepatic metabolism [9]. This form of caffeine absorption may minimize the risk of gastrointestinal disorders in athletes. Regarding this issue, the use of caffeinated chewing gum in doses between 2 and 6 mg/kg has been found effective in increasing performance in several types of exercise, such as cycling [10,11,12], team sports-specific tests [13, 14], endurance running [15, 16] and jumping [17], although this is not always the case [18, 19].
Despite the evidence of ergogenic effects of caffeinated chewing gum, no study has tested the efficacy of this form of caffeine administration in combat sports such as judo. The use of caffeinated chewing gum in judo may be more beneficial than the use of caffeine capsules because judo tournaments consist of elimination rounds habitually performed without a fixed schedule. Additionally, several judo combats must be performed within the same competition day. To date, the studies examining the ergogenic effects of caffeine in judo used caffeine capsules [7, 20,21,22,23] or caffeine dissolved in water [24]. In these investigations, the acute intake of caffeine in a dose of 4-to-9 mg/kg was effective in enhancing several aspects of judo performance during simulated combats [20, 23, 24], although this supplementation protocol seems ineffective after rapid weight loss [7] and in women [22]. However, the use of a single administration of caffeine may have reduced applicability to the context of a real judo competition, where several combats take place in one day. In this context, repeated dosing of caffeine before each combat may be necessary [25]. Interestingly, Negaresh et al. [26] showed that repeated dosing of caffeine (i.e., before each match) improved wrestling performance in the last stages of a 5-match wrestling tournament in comparison to a single administration of a particular dose of caffeine before the tournament. This suggests that ingestion of caffeine in smaller doses prior to each combat during a tournament may offer greater performance enhancement than the use of a single, larger dose before competition.
Therefore, the aim of this study was to examine the effects of the ingestion of caffeinated chewing gum on judo performance in elite athletes. To improve the applicability of the experiment, we tested two caffeine supplementation protocols (single and repeated dosing) that provided two different doses of caffeine. It was hypothesized that the caffeine supplementation protocols would increase judo performance in comparison to the administration of a decaffeinated/placebo chewing gum.
Power analysis indicated that a minimum sample size of 9 participants should be included in the study in order to detect an effect size (ES) of 0.5, obtained from a study examining acute effects of caffeine on judo performance in the special judo fitness test (SJFT) [21]. Power was analyzed using the following settings: the analysis was set to repeated measures ANOVA, within factors; the required power was set to 0.80; alpha was set to 0.05; and the correlation between repeated measures was set to r = 0.5. This calculation was performed with the G*Power software, v.3.1.9.2 [27]. Therefore, we recruited nine healthy, experienced male judoists to participate in the study. The following anthropometric measurements were taken: height (WPT-60/150OW, Radwag, Poland), body mass and body fat percentage (InBody 370, Poland). Main characteristics of the study sample are depicted in Table 1. Participants were recruited from the Academic Sports Club of AZS AWF Katowice, and testing was conducted during the competitive season. All athletes selected for the research were black belts, competed at the national and international level, and were members of the Polish national team. The inclusion criteria were as follows: (a) free from neuromuscular and musculoskeletal disorders; (b) black belt and at least "good" level in the SJFT [28]; (c) no medication or dietary supplement use within the previous month; (d) self-described satisfactory health status. Participants were excluded if they reported (a) positive smoking status; or (b) potential allergy to caffeine. All participants had previous experience in performing the SJFT during training and/or investigations. None of the participants had previously used caffeinated chewing gum. The study protocol was approved by the Bioethics Committee for Scientific Research at the Academy of Physical Education in Katowice, Poland (3/2019), according to the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. All participants provided their written informed consent prior to participation in this study.
Table 1 Main participants' characteristics
Pre-experimental standardization
Prior to the first experimental trial, participants were instructed to maintain their usual hydration and dietary habits (including pre-workout meal) and habitual caffeine intake during the study period. In addition, the participants registered their food intake using "MyFitnessPal" software [29] 24 h before the first experimental trial. To produce a within-subject standardization of diet, participants replicated the same dietary pattern before the second and third trials. Habitual caffeine intake was measured by using a modified version of the validated questionnaire by Bühler et al. [30] that recorded the type and amount of caffeine-containing foods and dietary supplements. Habitual caffeine intake was assessed for the four weeks before the start of the experiment, following previous recommendations [31]. Participants were also asked to refrain from any source of caffeine and alcohol 24 h before each experimental trial and not to perform strenuous exercise in the 24 h before testing.
To investigate the effect of caffeine consumption via caffeinated chewing gum on judo performance, the participants underwent a randomized, double-blind, placebo-controlled crossover experiment where each participant acted as his own control. Each participant took part in three identical experimental trials that included the ingestion of two chewing gums and several judo-specific performance measurements. The trials differed in the type of gum ingested as follows: (a) ingestion of two non-caffeinated (i.e., placebo) chewing gums (P + P; 0 mg/kg of caffeine); (b) a caffeinated chewing gum and a placebo chewing gum (C + P; 200 mg of caffeine or ~2.7 mg/kg of caffeine); (c) two caffeinated chewing gums (C + C; 400 mg of caffeine or ~5.4 mg/kg of caffeine). The trials were separated by seven days to allow complete recovery and substance wash-out.
Upon arrival to the laboratory, a blood sample was obtained to assess baseline blood lactate concentration (BIOSEN C-line; EKF, United Kingdom). Then, participants wore their judogis and ingested the first chewing gum. Afterwards, the participants performed a 15-min standardized warm-up, simulating a pre-competition warm-up. Then, the participants performed the first SJFT, as described below. After the first SJFT, blood lactate concentration and the rating of perceived exertion (RPE; using the 6–20-point Borg scale [32]) were obtained, while heart rate (Wearlink, Polar, Finland) was evaluated at the end of the SJFT and 1 min into recovery. After 5 min of passive recovery, participants performed a 4-min simulated combat activity with no performance measurements, and blood lactate concentration was measured again. Immediately after the combat, the second chewing gum was consumed, and then participants rested for 15 min before performing the second SJFT. The rating of perceived exertion was measured immediately after the second SJFT; heart rate was measured immediately after the second SJFT and 1 min into recovery; and blood lactate concentration was measured after the second SJFT and 30 min post-exercise. All testing was performed at the Strength and Power Laboratory of the Academy of Physical Education in Katowice, Poland, under controlled ambient conditions. All experiments took place at the same time of the day (18:00–20:00) during the athletes' habitual judo training time. Figure 1 contains a description of the experimental design used for the experiment.
Study design. SJFT-1, SJFT-2: repetitions #1 and #2 of the Special Judo Fitness Test; RPE-1, RPE-2: ratings of perceived exertion obtained just after finishing the first and second SJFT, respectively; La-BASELINE: blood lactate concentration measured before the warm-up (baseline); La-SJFT-1: blood lactate concentration 3 min after finishing the first SJFT; La-POST-COMBAT: blood lactate concentration after the combat activity; La-SJFT-2: blood lactate concentration 3 min after finishing the second SJFT; La-30min: blood lactate concentration 30 min after the end of the second SJFT
Administration of caffeinated and placebo chewing gum
In each trial, participants chewed the gums for 5 min and were then required to expectorate the chewed gum into a container. The gums were ingested 15 min before the onset of each SJFT, considering that the large increase in plasma caffeine concentration after administration of 200 mg of caffeine via chewing gum occurs between 5 and 15 min after administration [8, 9]. Each caffeinated gum contained 200 mg of caffeine and was a commercially available product (Military Energy Gum; MarketRight Inc., Plano, IL, USA). The placebo was a commercially available non-caffeinated chewing gum, similar in taste, shape, and size. The chewing gums were placed in an opaque container in order to blind participants and experimenters to the conditions under investigation. Verbal questioning after every SJFT indicated that participants were unable to distinguish between the caffeinated and non-caffeinated chewing gums (odds no greater than chance, i.e., 50:50).
Special judo fitness test and combat activity
The SJFT is considered a reliable and reproducible means to measure judo performance and was performed according to previous guidelines [28]. The SJFTs consisted of 3 consecutive rounds (a first round of 15 s, followed by two rounds of 30 s) with 10-s rest intervals between them. In each round, the judoist under investigation (Tori) performed the highest possible number of Ippon-seoinage throws on two partners (Uke), who were of similar body mass and were always the same for a particular Tori across the study. To standardize the test among conditions, the Ukes were positioned 6 m apart and the Tori was required to run from Uke#1 to Uke#2 as fast as possible, throw him using the Ippon-seoinage technique, and then continue the run back from Uke#2 to Uke#1, repeating the sequence of running plus throwing as many times as possible during the time of each round (Fig. 1). The SJFT performance was measured by the number of complete and valid throws performed in the three rounds, which were confirmed in real time by two independent and experienced coaches blinded to the treatments. The sum of total throws completed in the two SJFTs performed in the investigation was also calculated as a measurement of performance. Additionally, the SJFT index was calculated for each SJFT as follows [28]:
$$\mathrm{Index}=\frac{\text{final heart rate (bpm)} + \text{heart rate 1 min after SJFT (bpm)}}{\text{number of throws}}$$
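As a concrete illustration, the index is trivial to compute; the heart-rate and throw values in the example below are made up for illustration, not taken from the study.

```python
def sjft_index(hr_final_bpm: float, hr_1min_bpm: float, n_throws: int) -> float:
    """SJFT index: sum of the two heart-rate readings divided by total throws.
    Lower values indicate better judo-specific fitness."""
    return (hr_final_bpm + hr_1min_bpm) / n_throws

# Illustrative numbers only: 190 bpm at test end, 165 bpm after 1 min, 29 throws.
print(round(sjft_index(190, 165, 29), 2))  # -> 12.24
```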
In each SJFT, heart rate was measured using a heart rate monitor (Polar, Finland). Five minutes after the first SJFT, the participants performed a 4-min combat activity, where the opponent changed every 2 min of the fight in order to induce fatigue in the central fighter. During the combat activity the participants competed against opponents in the same weight category and with a similar sports level relative to their ranking. The participants were compelled to fight with the aim of scoring the most points or winning by ippon, yet the fight continued regardless of the score. During the SJFTs and combat activity, athletes were motivated by members of the research team (two black belt coaches) to exert maximal effort.
All calculations were performed using SPSS (version 25.0; SPSS, Inc., Chicago, IL, USA), and results are expressed as means with standard deviations (± SD). Statistical significance was set at p < 0.05. Verification of differences in calorie intake and protein, fat, and carbohydrate ingestion between the P + P, C + P and C + C conditions was performed using one-way analysis of variance (ANOVA) for repeated measures. Data obtained at baseline or after the combats were also tested with one-way repeated measures ANOVAs. A two-way ANOVA with repeated measures (condition × time) was used to evaluate the effects of caffeine administration on all variables measured twice during the SJFTs. Differences in the sum of total throws during the two SJFTs were determined by one-way ANOVA for repeated measures. In the event of a significant main effect, post-hoc comparisons were conducted using the Bonferroni test. Mauchly's test was conducted to assess the sphericity assumption and, if it was violated (p < 0.05), the Greenhouse-Geisser adjustment was used. To test for possible differences between the two doses of caffeine employed in this investigation, values of the total number of throws, the SJFT index, RPE, blood lactate concentration and heart rate were compared using the related-samples Friedman two-way analysis of variance by ranks. Effect sizes (Cohen's d) were reported where appropriate and were defined as large, d > 0.80; moderate, 0.50–0.79; small, 0.20–0.49; and trivial, < 0.20 [33].
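The non-parametric comparison and effect-size calculation are easy to reproduce outside SPSS. The sketch below uses SciPy; the throw counts are randomly generated placeholders (not the study's per-athlete data), and the pooled-SD convention for Cohen's d is one common choice among several.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder throw counts for 9 athletes in the three conditions (NOT study data).
pp = rng.normal(59.7, 4.2, 9)   # P + P
cp = rng.normal(62.2, 4.3, 9)   # C + P
cc = rng.normal(60.2, 4.1, 9)   # C + C

chi2, p = stats.friedmanchisquare(pp, cp, cc)
print(f"Friedman chi2 = {chi2:.2f}, p = {p:.3f}")

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                     / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled

print(f"d (C+P vs P+P) = {cohens_d(cp, pp):.2f}")
```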
The one-way ANOVA indicated no significant differences in energy intake (3286 ± 254, 3288 ± 241, 3298 ± 250 kcal/day; p = 0.17) or in the proportions of protein/carbohydrate/fat (20/52/28, 20/52/28, 20/53/27 %; p = 0.74 for protein; p = 0.77 for carbohydrates; p = 0.88 for fat) in the diet of the judoists between the P + P, C + P, and C + C conditions. Table 2 depicts performance and physiological variables during the two repetitions of the SJFT. The two-way ANOVA revealed no main effects of substance or time, nor an interaction between these two variables, in the number of throws performed in each repetition of the SJFT. As a result, the total number of throws was not different between P + P, C + P, and C + C (59.66 ± 4.15, 62.22 ± 4.32, 60.22 ± 4.08 throws, respectively; p = 0.063). Additionally, there were no main effects nor interactions for the SJFT index and RPE in each repetition of the SJFT (Table 2).
Table 2 Performance and physiological variables with ingestion of two non-caffeinated chewing gums (P + P), a caffeinated chewing gum plus a placebo chewing gum (C + P; 2.7 mg/kg of caffeine), and two caffeinated chewing gums (C + C; 5.4 mg/kg of caffeine) before executing two repetitions of the Special Judo Fitness Test (SJFT)
Blood lactate concentration at baseline was similar in all conditions (1.57 ± 0.47, 1.72 ± 0.50, 1.62 ± 0.70 mmol/L, respectively; p = 0.835). There was no main effect nor interaction in post-SJFT blood lactate concentration (Table 2). Blood lactate concentration after the combat performed between the SJFTs (11.35 ± 3.62, 12.27 ± 2.43, 12.32 ± 3.42 mmol/L, respectively; p = 0.733) and 30 min after the end of testing (6.07 ± 3.23, 5.71 ± 2.86, 6.93 ± 2.39 mmol/L, respectively; p = 0.078) was also similar among the P + P, C + P, and C + C conditions.
Heart rate at baseline was similar among conditions (89 ± 11, 84 ± 6, 85 ± 9 bpm, respectively; p = 0.308). There was no main effect nor interaction in post-SJFT heart rate values (Table 2). Additionally, heart rate 1 min after the SJFTs (SJFT-1: 151 ± 15, 147 ± 12, 156 ± 24 bpm; SJFT-2: 151 ± 7, 145 ± 8, 155 ± 20 bpm) and after the combat performed between the SJFTs (186 ± 9, 186 ± 12, 186 ± 14 bpm) did not differ among conditions (all p > 0.05).
The Friedman's test showed no significant differences for the total number of throws (p = 0.305), SJFT index (p = 0.489), RPE (p = 0.570), blood lactate concentration (p = 0.416) and heart rate (p = 0.964) between the two doses of caffeine under investigation.
The purpose of this study was to investigate the effects of the ingestion of caffeinated chewing gum on judo performance during two repetitions of a specific judo test (SJFT) in elite judo athletes. The results indicate that the ingestion of caffeinated chewing gum providing two doses of caffeine (2.7 and 5.4 mg/kg of body mass) using two different protocols (C + P; C + C) did not increase the number of throws performed during the SJFTs when compared to the administration of non-caffeinated chewing gum (P + P). In addition, none of the caffeine administration protocols via chewing gum changed the SJFT index, the rating of perceived exertion, blood lactate concentration or post-exercise heart rate. Collectively, the results of the current study suggest that the use of caffeinated chewing gum in a dose up to 5.4 mg/kg did not increase performance during repeated SJFTs.
Several previous studies have analyzed the effectiveness of acute caffeine intake in combat sports [34], but only a few focused on judo performance [7, 20,21,22,23]. Overall, these investigations showed that the ingestion of a single caffeine capsule or caffeine dissolved in water increased the number of throws in the SJFT [20, 24], or produced an effect of small magnitude that did not reach statistical significance [21]. Additionally, it has been found that this protocol of caffeine administration induced a reduction in the rating of perceived exertion [20] and increased the number of attacks [24] and blood lactate concentration after simulated judo matches [23] and after the SJFT [21], which suggests a higher exercise intensity and greater utilization of anaerobic-based pathways. In the current experiment, none of these benefits were found after the administration of caffeine via chewing gum, which contradicts previous results. The differences between investigations can be associated with the administered dose and the participants' habituation to caffeine, in addition to the form of caffeine supplementation used. In studies confirming ergogenic effects of caffeine, doses between 4 and 9 mg/kg were administered [20, 23, 24]. These doses are above those provided in the C + P protocol (i.e., 2.7 mg/kg), but this does not explain the lack of ergogenic effects of the C + C protocol, because the dose administered was 5.4 mg/kg of caffeine. Interestingly, Durkalec-Michalski et al. [24] investigated the effect of three different doses of caffeine (3, 6 and 9 mg/kg) and tested their effects in judoists with different habitual caffeine consumption (consumers and non-consumers). Among those who habitually consumed caffeine, only the dose of 9 mg/kg increased the number of throws during the SJFT, while 6 and 9 mg/kg were ergogenic in those unhabituated to the use of caffeine. In the current investigation, all participants were caffeine consumers (Table 1), which may have reduced the efficacy of the C + C protocol, even though the total dose of caffeine administered was above their habitual intake. Considering all the data collected, it seems reasonable to conclude that the administration of caffeine via caffeinated chewing gum in a dose up to 5.4 mg/kg was ineffective in enhancing SJFT performance in elite judoists habituated to caffeine. Future investigations should determine whether higher doses can produce ergogenic effects in this type of elite athlete or whether the ergogenic effects of caffeine appear in elite judoists with low caffeine consumption [21, 24].
The protocol of caffeine consumption may be an important factor affecting the acute responses observed in the present research. Negaresh et al. [26] compared the effects of caffeine on wrestling performance following different protocols of consumption: a placebo, a high dose of caffeine (10 mg/kg), a moderate dose of caffeine (4 mg/kg), repeated doses of caffeine (2 mg/kg before each fight for a total of 10 mg/kg), or selective caffeine administration based on previously measured performance decrement (~ 2 mg/kg before each fight for a total of 6.16 ± 1.58 mg/kg). Interestingly, the two protocols that used repeated caffeine doses before each fight yielded the greatest improvements in performance, particularly in the last stages of the simulated tournament. This means that, in combat sports, the use of repeated small doses of caffeine before each combat may be the recommended strategy for providing caffeine, instead of a single dose before the first combat. With this repeated dosing protocol, it is probable that athletes do not benefit from caffeine intake during the first combats (as happened in the current investigation, which entailed two SJFTs interspersed with a simulated combat), but it may render benefits during the last combats of the competition. Whether the use of chewing gum offers benefits over caffeine pills for this protocol of repeated small doses requires further investigation.
The ergogenic effects of caffeine can also be related to the sports level of athletes [35]. In our study the participants were elite judoists, based on judo belts and the results of the SJFT, and no previous study considering the effects of caffeine in judo has been conducted on such elite athletes. For highly trained individuals there is less 'potential for improvement' after caffeine ingestion because they have reached the upper limits of exercise performance and physical conditioning [36, 37], which may also explain the lack of ergogenic effects following acute caffeine ingestion in this study. The suggestion that caffeine's ergogenic properties vary according to training status in judo athletes may result from comparing our results with those obtained in previous research that found a positive effect of caffeine on judo performance [20, 24]. All of the elite judo athletes who participated in the current investigation reached "good" or "excellent" results in the total throws performed in each SJFT (≥ 27 and ≥ 28 for junior and senior athletes, respectively [28]). In contrast, the number of throws performed in the control/baseline condition in the studies of Astley et al. [20] (23.9 ± 1.7 throws) and Durkalec-Michalski et al. [24] (24.5 ± 2.5 throws) was significantly lower. Moreover, athletes who participated in our study had significantly greater judo training experience than in those two studies [20, 24] (15.6 ± 4.0 vs. 11.0 ± 4.5 years, with [20] involving junior athletes aged 16.1 years). Similarly, in the studies of Lopes-Silva et al. [7] and Felippe et al. [21], where athletes had longer training experience (14.4 ± 8.9 and 15 ± 5 years, respectively), judo performance did not improve after caffeine ingestion. Moreover, only studies performed on younger athletes (16.1 ± 1.4 and 21.7 ± 3.7 years, respectively) [20, 24] showed a positive effect of caffeine, which contrasts with the results of the present study, which used judoists of 23.7 ± 4.4 years, as well as with results from previous studies conducted on more experienced participants (25.3 ± 5.7 and 23 ± 5 years, respectively) [7, 21]. Taking into account that peak performance for judo athletes typically occurs at 25.4 ± 3.8 years of age (based on World Championships and Olympic Games [38]), it may be suggested that experienced athletes within this age frame are close to their individual physical limits and may be less susceptible to further performance enhancement following caffeine ingestion [35].
In addition to its strengths, the present study has several limitations that should be addressed: (1) because commercially available products were used in the study, we provided absolute doses of caffeine rather than doses individualized to body mass. We analyzed the results of this investigation taking into account the exact relative dose provided to each individual, which varied between 2.37 and 3.06 mg/kg for C + P and between 4.74 and 6.01 mg/kg for C + C, and we concluded that these small differences in relative doses did not affect the results of the investigation; (2) we did not evaluate blood caffeine concentration, and thus we are unable to verify the blood caffeine concentration obtained with the use of chewing gum. However, previous investigations using similar caffeinated chewing gum and caffeine dosing induced blood caffeine concentrations similar to those of caffeine capsules [9], and ergogenic effects of caffeine in sports are evident when using caffeinated chewing gum [10,11,12,13,14,15,16,17]; (3) the study analyzed the effects of caffeine intake on judo performance using only two repetitions of the SJFT. Although this is the most common test of judo performance in the literature [28], the use of other judo-specific tests, such as the judogi grip strength test [23] or the number and duration of offensive actions during combat [24], may help to reveal other potential ergogenic effects of caffeine in judo. Additionally, a higher number of SJFTs, to simulate a more fatiguing competitive scenario, may have helped to clarify the effects of acute caffeine intake on judo performance. Thus, more research is needed to determine an effective caffeine supplementation strategy using other performance tests, taking into account various daily levels of caffeine consumption and training status. Additionally, future studies should explore the effectiveness of different doses of caffeine provided from caffeinated gum and capsules.
The results of the current investigation showed that ~ 2.7 mg/kg of caffeine (C + P) and ~ 5.4 mg/kg of caffeine (C + C) ingested via caffeinated chewing gum before two repetitions of the SJFT were ineffective in enhancing the number of throws performed in this judo-specific test. In addition, these protocols of caffeine administration were also ineffective in inducing changes in the SJFT index, the rating of perceived exertion, heart rate and blood lactate concentration in elite judoists. From a practical perspective, the outcomes of this study and their comparison to previous literature suggest that, in elite judoists habituated to caffeine, doses lower than 6 mg/kg may be ineffective in improving judo-specific performance. One option to avoid the use of high doses of caffeine in judoists is to produce dishabituation to caffeine by reducing the amount of daily caffeine intake. Although the time course of re-sensitization to caffeine's ergogenic effect after ceasing caffeine use is potentially affected by the duration and extent of prior caffeine exposure [39], habitual users should cease caffeine ingestion at least 4 days prior to competition for dishabituation to occur [40, 41].
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
McGuire S. Institute of Medicine. 2014. Caffeine in Food and Dietary Supplements: Examining Safety—Workshop Summary. Adv Nutr. 2014;5(5):585–586.
Aguilar-Navarro M, Muñoz G, Salinero J, Muñoz-Guerra J, Fernández-Álvarez M, Plata M, et al. Urine Caffeine Concentration in Doping Control Samples from 2004 to 2015. Nutrients. 2019;11:286.
Southward K, Rutherfurd-Markwick K, Badenhorst C, Ali A. The Role of Genetics in Moderating the Inter-Individual Differences in the Ergogenicity of Caffeine. Nutrients. 2018;10(10):1352.
Grgic J. Caffeine ingestion enhances Wingate performance: a meta-analysis. Eur J Sport Sci. 2018;18:219–25.
Grgic J, Trexler ET, Lazinica B, Pedisic Z. Effects of caffeine intake on muscle strength and power: a systematic review and meta-analysis. J Int Soc Sports Nutr. 2018;15(1):11.
Salinero JJ, Lara B, Del Coso J. Effects of acute ingestion of caffeine on team sports performance: a systematic review and meta-analysis. Res Sports Med Print. 2019;27:238–56.
Lopes-Silva JP, Felippe LJC, Silva-Cavalcante MD, Bertuzzi R, Lima-Silva AE. Caffeine Ingestion after Rapid Weight Loss in Judo Athletes Reduces Perceived Effort and Increases Plasma Lactate Concentration without Improving Performance. Nutrients. 2014;6(7):2931–45.
Wickham KA, Spriet LL. Administration of Caffeine in Alternate Forms. Sports Med. 2018;48(S1):79–91.
Kamimori GH, Karyekar CS, Otterstetter R, Cox DS, Balkin TJ, Belenky GL, et al. The rate of absorption and relative bioavailability of caffeine administered in chewing gum versus capsules to normal healthy volunteers. Int J Pharm. 2002;234:159–67.
Daneshfar A, Petersen CJ, Koozehchian MS, Gahreman DE. Caffeinated Chewing Gum Improves Bicycle Motocross Time-Trial Performance. Int J Sport Nutr Exerc Metab. 2020;30(6):427–34.
Paton C, Costa V, Guglielmo L. Effects of caffeine chewing gum on race performance and physiology in male and female cyclists. J Sports Sci. 2015;33(10):1076–83.
Paton CD, Lowe T, Irvine A. Caffeinated chewing gum increases repeated sprint performance and augments increases in testosterone in competitive cyclists. Eur J Appl Physiol. 2010;110:1243–50.
Ranchordas MK, King G, Russell M, Lynn A, Russell M. Effects of Caffeinated Gum on a Battery of Soccer-Specific Tests in Trained University-Standard Male Soccer Players. Int J Sport Nutr Exerc Metab. 2018;28(6):629–34.
Dittrich N, Serpa MC, Lemos EC, De Lucas RD, Guglielmo LGA. Effects of Caffeine Chewing Gum on Exercise Tolerance and Neuromuscular Responses in Well-Trained Runners. J Strength Cond Res. 2021;35(6):1671–6.
Whalley PJ, Dearing CG, Paton CD. The Effects of Different Forms of Caffeine Supplement on 5-km Running Performance. Int J Sports Physiol Perform. 2020;15(3):390–4.
Venier S, Grgic J, Mikulic P. Acute Enhancement of Jump Performance, Muscle Strength, and Power in Resistance-Trained Men After Consumption of Caffeinated Chewing Gum. Int J Sports Physiol Perform. 2019;14(10):1415–21.
Russell M, Reynolds NA, Crewther BT, Cook CJ, Kilduff LP. Physiological and Performance Effects of Caffeine Gum Consumed During a Simulated Half-Time by Professional Academy Rugby Union Players. J Strength Cond Res. 2020;34(1):145–51.
Ryan EJ, Kim C-H, Muller MD, et al. Low-Dose Caffeine Administered in Chewing Gum Does Not Enhance Cycling to Exhaustion. J Strength Cond Res. 2012;26(3):844–50.
Astley C, Souza D, Polito M. Acute Caffeine Ingestion on Performance in Young Judo Athletes. Pediatr Exerc Sci. 2017;29:336–40.
Felippe LC, Lopes-Silva JP, Bertuzzi R, McGinley C, Lima-Silva AE. Separate and Combined Effects of Caffeine and Sodium-Bicarbonate Intake on Judo Performance. Int J Sports Physiol Perform. 2016;11:221–6.
Pereira LA, Cyrino ES, Avelar A, et al. A ingestão de cafeína não melhora o desempenho de atletas de judô. Mot Rev Educ Física UNESP. 2010;16(3):714–22.
Saldanha da Silva Athayde M, Kons RL, Detanico D. An Exploratory Double-Blind Study of Caffeine Effects on Performance and Perceived Exertion in Judo. Percept Mot Skills. 2019;126(3):515–29.
Durkalec-Michalski K, Nowaczyk PM, Główka N, Grygiel A. Dose-dependent effect of caffeine supplementation on judo-specific performance and training activity: a randomized placebo-controlled crossover trial. J Int Soc Sports Nutr. 2019;16(1):38.
Grgic J, Sabol F, Venier S, Tallis J, Schoenfeld BJ, Coso JD, et al. Caffeine Supplementation for Powerlifting Competitions: An Evidence-Based Approach. J Hum Kinet. 2019;68:37–48.
Negaresh R, Del Coso J, Mokhtarzade M, Lima-Silva AE, Baker JS, Willems MET, et al. Effects of different dosages of caffeine administration on wrestling performance during a simulated tournament. Eur J Sport Sci. 2019;19:499–507.
Faul F, Erdfelder E, Lang A-G, Buchner A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39(2):175–91.
Sterkowicz-Przybycień K, Fukuda DH, Franchini E. Meta-Analysis to Determine Normative Values for the Special Judo Fitness Test in Male Athletes: 20+ Years of Sport-Specific Data and the Lasting Legacy of Stanisław Sterkowicz. Sports. 2019;7(8):194.
Teixeira V, Voci SM, Mendes-Netto RS, da Silva DG. The relative validity of a food record using the smartphone application MyFitnessPal: Relative validity of a smartphone dietary record. Nutr Diet. 2018;75(2):219–25.
Bühler E, Lachenmeier DW, Winkler G. Development of a tool to assess caffeine intake among teenagers and young adults. Ernahrungs Umsch. 2014;61(4):58–63.
Filip A, Wilk M, Krzysztofik M, Del Coso J. Inconsistency in the Ergogenic Effect of Caffeine in Athletes Who Regularly Consume Caffeine: Is It Due to the Disparity in the Criteria That Defines Habitual Caffeine Intake? Nutrients. 2020;12(4):1087.
Borg G, Hassmén P, Lagerström M. Perceived exertion related to heart rate and blood lactate during arm and leg exercise. Eur J Appl Physiol. 1987;56:679–85.
Cohen J. Statistical Power Analysis for the Behavioral Sciences. New York (NY): Academic Press; 2013.
López-González LM, Sánchez-Oliver AJ, Mata F, Jodra P, Antonio J, Domínguez R. Acute caffeine supplementation in combat sports: a systematic review. J Int Soc Sports Nutr. 2018;15:60.
Pickering C, Grgic J. Caffeine and Exercise: What Next? Sports Med Auckl NZ. 2019;49:1007–30.
Berthelot G, Sedeaud A, Marck A, et al. Has Athletic Performance Reached its Peak? Sports Med Auckl Nz. 2015;45(9):1263–71.
Haugen T, Paulsen G, Seiler S, Sandbakk Ø. New Records in Human Power. Int J Sports Physiol Perform. 2018;13:678–86.
Franchini E, Fukuda DH, Lopes-Silva JP. Tracking 25 years of judo results from the World Championships and Olympic Games: Age and competitive achievement. J Sports Sci. 2020;38:1531–8.
Pickering C, Kiely J. What Should We Do About Habitual Caffeine Use in Athletes? Sports Med. 2019;49:833–42.
Irwin C, Desbrow B, Ellis A, O'Keeffe B, Grant G, Leveritt M. Caffeine withdrawal and high-intensity endurance cycling performance. J Sports Sci. 2011;29:509–15.
Van Soeren MH, Graham TE. Effect of caffeine on metabolism, exercise endurance, and catecholamine responses after withdrawal. J Appl Physiol. 1998;85:1493–501.
This study would not have been possible without our participants' commitment, time and effort.
The study was supported and funded by the statutory research of the Jerzy Kukuczka Academy of Physical Education in Katowice, Poland.
Institute of Sport Sciences, The Jerzy Kukuczka Academy of Physical Education in Katowice, Katowice, Poland
Aleksandra Filip-Stachnik, Robert Krawczyk, Michal Krzysztofik, Adam Zajac & Michal Wilk
College of Medical Sciences, Institute of Physical Culture Studies, University of Rzeszów, Rzeszów, Poland
Agata Rzeszutko-Belzowska
Faculty of Physical Education, Gdańsk University of Physical Education and Sport, Gdańsk, Poland
Marcin Dornowski
Centre for Sport Studies, Rey Juan Carlos University, Fuenlabrada, Spain
Juan Del Coso
Aleksandra Filip-Stachnik
Michal Krzysztofik
Adam Zajac
Michal Wilk
Conceptualization: A.F-S., R.K. Methodology: A.F-S., M.K., M.W.; Formal analysis and investigation: M.K., R.K., A.F-S. A.R-B. M.D. Writing - original draft preparation: A.F-S.; R.K., Writing - review and editing: A.F-S., M.W., M.K., J.D.C. A.R-B. M.D.; Supervision: A.Z., M.W., J.D.C. All authors read and approved the final manuscript.
Correspondence to Aleksandra Filip-Stachnik.
The study protocol was approved by the Bioethics Committee for Scientific Research at the Academy of Physical Education in Katowice, Poland (3/2019), according to the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. All participants gave their informed consent prior to their inclusion in the study.
Filip-Stachnik, A., Krawczyk, R., Krzysztofik, M. et al. Effects of acute ingestion of caffeinated chewing gum on performance in elite judo athletes. J Int Soc Sports Nutr 18, 49 (2021). https://doi.org/10.1186/s12970-021-00448-y
Ergogenic aid
Elite athlete
Exercise performance | CommonCrawl |
INTERSECTIONS OF MULTICURVES FROM DYNNIKOV COORDINATES
Low-dimensional topology
Special aspects of infinite or finite groups
Topological manifolds
S. ÖYKÜ YURTTAŞ, TOBY HALL
Journal: Bulletin of the Australian Mathematical Society / Volume 98 / Issue 1 / August 2018
Published online by Cambridge University Press: 03 May 2018, pp. 149-158
Print publication: August 2018
We present an algorithm for calculating the geometric intersection number of two multicurves on the $n$-punctured disk, taking as input their Dynnikov coordinates. The algorithm has complexity $O(m^{2}n^{4})$, where $m$ is the sum of the absolute values of the Dynnikov coordinates of the two multicurves. The main ingredient is an algorithm due to Cumplido for relaxing a multicurve.
Symbol ratio minimax sequences in the lexicographic order
PHILIP BOYLAND, ANDRÉ DE CARVALHO, TOBY HALL
Journal: Ergodic Theory and Dynamical Systems / Volume 35 / Issue 8 / December 2015
Published online by Cambridge University Press: 05 August 2014, pp. 2371-2396
Consider the space of sequences of $k$ letters ordered lexicographically. We study the set ${\mathcal{M}}(\boldsymbol{\alpha})$ of all maximal sequences for which the asymptotic proportions $\boldsymbol{\alpha}$ of the letters are prescribed, where a sequence is said to be maximal if it is at least as great as all of its tails. The infimum of ${\mathcal{M}}(\boldsymbol{\alpha})$ is called the $\boldsymbol{\alpha}$-infimax sequence, or the $\boldsymbol{\alpha}$-minimax sequence if the infimum is a minimum. We give an algorithm which yields all infimax sequences, and show that the infimax is not a minimax if and only if it is the $\boldsymbol{\alpha}$-infimax for every $\boldsymbol{\alpha}$ in a simplex of dimension 1 or greater. These results have applications to the theory of rotation sets of beta-shifts and torus homeomorphisms.
Contributor affiliations
By Frank Andrasik, Melissa R. Andrews, Ana Inés Ansaldo, Evangelos G. Antzoulatos, Lianhua Bai, Ellen Barrett, Linamara Battistella, Nicolas Bayle, Michael S. Beattie, Peter J. Beek, Serafin Beer, Heinrich Binder, Claire Bindschaedler, Sarah Blanton, Tasia Bobish, Michael L. Boninger, Joseph F. Bonner, Chadwick B. Boulay, Vanessa S. Boyce, Anna-Katharine Brem, Jacqueline C. Bresnahan, Floor E. Buma, Mary Bartlett Bunge, John H. Byrne, Jeffrey R. Capadona, Stefano F. Cappa, Diana D. Cardenas, Leeanne M. Carey, S. Thomas Carmichael, Glauco A. P. Caurin, Pablo Celnik, Kimberly M. Christian, Stephanie Clarke, Leonardo G. Cohen, Adriana B. Conforto, Rory A. Cooper, Rosemarie Cooper, Steven C. Cramer, Armin Curt, Mark D'Esposito, Matthew B. Dalva, Gavriel David, Brandon Delia, Wenbin Deng, Volker Dietz, Bruce H. Dobkin, Marco Domeniconi, Edith Durand, Tracey Vause Earland, Georg Ebersbach, Jonathan J. Evans, James W. Fawcett, Uri Feintuch, Toby A. Ferguson, Marie T. Filbin, Diasinou Fioravante, Itzhak Fischer, Agnes Floel, Herta Flor, Karim Fouad, Richard S. J. Frackowiak, Peter H. Gorman, Thomas W. Gould, Jean-Michel Gracies, Amparo Gutierrez, Kurt Haas, C.D. Hall, Hans-Peter Hartung, Zhigang He, Jordan Hecker, Susan J. Herdman, Seth Herman, Leigh R. Hochberg, Ahmet Höke, Fay B. Horak, Jared C. Horvath, Richard L. Huganir, Friedhelm C. Hummel, Beata Jarosiewicz, Frances E. Jensen, Michael Jöbges, Larry M. Jordan, Jon H. Kaas, Andres M. Kanner, Noomi Katz, Matthew S. Kayser, Annmarie Kelleher, Gerd Kempermann, Timothy E. Kennedy, Jürg Kesselring, Fary Khan, Rachel Kizony, Jeffery D. Kocsis, Boudewijn J. Kollen, Hubertus Köller, John W. Krakauer, Hermano I. Krebs, Gert Kwakkel, Bradley Lang, Catherine E. Lang, Helmar C. Lehmann, Angelo C. Lepore, Glenn S. Le Prell, Mindy F. Levin, Joel M. Levine, David A. Low, Marilyn MacKay-Lyons, Jeffrey D. Macklis, Margaret Mak, Francine Malouin, William C. Mann, Paul D. Marasco, Christopher J. Mathias, Laura McClure, Jan Mehrholz, Lorne M. Mendell, Robert H. Miller, Carol Milligan, Beth Mineo, Simon W. Moore, Jennifer Morgan, Charbel E-H. Moussa, Martin Munz, Randolph J. Nudo, Joseph J. Pancrazio, Theresa Pape, Alvaro Pascual-Leone, Kristin M. Pearson-Fuhrhop, P. Hunter Peckham, Tamara L. Pelleshi, Catherine Verrier Piersol, Thomas Platz, Marcus Pohl, Dejan B. Popović, Andrew M. Poulos, Maulik Purohit, Hui-Xin Qi, Debbie Rand, Mahendra S. Rao, Josef P. Rauschecker, Aimee Reiss, Carol L. Richards, Keith M. Robinson, Melvyn Roerdink, John C. Rosenbek, Serge Rossignol, Edward S. Ruthazer, Arash Sahraie, Krishnankutty Sathian, Marc H. Schieber, Brian J. Schmidt, Michael E. Selzer, Mijail D. Serruya, Himanshu Sharma, Michael Shifman, Jerry Silver, Thomas Sinkjær, George M. Smith, Young-Jin Son, Tim Spencer, John D. Steeves, Oswald Steward, Sheela Stuart, Austin J. Sumner, Chin Lik Tan, Robert W. Teasell, Gareth Thomas, Aiko K. Thompson, Richard F. Thompson, Wesley J. Thompson, Erika Timar, Ceri T. Trevethan, Christopher Trimby, Gary R. Turner, Mark H. Tuszynski, Erna A. van Niekerk, Ricardo Viana, Difei Wang, Anthony B. Ward, Nick S. Ward, Stephen G. Waxman, Patrice L. Weiss, Jörg Wissel, Steven L. Wolf, Jonathan R. Wolpaw, Sharon Wood-Dauphinee, Ross D. Zafonte, Binhai Zheng, Richard D. Zorowitz
Edited by Michael Selzer, Stephanie Clarke, Leonardo Cohen, Gert Kwakkel, Robert Miller, Case Western Reserve University, Ohio
Book: Textbook of Neural Repair and Rehabilitation
Published online: 05 May 2014
Print publication: 24 April 2014, pp ix-xvi
Symbolic dynamics and topological models in dimensions 1 and 2
By André de Carvalho, Toby Hall
Edited by Sergey Bezuglyi, Institute of Low-Temperature Physics and Engineering, Kharkov, Ukraine, Sergiy Kolyada, National Academy of Sciences of Ukraine
Book: Topics in Dynamics and Ergodic Theory
Print publication: 08 December 2003, pp 40-59
Symbolic dynamics has a long and distinguished history, going back to Hadamard's work on the geodesic flow on negatively curved surfaces. Because of its success in describing the dynamics of systems with Markov partitions, it is most closely associated with the study of such systems. However, in the 1970s, beginning with the work of Metropolis, Stein and Stein and culminating with the kneading theory of Milnor and Thurston, symbolic dynamics was used in a quite different way in the study of 1-dimensional discrete dynamics: the partition was fixed (given by the critical points) and the maps were allowed to vary in families, as long as the critical points remained the same. This is fundamentally different from the use of symbolic dynamics in the presence of Markov partitions. Not only does it not require the maps being studied to have such partitions, but also, by fixing the partition, it permits the description of all maps in a family in terms of the same symbols, thus allowing the comparison of different maps.
One of the conclusions of kneading theory is that every family of unimodal maps (i.e. continuous piecewise monotone self-maps of the unit interval with exactly two monotone pieces) presents essentially the same dynamical behaviour as it passes from trivial to chaotic dynamics.
ISOTOPY STABLE DYNAMICS RELATIVE TO COMPACT INVARIANT SETS
PHILIP BOYLAND, TOBY HALL
Journal: Proceedings of the London Mathematical Society / Volume 79 / Issue 3 / November 1999
Print publication: November 1999
Let $f$ be an orientation-preserving homeomorphism of a compact orientable manifold. Sufficient conditions are given for the persistence of a collection of periodic points under isotopy of $f$ relative to a compact invariant set $A$. Two main applications are described. In the first, $A$ is the closure of a single discrete orbit of $f$, and $f$ has a Smale horseshoe, all of whose periodic orbits persist; in the second, $A$ is a minimal invariant Cantor set obtained as the limit of a sequence of nested periodic orbits, all of which are shown to persist under isotopy relative to $A$.
1991 Mathematics Subject Classification: 58F20, 58F15.
Period-multiplying cascades for diffeomorphisms of the disc
Jean-Marc Gambaudo, John Guaschi, Toby Hall
Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 116 / Issue 2 / September 1994
Published online by Cambridge University Press: 24 October 2008, pp. 359-374
Print publication: September 1994
It is a well-known result in one-dimensional dynamics that if a continuous map of the interval has positive topological entropy, then it has a periodic orbit of period $2^i$ for each integer $i \ge 0$ [15] (see also [12]). In fact, one can say rather more: such a map has a sequence of periodic orbits $(P_i)_{i \ge 0}$ with $\mathrm{per}(P_i) = 2^i$ which form a period-doubling cascade (that is, whose points are ordered and permuted in the way which would occur had the orbits been created in a sequence of period-doubling bifurcations starting from a single fixed point). This result reflects the central role played by period-doubling in transitions to positive entropy in a one-dimensional setting. In this paper we prove an analogous result for positive-entropy orientation-preserving diffeomorphisms of the disc. Using the notion [9] of a two-dimensional cascade, we shall show that such diffeomorphisms always have infinitely many 'zero-entropy' cascades of periodic orbits (including a period-doubling cascade, though this need not begin from a fixed point).
Unremovable periodic orbits of homeomorphisms
Toby Hall
Journal: Mathematical Proceedings of the Cambridge Philosophical Society / Volume 110 / Issue 3 / November 1991
In [1], Asimov and Franks give conditions under which a collection of periodic orbits of a diffeomorphism f:M→M of a compact manifold persists under arbitrary isotopy of f. Together with the Nielsen–Thurston theory, their result has been of pivotal importance in recent work on the periodic orbit structure of surface automorphisms (for example [3, 4, 7, 8, 9, 12, 13]). However, their proof uses bifurcation theory and as such depends crucially upon the differentiability of f. The periodic orbit results which make use of the Asimov–Franks theorem are therefore applicable only in the differentiable case, a limitation which belies their topological character. In this paper we shall use classical Nielsen-theoretic methods to prove the analogue of the Asimov–Franks result for homeomorphisms. | CommonCrawl |
A perceptual map for gait symmetry quantification and pathology detection
Antoine Moevus1,
Max Mignotte1,
Jacques A. de Guise2 &
Jean Meunier1
The gait movement is an essential process of human activity and the result of collaborative interactions between the neurological, articular and musculoskeletal systems, working efficiently together. This explains why gait analysis is important and increasingly used nowadays for the diagnosis of many different types (neurological, muscular, orthopedic, etc.) of diseases. This paper introduces a novel method to quickly visualize the different parts of the body related to an asymmetric movement in the human gait of a patient, for daily clinical usage. The proposed gait analysis algorithm relies on the fact that a healthy walk has (temporally shift-invariant) symmetry properties in the coronal plane. The goal is to provide an inexpensive and easy-to-use method, exploiting an affordable consumer depth sensor, the Kinect, to measure gait asymmetry and display the results in a perceptual way.
We propose a multi-dimensional scaling mapping using a temporally shift-invariant distance, allowing us to efficiently visualize (in terms of perceptual color difference) the asymmetric body parts of the gait cycle of a subject. We also propose an index, computed from this map, which quantifies the degree of asymmetry locally and globally.
The proposed index is shown to be statistically significant, and this new, inexpensive, marker-less, non-invasive, easy-to-set-up gait analysis system offers a readable and flexible tool for clinicians to analyze gait characteristics and provide a fast diagnosis.
This system, which estimates a perceptual color map providing a quick overview of the asymmetries existing in the gait cycle of a subject, can easily be exploited for monitoring disease progression or recovery from post-operative surgery (e.g., to check the healing process or the effect of a treatment or a prosthesis), and might be used for other pathologies where gait asymmetry might be a symptom.
Scientists and medical communities have been interested in the analysis of gait movement for a long time, in particular because, as mentioned in [1–3], symmetrical gait is expected in healthy people, whereas asymmetrical gait is a common feature of subjects with musculoskeletal disorders.
Abnormal or atypical gait can be caused by different factors, either orthopedic (hip injuries [4], bone malformations, etc.), muscular, or neurological (Parkinson's disease, stroke [5], etc.). Consequently, different parts of the body can be involved or affected, which make gait analysis a complex procedure but also a reliable and accurate indicator for early detection (and follow-up) of a wide range of pathologies. It thus makes a 3D gait analysis (3DGA) procedure a powerful early clinical diagnostic tool [6] that is reliable and non-invasive, and which has been used successfully until now for screening test, detection and tracking of disease progression, joint deficiencies, pre-surgery planning, as well as recovery assessment from post-operative surgery or accident (rehabilitation). It is important to note that a gait analysis-based diagnostic tool also allows the reduction of the costs and amount of surgery per patient [7]. Also, a more appropriate medical prescription can be made by performing gait analysis before treating a patient, leading to a better recovery for the patients [6].
But nowadays, with the aging population, clinical diagnostics have to be cheaper, faster and more convenient for clinical [8–10] (or home [11]) usage while remaining accurate. However, analyzing a gait video sequence is often difficult, requires time, and subtle anomalies can be omitted by the human eye. Also, videos are not easy to annotate, store and share.
In this work, we focus on the design of a reliable and accurate imaging system that is also inexpensive and easy to set up for daily clinical usage. This diagnostic tool relies on the fact that the gait of healthy people is generally symmetrical in the coronal (frontal) plane (with half a period phase shift) and that asymmetrical gait may be a good indicator of pathologies and their progression [1–3, 5]. More precisely, the goal of our proposed GA-based diagnostic tool is to compute a perceptual color map of asymmetries from a video, acquired by a depth sensor (Kinect), of a subject walking on a treadmill. The recording plane is the coronal plane, in order to exploit the temporally shift-invariant properties of the movement. A perceptual color map of asymmetries is the compression of a subject's video into a single color image, in such a manner that asymmetries of the body movements in the human gait cycle are clearly visible and immediately quantifiable.
This paper is organized as follows. First, "Previous work" section reviews existing procedures in the 3DGA literature. In "Proposed model" section, we introduce details about the dataset that will be used in our gait analysis system and we describe our asymmetry map estimation model, based on the multidimensional scaling (MDS) mapping procedure and a local search refinement strategy. Finally, we show experimental results in "Results" section, give a discussion in "Discussion" section and conclude in "Conclusion" section.
Current 3DGA techniques can be divided in two categories, namely, with or without markers.
Among the state-of-the-art marker-based approaches, the Vicon motion-tracking and capture system [12] offers millimeter resolution of 3D spatial displacements. Due to its accuracy, it is often used as ground truth for validation in medical applications. On the other hand, the high cost of this system inhibits its widespread usage in routine clinical practice. Basically, an optical motion capture system consists in tracking infrared (IR) reflective markers with multiple IR cameras [13]. Optical motion capture is efficient, but requires a lot of space, time, and expertise to be installed and used. For instance, placing the markers on the subject is prone to localization errors and requires someone who understands both the subject's anatomy and the acquisition system. Also, the subject might have to wear a special suit and change outfits, which is constraining both for the subject and for the physician.
Therefore, marker-less systems are a promising alternative for clinical environments and are often regarded as easy to set up, easy to use, and non-invasive. They are based either on stereo-vision [14], structured light [15], or time-of-flight (TOF) [16] technologies. As a stereo-vision application, [17] used two camcorders to extract 3D information from the subjects and to measure gait parameters. Although low-cost, the setup and calibration procedure of the system remains complex and only the lower parts of the body are measured. Also, stereo vision-based systems will not function properly if the subject's outfit lacks texture. The Kinect sensor, however, is based on structured light technology, which makes it robust to textureless surfaces. The Kinect is also compact and affordable. It has two output modes: depth map and skeleton. The former consists of an image sequence where the value of each pixel is proportional to the inverse of the depth, whereas the latter is a set of 3D points and edges that represents 20 joints of the human body.
Recent research has been conducted to test whether the Kinect is suitable for clinical usage. Clark et al. [18] used the skeleton mode to measure spatial–temporal gait variability (such as stride duration, speed, etc.) and compared it with data acquired by the high-end Vicon MX system. They found encouraging results for the estimation of step and stride lengths and the average gait velocity. Nevertheless, due to the inability of the skeleton tracking algorithm to accurately localize important anatomical landmarks on the foot, some spatio-temporal gait parameters, such as step and stride times, remained poorly estimated. In addition, the Kinect camera was placed facing the subject, without a treadmill. Therefore the system was based on the analysis of only one gait cycle, because the intrinsic working range of the Kinect depth sensor is between 800 and 4000 mm. This somewhat compromised the accuracy and the reliability of their system.
Gabel et al. [11] also used the skeleton mode to perform a 3DGA. They asked people to wear wireless sensors (gyroscopes and pressure sensors) at movement points and to walk back and forth along a straight path for approximately 7 min. They found that the Kinect was capable of providing accurate and robust results, but only a few gait parameters were tested and further research is under investigation. Finally, it is worth mentioning that none of the methods, using the Kinect skeleton mode, provide a visual feedback of the gait of the subject.
In [19], the authors compared the Kinect with depth map output mode versus a Vicon system. They placed two Kinects in a different alignment with the subject (facing and on the side) and measured key gait parameters, such as stride duration and length, and speed. They found excellent results with an average difference of less than 5 % for both Kinect camera setups. They also found that using the depth map data allows to reduce drastically the computation time for background removal.
In [10], the authors proposed to use a treadmill and a Kinect depth sensor to quantify the gait asymmetry with a low-cost gait analysis system. More precisely, the authors computed an index for quantifying possible asymmetries between the two legs by first dividing each gait cycle in two sub-cycles (left and right steps), and by comparing these two sub-cycles, in terms of an asymmetry index (proportional to the difference of depth, over a gait cycle, between the two legs) after a rough spatial and temporal registration procedure. Although the system is able to distinguish whether the subject has a symmetric walk or not, no visualization or information on the location of the asymmetries is provided, unlike our method.
In [9], the Kinect camera was placed at the back of a treadmill and used to record a video sequence of the subject's walk. The authors then simply computed the mean of the obtained depth image sequence (over a gait cycle or a longer period) in order to compress the gait image sequence into one image, which was finally called a depth energy image (DEI). Their results were conclusive since they were able to distinguish, both visually and quantitatively, asymmetries (a symmetric walk generating a DEI exhibiting a symmetric silhouette, in terms of mean depth, and conversely). Nevertheless, this latter strategy is inherently inaccurate since taking the average (mean) depth over a gait cycle does not allow the detection of all asymmetric body movements; indeed, the movements of some parts of the body can clearly be different and asymmetric while keeping the same mean depth.
In our work, the depth image sequence of the gait, containing a certain number of gait cycles (wherein each pixel of the video corresponds to a depth signal as a function of time, as shown in Fig. 1) is reduced to three dimensions with a multi-dimensional scaling (MDS) mapping [20] using a temporally shift invariant Euclidean distance. This allows us to quickly display the gait image cube into an informative color image (with red, green and blue channels) allowing to visualize the asymmetric body parts of the gait cycle of a subject with a color difference, in a perceptual color space, which is linearly related to the asymmetry magnitude.
Example of two depth signals for a gait cycle of a subject
Proposed model
The dataset consists of multiple sequences of people walking on a treadmill, facing an inexpensive commercial depth sensor (Kinect). The Kinect sensor outputs 30 depth maps per second (30 fps), with a resolution of 640 per 480 pixels. The dataset contains 51 sequences acquired from 17 (healthy) subjects (17 males, 26.7 ± 3.8 years old, 179.1 ± 11.5 cm height and 75.5 ± 13.6 kg, with no reported clinical asymmetry or gait impairment) walking with or without simulated leg length discrepancy (LLD). Every subject had to walk normally (group A), then with a 5 cm sole under the left foot (group B), then with the sole under the right foot (group C).
Sequences are approximately 5 min long and contain around 180 gait cycles. For all sequences, the same relative position between the treadmill and the sensor is kept in order for the subject to be within the same image area. The institutional ethical review board approved the study.
The method can be divided into four steps: a pre-processing for the silhouette extraction ("Silhouette extraction" section), a MDS-based mapping ("Multidimensional scaling-based mapping" section), a local search refinement strategy ("Refined estimation" section) and a color space conversion step ("Color space conversion" section).
Silhouette extraction
Since the scene takes place in a non-cluttered room where the treadmill is in the same position relative to the camera, a 3D bounding box around the subject can be set. Hence, by retrieving 3D information, we can convert this information back into the 2D image space to segment the subject's silhouette directly from a depth map. To do so, the depth sensor is considered as a pinhole camera model with intrinsic parameters K (see [[21], p. 30]) defined as:
$$K = \begin{bmatrix} f & 0 & c_{u} \\ 0 & f & c_{v} \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 575.82 & 0 & 240 \\ 0 & 575.82 & 240 \\ 0 & 0 & 1 \end{bmatrix}$$
where f is the focal length in pixels and (c_u, c_v) is the image center in pixels (values given by the manufacturer). From a depth map, a pixel at position (u, v)^T with depth value d is projected into 3D space, (X, Y, Z)^T, using:
$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = dK^{-1} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = d \begin{pmatrix} \frac{1}{f}(u - c_{u}) \\ \frac{1}{f}(v - c_{v}) \\ 1 \end{pmatrix}$$
First, the positions of the points around the subject, approximated by a 3D bounding box, are estimated. Second, the minimal and maximal depths, Z_min and Z_max, of the eight points of the bounding box are retrieved. Third, the eight points are projected back into the 2D image space, where the minimal and maximal vertical and horizontal 2D position values (u_min, u_max, v_min, and v_max) are finally estimated.
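A minimal NumPy sketch of the back-projection in Eq. (2) is given below, using the intrinsic values quoted above; the function name and vectorized interface are our own, not from the paper.

```python
import numpy as np

F, C_U, C_V = 575.82, 240.0, 240.0  # Kinect intrinsics quoted in the text

def back_project(u, v, d):
    """Back-project pixel coordinates (u, v) with depth d into camera
    coordinates (X, Y, Z) following Eq. (2); accepts scalars or arrays."""
    u, v, d = np.asarray(u, float), np.asarray(v, float), np.asarray(d, float)
    return np.stack([d * (u - C_U) / F, d * (v - C_V) / F, d])

# Example: the image center at 2 m depth maps to (0, 0, 2000) in mm.
print(back_project(240, 240, 2000.0))
```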
Working in 3D space is necessary because of the spatial coherence of objects in the scene. For instance, in 3D space the treadmill is always beneath the subject, whereas in an image it overlaps the subject, as shown in Fig. 2. Once this step is done, it is no longer necessary to project the depth maps into 3D space as long as the camera and the treadmill stay in the same relative position. In our case, some small adjustments of the enclosing parameters u_min, u_max, v_min, v_max, Z_min and Z_max (bounding box) were needed to encompass all sequences.
Setup and pre-processing steps. a Original depth map. b After clipping. c After treadmill removal
Now, with the required information, the subject can be segmented in each frame of the original gait depth sequence (of N frames).
Background removal
Background removal is trivial since the subject is in the middle of the image in a non-cluttered room. Therefore, every pixel outside the bounding box is clipped to a default value (see subsection "Clipping step" below).
Treadmill removal
After background removal, the only objects remaining in the image are the treadmill and the subject. Because the treadmill is below the subject, it can be removed by discarding pixels whose coordinate exceeds a threshold T_y (the Y axis goes from top to bottom). An equation in 2D image space can be derived from Eq. (2) in order to work directly on the image:
$$Y \, < \, T_{y}$$
$$\frac{d}{f}(v - c_v) < T_{y}$$
$$v \, < \frac{{fT_{y} }}{d} + c_v,\quad{\text{since}}\quad d > 0\;{\text{and }}f > 0$$
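The inequality above can be evaluated directly on the depth image. The sketch below builds the corresponding boolean mask; the zero-depth convention for invalid pixels and the argument names are our assumptions.

```python
import numpy as np

def treadmill_mask(depth, t_y, f=575.82, c_v=240.0):
    """Boolean mask of pixels satisfying v < f * T_y / d + c_v (Eq. 5).
    depth: H x W array in mm; zero-depth (invalid) pixels become background."""
    h, w = depth.shape
    v = np.broadcast_to(np.arange(h)[:, None], (h, w)).astype(float)
    limit = f * t_y / np.where(depth > 0, depth, np.inf) + c_v
    return (depth > 0) & (v < limit)
```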
Figure 2 visually shows the different steps of the setup and pre-processing stage.
Clipping step
At this stage, it is important to recall that the core of the method, which is MDS-based, aims to preserve the pairwise distances between depth signals as well as possible in a final 3D (perceptual) color space. In order to also get an exploitable, high-contrast (asymmetry) color image containing a wide dynamic range of color values (well distributed among all possible colors), the clipping value of the non-subject pixels has to be set carefully. Indeed, it is crucial that no large artificial pairwise distances are created, because those distances would squeeze and penalize the other, informative distances. For instance, if the relative value of the background differs greatly from the subject's pixel (depth) values, the asymmetries between the right and left legs might not be easily distinguishable. Therefore the default background value has to be set carefully. In this work, this clipping value is estimated as the mean of all the depth values belonging to the subject in the whole sequence (see Fig. 3).
a An image where non-subject pixels were naively clipped to zero. The large difference of depth values, existing between the background and the subject, "squeezes" the depth values belonging to the subject and thus causes a decrease in depth resolution in this important and informative part of the image. b An image where background pixels were clipped to the mean (depth) value of the subject's pixels for the whole sequence. Spatial and depth resolution and image details are preserved. c The distribution of the pixel values of the image for the naive clipping (semi-log scale). d The distribution of the pixel values of the image for the smart clipping (semi-log scale). It is important to notice that the whole point of using a default clipping value is to make the distribution of pixel values of the images uni-modal and continuous. This ensures a well-contrasted map for the human eye
Filtering step
Finally, the whole sequence is filtered with a 3D (3 × 3 × 3) median filter to remove some aberrations on the contours or on the top of the treadmill.
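A one-line sketch of this filtering step, assuming the sequence is stored as an (N, height, width) NumPy array; SciPy's median filter is used here for convenience, whereas the authors' released implementation is in C++.

```python
from scipy.ndimage import median_filter

def filter_sequence(seq):
    # 3 x 3 x 3 median filter over (time, row, column) to remove
    # aberrations on the contours or on the top of the treadmill.
    return median_filter(seq, size=(3, 3, 3), mode="nearest")
```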
Multidimensional scaling-based mapping
The MDS-based mapping technique [20] aims at visualizing the motion-asymmetric body parts with a perceptual color difference that corresponds (perceptually and linearly) to the asymmetry magnitude. This mapping is achieved by considering each pair of pixels (i.e., each pair of N-dimensional depth signals) in the original gait video sequence and by quantifying their degree of asymmetry with a temporally shift-invariant pairwise Euclidean distance \(\beta_{s_{1},s_{2}}\) between each pair [s 1(t), s 2(t)] of depth signals:
$$\beta_{s_{1},s_{2}} = \min_{\tau} \left\{ \sum_{t = 0}^{N} \left( \mathbf{s}_{1}(t + \tau) - \mathbf{s}_{2}(t) \right)^{2} \right\}$$
where the maximal value of τ corresponds approximately to the number of frames in a gait cycle. In our application, τ max = 66 and, in order to decrease the computational load, τ is increased with a step size equal to 6 (≈0.2 s).
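The following minimal sketch computes this shift-invariant distance for two depth signals. It wraps the shifted signal circularly, which is one possible boundary handling; the original implementation may treat the sequence ends differently. The name shifted_sq_distance is ours.

```python
import numpy as np

def shifted_sq_distance(s1, s2, tau_max=66, tau_step=6):
    """Temporally shift-invariant squared Euclidean distance between
    two depth signals s1, s2 (1-D arrays of length N): the minimum
    over shifts tau makes signals in phase opposition (e.g., the two
    legs) count as symmetric."""
    best = np.inf
    for tau in range(0, tau_max + 1, tau_step):
        d = np.roll(s1, -tau) - s2        # s1(t + tau) - s2(t), circular
        best = min(best, float(np.dot(d, d)))
    return best
```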
In addition, four points are important to consider in this step:
First, the use of the shift-invariant pairwise Euclidean distance is crucial in this MDS-based mapping step. Indeed, two pixels in the gait video cube, i.e., two depth signals (as functions of time) with a perfectly similar movement but in phase opposition (phase difference of half a gait cycle), like the legs and arms, have to be considered symmetric and assigned the same (perceptual) color in the final asymmetry map.
Second, in order to provide a final perceptual color asymmetry visualization map, the MDS mapping is achieved in a perceptual color space, namely the classical CIE 1976 L∗, a∗, b∗ (LAB) color space, which is approximately perceptually uniform. In this color space, a color difference shall (perceptually) appear twice as large for a measured (temporally shift-invariant) asymmetry value that is twice as big.
Third, as already said, MDS is a dimensionality reduction technique that maps objects lying in an original high N-dimensional space to a lower dimensional space (3 in our application), while attempting to preserve the between-signal distances as well as possible. The originally proposed MDS algorithm is not appropriate in our application, and more generally for all large scale applications, because it requires an entire N × N distance matrix to be stored in memory [with O(N 3) complexity]. Instead, we have herein adopted a fast alternative, FastMap [22], with a linear complexity O(pN) (with p = 3, the dimensionality of the target space). In FastMap, the axes of the target space are constructed dimension by dimension. More precisely, it implicitly assumes that the objects are points in a p-dimensional Euclidean space, selects a sequence of p ≤ N orthogonal axes defined by distant pairs of points (called pivots), and computes the projection of the points onto these orthogonal axes.
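The sketch below illustrates the pivot-and-project recursion of FastMap [22]. For readability it materializes a full pairwise distance matrix D, which costs O(N 2) memory; a faithful linear-complexity implementation, like the one used in this work, would instead evaluate the (residual) distances on the fly.

```python
import numpy as np

def fastmap(D, p=3):
    """Embed n objects into p dimensions from their pairwise
    distances D (n x n), preserving distances as well as possible."""
    n = D.shape[0]
    X = np.zeros((n, p))
    D2 = np.asarray(D, dtype=float) ** 2        # squared residual distances
    for axis in range(p):
        a, b = 0, 0
        for _ in range(2):                      # farthest-pair pivot heuristic
            b, a = a, int(np.argmax(D2[a]))
        d2_ab = D2[a, b]
        if d2_ab <= 0:
            break                               # nothing left to explain
        # projection on the line through the pivots (law of cosines)
        x = (D2[a] + d2_ab - D2[b]) / (2.0 * np.sqrt(d2_ab))
        X[:, axis] = x
        # residual distances in the hyperplane orthogonal to this axis
        D2 = np.maximum(D2 - (x[:, None] - x[None, :]) ** 2, 0.0)
    return X
```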
The above-mentioned FastMap-based mapping method, which exploits an algebraic procedure, has the main advantage of being very fast (for large scale applications) but is slightly less accurate than an MDS procedure exploiting a (gradient descent or local search-based) optimization procedure [23]. For this reason, we decided to refine the asymmetry map estimated by FastMap, using it as the initial solution of a local search that explores locally around the current solution. This step is detailed in the following section.
Refined estimation
Linear stretching
The FastMap-based mapping method allows us to preserve the between-depth-signal (shifted Euclidean) distances as well as possible in a final 3D (perceptual LAB) color space, up to a scale factor k, which we now have to estimate in order to refine the solution with a local search algorithm.
To this end, let u be the three-dimensional vector (u = (L, A, B)t) corresponding to the three L, A, B color bands of the final asymmetry image to be estimated, and let β s,t denote the Euclidean distance between two depth vectors associated with a pair of sites at spatial (pixel) locations s, t. The scale factor k is the linear stretching factor which minimizes the following cost function:
$$\hat{k} = \mathop{\text{argmin}}\limits_{k} \underbrace{\sum_{s,t:\, s \ne t} \left\{ \overbrace{k\,\beta_{s,t}}^{\beta_{s,t}^{\text{scaled}}} - \left\| \mathbf{u}_{s} - \mathbf{u}_{t} \right\|_{2} \right\}^{2}}_{E_{o}}$$
where the summation \(\sum_{s,t:\, s \ne t}\) is done over all pairs of sites existing in the final image to be estimated. In our application, in order to speed up this estimation procedure, we take the subset of pixel pairs induced by the graph presented in the following section. In this way, a simple local discrete grid search routine for the parameter k in a suitable range (k ∈ [0, 1] with a fixed step size of 0.005), or a least square estimation, can easily be achieved.
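A minimal sketch of this estimation, assuming beta and color_dist are 1-D arrays gathered over the subset of pixel pairs induced by the graph; the closed-form least-squares estimate is shown alongside the grid search for comparison. Names are illustrative.

```python
import numpy as np

def estimate_scale(beta, color_dist, step=0.005):
    """Estimate the stretching factor k minimizing
    sum over pairs of (k * beta - color_dist)^2, for k in [0, 1]."""
    ks = np.arange(0.0, 1.0 + step, step)
    costs = [np.sum((k * beta - color_dist) ** 2) for k in ks]
    k_grid = float(ks[int(np.argmin(costs))])
    # equivalent closed-form least-squares solution
    k_lsq = float(np.dot(beta, color_dist) / np.dot(beta, beta))
    return k_grid, k_lsq
```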
Local search refinement
At this stage, we are very close to the solution of our optimization problem expressed in Eq. (7). To improve this solution, we use a deterministic local exploration around the current solution and a low radius of exploration (see the detailed Algorithm in "Appendix" and the validation of this local search procedure). For this local refinement, in order to decrease the computational load, we do not consider a complete graph but a graph in which each pixel is connected with its four nearest neighbors and N cnx equally spaced other pixels located within a square neighborhood window of fixed size N s pixels centered around the pixel (see Fig. 4). In addition, since this local search refinement strategy could be sensitive to noise, we add a regularization term allowing us to both incorporate knowledge concerning the types of estimated images a priori defined as acceptable solutions and to regularize the optimization problem. The regularization term used in our model is formulated in the (image) spatial domain and promotes a (regularized) estimated image u with spatial smoothness and edge-preserving properties (see Fig. 5). To this end, we have considered the generalized Gaussian Markov random field (GGMRF) regularization term initially proposed by Bouman and Sauer in tomographic reconstruction [24]:
$$\Omega \left( {\mathbf{u}} \right) = \mathop \sum \limits_{ < s,t > } \gamma_{st} \left| {{\mathbf{u}}_{s} - {\mathbf{u}}_{t} } \right|^{q}$$
where 1 ≤ q ≤ 2 is a parameter controlling the smoothness of the image to be estimated and/or the sharpness of the edges to be formed in the final estimated image. \(\gamma_{st} = (2\sqrt 2 + 4)^{ - 1}\;{\text{or}}\;(4 + 4\sqrt 2 )^{ - 1}\) depends on whether the pair of neighboring sites (relative to the second order neighborhood system), or binary clique <s, t> is horizontal/vertical or right diagonal/left diagonal. This regularization term has the advantage of including a Gaussian MRF prior for q = 2 and a more interesting edge-preserving absolute-value potential function with q = 1 somewhat similar to the L 1 regularizer proposed by Rudin et al. in [25]. In the regularization framework and under this constraint, an asymmetry map u can be seen as a solution to the following penalized cost function to be optimized:
$$\widehat{\mathbf{u}} = \mathop{\text{argmin}}\limits_{\mathbf{u}} \sum_{s,t:\, s \ne t} \left\{ \beta_{s,t}^{\text{scaled}} - \left\| \mathbf{u}_{s} - \mathbf{u}_{t} \right\|_{2} \right\}^{2} + \eta \sum_{<s,t>} \gamma_{st} \left| \mathbf{u}_{s} - \mathbf{u}_{t} \right|^{q}$$
In this model, the set of β s,t, {β s,t}, represents the observed data. The first term is related to the preservation of between-depth-signal distances and can be viewed as a "goodness-of-fit" energy term. The second term corresponds to the regularization encoding some a priori expected smoothness and edge-preserving properties of the asymmetry image to be estimated. Let us also note that this model can easily be viewed as a Bayesian optimization strategy formalizing a trade-off between a likelihood and an image prior expressing, via a prior distribution, that an acceptable estimated image is piecewise smooth. η is the value controlling the contribution of these two terms.
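For illustration, the sketch below evaluates this penalized cost for a candidate map. The GGMRF penalty is applied per color channel here, which is one possible reading of |u s − u t| q for vector-valued u; the authors' convention may differ. All names are ours.

```python
import numpy as np

def penalized_energy(u, pairs, beta_scaled, cliques, gamma, eta=0.025, q=1.0):
    """Cost of Eq. (9): goodness-of-fit plus GGMRF regularization.

    u:           (n_pixels, 3) LAB values of the flattened image.
    pairs:       (P, 2) pixel-index pairs from the neighborhood graph.
    beta_scaled: (P,) scaled shift-invariant depth distances.
    cliques:     (C, 2) second-order neighbor pairs; gamma: (C,) weights.
    """
    s, t = pairs[:, 0], pairs[:, 1]
    fit = np.sum((beta_scaled - np.linalg.norm(u[s] - u[t], axis=1)) ** 2)
    a, b = cliques[:, 0], cliques[:, 1]
    prior = np.sum(gamma * np.sum(np.abs(u[a] - u[b]) ** q, axis=1))
    return fit + eta * prior
```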
Spatial neighborhood used in our model. Each pixel is connected with its four nearest neighbors and N cnx = 11 equally spaced other pixels located within a square neighborhood window of fixed size N s = 13 pixels
Magnified regions extracted from an asymmetry map (at lower leg level for subject #S05 without LLD), obtained without and with (η = 0.025) a regularization term [see Eq. (9)]. The slight but annoying ringing noise effect has been corrected by the regularization term, which allows us to a priori favor an edge-preserved asymmetry map that is piecewise smooth
It is important to mention that, at this stage, we are not assured that the LAB color values of the 3D asymmetry map are not saturated in the RGB space. In order to fix this problem, we use a simple linear stretching of the L, A, B color values such that L ∈ [0, 100] and A, B have a maximal amplitude of 100 with zero mean, in order to ensure that only a very small number of pixels fall outside the RGB color space [23]. Once this linear stretching is achieved, an RGB conversion is done.
Our model takes, on average, approximately 175 ± 10 s on an Intel© Core i7-4930K CPU @ 3.40 GHz (6803 bogomips) with non-optimized code running on Linux. More precisely, the two steps, i.e., (1) estimation of the FastMap-based rough asymmetry map and (2) local search refining, take, respectively, on average, 50 ± 5 and 125 ± 10 s for a 300 × 640 × 480 image sequence.
Let us add that the local search refining procedure can easily be computed in parallel. Indeed, the objective function to be minimized (Eq. 9) can be viewed as a Gibbs energy field related to a non-stationary Markov random field (MRF) model defined on a graph with long-range pairwise interactions (or binary cliques <s, t>). Each binary clique of this MRF model is associated with a non-stationary potential since this energy-based model is spatially variant and depends on the distance between the depth vectors associated with each pair of pixels s, t. Consequently, Algorithm 1 can also be viewed as a simple iterative conditional modes (ICM) procedure [26] for an MRF model with non-stationary and long-range pairwise interactions. Thus, a Jacobi-type version of this Gauss–Seidel based ICM procedure (proposed in Algorithm 1) can also be efficiently implemented by using the parallel abilities of a graphics processing unit (GPU) (embedded on most graphics hardware nowadays available on the market) and can be greatly accelerated (up to a factor of 200) as proposed in [27].
Source code (in C++ language under Linux) of our algorithm with the set of image sequences are publicly available at the following http address: http://www.iro.umontreal.ca/∼mignotte/ResearchMaterial/pamga.html for the scientific community.
This section presents the asymmetry maps obtained for the subjects with or without (simulated) pathologies. Sequences of 300 frames have been used (longer sequences did not yield significantly better results, see "Performance measures of the proposed model" section). This corresponds approximately to a range of 6–9 gait cycles depending on the subject's speed and step size. On average for all images, the correlation score [23] (see end of "Local search algorithm" in "Appendix") for the mapping of 300 frames to three color channels (according to our shifted Euclidean pairwise depth distance) is 93.5 ± 2 %, which shows that the FastMap-based MDS procedure is able to preserve a large quantity of information of the original image sequence (in terms of pairwise depth distances). We have used an offset of 400 frames (approximately 13 s) relative to the beginning of the image sequence for all the subjects to allow them to get used to the treadmill. In addition, η, the value controlling the contribution between the likelihood and the regularization terms in Eq. (9), was set to η = 0.025 in all the following experiments.
Initial tests
In order to quantify the influence of the choice of the distance on the reliability of our asymmetry map, we have compared several (possibly shifted) distances. In addition to the shifted L2 norm (or Euclidean distance) between the two depth vectors, we have also considered the shifted L1 and Linf (infinity) norms, the L2 norm between the amplitudes of their Fourier spectra (Lmod), which is inherently invariant to translation, the L2 norm between their amplitude histograms (Lrad), also providing a translation-invariant distance, and finally the L1 norm between the means of these two depth vectors (Lmoy). Figure 6 shows the different asymmetry maps obtained for subject #S05 of our database. Here the L2 distance clearly shows the gait asymmetry magnitude as color differences between the left and right sides of the body. For instance, with right LLD (case C), the asymmetry of arm swing is clearly noticeable, and for both right and left LLD (cases B and C) leg color differences (motion asymmetries) are visible.
Asymmetry maps for subject #S05 for, respectively (from left to right), the normal gait and the left and right simulated LLD (cases A, B and C). The L2, L1, Linf, Lmod, Lrad and Lmoy distances are presented. For these cases, the ASI are, respectively, 20.6/32.8/32.8, 22.3/35.6/30.9, 16.9/29.8/23.9, 20.9/30.3/29.2, 38.2/63.3/62.0 and 35.4/64.0/48.0. With right LLD (case C), the asymmetry of arm swing is clearly noticeable with the L2 (see the circled regions), L1 and Lmod distances. As expected, leg motion asymmetries are also visible for the left and right LLD. ASI curves are also presented, in which the Y axis shows the biggest mirrored difference (located on a horizontal line) as a function of the vertical distance from the top of the subject's head (X axis) (see text)
Qualitatively, we can notice that the asymmetry maps based on the shifted L1 and Lmod distances visually appear as reliable as the asymmetry map given by the shifted L2 distance for detecting motion asymmetries appearing as color differences between the left and right sides of the body. In addition, we can also see quite clearly that the Linf norm provides a (correlated) noisy asymmetry map (with artifacts and without edge preservation) in which we can nevertheless see color differences, i.e., the presence of asymmetry cues, in the lower legs. The Lrad and Lmoy distances are clearly translation-invariant; nevertheless, the maps based on these two distances are inaccurate because they fail to detect all asymmetric body movements. Indeed, the movement of some parts of the body can be different and asymmetric while keeping the same mean (depth) or the same histogram. This explains why, with the right LLD (case C), the asymmetry of arm swing cannot be detected with the Lrad and Lmoy distances, whereas this defect is easily detected and clearly visualized with the L2, L1 and Lmod distance-based asymmetry maps.
Performance measures of the proposed model
In order to get a quantitative measure of asymmetry, we propose to first estimate, for each line of the asymmetry map, the biggest mirrored differences. More precisely, for a line k of width m (m being half the width of the asymmetry map), the set of biggest mirrored differences is:
$$\left\{ \max \left\| p_{i,\mathbf{k}} - p_{m-i,\mathbf{k}} \right\|_{2}, \; \forall i \in [0, m/2] \right\},$$
where p i,j is the pixel value at position (i, j). This set of biggest mirrored differences yields a vertical curve whose mean amplitude allows computing a global asymmetry index (ASI). From each asymmetry color map, this ASI curve is estimated by a two-step procedure. First, by estimating, individually for each subject, the position of the (symmetrical) longitudinal axis of the body (head to tail). This axis is determined from the silhouette contour (located in places of strong gradient), and its optimal position is searched on both sides (±10 pixels) around the vertical center line of the image (since this axis is assumed to be not too far from it) by estimating the vertical line whose pixel coordinates are the most symmetric with respect to the subject's silhouette (body contour) in the median sense. Second, by seeking and recording the maximal color difference existing on either side (horizontally) of this preliminarily estimated longitudinal axis. The ASI index is then the mean value of the ASI curve elements (see Algorithm "Estimation of the ASI", where the estimation procedure is outlined in pseudo-code).
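The following simplified sketch computes the ASI curve and index once the longitudinal axis has been estimated (the axis estimation itself is omitted). It assumes the mirrored window stays inside the image; function and parameter names are ours.

```python
import numpy as np

def asi_curve(lab_map, axis_col, half_width):
    """Per-row biggest mirrored color difference around the estimated
    longitudinal axis; the ASI index is the mean of the curve.

    lab_map: (H, W, 3) asymmetry map; axis_col: column of the body
    axis; half_width: number of columns scanned on each side.
    """
    H = lab_map.shape[0]
    curve = np.zeros(H)
    for k in range(H):
        left = lab_map[k, axis_col - half_width:axis_col][::-1]   # mirrored
        right = lab_map[k, axis_col + 1:axis_col + 1 + half_width]
        curve[k] = np.linalg.norm(left - right, axis=1).max()
    return curve, float(curve.mean())
```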
Asymmetries can be detected visually, as shown by Figs. 6, 7, 8, 9 and 10, but also quantitatively with the ASI curves [in which the Y axis shows the biggest mirrored difference, located on a horizontal line, as a function of the vertical distance from the top of the subject's head (X axis)] or with the ASI index mentioned above (see the Algorithm in Fig. 11 and the ASI indices obtained by the 17 subjects of our experiment in Fig. 12). For instance, in Fig. 6, the ASI curves of the L2 distance display a significant gap between case A and cases B–C in the leg areas, as expected (identified with an arrow in Fig. 6). Similarly, the asymmetry of arm swing appears as a gap between the case A and C curves. In terms of paired-difference t tests and confidence values, we can notice (see Table 1) that the shifted Euclidean distance seems to be the most appropriate distance, allowing us to discriminate, on average with the ASI index, an asymmetry difference with a confidence value around 98.5–99 %. In this case, the refining step does not (statistically significantly) increase or decrease this confidence score, but it actually allows us to include a regularization term in our energy-based model [see Eq. (9)] and to estimate a denoised and smooth asymmetry map with edge-preserving properties, and consequently to somewhat remove the slight annoying ringing artifacts occurring in some estimated asymmetry maps (see Fig. 5; Table 2).
Asymmetry maps for subject #S02 for, respectively (from left to right), the normal gait and the left and right simulated LLD (cases A, B and C, with the L2 distance). For this case, the ASI is, respectively, 21.2/27.8/28.5
Asymmetry maps for subject #S12 for, respectively (from left to right), the normal gait and the left and right simulated LLD (cases A, B and C, with the L2 distance). For this case, the ASI is, respectively, 22.6/27.6/31.5
Asymmetry maps for subjects #09 and #01, the two worst results of the dataset. The corresponding ASI is 29.04 for the normal gait, 30.37 for left LLD, and 25.98 for right LLD for subject #09. This subject naturally had a strong arm swing, but a sole under the left foot seems to help rectify it. Concerning subject #01, the ASI is 24.22 for the normal gait, 21.60 for left LLD, and 23.96 for right LLD
ASI curve and index estimation algorithm
ASI index for the 17 subjects (normal gait and left and right LLD)
Table 1 Critical values or cutoffs with the ASI index for paired t test
Table 2 Correlation ρ before the refining step and after the refining step for η = 0 and η = 0.025 for the first five subjects without the simulated leg length discrepancy (LLD)
Table 3 shows the average (µ) and standard deviation (σ) of the ASI for the three groups of subjects (for the shifted L2 distance) and the paired-difference t test and confidence value for the paired tests A ≠ B and A ≠ C, respectively (1) without refining, (2) with η = 0 (with refining but without prior) and (3) with η = 0.025 (with refining and prior). The statistical differences for the paired t tests were highly significant (see also "T test table for 16 degrees of freedom" in "Appendix") for both the left and right leg LLD groups (confidence value around 98.65 %). This demonstrates that this method can efficiently detect gait asymmetry. In practice, three subjects had a higher ASI for their normal gait than with the LLD introduced with a sole (Fig. 10). By looking at their videos, the authors noticed that those subjects already had a visible gait asymmetry (one arm swinging more than the other, tilted shoulders, etc.).
Table 3 Statistics of the ASI index for the shifted L2 distance of 17 subjects
We recall that we have used 300 frames in our application. For a sequence of 150 frames, the confidence value is 96.17 %, and for a sequence of 600 frames, we obtained a confidence value of 99.52 %, but at the price of twice the computational load.
The preceding experimental results have shown that asymmetries can be detected visually with the proposed asymmetry maps, both in terms of color differences with respect to the middle of the (standing) vertical axis and in terms of differences of length or of geometric anatomical shapes or movements (legs and arms) exhibited on either side of the body (along this vertical axis); see for instance the difference of (1) length between the legs (for the L2 distance) in Fig. 6 with left and right LLD or for the subjects shown in Figs. 8, 9 and 10, (2) length between the two arms in Fig. 7, or (3) mean gait posture (slightly inclined with respect to a vertical axis) for the subject shown in Fig. 10 (LLD only).
In addition, this asymmetry can be quantified with the proposed ASI index. It is also worth mentioning that the asymmetry map, along with the ASI curve, allows us to know where the asymmetric motions are distributed along the subject's body. See for instance the circled areas and the gaps identified with arrows in these figures. The ASI curves thus provide a quantitative local assessment of asymmetry. This cue could be a good indicator of pathologies and their progression over time, enabling a more appropriate medical prescription leading to a better recovery for the subjects.
It is also important to understand that the VICON system gives very accurate but sparse (and generally not equidistantly distributed) measures over the body, from which it is difficult to estimate a reliable dense asymmetry map without subsequent interpolation and extrapolation errors. This makes a comparison between the Kinect and VICON systems, in their ability to estimate an accurate gait asymmetry map, difficult to implement and to analyze. It is also clear that, for a sufficient number of sensors distributed over the body, the VICON could be superior in terms of accuracy. Nevertheless, this last assertion does not detract from the originality of this work, since the proposed estimation method of the asymmetry map, based on preserving all the pairwise temporally shift-invariant distances between depth signals as well as possible in a final 3D color space with an MDS-based penalized likelihood strategy (and even the very concept of a gait asymmetry map), has, to our knowledge, never been proposed to date, and also remains inherently independent of the depth sensing technology used.
In our application, a paired sample t test is used to determine whether there is a statistically significant difference (increase) in the ASI index between the normal gait and the left or right LLD (abnormal gait) groups, and the p value (associated with this t test) actually quantifies the magnitude of this difference (i.e., a good confidence interval means that the difference is quite large). In our case, it just means that the increases in the ASI index between the A and B, C groups are statistically significant, and thus that the asymmetry differences between these groups, in terms of ASI index, are real and not due to standard error. Nevertheless, this does not mean that the ASI index can be used for separating normal from abnormal gait since, even if a majority of individuals have an ASI index above 30.00 for an abnormal gait (see Fig. 12, showing the scatter-plots of the ASI values for the different subjects with or without LLD), there unfortunately are some subjects for which the normal gait remains more asymmetric (visually and in terms of ASI index) than that of some other subjects with a simulated LLD. In addition, as already mentioned in "Performance measures of the proposed model" section, three subjects have a higher ASI for their normal gait than with LLD. More precisely, among the 17 subjects, three of them (#01, #09, #15) do not show a significant difference with or without LLD, two of them show a slight (but not significant) decrease in the ASI index with either the left or right simulated LLD (#07, #13), and one subject (#10), who has a visible gait asymmetry (one arm swinging more than the other along with tilted shoulders), has a higher ASI for his normal gait than with the right or left LLD introduced with a sole. Because of this, the ASI measure should not be used as an absolute measure for separating normal from abnormal gait, but rather as a relative measure, for example to analyze and quantify gait recovery through time or to check the adequacy of a prosthesis (or of a treatment), and to indicate, through an asymmetry map, where the strongest asymmetric areas of a subject's gait cycle are located.
η remains the sole major internal parameter of our model; it weights the regularization term and is fixed once and for all experiments. Let us recall that the number of frames (N = 300) used in our MDS mapping should not be viewed as a critical internal parameter, since doubling or halving the number of frames does not (significantly) change the efficiency of the FastMap mapping (see "Performance measures of the proposed model" section). Similarly, the two parameters of the spatial neighborhood (N s = 13 and N cnx = 11) used in our model are not sensitive parameters, since the more connections we use, the better the convergence behavior of the algorithm, but at the cost of more computation time.
In this paper, we have presented a new gait analysis system, based on the Kinect depth sensor, which estimates a perceptual color map providing a quick overview of the asymmetry existing in the gait cycle of a subject, along with an index (ASI) that proved statistically significant with an approximately 98.75 % confidence value. While being inexpensive, marker-less, non-invasive, easy to set up and suitable for small rooms and fast diagnosis, this new gait analysis system offers a readable and flexible tool for clinicians to analyze gait characteristics, which can easily be exploited to follow disease progression or recovery from post-operative surgery, or be used for other pathologies where gait asymmetry might be a symptom.
As future work, it would be necessary to validate the proposed method on real patients with different types of gait impairments. Besides, it would also be interesting to explore the other possible benefits of asymmetry map estimation and visualization, not considered in this work, over a set of spatio-temporal gait parameters in a gait analysis system. An interesting research perspective would be specifically to analyze the topology and the pattern differences of these asymmetric areas in order to see if they are characteristic of a specific kind of disease (bone, neurodegenerative, muscular, etc.), to determine from the perceptual maps whether the asymmetry allows localizing the region of injury, or to analyze the evolution of these asymmetric patterns through time to check the healing process or the effect of a treatment or a prosthesis.
Engsberg JR, Tedford KG, Harder JA, Mills JP. Timing changes for stance, swing, and double support in a recent below-knee-amputee child. Pediatr Exerc Sci. 1990;2(3):255–62.
Loizeau J, Allard P, Duhaime M, Landjerit B. Bilateral gait patterns in subjects fitted with a total hip prosthesis. Arch Phys Med Rehabil. 1995;76(6):552–7.
Hamill J, Bates B, Knutzen K. Ground reaction force symmetry during walking and running. Res Q Exerc Sport. 1984;55(3):289–93.
Miki H, Sugano N, Hagio K, Nishii T, Kawakami H, Kakimoto A, Nakamura N, Yoshikawa H. Recovery of walking speed and symmetrical movement of the pelvis and lower extremity joints after unilateral THA. J Biomech. 2004;37(4):443–55.
Alexander LD, Black SE, Patterson KK, Gao F, Danells CJ, McIlroy WE. Association between gait asymmetry and brain lesion location in stroke patients. Stroke. 2009;40(2):537–44.
Wren TA, Gorton GE III, Ounpuu S, Tucker CA. Efficacy of clinical gait analysis: a systematic review. Gait Posture. 2011;34(2):149–53.
Wren TA, Kalisvaart MM, Ghatan CE, Rethlefsen SA, Hara R, Sheng M, Chan LS, Kay RM. Effects of preoperative gait analysis on costs and amount of surgery. J Pediatr Orthop. 2009;29(6):558–63.
Carse B, Meadows B, Bowers R, Rowe P. Affordable clinical gait analysis: an assessment of the marker tracking accuracy of a new low-cost optical 3d motion analysis system. Physiotherapy. 2013;99(4):347–51.
Rougier C, Auvinet E, Meunier J, Mignotte M, de Guise JA. Depth energy image for gait symmetry quantification. In: Engineering in Medicine and Biology Society, EMBC, 2011 annual international conference of the IEEE. IEEE; 2011, p. 5136–9.
Auvinet E, Multon F, Meunier J. Lower limb movement asymmetry measurement with a depth camera. In: Engineering in Medicine and Biology Society (EMBC), 2012 annual international conference of the IEEE; Aug 2012, p. 6793–6.
Gabel M, Gilad-Bachrach R, Renshaw E, Schuster A. Full body gait analysis with kinect. In: Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE. IEEE; 2012, p. 1964–7.
Motion capture systems from Vicon. http://www.vicon.com/. Accessed 26 Oct 2015.
Potdevin F, Gillet C, Barbier F, Coello Y, Moretto P. The study of asymmetry in able-bodied gait with the concept of propulsion and brake. 9th symposium on 3D analysis of human movement, Valenciennes, France; 2006.
Lazaros N, Sirakoulis GC, Gasteratos A. Review of stereo vision algorithms: from software to hardware. Int J Optomechatr. 2008;2(4):435–62.
Salvi J, Pages J, Batlle J. Pattern codification strategies in structured light systems. Pattern Recogn. 2004;37(4):827–49.
Hansard M, Lee S, Choi O, Horaud R. Time-of-flight cameras. Berlin: Springer; 2013.
Leu A, Ristic-Durrant D, Graser A. A robust markerless vision-based human gait analysis system. In: 2011 6th IEEE international symposium on applied computational intelligence and informatics (SACI), May 2011, p. 415–20.
Clark RA, Bower KJ, Mentiplay BF, Paterson K, Pua Y-H. Concurrent validity of the microsoft kinect for assessment of spatiotemporal gait variables. J Biomech. 2013;46(15):2722–5.
Stone EE, Skubic M. Evaluation of an inexpensive depth camera for passive in-home fall risk assessment. In: Pervasive Health; 2011, p. 71–7.
Cox TF, Cox MA. Multidimensional scaling. Boca Raton: CRC Press; 2000.
Ponce J, Forsyth D. Computer vision: a modern approach. 1st ed. USA: Prentice Hall; 2003.
Faloutsos C, Lin K-I. FastMap: a fast algorithm for indexing, data-mining and visualization of traditional and multimedia datasets. ACM; 1995, vol 24, no 2, p. 163–74.
Mignotte M. A bicriteria-optimization-approach-based dimensionality-reduction model for the color display of hyperspectral images. IEEE Trans Geosci Remote Sensing. 2012;50(2):501–13.
Bouman CA, Sauer K. A unified approach to statistical tomography using coordinate descent optimization. IEEE Trans Image Process. 1996;5(3):480–92.
Rudin L, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Phys D. 1992;60:259–68.
Besag J. On the statistical analysis of dirty pictures. J R Stat Soc. 1986;B-48:259–302.
Jodoin P-M, Mignotte M. Markovian segmentation and parameter estimation on graphics hardware. J Electr Imaging. 2006;15(3):033005.
Moevus A, Mignotte M, de Guise J, Meunier J. Evaluating perceptual maps of asymmetries for gait symmetry quantification and pathology detection. In: 36th international conference of the IEEE engineering in medicine and biology society, EMBC'2014, Chicago, August 2014.
Jacobson NP, Gupta MR. Design goals and solutions for display of hyperspectral images. IEEE Trans Geosci Remote Sensing. 2005;43(11):2684–92.
AM carried out the work and drafted the manuscript. MM, JM and JADG have technically and bio-medically supervised this work. All authors read and approved the final manuscript.
The authors would like to thank E. Auvinet for his help with the dataset [10] and the FRQNT (Fonds de Recherche Québécois Nature et Technologies) for having supported this work. Ethical approval was obtained from the research ethics board of our university for this project.
Département d'Informatique & Recherche Opérationnelle (DIRO), Faculté des Arts et des Sciences, Université de Montréal, Montréal, QC, H3C 3J7, Canada
Antoine Moevus, Max Mignotte & Jean Meunier
Laboratoire de Recherche en Imagerie et Orthopédie, Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montréal, QC, Canada
Jacques A. de Guise
Antoine Moevus
Max Mignotte
Jean Meunier
Correspondence to Max Mignotte.
Local search algorithm
There are two ways to validate the efficiency of our local search refinement strategy.
First, in terms of the decrease of the cost function to be optimized [see Eq. (9)] with η = 0. To this end, Fig. 13 shows the evolution of the energy function for our local search procedure (see Algorithm in Fig. 14) compared to a refinement strategy based on a (much more demanding) stochastic local search using the Metropolis criterion (as proposed in [23] or in [28]) with a starting temperature T 0 = 1 (ensuring that, at the beginning of the stochastic search, approximately one-third of the sites change their luminance values between two complete image sweeps) and a final temperature T f = 10−3 (ensuring that, at the end of the stochastic search, very few sites change their channel color values). For this first experiment (and in all the following experiments), we have considered a radius of exploration r = 7 and a graph in which each pixel is connected with its four nearest neighbors and N cnx = 11 equally spaced other pixels located within a square neighborhood window of fixed size N s = 13 pixels (see Fig. 4). In addition, we have considered a maximal number of iterations l max = 200.
Evolution of the energy function expressed in Eq. (9) (for η = 0, r = 7, N cnx = 11, N s = 13) given by our deterministic local search procedure (see Algorithm 1) compared to a stochastic local search using the Metropolis criterion (as proposed in [23]) with T 0 = 1 and T f = 10−3 (for subject #05 without simulated leg length discrepancy)
We can notice (see Fig. 13) that our proposed deterministic refinement strategy allows us to noticeably improve the solution of our energy-based estimation model compared to the solution given by the initial FastMap algorithm, and to obtain, in our application, minimization results similar to those given by a stochastic approach (without the need to tune or fit parameters such as initial or final temperatures). This might be due to the (nearly) convex shape of our cost function around the solution given by the FastMap algorithm.
Another way to evaluate the efficiency of the rough initial FastMap mapping and then our proposed mapping refinement strategy consists in computing the correlation metric [29] which is simply the correlation of the temporally shift-invariant Euclidean distance between each pairwise depth vectors in the high (N-)dimensional space (let X be this vector) and their corresponding (pairwise) Euclidean distances in the low (3D) dimensional (LAB color) space (let Y be this vector). The correlation ρ can be estimated by the following equation:
$${{\rho}_{X,Y}} = corr(X,Y) = \frac{cov(X,Y)}{{\sigma_{X} \sigma_{Y} }} = \frac{{\frac{{X^{t} Y}}{\left| X \right|} - \overline{X}\;\overline{Y} }}{{\sigma_{X} \sigma_{Y} }}$$
where X t, |X|, \(\overline{X}\), and σ X represent the transpose, cardinality, mean, and standard deviation of X, respectively. This (Pearson) correlation factor specifically quantifies the degree of linear dependence between the variables X and Y, and how well the FastMap technique is able to give a final 3D LAB mapping in which each pixel (LAB) color value is set such that the between-color distances are preserved as well as possible [20]. A perfect correlation ρ = 1 indicates a perfect relationship between the initial set of between-depth-signal distances and the final between-color distances in the final mapping (and a correlation of ρ = 0 indicates a total loss of information).
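A minimal sketch of this evaluation, assuming the two sets of pairwise distances have been flattened into equally ordered 1-D arrays; NumPy's corrcoef gives the same Pearson coefficient as the formula above.

```python
import numpy as np

def mapping_correlation(depth_dists, color_dists):
    """Pearson correlation rho between the shifted Euclidean depth
    distances (high-dimensional space) and the corresponding LAB
    color distances (3D space); rho = 1 means the mapping perfectly
    preserves the pairwise distances."""
    return float(np.corrcoef(depth_dists, color_dists)[0, 1])
```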
Importing Kinect data
Each pixel of a depth image of the Kinect sensor is a 16-bit (little-endian) unsigned integer (uint16), represented by four digits in hexadecimal base, which actually represents the depth in millimeters estimated at this location. To recover a binary image (of the sequence), three steps are thus required: the first one is to get to the starting byte of the image, the second one is to read and convert the pixels of the image, and the final one is to store each pixel value. To recover the i th image, the starting point is evaluated as follows:
$${\text{image}}^{i} = 8 + (24 + 640 \times 480) \times i$$
For each pixel, four bytes are thus read sequentially and are converted into a decimal base according to the following scheme:
$$\begin{aligned} p_{j_{read}}^{i} &= \left[\text{A123}\right]_{\text{little-endian}}^{\text{HEX}} \rightarrow 10 \times 16^{1} + 1 \times 16^{0} + 2 \times 16^{3} + 3 \times 16^{2} \\ &= [9121]^{\text{DECIMAL}} = p_{j}^{i} \end{aligned}$$
where → designates the conversion of the read data to the decimal base. At this point the pixel \(p_{j}^{i}\) is stored as a uint16 in a table. This table represents the i th image and, once the i th image is fully recovered, it is saved in a 3D array at the i th position.
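For illustration, the sketch below decodes one frame following this scheme. It assumes each pixel is stored as four ASCII hexadecimal digits in little-endian byte order, as in the conversion example (b"A123" → 0x23A1 = 9121 mm); the exact container layout and offsets of the original recordings may differ, so treat the constants as illustrative.

```python
import numpy as np

HEADER, FRAME_HEADER, W, H = 8, 24, 640, 480   # layout constants (illustrative)

def read_frame(buf, i):
    """Recover the i-th 640 x 480 uint16 depth frame (mm) from the
    raw byte buffer buf of a recorded sequence."""
    start = HEADER + (FRAME_HEADER + W * H * 4) * i + FRAME_HEADER
    frame = np.empty(W * H, dtype=np.uint16)
    for j in range(W * H):
        word = buf[start + 4 * j : start + 4 * j + 4].decode("ascii")
        frame[j] = int(word[2:4] + word[0:2], 16)  # swap the two byte pairs
    return frame.reshape(H, W)
```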
T test table for 16 degrees of freedom
Table 4 lists a few selected critical values (cutoffs) and the associated confidence values (in percentage) for a t distribution with 16 degrees of freedom (as in our case) for two-sided critical regions.
Table 4 Few selected critical values (cutoffs) and associated confidence values (in percentage) for a two-tailed test and for a t-distribution with 16 degrees of freedom
In order to estimate the confidence values between two known cutoff values, we have used, in our application, a Lagrange-method-based interpolation leading to the following interpolation polynomial:
$$-15.7363 + 119.502x - 38.2875x^{2} - 2.68916x^{3} + 4.88695x^{4} - 1.312x^{5} + 0.15275x^{6} - 0.00676903x^{7}$$
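A one-liner evaluating this polynomial, with the coefficients listed lowest order first; it is only meaningful between the tabulated cutoffs of Table 4, since a degree-7 interpolant diverges quickly outside its fitting range.

```python
import numpy as np

# coefficients of the interpolation polynomial above, lowest order first
COEF = [-15.7363, 119.502, -38.2875, -2.68916,
        4.88695, -1.312, 0.15275, -0.00676903]

def confidence(x):
    """Interpolated confidence value (%) for a t-statistic x with
    16 degrees of freedom, valid between the tabulated cutoffs."""
    return float(np.polynomial.polynomial.polyval(x, COEF))
```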
Moevus, A., Mignotte, M., de Guise, J.A. et al. A perceptual map for gait symmetry quantification and pathology detection. BioMed Eng OnLine 14, 99 (2015). https://doi.org/10.1186/s12938-015-0097-2
Asymmetry map
Depth map
Kinect depth sensor
Locomotor disorders
Multidimensional scaling (MDS)
Perceptual color map
Temporal shift-invariance | CommonCrawl |
Wind-driven decadal sea surface height and main pycnocline depth changes in the western subarctic North Pacific
Akira Nagano ORCID: orcid.org/0000-0002-0032-47991 &
Masahide Wakita2
The northward shrinkage of the North Pacific western subarctic gyre (WSAG) in the early 2000s is associated with a sea surface height (SSH) elevation and is correlated with sea surface wind stress changes. By using a Rossby wave model forced by wind stress, which computes the component variations due to the barotropic and first to fourth baroclinic modes, we estimated decadal changes in SSH and main pycnocline depth in the subpolar region. Realistic decadal SSH elevation and deepening of the main pycnocline associated with the northward shrinkage of the western subarctic gyre from the late 1990s to the mid-2000s were reproduced by the model. The sea surface elevation was caused primarily by the barotropic Rossby wave response to the relaxation of the Ekman suction due to the attenuation of the Aleutian Low by frequent La Niña occurrences after the late 1990s, in addition to the long-term weakening of the westerly wind. The northward shrinkage of the WSAG was found to be associated with the intensification of an anticyclonic circulation centered around 43–44 ∘ N, 170–175 ∘ E. The westerly wind weakening deepened the main pycnocline in the western subarctic region through the baroclinic Rossby wave mode response to the wind stress change, which mostly accounts for the equivalent halocline deepening at station K2 (47 ∘ N, 160 ∘ E). While the first baroclinic mode variation of the water density significantly attenuates during propagation, the higher mode variations, particularly the second and third mode variations, are locally excited through a quasi-resonant amplification mechanism and have profound impacts on the depth of the upper main pycnocline.
In the subpolar North Pacific, a basin-scale cyclonic circulation, called the subpolar gyre, is driven by the sea surface wind stress over the entire subpolar ocean (Dodimead et al. 1963; Ohtani 1973; Favorite et al. 1976; Nagata et al. 1992). The western boundary current of the subpolar gyre flows southwestward along the eastern coast of the Kamchatka Peninsula as the East Kamchatka Current and east of the Kuril Islands and Hokkaido as the Oyashio Current. It returns to the central subpolar region as an interior weak flow. The southern border of the subpolar gyre, i.e., the Oyashio Current, was reported to have migrated northward from the 1990s to the 2000s, an event suggested to have been caused by a change in the wind stress curl field (Kuroda et al. 2015). Wind-driven meridional shifts in the Oyashio Current have also been suggested by Sekine (1999) and others.
The subpolar gyre is known to have regional cyclonic circulations in its interior (Dodimead et al. 1963; Ohtani 1973; Favorite et al. 1976; Nagata et al. 1992). A regional cyclonic circulation, called the western subarctic gyre (WSAG), is embedded in the western part of the subpolar gyre, as schematically illustrated in Fig. 1. The southwestward current of the western border of the WSAG merges into the East Kamchatka Current. Due to the moderate baroclinic structures of the current and density in the WSAG, the main pycnocline (halocline) becomes shallower toward the center of the gyre (Miura et al. 2002), where the sea surface height (SSH) is observed to be minimal (Nagano et al. 2016).
Schematic diagram of the sea surface flows (arrows) in the western subarctic North Pacific. Locations of stations K1, K2, and KNOT are indicated by stars
In the western subarctic region, a temperature minimum layer, called the dichothermal layer, exists at a depth between 100 m and 200 m above the main pycnocline or halocline (Dodimead et al. 1963; Favorite et al. 1976). The dichothermal layer is occupied by the remnant of the winter mixed layer water formed during the previous winter (Miura et al. 2002; Wakita et al. 2010, 2013). In the late 1990s, hydrographic and chemical time series observations in the western subarctic region were initiated at stations K2 (47 ∘ N, 160 ∘ E) and KNOT (44 ∘ N, 155 ∘E) (Fig. 1). Using observations spanning more than 15 years, a decadal deepening of the halocline was revealed at these sites; in this layer, slow acidification and a decrease in wintertime carbon dioxide release to the atmosphere were observed (Wakita et al. 2010, 2013, 2017).
Nagano et al. (2016) found that the WSAG shrank northward from the late 1990s to the mid-2000s on the basis of altimetric SSH during the period of 1992–2010. The increase in SSH in the western subarctic region due to the gyre change until 2000 was also monitored by Qiu (2002). Combining conductivity-temperature-depth (CTD) and SSH data using the altimetry-based gravest empirical mode (AGEM) method, Nagano et al. (2016) estimated the change in the halocline or upper main pycnocline depth. As a result, the halocline at K2 was found to be displaced downward in association with the northward shrinkage of the WSAG; further, this is related to the decadal decrease in water density in the dichothermal layer, i.e., the base of the winter mixed layer at K2. In other words, water density in the dichothermal layer is substantially controlled by the WSAG via the change in the upper main pycnocline depth. We consider that barotropic variations, which do not affect density in the subsurface layer, are included in the subarctic decadal SSH change. However, the AGEM method of Nagano et al. (2016) does not distinguish the barotropic variations from the baroclinic ones, which are possibly related to the observed decadal density change. Another method, treating barotropic and baroclinic variations separately, is required to examine the mechanism of the upper main pycnocline change.
Over the subpolar North Pacific, there are vigorous variations of the Aleutian Low and the westerly wind (e.g., Wallace and Gutzler 1981), which mainly drive the subpolar gyre. Isoguchi and Kawamura (2006) reported that seasonal to interannual variations in coastal sea level and SSH in the Oyashio and East Kamchatka Current regions are generated by wind stress changes. The decadal gyre variations in the western subarctic region might likewise be driven by wind stress changes induced by those of the westerly wind and the Aleutian Low. The northward shrinkage of the WSAG, indicated by the time coefficient of the first empirical orthogonal function mode of SSH (Nagano et al. 2016), is found to be simultaneously linked to the changes in wind stress curl in the eastern subpolar region and the region southeast of the Kamchatka Peninsula, as implied by the fairly high correlation in Fig. 2. Meanwhile, note that there is no significant trend in the annual mean potential density of the sea surface water at K2 (Wakita et al. 2017). Therefore, the decadal elevation in SSH and deepening of the halocline at the station are not mainly attributable to changes in water density due to thermal expansion and freshwater supply in the sea surface layer.
Map of correlation coefficient between wind stress curl and the time coefficient of the first empirical orthogonal function mode of SSH calculated by Nagano et al. (2016). For the calculation, we used wind stress data provided by the US National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR). Locations of stations K1, K2, and KNOT are indicated by stars
Using a β-plane two-layer model forced by wind stress, Qiu (2002) suggested that the change in the WSAG is expected to be driven by the change in wind stress, although this has not been fully examined by past investigators. Under the assumption of the Sverdrup balance in the interior region of the subpolar gyre, variations in the gyre volume transport and SSH excited by interannual to decadal changes in wind stress, such as those due to the Pacific Decadal Oscillation (PDO) (Mantua et al. 1997; Mantua and Hare 2002), were examined by Isoguchi et al. (1997), Ishi and Hanawa (2005), and Isoguchi and Kawamura (2006). These simple models, in which neither the bottom topography (e.g., Ripa 1978) nor the beta-dispersion (e.g., Schopf et al. 1981) was taken into account, reproduced reasonable decadal gyre changes. In other words, decadal fluctuations in the subpolar region are well explained by long Rossby wave responses to wind stress changes. However, these low-vertical-resolution models are insufficient to examine the observed potential density change associated with the northward shrinkage of the WSAG reported by Wakita et al. (2010, 2013, 2017) and Nagano et al. (2016). A continuously stratified model is required to discuss the wind-driven density change in the western subarctic region.
In general, Rossby wave adjustments of the oceans to changes in wind stress are involved in the wind-driven changes of the circulations (e.g., LeBlond and Mysak 1978; Gill 1982; Pedlosky 1987). SSH changes accompanied by Rossby wave adjustments propagate westward with various speeds according to their spatial and temporal scales and are subject to eddy dissipation. Kawabe (2000, 2001) solved the vorticity gradient equation (e.g., LeBlond and Mysak 1978) with wind stress forcing to take into account the propagation of disturbances by Rossby waves and calculated interannual sea level variations at tide gauge stations in the North Pacific subtropical region. Applying this method to the SSH changes in the North Pacific subpolar region, we can compute the changes in SSH and water potential density due to the individual barotropic and baroclinic mode changes excited by the wind stress changes. Moreover, we can examine the mechanism through which the WSAG shrinkage and main pycnocline deepening are caused by changes in wind stress.
In this study, we calculated variations in SSH, volume transport, and water potential density using a dynamical model of barotropic and baroclinic Rossby waves excited by changes in wind stress, including vertical and horizontal eddy dissipation. We examined whether the wind-driven SSH calculation reproduces the decadal SSH change associated with the northward WSAG shrinkage. Using the calculated SSH and potential density variations, we identified the disturbances that yield the northward gyre shrinkage and discuss the mechanism of the decadal deepening of the main pycnocline at K2. The data and calculation method are described in "Materials/method" section. In "Results and discussion" section, we determine the parameters required for the SSH calculation by comparing the observed and calculated SSH changes at K2; using the obtained parameters, we then calculate the SSH changes, describe their characteristics, and discuss the potential density change at K2. A summary and conclusion are provided in "Conclusions" section.
Materials/method
SSH data
Daily SSH anomalies with horizontal grid intervals of 1/4 ∘ from January 1993 to December 2014 in the region of 40–60 ∘ N, 140 ∘ E–130 ∘ W were collected from the Archiving, Validation and Interpretation of Satellite Oceanographic (AVISO) delayed-time updated mapped data (DT-MSLA-H, http://www.aviso.altimetry.fr/duacs/) (AVISO 2016). We calculated the monthly mean SSH anomalies, added them to the mean dynamic topography (MDT_CNES-CLS13) compiled by Rio et al. (2011), and obtained the monthly absolute SSH. To analyze interannual to decadal variations in SSH, we smoothed them using a 15-month running mean filter. The SSH data at K1 (51 ∘ N, 165 ∘ E), K2, and KNOT were obtained from the smoothed SSH data at the nearest grids.
Wind stress data
To compute wind-driven SSH variations, we used the monthly mean momentum flux vector, τ = (τ x, τ y), where τ x and τ y are the zonal and meridional wind stresses, respectively, with horizontal grid intervals of 1 ∘ in the region of 40–54 ∘ N, 160 ∘ E–135 ∘ W (the region enclosed by a black square in Fig. 3) from January 1979 to December 2014. The data were provided by the US National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) (Kalnay et al. 1996). From the τ data, we computed the Ekman pumping velocity, w E = ∇ × (τ/f) = ∂ x(τ y/f) − ∂ y(τ x/f), where f is the Coriolis parameter. Upward Ekman vertical velocity was taken to be positive w E; namely, positive w E indicates Ekman suction. To analyze interannual to decadal variations, we smoothed the w E data using a 15-month running mean filter. Using the smoothed w E data, we calculated the SSH variations, as will be described in "Results and discussion" section.
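A minimal finite-difference sketch of this computation on a regular latitude-longitude grid, assuming latitude increases along the first array axis; the density factor is omitted, following the definition above, and all names are ours.

```python
import numpy as np

OMEGA, R_EARTH = 7.2921e-5, 6.371e6            # rotation rate (1/s), radius (m)

def ekman_pumping(taux, tauy, lat, lon):
    """w_E = d(tau_y/f)/dx - d(tau_x/f)/dy; positive w_E is suction.

    taux, tauy: (nlat, nlon) wind stress components (N m^-2);
    lat, lon: 1-D coordinate arrays in degrees.
    """
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat))[:, None]
    dy = R_EARTH * np.deg2rad(np.gradient(lat))[:, None]
    dx = R_EARTH * np.cos(np.deg2rad(lat))[:, None] \
         * np.deg2rad(np.gradient(lon))[None, :]
    ddx = np.gradient(tauy / f, axis=1) / dx   # d(tau_y/f)/dx
    ddy = np.gradient(taux / f, axis=0) / dy   # d(tau_x/f)/dy
    return ddx - ddy
```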
Map of the North Pacific including the bottom topography. Bottom topography is based on ETOPO1 data. The wind stress data in the region enclosed by a square were used for the calculations of SSH at stations K1, K2, and KNOT. NCEP/NCAR sea-level pressure (hPa) averaged during the study period is denoted by white contours with intervals of 2 hPa
Hydrographic data
Full-depth CTD data collected along the World Ocean Circulation Experiment (WOCE) P01 (47 ∘ N) line from May 21 to June 13, 1999, by Watanabe et al. (2001), Fukasawa et al. (2004), and others on board the R/V Kaiyo-maru (Japan Fisheries Agency) were used to estimate the vertical structures of the baroclinic Rossby wave modes. The CTD sensors were calibrated before and after the cruise. Water sampling at the CTD stations was performed using Niskin bottles mounted on the CTD frame; the CTD data were calibrated using the sampling data. The accuracies of the temperature and salinity data used in this study are better than 0.00008 ∘C and 0.003 (psu), respectively. The zonally averaged profile of the potential density of seawater, based on the CTD data collected in the western subarctic region between 160 ∘ E and 170 ∘ E, was vertically averaged over 10 dbar to eliminate small-scale vertical variations and gridded from 10 dbar to 4000 dbar at intervals of 10 dbar. In addition, we used a climatological potential density profile around K2 based on the World Ocean Database 2013 (WOD2013) (Boyer et al. 2013) to examine the validity of using the WOCE P01 data, collected in such a short time, for the calculation of the vertical structures of the baroclinic modes.
Model description and calculation procedure
We adopted the β-plane linearized form of the hydrostatically balanced equations of motion and continuity to estimate the interannual to decadal variations in SSH (η) due to Rossby waves forced by changes in wind stress, as applied by Kawabe (2000, 2001) in the subtropical region of the North Pacific. Over a flat bottom ocean, perturbations of horizontal current velocity vector u=(u,v), pressure p, and water density ρ are solved via separation of variables as follows:
$$ (\boldsymbol{u},p) = \sum_{n=0}^{\infty} \left[\boldsymbol{u}_{n}(t,x,y),p_{n}(t,x,y)\right]{\phi}_{n}(z), $$
$$ {\rho} = \sum_{n=0}^{\infty} {\rho}_{n}(t,x,y) \: h_{n} \frac{\mathrm{d}{\phi}_{n}}{\mathrm{d}z}, $$
where t is time and x, y, and z are the eastward, northward, and upward coordinates, respectively. The ϕn function is the nth eigenfunction, satisfying
$$\begin{array}{@{}rcl@{}} \frac{\mathrm{d}}{\mathrm{d}z} \left(\frac{1}{N^{2}} \frac{\mathrm{d}{\phi}_{n}}{\mathrm{d}z}\right) + \frac{1}{C^{2}_{n}} \, {\phi}_{n} &=& 0, \end{array} $$
where \({C^{2}_{n}}=gh_{n}\), g is the gravitational acceleration (9.80 ms −2), hn is the equivalent depth, \(N^{2}=-g\bar {\rho }^{-1}{\partial }\bar {\rho }/{\partial }z\) is the squared Brunt-Väisälä frequency, and \(\bar {\rho }\) is the mean vertical profile of potential density. The eigenfunctions are normalized as
$$\begin{array}{@{}rcl@{}} \frac{1}{D_{\mathrm{b}}} \int_{-D_{\mathrm{b}}}^{0} {\phi}_{m} {\phi}_{n} \: \mathrm{d}z = \left\{ \begin{array}{ll} 1 & (m = n) \\ 0 & (m \ne n), \end{array} \right. \end{array} $$
where Db is a constant bottom depth. Therefore, the vertical structure of the nth baroclinic Rossby wave mode, i.e., ϕn (n=1, 2, ⋯, ∞), is obtained as the nth eigenfunction of Eq. (3) with the boundary condition of no vertical velocity, i.e., dϕn/dz=0, at the sea surface and bottom, i.e., z=0 and −Db. The barotropic Rossby wave mode is represented by the zeroth mode (n=0) of \({C^{2}_{0}}=gD_{\mathrm {b}}\) and vertically constant ϕ0.
The governing equation of the meridional velocity perturbation due to the nth baroclinic mode of the Rossby waves, i.e., vn, which is called the vorticity gradient equation (e.g., LeBlond and Mysak 1978), forced by interannual to decadal variations in wind stress is obtained in Cartesian coordinates as
$$\begin{aligned} f^{2}\,\frac{\partial v_{n}}{\partial t} - C_{n}^{2}\,\frac{\partial}{\partial t}\nabla^{2}v_{n} - C_{n}^{2}\,\beta\,\frac{\partial v_{n}}{\partial x} &= \frac{C_{n}^{2}}{D}\,\phi_{n}(0)\left(\frac{\partial^{2}w_{\mathrm{E}}}{\partial t\,\partial y} - f\,\frac{\partial w_{\mathrm{E}}}{\partial x}\right) \\ &\quad + \left(A_{\mathrm{V}}\,N^{2} - A_{\mathrm{H}}\,C_{n}^{2}\,\nabla^{2}\right)\nabla^{2}v_{n} \\ &\quad - \left(K_{\mathrm{V}}\,N^{2} - K_{\mathrm{H}}\,C_{n}^{2}\,\nabla^{2}\right)\frac{f^{2}}{C_{n}^{2}}\,v_{n}, \end{aligned}$$
where \(\nabla^{2} = \partial^{2}_{x} + \partial^{2}_{y}\), A H (A V) and K H (K V) are the horizontal (vertical) eddy viscosity and diffusion coefficients, respectively, and β is the latitudinal variation of f (i.e., f = f 0 + βy). The first, second, and third terms on the right hand side of Eq. (5) represent external forcing, eddy viscosity, and eddy dissipation, respectively. We assume A V = K V (≡ D V) and A H = K H (≡ D H) hereafter. Further, we define B ≡ D V N 2 and take it to be constant, following the presumption of K V ∝ N −2 in the western subarctic region by Andreev et al. (2002).
A single fluctuation of the Ekman pumping velocity, wE, is represented by a sustained forcing of superposed sinusoidal meridional modes as
$$\begin{array}{@{}rcl@{}} w_{\mathrm{E}} (t, x, y) = \sum_{m=1}^{\infty} W_{m} \: {\sin} \left(\frac{m\pi}{L}y\right) H(t-t_{0}) \: {\delta}(x-x_{0}), \end{array} $$
where Wm is the amplitude of the mth meridional mode of wE, H is the Heaviside step function (zero until time t=t0 and unity afterward), δ is the Dirac delta function (zero for any x except longitude x=x0), and L is the meridional length of the study region. t0 and x0 are the time and zonal position of the induction of wE, respectively, so that values of Wm are determined for every t0 and every x0. The term ∂2wE/∂t∂y in Eq. (5) is neglected because, being proportional to a zonal and temporal δ function, it has no significant effect.
Substituting Eq. (6) and the geostrophic relationship, ρ0f0vn=∂x pn, into Eq. (5), neglecting ∂2wE/∂t∂y, and integrating with respect to x, we obtain
$$\begin{array}{@{}rcl@{}} {\begin{aligned} \left[ f^{2} \,\frac{{\partial}}{{\partial}t} - C^{2}_{n} \, \frac{{\partial}}{{\partial}t} {\nabla}^{2} - C^{2}_{n} \, {\beta} \, \frac{{\partial}}{{\partial}x} + \left(\frac{f^{2}}{C^{2}_{n}}- {\nabla}^{2} \right) \left(B-D_{\mathrm{H}} \, C^{2}_{n} \, {\nabla}^{2}\right) \right] p_{n}\\ = \frac{{\rho}_{0} f^{2} C^{2}_{n}}{D} \, {\phi}_{n}(0) \sum_{m=1}^{\infty} W_{m} \: {\sin} \left(\frac{m\pi}{L}y\right) H(t-t_{0}) {\delta}(x-x_{0}), \end{aligned}} \end{array} $$
where ρ0 is the constant overall water density.
The solution of Eq. (7) is found as
$$\begin{array}{@{}rcl@{}} {\begin{aligned} p_{n} &= \frac{{\rho}_{0}f^{2}}{{\beta}D} \: {\phi}_{n}(0) \sum_{m=1}^{\infty} W_{m} \: {\sin} \left(\frac{m\pi}{L}y\right) \cdot \left\{ H(x-x_{0}) \right.\\& \left.- {\exp}[{-r_{mn}\,(t-t_{0})}]\: H[x- x_{0}+c_{mn}(t-t_{0})] \right\}, \end{aligned}} \end{array} $$
$$\begin{array}{@{}rcl@{}} {\begin{aligned} c_{mn} &= {\beta}\, C^{2}_{n} \, F^{-1}_{mn}, \\ r_{mn} &\,=\, \!\left[B\,+\,D_{\mathrm{H}}C^{2}_{n}\left({\kappa}^{2}\,+\,\frac{m^{2}{\pi}^{2}}{L^{2}}\right)\!\right]\left({\kappa}^{2}\,+\,\frac{m^{2}{\pi}^{2}}{L^{2}} \,+\, \frac{f^{2}}{C^{2}_{n}} \right) F^{-1}_{mn}, \\ F_{mn} &= f^{2} + C^{2}_{n} \left({\kappa}^{2} + \frac{m^{2}{\pi}^{2}}{L^{2}} \right). \end{aligned}} \end{array} $$
cmn is the propagation speed of the mth meridional mode and nth vertical mode Rossby wave, and rmn is the damping rate due to eddy dissipation. Depending on the vertical and meridional modes, the zonal scales of Rossby waves vary in this model. κ is the zonal wavenumber of the Rossby wave with decadal periods (here, we adopted 10 years to examine the decadal change in SSH studied by Nagano et al. (2016)); in other words, we set κ=2π/(10 year × cmn), although the calculated SSH depends little on κ. Therefore, variations in pressure caused by impacts of the change in wind stress at location x0 at time t0 propagate with the distinctive speed cmn as a Rossby wave, being subject to damping at the rate rmn. Since, in this model, the geostrophic equilibrium is assumed to be accomplished instantaneously (in reality, it is adjusted through the propagation of inertial gravity waves), an imposition of wind stress forcing can excite fluctuations in pressure and velocity both to the east and west of the forcing region.
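As a concrete illustration of Eq. (8), the following minimal Python sketch (not the code used in this study) evaluates cmn and rmn; the gravity-wave speed Cn is a placeholder value, and κ is obtained by one fixed-point pass of κ=2π/(10 year × cmn).

```python
import numpy as np

# Illustrative parameters at 47 deg N; B and DH as determined later in the text.
f0   = 1.066e-4        # Coriolis parameter [s^-1]
beta = 1.562e-11       # meridional gradient of f [m^-1 s^-1]
L    = 1556e3          # meridional extent of the domain [m] (14 deg latitude)
B    = 1e-7            # vertical eddy dissipation parameter [m^2 s^-3]
DH   = 10.0            # horizontal eddy diffusion coefficient [m^2 s^-1]

def rossby_speed_damping(Cn, m, kappa):
    """Return (c_mn, r_mn) of Eq. (8) for gravity-wave speed Cn [m/s],
    meridional mode m, and zonal wavenumber kappa [m^-1]."""
    k2  = kappa**2 + (m * np.pi / L)**2          # total horizontal wavenumber^2
    Fmn = f0**2 + Cn**2 * k2                     # F_mn of Eq. (8)
    c   = beta * Cn**2 / Fmn                     # westward propagation speed
    r   = (B + DH * Cn**2 * k2) * (k2 + f0**2 / Cn**2) / Fmn   # damping rate
    return c, r

# First baroclinic mode with an assumed Cn of 1.8 m/s (placeholder).
year = 365.25 * 86400.0
c0, _ = rossby_speed_damping(1.8, m=1, kappa=0.0)    # first pass with kappa = 0
kappa = 2.0 * np.pi / (10 * year * c0)               # kappa = 2*pi/(10 yr x c_mn)
c, r = rossby_speed_damping(1.8, m=1, kappa=kappa)
print(f"c_11 ~ {100*c:.2f} cm/s, amplitude left after 1 yr ~ {np.exp(-r*year):.2f}")
```

Substituting the barotropic speed \(C_{0}=\sqrt {gD_{\mathrm {b}}}\approx 198\) m s−1 into the same function reproduces the m=1 barotropic speed in Table 1 (≈358 cm s−1).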
Note that Eq. (8) is the solution for a sustained forcing with the temporal step function; therefore, we differentiate Eq. (8) with respect to t0 to obtain the solution for an impulsive forcing, i.e., a Green's function of the long Rossby wave. Next, replacing t0 and x0 with t′ and x′, and integrating with respect to them over the intervals (−∞, t ] and [ x, xe], respectively, the total pressure at any time t and any location (x, y) due to the nth Rossby wave mode (Pn) is expressed as
$$\begin{array}{@{}rcl@{}} {\begin{aligned} P_{n} &= \frac{{\rho}_{0}f^{2}}{{\beta}D} \: {\phi}_{n}(0) \int_{-\infty}^{t} \int_{x}^{x_{\mathrm{e}}} \sum_{m=1}^{\infty} \frac{\partial W_{m}}{\partial t^{'}} \: {\sin} \left(\frac{m\pi}{L}y\right) \\ & \cdot \left\{H(x-x^{'}) - \exp[{-r_{mn}\,(t-t^{'})}]\right.\\&\quad \left. \cdot H[x-x^{'}+c_{mn}(t-t^{'})]\right\} \mathrm{d}x^{'} \mathrm{d}t^{'}, \end{aligned}} \end{array} $$
where xe is the position of the eastern boundary. Using Pn, the variation in SSH due to the change in wind stress (η) is estimated to be
$$\begin{array}{@{}rcl@{}} \eta = \frac{P_{z=0}}{{\rho}_{0}g} = \frac{1}{{\rho}_{0} g} \sum_{n=0}^{\infty} P_{n} \, {\phi}_{n}(0), \end{array} $$
where ρ0 is set to 1025 kg m−3. Using the hydrostatic relation, the change in the vertical density distribution due to the nth baroclinic mode, i.e., ρn, is expressed as ρn=−pn/(ghn).
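This modal relation can be verified in one line by substituting the expansions in Eqs. (1) and (2) into the hydrostatic balance ∂p/∂z=−ρg and matching the coefficients of dϕn/dz:

$$ \sum_{n} p_{n} \frac{\mathrm{d}{\phi}_{n}}{\mathrm{d}z} = -g \sum_{n} {\rho}_{n} h_{n} \frac{\mathrm{d}{\phi}_{n}}{\mathrm{d}z} \quad\Longrightarrow\quad {\rho}_{n} = -\frac{p_{n}}{g h_{n}}. $$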
The volume transport due to the nth vertical mode through a zonal line between the longitude of the Kuril Islands (xw) and the longitude of K2 (xK2) from the sea surface (z=0) to a depth of −D is calculated as
$$\begin{array}{@{}rcl@{}} {{} \begin{aligned} Q_{n} = \int_{x_{\mathrm{w}}}^{x_{\mathrm{K2}}} V_{n} \mathrm{d}x \int_{-D}^{0} {\phi}_{n}(z) \mathrm{d}z &= \frac{1}{{\rho}_{0}f} \left[P_{n}(x_{\mathrm{K2}}) - P_{n}(x_{\mathrm{w}})\right]\\&\quad \cdot \int_{-D}^{0} {\phi}_{n}(z) \mathrm{d}z, \end{aligned}} \end{array} $$
where Vn is the total geostrophic velocity due to the nth mode and the geostrophic relationship, ∂xPn=ρ0fVn, was used to obtain the right hand side. To estimate the volume transport variation caused by the wind stress change in the interior region, we further assume no perturbation at the western edge of the western boundary current, i.e., Pn(xw)=0. Therefore, the volume transport due to the nth vertical mode is approximated as
$$\begin{array}{@{}rcl@{}} Q_{n} = \frac{P_{n}(x_{\mathrm{K2}})}{{\rho}_{0}f} \int_{-D}^{0} {\phi}_{n}(z) \mathrm{d}z. \end{array} $$
The total volume transport (Q) is calculated by summing Qn over all modes (\(Q = \sum _{n=0}^{\infty } Q_{n}\)). Because each baroclinic eigenfunction is orthogonal to the depth-independent barotropic mode, the integral of ϕn (n≥1) from the sea surface to the bottom (z=−Db) vanishes, and so does the full-depth volume transport due to the baroclinic Rossby wave modes, i.e., Q=Q0. Note that the volume transport should not be identical to the East Kamchatka Current transport because we neglect the pressure variation at the western boundary. If we neglect the time-varying, eddy viscosity, and eddy dissipation terms of Eq. (5), the familiar formula of the Sverdrup balance is obtained. Due to the rapid barotropic (n=0) response to the change in wind stress, the variation in the volume transport calculated by this model, i.e., Q, should be nearly identical to those based on the Sverdrup balance, as investigated by Isoguchi et al. (1997), Ishi and Hanawa (2005), and Isoguchi and Kawamura (2006).
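The full-depth cancellation can be checked numerically; the sketch below evaluates Eq. (12) with a schematic first-baroclinic eigenfunction (the placeholder cosine stands in for the ϕ1 computed from Eq. (3)):

```python
import numpy as np

rho0, f0 = 1025.0, 1.066e-4
z = np.linspace(-4000.0, 0.0, 401)               # depth grid [m], Db = 4000 m

phi0 = np.ones_like(z)                           # barotropic mode: constant
phi1 = np.cos(np.pi * (z + 4000.0) / 4000.0)     # schematic first baroclinic mode
                                                 # (one node, d(phi)/dz = 0 at ends)

def transport(Pn_at_K2, phi, D):
    """Volume transport of Eq. (12) above depth -D for modal pressure Pn(x_K2),
    with Pn(x_w) = 0 assumed at the western boundary."""
    sel = z >= -D
    return Pn_at_K2 / (rho0 * f0) * np.trapz(phi[sel], z[sel])

print(transport(100.0, phi1, 4000.0))   # ~0: full-depth baroclinic transport cancels
print(transport(100.0, phi0, 4000.0))   # the barotropic mode carries the net transport
print(transport(100.0, phi1, 1000.0))   # nonzero above 1000 m, as used later
```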
To examine the characteristic curves of the barotropic Rossby wave in the study region, we calculated geostrophic contours, i.e., f/H, where H is water depth (not shown). Except near the Hawaiian-Emperor seamount chain, the Aleutian Arc, and the western and eastern boundaries of the ocean, the geostrophic contours around the latitudes of K2 and KNOT are largely parallel to the latitudinal lines. It should be noted that the zonal scale of the meridional deviation of the geostrophic contours is much smaller than that of the barotropic disturbances (>70000 km). Accordingly, the barotropic disturbances around the latitudes of K2 and KNOT excited by the wind stress changes in the interior region are considered to propagate zonally, without being significantly affected by the seamount chain.
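Such geostrophic contours are straightforward to reproduce from any bathymetry grid; a hedged sketch with a toy seamount chain (in practice the arrays lat, lon, and H would come from a real bathymetry dataset such as ETOPO):

```python
import numpy as np
import matplotlib.pyplot as plt

lat = np.linspace(30.0, 60.0, 121)               # [deg N]
lon = np.linspace(140.0, 235.0, 381)             # [deg E]
# Toy bathymetry: flat 5500-m ocean with a meridional "seamount chain" near 170E.
H = (5500.0 - 3000.0 * np.exp(-((lon[None, :] - 170.0) / 2.0) ** 2)) \
    * np.ones((lat.size, 1))

Omega = 7.2921e-5                                # Earth's rotation rate [s^-1]
f = 2.0 * Omega * np.sin(np.deg2rad(lat))[:, None]   # Coriolis parameter [s^-1]
plt.contour(lon, lat, f / H, levels=20, colors="k")
plt.xlabel("Longitude (deg E)"); plt.ylabel("Latitude (deg N)")
plt.title("Geostrophic (f/H) contours, toy bathymetry")
plt.show()
```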
Calculation of the wind-driven SSH variation
Using the WOCE P01 CTD data collected in 1999 in the region between 160 ∘ E and 170 ∘ E, we calculated the average potential density, σθ=ρ−1000 (kg m−3), down to the bottom depth of Db=4000 dbar (Fig. 4a). The potential density increases steeply with depth, particularly in the top 200 dbar, where the halocline exists and compensates for the temperature inversion in the dichothermal layer (Nagano et al. 2016). These near-surface characteristics are typical of the early summer density profile in the western subarctic North Pacific. The main pycnocline is present from just below the halocline to a depth of approximately 1500 dbar. In this paper, we discretized Eq. (3) in terms of z at intervals of 10 dbar and obtained Cn and ϕn by solving the resulting eigenvalue problem, as sketched below.
a Vertical profile of the potential density, σθ, in kg m−3 averaged along the WOCE P01 line (47 ∘ N) from 160 ∘ E to 170 ∘ E. b Vertical eigenfunctions of the barotropic mode (black line), and the first (red line), second (blue line), third (green line), and fourth (gray line) baroclinic vertical modes of the potential density profile. The vertical thin solid line indicates zero
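A minimal sketch of one standard way to discretize and solve the vertical structure problem of Eqs. (3) and (4) is given below; it assumes N2 has already been computed from the mean σθ profile and illustrates the method, not the code actually used in this study.

```python
import numpy as np
from scipy.linalg import eigh

def vertical_modes(N2, dz, nmodes=4):
    """Solve d/dz( N^-2 dphi/dz ) + phi/C^2 = 0 with dphi/dz = 0 at the
    surface and bottom (Eq. 3), on a uniform grid of spacing dz [m].

    N2 : squared Brunt-Vaisala frequency at the K grid levels [s^-2]
    Returns baroclinic gravity-wave speeds C_1..C_nmodes [m/s] and the
    eigenfunctions phi_0..phi_nmodes (columns), normalized as in Eq. (4).
    """
    K = len(N2)
    a = 0.5 * (1.0 / N2[:-1] + 1.0 / N2[1:]) / dz**2   # interface 1/N^2 weights
    A = np.zeros((K, K))
    for k in range(K):                    # conservative stencil; the one-sided
        if k > 0:                         # rows at the ends enforce no flux
            A[k, k-1] = a[k-1]; A[k, k] -= a[k-1]
        if k < K - 1:
            A[k, k+1] = a[k];   A[k, k] -= a[k]
    lam, phi = eigh(-A)                   # eigenvalues lam = 1/C^2 >= 0
    order = np.argsort(lam)               # lam ~ 0 is the barotropic mode, whose
    lam, phi = lam[order], phi[:, order]  # speed is set separately as sqrt(g*Db)
    C = 1.0 / np.sqrt(lam[1:nmodes + 1])
    Db = dz * (K - 1)
    for n in range(nmodes + 1):           # normalization of Eq. (4); sign arbitrary
        phi[:, n] /= np.sqrt(np.trapz(phi[:, n]**2, dx=dz) / Db)
    return C, phi
```

With N2 from the P01 mean profile on a 10-dbar grid, the returned C would feed directly into the rossby_speed_damping sketch above.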
The eigenfunctions of the lowest four baroclinic Rossby wave modes, i.e., ϕn (n=1, 2, 3, and 4), are shown in Fig. 4b, together with the vertical structure of the barotropic mode (n=0). The obtained ϕn crosses zero n times, i.e., it has n nodes. Although the potential density above the depth of the winter mixed layer (∼110 m) varies seasonally, the structure of the nth mode with n nodes below the depth of the winter mixed layer is likely to be almost independent of the CTD observation season.
The calculated phase speeds of the barotropic and baroclinic Rossby waves, cmn, are listed in Table 1; for the calculation of SSH at K2, we set f0=1.066×10−4 s−1 and β = 1.562×10−11 m−1 s−1 at the latitude of 47 ∘ N. The phase speeds of the barotropic (n=0) Rossby waves range, depending on the meridional mode, from 357.85 cm s−1 (m=1) to 23.85 cm s−1 (m=4), and the waves can propagate across the northern North Pacific basin in less than approximately 8 months. Discrepancies of phase speeds at KNOT and K1 due to the β-plane approximation fixed at K2 are estimated to be less than 6 %, so that the error in the travel time of disturbances across the ocean is shorter than the temporal interval of the τ data, i.e., 1 month. Baroclinic disturbances are transmitted primarily by the first (n=1) baroclinic Rossby waves with a phase speed of 0.4–0.5 cm s−1 but only slightly by the higher baroclinic mode waves (n=2, 3, and 4) because of their slow phase speeds and damping due to eddy dissipation. Although the discrepancies of the baroclinic phase speeds at KNOT and K1 due to the β-plane approximation are approximately 17 %, the spatiotemporal characteristics of the calculated SSH variations are not significantly distorted because their propagations are quite slow.
Table 1 Westward propagation speeds (cm s−1) of the Rossby waves, i.e., cmn, with respect to the barotropic/baroclinic vertical (n) and meridional (m) modes
Note that we also obtained the vertical structures of the baroclinic modes and the propagation speeds of Rossby waves using the WOD2013 potential density profile around K2. These are basically equivalent to those derived from the WOCE P01 CTD data. However, possibly because the WOD2013 climatology in the western subarctic region was constructed from a small number of data collected in layers deeper than 2000 dbar, the WOD2013 data produce artificial discontinuities in ϕ around a depth of approximately 2000 dbar. Therefore, for the present calculation of wind-driven SSH variations, we adopted the vertical structures and propagation speeds of the baroclinic modes calculated from the WOCE P01 CTD data.
Next, the Ekman vertical velocity, wE, was calculated from the NCEP/NCAR monthly mean momentum flux data. Because wE has a peak around the latitude of K2, we decomposed wE into the first to fourth meridional modes, as in Eq. (6) (m=1, 2, 3, and 4), setting L=14 ∘ latitude (equivalent to 1556 km). The mean zonal distributions of the amplitude of the meridional modes, i.e., Wm, are shown in Fig. 5a. The amplitude of the first meridional mode of wE (m=1, solid black line) is significantly larger than those of the other modes west of approximately 160 ∘ W owing to the strong westerly wind from the Eurasian continent and gradually decreases eastward. The second meridional mode (m=2, red line) has substantial amplitudes west of approximately 165 ∘ E, and the third (m=3, green line) and fourth (m=4, blue line) modes have nearly equivalent amplitudes to the residual error (dashed line) defined as the root-mean-squared difference between the raw and four mode-based synthetic values of wE.
Longitudinal distributions of a the mean value and b the standard deviation of Wm (10−6 m s−1) in Eq. (6). Values of the first (m=1), second (m=2), third (m=3), and fourth (m=4) modes are shown by black, red, green, and blue lines, respectively. In panel a, the mean residual error (10−6 m s−1) of the Ekman pumping velocity, wE, for the four meridional modes is indicated by a dashed line
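Because the sine modes are orthogonal over [0, L], the amplitudes Wm follow from a simple projection; a minimal sketch (the Gaussian wE profile is a toy stand-in for the NCEP/NCAR field at one longitude and month):

```python
import numpy as np

def meridional_modes(wE, y, L, nmodes=4):
    """Project wE(y) onto sin(m*pi*y/L), m = 1..nmodes, as in Eq. (6).
    Uses the orthogonality relation: integral of sin^2(m*pi*y/L) over [0,L] = L/2.
    Returns the amplitudes W_m and the RMS residual of the truncated synthesis."""
    W = np.array([(2.0 / L) * np.trapz(wE * np.sin(m * np.pi * y / L), y)
                  for m in range(1, nmodes + 1)])
    synth = sum(W[m - 1] * np.sin(m * np.pi * y / L) for m in range(1, nmodes + 1))
    return W, np.sqrt(np.mean((wE - synth) ** 2))

L = 1556e3                                               # 14 deg latitude [m]
y = np.linspace(0.0, L, 57)
wE = 1e-6 * np.exp(-((y - 0.55 * L) / (0.2 * L)) ** 2)   # toy peak near K2
W, residual = meridional_modes(wE, y, L)
```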
As shown by the standard deviation of Wm in Fig. 5b, the amplitudes of the meridional modes vary most strongly at the larger meridional scales expressed by the lower modes. In particular, a significant peak in the standard deviation of the first meridional mode (m=1, black line) variation exceeding 0.4×10−6 m s−1 is present between approximately 150 ∘ W and 160 ∘ W, where the westerly wind on the southern limb of the Aleutian Low prevails and wind stress curl is correlated with the SSH variation associated with the WSAG change (Fig. 2). The second meridional mode (m=2, red line) has a peak variation near a longitude of approximately 167 ∘ E, the magnitude of which is greater than that of the first meridional mode (black line). This peak variation of the second mode coincides with the variation in wind stress curl in the region southeast of the Kamchatka Peninsula correlated with the WSAG change (Fig. 2). The variations in the third (m=3, green line) and fourth (m=4, blue line) meridional modes are significantly smaller than those in the lower modes. Because the meridional scales of the higher modes are similar to or smaller than the resolution of the NCEP/NCAR wind stress data (∼2.5 ∘), we used the lowest four meridional modes to calculate the variations in SSH and potential density.
Equation (9) shows that the variation in total pressure can be calculated by accumulating the linear Rossby wave responses to the changes in wind stress curl from the past up to time t and from the eastern end to location x. It should be noted that because we represented a single fluctuation of wE by the Heaviside step function of time in Eq. (6), the amplitudes of the variations in pressure and SSH are proportional to the derivative of W with respect to time, i.e., ∂W/∂t′ in Eq. (9) (because the derivative of the step function is the δ function). Therefore, to calculate the amplitude of the Rossby waves excited by the wind stress curl changes, we computed the differences in W between successive months for every longitudinal grid point.
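In discrete form, this accumulation is a double sum over forcing times and longitudes of the damped step response in Eq. (8); a brute-force O(nt²·nx) sketch for a single (m, n) mode (the prefactor amp stands in for ρ0f²ϕn(0)/(βD), and dW holds the month-to-month increments of Wm):

```python
import numpy as np

def modal_pressure(dW, x, t, x_target, c, r, amp):
    """Accumulate the Rossby wave responses of Eq. (9) at longitude x_target
    for one (m, n) mode.

    dW : (nt, nx) month-to-month increments of W_m [m/s]
    x  : zonal grid [m], increasing eastward; t : time grid [s]
    c, r : propagation speed [m/s] and damping rate [s^-1] of the mode
    amp  : prefactor rho0 f^2 phi_n(0) / (beta D), collapsed into one number
    """
    dx = x[1] - x[0]
    east = x >= x_target            # Eq. (9): forcing from x to the eastern end
    xs = x[east]
    P = np.zeros(len(t))
    for i in range(len(t)):         # forcing time t'
        for j in range(i, len(t)):  # response time t >= t'
            lag = t[j] - t[i]
            steady = (x_target >= xs).astype(float)      # H(x - x')
            front = ((x_target - xs + c * lag) >= 0.0)   # wavefront has arrived
            green = steady - np.exp(-r * lag) * front    # braces of Eq. (8)
            P[j] += amp * np.sum(dW[i, east] * green) * dx
    return P
```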
To determine the parameters related to damping due to vertical and horizontal eddy diffusion, we calculated variations in SSH by using Eq. (10) and setting DH (the horizontal eddy diffusion coefficient) to 0 m2s−1, 1 m2s−1, 10 m2s−1, 50 m2s−1, and 100 m2s−1, and B (the vertical eddy dissipation parameter) to 0 m2s−3, 1×10−8 m2s−3, 5×10−8 m2s−3, 1×10−7 m2s−3, and 2×10−7 m2s−3. Note that, in comparison with other sites such as K1 and KNOT, the SSH variation at K2 is considered to be driven purely by the wind stress change to the east, without being affected by topographic blocking and other gyre variations. Accordingly, we determined the parameters based on the SSH variation at K2. Correlation coefficients between the observed and calculated SSH are listed in Table 2; to focus on decadal variations, we smoothed the calculated SSH time series by using a 35-month running mean filter. The highest correlation is found to be 0.79 for the case of DH=10 m2s−1 and B=1×10−7 m2s−3, which is higher than the 90% confidence level (0.73) for 4 equivalent degrees of freedom based on Student's t test. The estimated SSH variations are not sensitive to the diffusion parameters within the ranges of 0<DH<100 m2s−1 and 5×10−8<B<2×10−7 m2s−3 (Table 2). As described below, reasonable decadal increases in SSH were computed at other sites in the western subarctic region. Therefore, we analyzed the SSH variations based on the values of DH (=10 m2s−1) and B (=1×10−7 m2s−3) that provide the highest correlation. The value of DH for the subpolar region is two orders of magnitude smaller than that for the subtropical region evaluated by Kawabe (2000). This is consistent with the fact that spatial eddy mixing scales in the subpolar region are smaller than those in the subtropical region (Stammer 1998). Meanwhile, the values of B for both regions are similar.
Table 2 Dependence of the correlation coefficients between the observed and simulated SSH at station K2 on the eddy dissipation coefficients B (m2s−3) and DH (m2s−1)
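The parameter selection just described is a small grid search; a hedged sketch, assuming a function ssh_model(DH, B) wrapping the full Eq. (10) calculation at K2 and an observed monthly series ssh_obs (both hypothetical names):

```python
import numpy as np

# ssh_model and ssh_obs are assumed to be defined elsewhere (hypothetical).
def running_mean(x, window=35):
    """Centered running mean over `window` months (valid part only)."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

DH_grid = [0.0, 1.0, 10.0, 50.0, 100.0]          # [m^2 s^-1]
B_grid  = [0.0, 1e-8, 5e-8, 1e-7, 2e-7]          # [m^2 s^-3]
best = max(
    ((DH, B, np.corrcoef(running_mean(ssh_model(DH, B)),
                         running_mean(ssh_obs))[0, 1])
     for DH in DH_grid for B in B_grid),
    key=lambda item: item[2],                    # maximize the correlation
)
print("best DH, B, correlation:", best)
```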
The calculated wind-driven SSH variation at K2 smoothed by a 35-month running mean filter is shown by the red thick line in Fig. 7a; for comparison, we display the altimetric SSH variation (blue thick line). The standard deviation of the calculated SSH variation during the period of the altimetric SSH observation is 4.0 cm, which is comparable to, though slightly larger than, the standard deviation of the observed SSH variation (3.8 cm). It should be noted that the year of the background potential density observation at the WOCE P01 line, i.e., 1999, nearly corresponds to the midpoint of the decadal increase in SSH at K2. This suggests that the selection of the background density profile observed in 1999 is reasonable for the SSH calculation during the study period.
The effects of eddy dissipation on the damping rates are strongly dependent on the baroclinic modes but are mostly independent of the meridional modes (Kawabe 2000). Because the amplitude of the SSH disturbances due to the barotropic Rossby waves (n=0) is not significantly affected by the dissipation, the damping rates, defined as exp(−r× 1 year) in this paper, are nearly unity (Table 3). Due to the eddy dissipation, the amplitude of the first baroclinic Rossby waves (n=1) attenuates to approximately 40% after 1 year of propagation following excitation. For the higher baroclinic Rossby modes, i.e., the second (n=2), third (n=3), and fourth (n=4) modes, variations in SSH nearly vanish within 1 year. Because the propagation speeds of these higher baroclinic Rossby wave modes are slow as described above (Table 1), their contributions are localized around the forcing regions.
Table 3 Damping rates of the SSH disturbances per year, i.e., exp(−rmn×1 year), with respect to the vertical and meridional modes for the case of B=1×10−7 m2s−3 and DH=10 m2s−1
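Damping rates like those in Table 3 follow directly from rmn; reusing the rossby_speed_damping sketch above with placeholder gravity-wave speeds (not the values actually computed from the P01 profile):

```python
# Illustrative C_n [m/s] for n = 0..4; C_0 = sqrt(g * Db) for the barotropic mode.
for n, Cn in enumerate([198.0, 2.0, 1.0, 0.7, 0.5]):
    rates = []
    for m in (1, 2, 3, 4):
        _, r = rossby_speed_damping(Cn, m, kappa=0.0)
        rates.append(np.exp(-r * year))          # amplitude left after 1 year
    print(f"n={n}:", " ".join(f"{d:.2f}" for d in rates))
```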
We performed comparisons between the calculated and altimetric SSH variations at other sites. The variation in SSH at K1 (marked by the northernmost star in Fig. 3) was calculated using the wE along the zonal line from 135 ∘ W to K1, and its correlation coefficient (0.43) is much lower than the 90% confidence level. As described below, the SSH variation at K1 appears to be excited west of the southern end of the Aleutian Arc (longitude ∼180 ∘). Accordingly, we calculated the SSH variation at K1 forced by the wE in the region west of 180 ∘ alone. As a result, the correlation coefficient rose to 0.58. Therefore, the SSH variation at K1 is thought not to be affected by changes in wind stress originating east of the Aleutian Arc but to be forced primarily by the local change in wind stress to the west. However, note that the standard deviation of the calculated SSH (3.0 cm) is much smaller than that of the observed SSH (5.2 cm). It is likely that the interannual to decadal SSH variation at K1 is significantly affected by changes in forcings other than wind stress.
Because station KNOT is located to the west of 160 ∘ E (Fig. 3), i.e., outside the calculation region, we cannot compute the SSH variation at this site. Instead, we obtained the SSH variation at a site east of KNOT, i.e., 44 ∘ N 160 ∘ E, and compared it with the observed SSH variation at this location. The correlation coefficient between the calculated and observed variations in SSH is 0.51, which is lower than the 90% confidence level, presumably due to the influences of the subtropical water from the south (Tsurushima et al. 2002); nevertheless, the calculated SSH increases, consistent with the increase in observed SSH that we focus on in this paper. The decadal increase in SSH east of KNOT may be due to the change in wind stress in the subpolar region, as will be discussed below.
SSH disturbances can also propagate from the far east, outside the calculation region. Disturbances generated in the equatorial region due to El Niño propagate along the equator and the North American coast as equatorial and coastal Kelvin waves, and proceed westward as baroclinic Rossby waves (e.g., Enfield and Allen 1980; Jacobs et al. 1994). Other types of disturbances on interannual to decadal timescales may also be excited by various kinds of forcing. Disturbances excited in the narrow region near the North American coast should be carried by baroclinic Rossby waves with a horizontal scale of the internal Rossby radius of deformation. Therefore, such disturbances decay just west of the coast due to the significant damping (Qiu et al. 1997) and do not affect the SSH variation in the western subarctic region.
It should be noted that, as described below, this simple wind-driven Rossby wave model exhibits some discrepancies between the simulated and observed SSH variations. The discrepancies are likely attributable to two main causes. First, topographic effects such as the joint effect of baroclinicity and bottom relief (JEBAR), through which the ocean responds slowly to rapid forcing changes, were not taken into account in the model. As speculated by Frankignoul et al. (1997), the ocean can respond slowly and barotropically to shorter timescale wind stress changes and produce variations on longer timescales. In the present model, the atmospheric variations caused by El Niño excite substantial depressions in SSH, but such depressions were not observed prominently by the satellite altimetry. Accordingly, the simulated El Niño-related SSH depressions are likely exaggerated.
Second, the damping parameters were assumed to be uniform over the whole subpolar region, whereas the propagation characteristics of SSH fluctuations appear to differ with longitude. In Fig. 6, we show the spatial distribution of the correlation coefficient between the altimetric and simulated decadal SSH variations in the subpolar region. Significantly high-correlation (>0.73) areas are present in the northwestern and eastern parts of the calculation region. It should be noted, however, that a low-correlation area (reaching minimum values of approximately − 0.2) extends southwestward from 50 ∘ N, 160 ∘ W to 40 ∘ N, 165 ∘ E, bounded by the western and eastern high-correlation areas. This low-correlation pattern does not follow the major bottom topographic features such as the Hawaiian-Emperor seamount chain (Fig. 3) but rather resembles a propagating baroclinic Rossby wave pattern. Because we set the damping parameters in the present model to obtain a reasonable SSH variation in the WSAG region, disturbances propagating as baroclinic Rossby waves from the eastern to the central subpolar North Pacific might be more strongly suppressed than in the real ocean.
Map of the correlation coefficient between the altimetric and simulated SSH variations in the region of 40–50 ∘ N, 160 ∘ E–135 ∘ W. Contour interval is 0.2. The 90% confidence level (0.73) is illustrated by green contours. Variations in SSH smoothed by a 35-month running mean filter were used to calculate the correlation coefficients. Locations of stations K1, K2, and KNOT are indicated by stars
Wind-driven variations in a SSH (cm) at station K2 (red line) and b absolute geostrophic volume transport (Sv) down to a depth of 4000 dbar to the east of K2 (black line). Time series smoothed by 15- and 35-month running mean filters are indicated by thin and thick lines, respectively. Positive values in panel b indicate northward transport. For comparison, the observed SSH at K2 and the North Pacific Index (NPI) are shown by blue and green lines, respectively, in panel a. The NPI was smoothed by a 15-month running mean filter and normalized by the standard deviation
Nevertheless, fluctuations in SSH excited in the eastern high-correlation area rapidly propagate westward as barotropic Rossby waves through the low-correlation area in Fig. 6 and contribute to the simulated SSH variations in the high-correlation western subarctic region; hence, the wind-driven SSH variation component in the target region, i.e., the western subpolar North Pacific, is considered to be appropriately computed.
Wind-driven SSH and volume transport changes
Along with the decadal increase in altimetric SSH at K2 (blue thick line in Fig. 7a), the wind-driven SSH at that site (red thick line) increased during the altimetry observation period. The increase in SSH is found to have begun in the late 1990s, consistent with the result of Nagano et al. (2016). To extract the linear trend while excluding sharp interannual variations, we examined the statistical significance using the Mann-Kendall trend test (e.g., Wilks 2019). Kendall's τ coefficient of the calculated wind-driven SSH change at the site during the period of 1993–2014 is 0.43, which is higher than the 99% confidence level (0.11); in other words, the increasing trend is statistically significant, and the SSH change can be approximated by a linear trend. Using Sen's method (Sen 1968), the linear trend was computed to be 0.54 cm year−1, which is of the same order of magnitude as the altimetric SSH trend of 0.66 cm year−1 during the same period.
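Both statistics are easy to reproduce; a minimal sketch (the toy series is a stand-in for the smoothed wind-driven SSH at K2, and for strongly autocorrelated monthly data the significance threshold must account for the reduced effective sample size, as done in the text):

```python
import numpy as np
from scipy.stats import kendalltau

def mann_kendall_sen(y, t):
    """Kendall's tau of series y against time t, and Sen's slope, i.e., the
    median of all pairwise slopes (Sen 1968)."""
    tau, p_value = kendalltau(t, y)
    i, j = np.triu_indices(len(y), k=1)
    return tau, p_value, np.median((y[j] - y[i]) / (t[j] - t[i]))

# Toy check: a 0.5 cm/yr trend plus noise on a monthly grid over 1993-2014.
t = np.arange(1993, 2015, 1.0 / 12.0)
y = 0.5 * (t - t[0]) + np.random.default_rng(0).normal(0.0, 1.0, t.size)
tau, p, slope = mann_kendall_sen(y, t)
print(f"tau = {tau:.2f}, p = {p:.3g}, Sen slope = {slope:.2f} cm/yr")
```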
At K1 and the site east of KNOT, the wind-driven increases in SSH were calculated to be 0.13 and 0.49 cm year−1, respectively (red lines in Fig. 8a and c). According to the above Kendall's criterion, these trends are also statistically significant. In particular, the wind-driven SSH elevation at the site east of KNOT is larger than the observed trend (0.38 cm year−1) (Fig. 8c). The magnitudes of the wind-driven SSH increases at K2 and the site east of KNOT are sufficiently large to explain the observed northward shrinkage of the WSAG. Therefore, it is suggested that the decadal SSH change is mostly attributable to the change in wind stress. Meanwhile, at K1, the calculated trend is much smaller than the observed value (0.67 cm year−1) (Fig. 8a), the northern part of the circulation being strongly affected by factors other than wind stress. On the basis of the decadal wind-driven SSH changes, the onset of the SSH increase occurred progressively from south to north, in 1992, 1997, and 2004 at KNOT, K2, and K1, respectively (arrows in Fig. 8).
Wind-driven (red lines) and altimetry-based (blue lines) variations in SSH at stations a K1, b K2, and c east of KNOT. Time series smoothed by 15- and 35-month running mean filters are denoted by thin and thick lines, respectively. Arrows show the estimated times at the respective latitudes when SSH began to increase due to the WSAG shrinkage
The intensification of the Aleutian Low via the atmospheric teleconnection by El Niño is illustrated by depressions in the area-weighted sea level atmospheric pressure in the region of 30–65 ∘ N, 160 ∘ E–140 ∘ W, called the North Pacific Index (NPI) (Trenberth and Hurrell 1994). In association with the intensification of the Aleutian Low in winter, interannual (∼3-year) timescale depressions were produced in the calculated SSH at K2 (red thin lines in Figs. 7a and 8b) and at the site east of KNOT (red thin line in Fig. 8c). However, no clear corresponding depressions were observed in the altimetric SSH at K2 and KNOT. In Fig. 9, we display Hovmöller diagrams of the altimetric SSH variations at the latitudes of K1, K2, and KNOT. In the altimetric SSH, negative disturbances were generated in 1995, 1998, 2003, and 2010 at K2 and KNOT (arrows in Fig. 9b and c), when the Aleutian Low was intensified by El Niño and induced positive fluctuations in wE in regions to the east of approximately 180 ∘ (arrows in Fig. 10b and c). Similar negative SSH disturbances are found also at K1 in the El Niño years except 1995 (arrows in Fig. 9a), and the corresponding wE fluctuations occurred to the east of approximately 160 ∘ W (arrows in Fig. 10a). In the satellite altimetry, the El Niño-related negative SSH disturbances at the latitudes of KNOT and K2 reached the sites in 2003 and 2010, respectively, but the other negative SSH disturbances appear to be canceled out around or to the east of the sites by regional positive disturbances to the west of ∼170 ∘ W (K1 and K2) or ∼180 ∘ (KNOT).
Hovmöller (longitude-time) diagrams of the altimetric SSH anomalies (cm) at the latitudes of stations a K1 (51 ∘ N), b K2 (47 ∘ N), and c KNOT (44 ∘ N). Smoothing was performed using a 15-month running mean filter. The vertical dotted lines in panels (a), (b), and (c) indicate the longitudes of K1, K2, and KNOT, respectively. Arrows show El Niño-related SSH depressions
a–c Same as Fig. 9 but for inverted Ekman vertical velocity, −wE, in 10−6m s−1 based on the NCEP/NCAR wind stress data from 1979 to 2014. Negative value indicates upward velocity, i.e., Ekman suction. Arrows show El Niño-related enhancements in Ekman suction. Vertical dashed lines and connecting arrows denote westernmost longitudes of significantly negative wE and their westward migrations in the 1990s, respectively
Meanwhile, the disturbances in SSH excited by the El Niño-related enhancement of the Ekman suction, i.e., the increases in wE, at the latitudes of K2 (Fig. 11b) and KNOT (Fig. 11c) are calculated to reach the western end of the calculation region (arrows), and they appear as the significant SSH depressions in Fig. 8b and c. At the latitude of K1 (Fig. 11a), the El Niño-related SSH dips were generated to the east of 180 ∘ and sometimes to the west. Also, in the opposite phase of El Niño–Southern Oscillation (ENSO), i.e., La Niña, elevations in SSH are excited at the latitudes of K2 and KNOT by the relaxations of the Ekman suction. The spatiotemporal patterns of the total wind-driven SSH variations at these latitudes (Fig. 11) are fairly similar to, and dominated by, those of the barotropic SSH variations (Fig. 12). Remarkably, the ENSO-timescale variations are nearly simultaneously excited by the barotropic (n=0) Rossby wave response to the changes in wind stress to the east of approximately 180 ∘ longitude, which is a region of significant wind variation due to the change in the Aleutian Low (black line in Fig. 5b). The amplitudes of the wind-driven SSH variations diminish farther east.
a–c Same as Fig. 9 but for the SSH variations calculated from the NCEP/NCAR wind stress data from 1979 to 2014. In panel a, the wind stress data in the region east of 180 ∘ were not used for the calculation of the SSH anomaly in the western region. Arrows show El Niño-related SSH depressions
a–c Same as Fig. 9 but for the SSH variations due to the barotropic Rossby wave mode (n=0) forced by the wind stress. In panel a, the wind stress data in the region east of 180 ∘ were not used for the calculation of the SSH anomaly in the western region. Arrows show El Niño-related SSH depressions
Probably because topographic effects such as the JEBAR were neglected, the modeled ocean produced significant SSH depressions, which were not comparably observed by the altimetry, in response to the short-term El Niño-related intensification of the Aleutian Low. Meanwhile, in the case of La Niña, discrepancies between the simulated and observed SSH changes are not remarkable. The wind variations in the subpolar region induced through the atmospheric teleconnections from the tropical region consist of fluctuations on timescales shorter than approximately 10 days (e.g., Feldstein 2000). In addition, El Niño decays more rapidly than La Niña (e.g., Ohba and Ueda 2009; Hu et al. 2017). Therefore, it should be noted that the ENSO-timescale variation in wind stress forcing is caused not by individual pulses of the atmospheric anomalies but by their cumulative modulation; more (less) frequent tropical convective activity triggers extratropical wind anomalies in the central to eastern North Pacific more (less) often during El Niño (La Niña) years. These differences might be the reason why the ocean response to El Niño differs from that to La Niña.
From the mid-1990s to the late 1990s, negative wE with magnitudes exceeding 0.5×10−6 m s−1 was mostly induced to the east of approximately 140 ∘ W and 180 ∘ at the latitudes of K2 and KNOT, respectively; note that, in Fig. 10, the sign of wE was reversed to match the coloring of SSH as in Figs. 9 and 11. In the late 1990s, the negative wE abruptly migrated to ∼180 ∘ longitude at the latitude of K2 (Fig. 10b) and to ∼170 ∘ E at the latitude of KNOT (Fig. 10c), as illustrated by the vertical dashed lines and connecting arrows in the figure. After the late 1990s, strongly negative wE frequently occurred from these longitudes to approximately 140 ∘ W. The frequent relaxation of the Ekman suction is attributed to the attenuation of the Aleutian Low due to the increased occurrence of substantial La Niña events from the late 1990s to the mid-2000s.
Associated with this westward jump of the forcing relaxation in the late 1990s, positive SSH anomalies seem to have propagated westward from approximately 150 ∘ W at the latitudes of K2 and KNOT. Additionally, at the latitude of K1, a westward jump of the wind forcing relaxation was observed in the mid-1990s (Fig. 10a). Although the amplitude of the forcing change at the latitude of K1 is smaller than those at the latitudes of K2 and KNOT, the corresponding SSH elevation at K1 was as great as those at the other latitudes. The propagation speed from 150 ∘ W to 180 ∘ is estimated to be approximately 1.3 cm s−1, which is much slower than the estimated propagation speed of the barotropic Rossby waves and much faster than that of the baroclinic Rossby waves reported by Chelton and Schlax (1996) (Table 1). The background flow in this region is eastward at up to ∼15 cm s−1 (e.g., Roden 2000), opposite to the propagation of Rossby waves, and the northward shoaling of the main pycnocline is quite small (e.g., Roden 2000). Even taking the background state into account, the propagation of this disturbance observed by the altimetry cannot be interpreted as free baroclinic Rossby waves alone. Presumably, this propagating signal results from a superposition of SSH fluctuations due to barotropic and baroclinic Rossby waves forced by the westward jump of the Ekman suction relaxation, although, in the model, the disturbance is rapidly damped just east of ∼160 ∘ W due to the strong dissipation and is rather depressed by the 1997/1998 El Niño.
The frequent relaxation of the Ekman suction after the late 1990s caused the long-term elevation in SSH at the latitudes of K2 (Figs. 11b and 12b) and KNOT (Figs. 11c and 12c). In particular, the attenuations of the Aleutian Low due to the 2005/2006 and 2007/2008 La Niña events contributed greatly to the decadal SSH elevation, i.e., the substantial northward shrinkage of the WSAG, in the mid to late 2000s. In addition, there is a long-term decrease in wE at the latitudes of K2 and KNOT to the west of ∼180 ∘ longitude due to the weakening of the westerly wind (Fig. 10b, c), which contributes to the increase in SSH at K2 and KNOT.
Using Eq. (12), we estimated the geostrophic volume transport relative to a depth of 1000 dbar in the western boundary region of the WSAG between the Kuril Islands and K2, as Nagano et al. (2016) calculated. The standard deviation of the volume transport is approximately 0.1 Sv (where 1 Sv =106 m3 s−1). On the basis of AGEM-based hydrographic data, Nagano et al. (2016) calculated the geostrophic volume transport in the top 1000 dbar layer at a westward line from the point of the maximal amplitude of the SSH variation (53 ∘ 38′ N, 164 ∘ 30′ E). Associated with the northward shrinkage of the WSAG, the AGEM-based volume transport of the gyre changed from approximately 1.0 Sv southward in 1996 to 0.5 Sv northward in 2003 (see the thick dashed line in Fig. 9 of Nagano et al. 2016). The variabilities in the AGEM-based and wind-driven volume transports are comparable to each other but are one or two orders of magnitude smaller than the Sverdrup transport of 5 Sv (July) to 40 Sv (February) by Favorite et al. (1976) and the climatological wintertime Sverdrup transport of ∼40 Sv by Ishi and Hanawa (2005). The wind-driven volume transport in the layer from the sea surface to 4000 dbar, i.e., the full-depth wind-driven volume transport, varied with a large standard deviation of 15.4 Sv. The small variability in the AGEM-based transport is attributed to the small vertical displacement of the main pycnocline. Therefore, the baroclinic variability might indeed be a minor component of the decadal wind-driven SSH and volume transport variations in the western subarctic region, as suggested by the present SSH computation.
The full-depth volume transport of the extended WSAG (Fig. 7b) was approximately 20 Sv southward in 1994, when Nagano et al. (2016) examined the streamfunction from the sea surface to a depth of 1000 dbar, and it was larger in the period prior to the early 1990s. The volume transport in the extended gyre state is consistent with the estimated values of the Sverdrup transport (5–40 Sv) in past studies. After the early 2000s, the East Kamchatka Current returned northeastward to the north of K2 due to the northward shrinkage of the WSAG; therefore, the volume transport vanished in the early 2000s and was occasionally positive, i.e., northward. This is consistent with the distribution of the streamfunction of the diminished WSAG in 2004 prepared by Nagano et al. (2016).
At the latitude of K1, near the west coast of North America (∼140 ∘ W), substantial Ekman pumping (downward velocity) is present throughout the study period but is spatially quite limited, yielding a significant SSH elevation to the west. Even though barotropic disturbances are generated by the wind stress changes in the central region between approximately 140 ∘ W and 180 ∘ (Fig. 12a), they are likely to be shielded by the Aleutian Arc, as described above. The disturbances locally generated in the area between K1 and longitude 180 ∘ affect SSH at K1.
Figure 13 shows pentad mean wind-driven SSH maps for 1995–1999, 2000–2004, and 2005–2009. During 1995–1999, just after the onset of the gyre shrinkage (Fig. 13a), the cyclonic WSAG is produced in the western subarctic region, as illustrated by low SSH to the west of approximately 175 ∘ W. The WSAG is considerably reduced in extent, being present only around the area east of station K2, during 2000–2004 (Fig. 13b), and eventually disappears around K2 during 2005–2009 (Fig. 13c). The gyre shrinkage is associated with a substantial anticyclonic circulation centered around 43–44 ∘ N, 170–175 ∘ E, southeast of the WSAG (Fig. 13b, c), which developed from a weak SSH peak around 44 ∘ N, 175 ∘ E during 1995–1999 (Fig. 13a). Owing to the meridional scales of the WSAG and the anticyclonic circulation (∼500 km), the SSH change in the western subarctic region is appreciably affected by the second meridional mode (m=2). Despite prominent variations with smaller spatial scales (100–200 km) in the altimetric SSH, the simulated wind-driven change of the WSAG is basically consistent with the gyre shrinkage observed by the satellite altimetry (Fig. 14).
Maps of wind-driven SSH averaged during a 1995–1999, b 2000–2004, and c 2005–2009. Contour interval is 2 cm. Stars indicate locations of stations K1, K2, and KNOT
a–c Same as Fig. 13, but for altimetric SSH anomaly from the mean state
However, unlike the calculated wind-driven SSH, the altimetric SSH did not change significantly or even decreased from 1995–1999 (Fig. 14a) to 2000–2004 (Fig. 14b) in the central area where the correlation between these SSH fields is low (Fig. 6). Intriguingly, the anticyclonic circulation is accompanied by a northeastward current at the western edge of the circulation (Fig. 13). The corresponding northeastward current anomalies are found in the altimetric SSH anomaly maps (Fig. 14) and coincide with a northeastward jet from around 43 ∘ N, 160 ∘ E to 45 ∘ N, 170 ∘ E (J2) reported by Isoguchi et al. (2006).
Qiu (2002) reported that the SSH variation in the initial phase of the WSAG shrinkage is caused by the baroclinic Rossby wave adjustment to the change in wind stress. Meanwhile, the present calculation of the wind-driven SSH variation demonstrates that the barotropic Rossby wave adjustment is primarily responsible for the northward shrinkage of the gyre. We attribute this difference to the different dissipation rates used in the models: the dissipation rate used in Qiu (2002) corresponds to a much weaker damping (6-year timescale) than that in the present model (approximately 1-year damping for the first baroclinic mode) (Table 3). The SSH variation which Qiu (2002) used to determine the dissipation rate is dominated by the variation on the annual timescale (Fig. 19b in Qiu 2002), whereas we determined the eddy dissipation coefficients by fitting the SSH variations on interannual to decadal timescales; this is likely why the dissipation rate in Qiu (2002) is much smaller. If we set B to a lower value, 1×10−8 m2s−3, without changing the other parameters, we obtain a smaller rate of increase in SSH at K2 (0.35 cm year−1). Thus, the wind-driven decadal increase in SSH in the western subarctic region is considered to be more accurately estimated in the present model with the higher dissipation, which substantially attenuates the baroclinic disturbances in the western region.
Recall that the eigenfunction of the barotropic mode is vertically constant (the black line in Fig. 4b), so that its vertical gradient, i.e., dϕ0/dz, is zero; the barotropic Rossby wave response to the change in wind stress is therefore not associated with any density change. Conversely, disturbances due to the baroclinic Rossby waves are expected to be associated with density changes because dϕn/dz≠0 (n=1, 2, 3, and 4). In particular, the vertical gradients of the modes higher than the first are remarkable just beneath a depth of approximately 110 dbar, where the winter mixed layer does not reach (Wakita et al. 2017) and the variations in the sea surface heat and freshwater fluxes are considered to be ineffective. The baroclinic Rossby wave responses and the associated potential density changes will be discussed in the next subsection.
Wind-driven potential density change
The amplitudes of the SSH variations due to the baroclinic Rossby wave responses to the changes in wind stress at the latitudes of K1, K2, and KNOT (Fig. 15) are smaller than those of the SSH variations due to the barotropic response (Fig. 12) but are larger than the error of the satellite altimetric observations (∼3 cm) (Le Provost 2001). The wind-driven baroclinic SSH variations are primarily generated not by remote wind stress changes via westward propagations of Rossby waves but by local changes in the wind stress. As demonstrated at the latitude of K2, the baroclinic Rossby wave response to the wind stress change due to the weakening of the westerly wind to the west of ∼180 ∘ longitude (Fig. 10b) contributes to the decadal increase in SSH in the western subarctic region during the calculation period (Fig. 15b). This is consistent with the high correlation near K2 between wind stress curl and the SSH change in association with the WSAG shrinkage (Fig. 2).
a–c Same as Fig. 9 but for the SSH variations due to the baroclinic Rossby wave modes (n=1, 2, 3, and 4) forced by the wind stress. Arrows show El Niño-related SSH depressions. In panel a, the wind stress data in the region east of 180 ∘ were not used for the calculation of the SSH anomaly in the western region
In Fig. 16, we show the first (n=1) to fourth (n=4) baroclinic Rossby wave mode variations in SSH at K2. The spatiotemporal patterns of the variations due to the second (n=2) to fourth (n=4) modes (Fig. 16b, c, and d), whose amplitudes are proportional to ϕn(0)2 because of the absence of significant propagation, are similar to each other but rather different from that of the propagating first (n=1) mode variation (Fig. 16a). Notably, the SSH elevations due to the second and third baroclinic Rossby wave modes are more prominent than those due to the other baroclinic modes and coincide well with anomalously negative forcing in space and time (meshes in Fig. 16b and c). Meanwhile, the SSH elevations by the first baroclinic Rossby wave mode are rapidly dissipated away from the negative forcing times and regions (meshes in Fig. 16a), resulting in smaller disturbance amplitudes than those of the higher baroclinic modes. There are reports on SSH variabilities due to baroclinic Rossby wave modes higher than the first. For instance, an analysis of the SSH variability in the South Pacific by Maharaj et al. (2007) indicates that the higher baroclinic mode waves contribute to the variations even to the south of 40 ∘ S. Remarkably, in Maharaj et al. (2007), we recognize areas where the amplitudes of the second baroclinic mode variations are greater than those of the first baroclinic mode variations. Their report supports our finding that, also in the western subarctic region, the disturbances due to the higher baroclinic mode waves can have greater SSH amplitudes than those due to the first baroclinic mode waves.
Hovmöller diagrams of SSH (cm) at the latitude of station K2 (47 ∘ N) due to the a first (n=1), b second (n=2), c third (n=3), and d fourth (n=4) baroclinic Rossby wave modes forced by the wind stress. Arrows indicate the El Niño-related SSH depressions shown in Fig. 15b. Meshing denotes Ekman vertical velocity anomalies from the mean value at each longitude smaller than − 0.2×10−6m s−1. The vertical dashed lines indicate the longitude of station K2 (160 ∘ E)
With respect to the amplification of the higher baroclinic mode variations, it is worthwhile to point out the coincidence between the wE variation and the SSH variations due to the second, third, and fourth baroclinic modes. Such persistent wind stress forcing is considered to amplify disturbances until a balance between the forcing and damping is reached. The large amplitudes of the slowly propagating (or quasi-stationary) disturbances of the higher baroclinic modes can be interpreted as a quasi-resonant amplification (QRA). This amplification mechanism has been proposed to explain the reinforcement of quasi-stationary atmospheric Rossby wave disturbances with relatively high horizontal wavenumbers of 6–8 that possibly bring about frequent extreme meteorological events in recent years (e.g., Petoukhov et al. 2013; Coumou et al. 2014; Kornhuber et al. 2017). In the case of mid- and high-latitude oceans, fluctuations with large vertical scales propagate rapidly as the first baroclinic Rossby waves after being excited by wind stress changes and are subject to substantial damping. In contrast to these fast-traveling fluctuations, the QRA mechanism can act on the quasi-stationary fluctuations of the higher baroclinic modes because they continue to be excited as long as the forcing persists and are reinforced despite the presence of the damping.
Corresponding to the intensification of the Aleutian Low associated with El Niño (indicated by depressions in the NPI in Fig. 7a), negative SSH anomalies are found in the region between 170 ∘ E and 170 ∘ W at the latitudes of K2 and KNOT in 1981, 1983, 1987, 1993, 1997, 2003, and 2010 (arrows in Fig. 15b and c). The SSH disturbances due to the first baroclinic mode (n=1) propagate westward (Fig. 16a) at the estimated propagation speed (0.45 cm s−1, Table 1), even though the amplitude of the variation is significantly smaller (<1 cm) than those of the other modes. Most of these ENSO-timescale baroclinic disturbances decayed significantly and did not reach K2 and KNOT. At the latitude of K1, the baroclinic SSH depressions due to El Niño are weak (Fig. 15a) because this site is located to the north of the area of significant variation in the Ekman vertical velocity driven by the change in the Aleutian Low.
The baroclinic Rossby wave mode variations, particularly those of the modes higher than the first, are responsible for the water density change in the subsurface layer around K2. The vertical distribution of potential density at K2 is shown by the contours in Fig. 17; to suppress noisy fluctuations on timescales shorter than approximately 3 years, the time series at each pressure level was temporally smoothed using a 49-month running mean filter. Associated with the decadal elevation in SSH at K2 during the study period (Fig. 15b), the upper main pycnocline, which exists within depths of 100 to 300 dbar, tended to deepen (Fig. 17). An anomalously large elevation and an anomalously large depression of the upper main pycnocline were observed in the periods around 1985 and 2006, corresponding to the inductions of negative and positive wE, respectively, which were confined to the west of ∼170 ∘ E (Fig. 10b). These events greatly contributed to the decadal deepening of the upper main pycnocline. This localized response of the main pycnocline depth to the wind forcing supports the view that the QRA mechanism also plays a crucial role in the deepening of the main pycnocline by the higher baroclinic modes.
Time-pressure diagram of the calculated variation in the potential density (σθ, contours) due to the baroclinic Rossby wave modes (n=1, 2, 3, and 4) in the top 400 dbar forced by the wind stress and the calculated anomaly from the mean potential density profile (kg m−3, color shades). The contour interval is 0.1 kg m−3. Smoothing was performed using a 49-month running mean filter
We decomposed the variation in potential density into those of the baroclinic Rossby wave modes to examine the contribution of each baroclinic Rossby wave mode to the deepening of the upper main pycnocline (Fig. 18). In comparison to the first baroclinic Rossby wave mode (n=1), the higher baroclinic modes (n=2, 3, and 4), which have sharp vertical gradients in the modal structures of the potential density in the top 250 dbar (Fig. 4b), are revealed to contribute greatly to the potential density change.
Potential density (σθ, contours) and variations (kg m−3, color shades) in the top 400 dbar layer due to the a first (n=1), b second (n=2), c third (n=3), and d fourth (n=4) baroclinic modes of the Rossby waves forced by the wind stress. The contour interval is 0.1 kg m−3. Smoothing was performed using a 49-month running mean filter
Annual mean variations in the upper main pycnocline depth forced by variations in the wind stress (solid line) and the in situ halocline depth (dashed line) at station K2. The main pycnocline depth is indicated by the pressure level of the 26.8 σθ isopycnal, and the halocline depth is the pressure level of the 33.8 isohaline estimated from the ship-board CTD and Argo data by Wakita et al. (2017)
For comparison with the annual mean time series of the halocline depth, indicated by the in situ salinity value of 33.8 and processed from ship-board CTD and Argo data by the method of Wakita et al. (2017) (the dashed line in Fig. 19), we computed the annual mean time series of the wind-driven deepening of the upper main pycnocline, defined by the 26.8σθ isopycnal surface (the solid line). The wind-driven deepening of the upper main pycnocline is consistent with that of the halocline. The linear trend of the 26.8σθ isopycnal surface during the period of 1999–2014 is estimated to be 1.36 m year−1, which is equivalent to approximately 70% of the linear trend of the halocline (1.79 m year−1) estimated from in situ data by Wakita et al. (2017). Therefore, it is concluded that the higher baroclinic mode Rossby wave disturbances, excited by the local change in wind stress and reinforced through the QRA mechanism, mainly deepened the upper main pycnocline (halocline) at K2.
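Tracking an isopycnal depth of this kind reduces to a one-line interpolation once the modal density anomalies have been added to the mean profile; a hedged sketch (sigma is a hypothetical (time, pressure) array of smoothed σθ):

```python
import numpy as np

def isopycnal_depth(sigma_profile, p_levels, target=26.8):
    """Pressure of the target isopycnal by linear interpolation; assumes
    sigma_theta increases monotonically from the surface downward."""
    return np.interp(target, sigma_profile, p_levels)

# sigma: hypothetical (n_years, n_levels) array; p_levels in dbar, surface first.
# depths = np.array([isopycnal_depth(s, p_levels) for s in sigma])
# The linear trend of `depths` can then be estimated with Sen's method above.
```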
As described in "Introduction" section, sea surface water density change, which can cause steric sea level change, is not significantly detected in the western subarctic region (Wakita et al. 2017). On the other hand, the precipitation rate (rainfall) gradually increased in the western subarctic region during the study period (not shown). We estimated the impact of the freshwater flux change on the upper-ocean circulation. Using the NCEP/NCAR evaporation rate (E) and precipitation rate (P) data, we calculated freshwater flux, E−P, as processed in Nagano et al. (2017). The vertical velocity due to freshwater flux estimated as wF =(E−P)/ρ0 (e.g., Gill 1982) is downward owing to the excessive precipitation in the western subarctic region and its magnitude is approximately two orders smaller than that due to the wind stress. Although the downward velocity attenuated and acted to elevate SSH through the study period, the impact of the freshwater flux forcing on the SSH change is much less than that of the wind stress change.
Focusing on the wind stress changes in the subpolar North Pacific, we examined the interannual to decadal variations in SSH related to the northward shrinkage of the WSAG using a dynamical model of Rossby waves forced by changes in the wind stress and damped by horizontal and vertical eddy dissipation. The Ekman vertical velocity derived from the NCEP/NCAR wind stress data during the period of 1979–2014 was decomposed into the first four meridional sine modes. Assuming a flat-bottom ocean, the eigenfunctions and eigenvalues of the four baroclinic Rossby wave modes were obtained by solving the vertical structure equation based on the potential density profile averaged between 160 ∘ E and 170 ∘ E at 47 ∘ N (WOCE P01 line). The SSH variations excited by changes in wind stress propagate with different phase speeds and damping rates depending on the meridional and baroclinic modes.
By adopting DH=10 m2s−1 and B=1×10−7 m2s−3 for the horizontal and vertical eddy dissipation coefficients, respectively, we solved the vorticity gradient equation forced by the Ekman vertical velocity and obtained realistic SSH changes in the western subarctic region. SSH depressions related to the intensification of the Aleutian Low in winters of El Niño years, which were not comparably observed by the satellite altimetry, were produced probably because topographic effects such as the JEBAR were not taken into account in the model. With respect to the variations on decadal timescales, the correlation coefficient between the calculated and observed SSH variations is fairly high in the northwestern and eastern parts of the subpolar region; in particular, the value at K2 (0.79) exceeds the 90% confidence level. However, probably because the damping parameters were set to be uniform over the whole subpolar region, the simulation failed to reproduce the propagation of baroclinic SSH variations observed by the altimetry in the central subpolar North Pacific. The decadal SSH changes at K2 (47 ∘ N, 160 ∘ E) and KNOT (44 ∘ N, 155 ∘ E) associated with the northward gyre shrinkage were found to be caused primarily by the barotropic Rossby wave response to the relaxation of the Ekman suction, associated with the attenuation of the Aleutian Low by the frequent occurrences of La Niña after the late 1990s and with the long-term weakening of the westerly wind. The northward WSAG shrinkage is found to be accompanied by the intensification of an anticyclonic circulation centered around 43–44 ∘ N, 170–175 ∘ E, southeast of the WSAG. At the latitude of K1 (51 ∘ N), the SSH variation originating east of the southern end of the Aleutian Arc is shielded, and the variation excited locally to the west affects the SSH at K1.
The local baroclinic Rossby wave response to the weakening of the westerly wind was revealed to produce a decadal deepening of the upper main pycnocline in the western subarctic region. The disturbances of the first baroclinic Rossby wave mode propagated away from the forcing regions and decayed substantially after being excited. Meanwhile, the disturbances of the higher baroclinic modes, i.e., the second, third, and fourth modes, occur simultaneously with the forcing and, owing to their slow propagation, i.e., their persistent character, are significantly reinforced through the quasi-resonant amplification (QRA) mechanism until the balance between the forcing and damping is achieved.
Because the higher baroclinic modes have steep vertical gradients in the modal structures of the potential density in the top 250 dbar layer, the variations of these modes primarily contribute to the deepening of the upper main pycnocline. The impact of the first baroclinic mode variation on the upper main pycnocline is negligibly small. The linear trend of the wind-driven deepening of the upper main pycnocline at K2 during the period of 1999–2014 (1.36 m year−1) reached approximately 70% of the linear trend of the in situ halocline depth (1.79 m year−1) estimated by the method of Wakita et al. (2017). The deepening of the upper main pycnocline was primarily accounted for by the baroclinic Rossby wave response to the decadal change in wind stress.
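The linear trends quoted above can be estimated robustly; one option is the slope estimator of Sen (1968), which appears in the reference list. The following sketch applies a Theil-Sen fit to synthetic annual pycnocline depths, with all values assumed for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = np.arange(1999, 2015)                                 # study period (years)
depth = 100.0 + 1.36 * (t - 1999) + rng.normal(0, 2, t.size)  # synthetic depths (m)
slope, intercept, lo, hi = stats.theilslopes(depth, t, alpha=0.90)
print(f"trend = {slope:.2f} m/yr (90% range {lo:.2f} to {hi:.2f})")
```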
The datasets supporting the conclusions of this article, i.e., the NCEP/NCAR forcing, WOCE P01 CTD, and altimetric SSH data, were provided by the US National Oceanic and Atmospheric Administration/Earth System Research Laboratory, Physical Sciences Division (http://www.esrl.noaa.gov/psd/), Clivar and Carbon Hydrographic Data Center (https://cchdo.ucsd.edu/cruise/49KA199905_1), and Copernicus Marine Environment Monitoring Service (http://marine.copernicus.eu), respectively.
AGEM:
Altimetry-based gravest empirical mode
AVISO:
Archiving, Validation and Interpretation of Satellite Oceanographic data
CTD:
Conductivity-temperature-depth
ENSO:
El Niño–Southern Oscillation
JAMSTEC:
Japan Agency for Marine-Earth Science and Technology
NCEP/NCAR:
National Centers for Environmental Prediction/National Center for Atmospheric Research
NPI:
North Pacific Index
PDO:
Pacific Decadal Oscillation
QRA:
Quasi-resonant amplification
WOCE:
World Ocean Circulation Experiment
WOD2013:
World Ocean Database 2013
WSAG:
Western subarctic gyre
Andreev, A, Kusakabe M, Honda M, Murata A, Saito C (2002) Vertical fluxes of nutrients and carbon through the halocline in the western subarctic gyre calculated by mass balance. Deep-Sea Res II 49:5577–5593.
AVISO (2016) SSALTO/DUACS user handbook: (M)SLA and (M)ADT near-real time and delayed time products. CLS-DOS-NT-06-034 - Issue 5.0 - Date: 2016/08/20 - Nomenclature: SALP-MU-P-EA-21065-CLS.
Boyer, TP, Antonov JI, Baranova OK, Coleman C, Garcia HE, Grodsky A, Johnson DR, Locarnini RA, Mishonov AV, O'Brien TD, Paver CR, Reagan JR, Seidov D, Smolyar IV, Zweng MM (2013) World Ocean Database 2013. https://doi.org/10.7289/V5NZ85MT.
Chelton, DB, Schlax MG (1996) Global observations of oceanic Rossby waves. Science 272:234–238. https://doi.org/10.1126/science.272.5259.234.
Coumou, D, Petoukhov V, Rahmstorf S, Petri S, Schellnhuber HJ (2014) Quasi-resonant circulation regimes and hemispheric synchronization of extreme weather in boreal summer. Proc Natl Acad Sci USA 111(34):12331–12336. https://doi.org/10.1073/pnas.1412797111.
Dodimead, AJ, Favorite F, Hirano T (1963) Winter oceanographic conditions in the central Subarctic Pacific. Bull Int N Pac Comm 13:1–195.
Enfield, DB, Allen JS (1980) On the structure and dynamics of monthly mean sea level anomalies along the Pacific coast of North and South America. J Phys Oceanogr 10:557–578.
Favorite, F, Dodimead AJ, Nasu K (1976) Oceanography of the subarctic Pacific region. Bull Int N Pac Comm 33:1–187.
Feldstein, SB (2000) The timescale, power spectra, and climate noise properties of teleconnection patterns. J Clim 13:4430–4440.
Frankignoul, C, Müller P, Zorita E (1997) A simple model of the decadal response of the ocean to stochastic wind forcing. J Phys Oceanogr 27:1533–1546.
Fukasawa, M, Freeland H, Perkin R, Watanabe T, Uchida H, Nishina A (2004) Bottom water warming in the North Pacific Ocean. Nature 427:825–827. https://doi.org/10.1038/nature02337.
Gill, AE (1982) Atmosphere-Ocean Dynamics. Academic Press, London. https://doi.org/10.1002/qj.49711046322.
Hu, Z-Z, Kumar A, Huang B, Zhu J, Zhang R-H, Jin F-F (2017) Asymmetric evolution of El Niño and La Niña: the recharge/discharge processes and role of the off-equatorial sea surface height anomaly. Clim Dyn 49:2737–2748. https://doi.org/10.1007/s00382-016-3498-4.
Ishi, Y, Hanawa K (2005) Large-scale variabilities of wintertime wind stress curl field in the North Pacific and their relation to atmospheric teleconnection patterns. Geophys Res Lett 32(L10607). https://doi.org/10.1029/2004GL022330.
Isoguchi, O, Kawamura H, Kono T (1997) A study on wind-driven circulation in the subarctic North Pacific using TOPEX/POSEIDON altimeter data. J Geophys Res 102(C6):12457–12468.
Isoguchi, O, Kawamura H (2006) Seasonal to interannual variations of the western boundary current of the subarctic North Pacific by a combination of the altimeter and tide gauge sea levels. J Geophys Res 111(C04013). https://doi.org/10.1029/2005JC003080.
Isoguchi, O, Kawamura H, Oka E (2006) Quasi-stationary jets transporting surface warm waters across the transition zone between the subtropical and the subarctic gyres in the North Pacific. J Geophys Res 111(C10003). https://doi.org/10.1029/2005JC003402.
Jacobs, GA, Hurlburt HE, Kindle JC, Metzger EJ, Mitchell JL, Teague WJ, Wallcraft AJ (1994) Decade-scale trans-Pacific propagation and warming effects of an El Niño anomaly. Nature 370:360–363.
Kalnay, E, Kanamitsu M, Kistler R, Collins W, Deaven D, Gandin L, Iredell M, Saha S, White G, Woollen J, Zhu Y, Chelliah M, Ebisuzaki W, Higgins W, Janowiak J, Mo KC, Ropelewski C, Wang J, Leetmaa A, Reynolds R, Jenne R, Joseph D (1996) The NCEP/NCAR 40-year reanalysis project. Bull Am Meteorol Soc 77(3):437–471.
Kawabe, M (2000) Calculation of the interannual variations of sea level in the subtropical North Pacific. J Oceanogr 56:691–706.
Kawabe, M (2001) Interannual variations of sea level at Nansei Islands and volume transport of the Kuroshio due to wind changes. J Oceanogr 57:189–205.
Kornhuber, K, Petoukhov V, Petri S, Rahmstorf S, Coumou D (2017) Evidence for wave resonance as a key mechanism for generating high-amplitude quasi-stationary waves in boreal summer. Clim Dyn 49:1961–1979. https://doi.org/10.1007/s00382-016-3399-6.
Kuroda, H, Wagawa T, Shimizu Y, Ito S, Kakehi S, Okunishi T, Ohno S, Kusaka A (2015) Interdecadal decrease of the Oyashio transport on the continental slope off the southeastern coast of Hokkaido, Japan. J Geophys Res 120(4):2504–2522. https://doi.org/10.1002/2014JC010402.
LeBlond, PH, Mysak LA (1978) Waves in the Ocean. Elsevier oceanography series, vol 20. Elsevier, Amsterdam.
Le Provost, C (2001) Ocean tides. In: Fu L-L Cazenave A (eds)Satellite altimetry and earth sciences: a handbook for techniques and applications, 267–303.. Academic Press, London. Chap. 6.
Maharaj, AM, Cipollini P, Holbrook NJ, Killworth PD, Blundell JR (2007) An evaluation of the classical and extended Rossby wave theories in explaining spectral estimates of the first few baroclinic modes in the South Pacific Ocean. Ocean Dynam 57:173–187. https://doi.org/10.1007/s10236-006-0099-5.
Mantua, NJ, Hare SR, Zhang Y, Wallace JM, Francis RC (1997) A Pacific interdecadal climate oscillation with impacts on salmon production. Bull Amer Meteor Soc 78(78):1069–1079.
Mantua, NJ, Hare SR (2002) The Pacific Decadal Oscillation. J Oceanogr 58:35–44.
Miura, T, Suga T, Hanawa K (2002) Winter mixed layer and formation of dichothermal water in the Bering Sea. J Oceanogr 58:815–823.
Nagata, Y, Ohtani K, Kashiwai M (1992) Subarctic gyre in the North Pacific Ocean. Oceanogr Jpn 1(3):75–104.
Nagano, A, Wakita M, Watanabe S (2016) Dichothermal layer deepening in relation with halocline depth change associated with northward shrinkage of North Pacific western subarctic gyre in early 2000s. Ocean Dyn 66(2):163–172. https://doi.org/10.1007/s10236-015-0917-8.
Nagano, A, Hasegawa T, Ueki I, Ando K (2017) El Niño–Southern Oscillation-time scale covariation of sea surface salinity and freshwater flux in the western tropical and northern subtropical Pacific. Geophys Res Lett 44:6895–6903. https://doi.org/10.1002/2017GL073573.
Ohba, M, Ueda H (2009) Role of nonlinear atmospheric response to SST on the asymmetric transition process of ENSO. J Clim 22:177–192. https://doi.org/10.1175/2008JCLI2334.1.
Ohtani, K (1973) Oceanographic structure in the Bering Sea. Mem Fac Fish Hokkaido Univ 21:61–106.
Pedlosky, J (1987) Geophysical Fluid Dynamics, 2nd edn.. Springer, New York. https://doi.org/10.1007/978-1-4612-4650-3.
Petoukhov, V, Rahmstorf S, Petri S, Schellnhuber HJ (2013) Quasiresonant amplification of planetary waves and recent Northern Hemisphere weather extremes. Proc Natl Acad Sci USA 110(14):5336–5341. https://doi.org/10.1073/pnas.1222000110.
Qiu, B, Miao W, Müller P (1997) Propagation and decay of forced and free baroclinic Rossby waves in off-equatorial oceans. J Phys Oceanogr 27:2405–2417.
Qiu, B (2002) Large-scale variability in the midlatitude subtropical and subpolar North Pacific Ocean: observations and causes. J Phys Oceanogr 32:353–375.
Rio, MH, Guinehut S, Larnicol G (2011) New CNES-CLS09 global mean dynamic topography computed from the combination of GRACE data, altimetry, and in situ measurements. J Geophys Res 116(C07018). https://doi.org/10.1029/2010JC006505.
Ripa, P (1978) Normal Rossby modes of a closed basin with topography. J Geophys Res 83(C4):1947–1957.
Roden, GI (2000) Flow and water property structures between the Bering Sea and Fiji in the summer of 1993. J Geophys Res 105(C12):28595–28612.
Schopf, PS, Anderson DLT, Smith R (1981) Beta-dispersion of low-frequency Rossby waves. Dynam Atmos Ocean 5:187–214.
Sekine, Y (1999) Anomalous southward intrusions of the Oyashio east of Japan: 2. two-layer numerical model. J Geophys Res 104(C2):3049–3058. https://doi.org/10.1029/1998JC900044.
Sen, PK (1968) Estimates of the regression coefficient based on Kendall's tau. J Am Stat Assoc 63(324):1379–1389.
Stammer, D (1998) On eddy characteristics, eddy transports, and mean flow properties. J Phys Oceanogr 28:727–739.
Trenberth, KE, Hurrell JW (1994) Decadal atmosphere-ocean variations in the Pacific. Clim Dynam 9:303–319.
Tsurushima, N, Nojiri Y, Imai K, Watanabe S (2002) Seasonal variations of carbon dioxide system and nutrients in the surface mixed layer at station KNOT (44 ∘N, 155 ∘E) in the subarctic western North Pacific. Deep-Sea Res II 49:5377–5394.
Wakita, M, Watanabe S, Murata A, Tsurushima N, Honda M (2010) Decadal change of dissolved inorganic carbon in the subarctic western North Pacific Ocean. Tellus 62B:608–620. https://doi.org/10.1111/j.1600-0889.2010.00476.x.
Wakita, M, Watanabe S, Honda M, Nagano A, Kimoto K, Matsumoto K, Kawakami H, Fujiki T, Kitamura M, Sasaoka K, Sasaki K, Nakano Y, Murata A (2013) Ocean acidification from 1997 to 2011 in the subarctic western North Pacific Ocean. Biogeosciences 10:7817–7827. https://doi.org/10.5194/bg-10-7817-2013.
Wakita, M, Nagano A, Fujiki T, Watanabe S (2017) Slow acidification of the winter mixed layer in the subarctic western North Pacific. J Geophys Res 122:6923–6935. https://doi.org/10.1002/2017JC013002.
Wallace, JM, Gutzler DS (1981) Teleconnections in the geopotential height field during the Northern Hemisphere Winter. Mon Wea Rev 109:784–812.
Watanabe, YM, Ono T, Shimamoto A, Sugimoto T, Wakita M, Watanabe S (2001) Probability of a reduction in the formation rate of the subsurface water in the North Pacific during the 1980s and 1990s. Geophys Res Lett 28(17):3289–3292. https://doi.org/10.1029/2001GL013212.
Wilks, DS (2019) Statistical methods in the atmospheric sciences, 4th edn. Elsevier, Amsterdam. https://doi.org/10.1016/C2017-0-03921-6.
The authors thank Dr. M. Nonaka (JAMSTEC) and Dr. A. Kuwano-Yoshida (Kyoto University) for the helpful comments on the ENSO-timescale wind-driven SSH variation. The authors are also grateful to the editor, Prof. Akira Oka (Atmosphere and Ocean Research Institute, The University of Tokyo), and anonymous reviewers for constructive review comments.
This work was supported by Japan Society for the Promotion of Science (JSPS), Grant-in-Aid for Scientific Research (15H02835, 17K05660).
Research Institute for Global Change, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), 2-15 Natsushima-cho, Yokosuka Kanagawa, 237-0061, Japan
Mutsu Institute for Oceanography, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), 690 Kitasekine Sekine, Mutsu Aomori, 035-0022, Japan
Masahide Wakita
AN proposed the topic, conceived, and designed the study. MW analyzed the data and collaborated with the corresponding author in the construction of the manuscript. Both authors read and approved the final manuscript.
Correspondence to Akira Nagano.
Nagano, A., Wakita, M. Wind-driven decadal sea surface height and main pycnocline depth changes in the western subarctic North Pacific. Prog Earth Planet Sci 6, 59 (2019) doi:10.1186/s40645-019-0303-0
Received: 21 October 2018
North Pacific western subarctic gyre
Aleutian Low
Barotropic and baroclinic Rossby waves
Wind stress
2. Atmospheric and hydrospheric sciences
BioData Mining
An automated pipeline for bouton, spine, and synapse detection of in vivo two-photon images
Qiwei Xie1,2,3,
Xi Chen3,
Hao Deng4,
Danqian Liu5,
Yingyu Sun6,
Xiaojuan Zhou6,
Yang Yang5,7 &
Hua Han3,7,8
BioData Mining volume 10, Article number: 40 (2017) Cite this article
In the nervous system, the neurons communicate through synapses. The size, morphology, and connectivity of these synapses are significant in determining the functional properties of the neural network. Therefore, they have always been a major focus of neuroscience research. Two-photon laser scanning microscopy allows the visualization of synaptic structures in vivo, leading to many important findings. However, the identification and quantification of structural imaging data currently rely heavily on manual annotation, a method that is both time-consuming and prone to bias.
We present an automated approach for the identification of synaptic structures in two-photon images. Axon boutons and dendritic spines are structurally distinct. They can be detected automatically using this image processing method. Then, synapses can be identified by integrating information from adjacent axon boutons and dendritic spines. In this study, we first detected the axonal boutons and dendritic spines respectively, and then identified synapses based on these results. Experimental results were validated manually, and the effectiveness of our proposed method was demonstrated.
This approach will help neuroscientists to automatically analyze and quantify the formation, elimination, and destabilization of axonal boutons, dendritic spines, and synapses.
Synapses were first discovered in the 1890s, when Sir Sherrington, through his pioneering work on motor reflexes, wrote that the synapse is the means of neuronal communication in the nervous system [1]. There are two major types of synapses: chemical and electrical. In the mammalian central nervous system, the vast majority of the synapses are chemical. Chemical synapses, especially excitatory synapses, typically consist of presynaptic axon boutons and postsynaptic dendritic spines. The structural plasticity of boutons and spines underlies functional synaptic plasticity, widely accepted as the neural basis of learning and memory. Brain imaging can be used to characterize changes occurring in a brain over very different time scales [2]. With the advent of in vivo imaging, boutons and spines can be imaged in live animals over days or even months, allowing observation of structural changes in vivo, often in direct association with learning [3–11].
Manual validation is extremely time-consuming and error prone. Meanwhile, different criteria may lead to different results. Therefore, manual methods are not suitable for processing large-scale data. Recent advances in biomedical imaging have allowed the initial development of computer-aided semiautomatic or automatic approaches to detect dendritic spines based on image analysis. In [12], Xie et al. proposed an algorithm for automatic neuron reconstruction that can handle complex structures adaptively and optimize the localization of bifurcations. In [13], an automated scheme to perform segmentation in a variational framework was proposed to trace neurons from confocal microscopy images. The segmentation framework, referred to as "tubularity flow field" (TuFF), performs directional region growing guided by the direction of tubularity of the neurites. In [14], a robust automatic neuron segmentation and morphology generation algorithm was proposed. The algorithm, Tree2Tree, uses a local medial tree generation strategy in combination with global tree linking to build a maximum likelihood global tree; it is a reliable technique for comparing neurons in tracing evaluation and neuron retrieval. Gonzalez et al. presented an approach to fully automated delineation of tree structures in noisy 2D images and 3D image stacks, which is able to eliminate noise while retaining the right tree structure [15]. In addition, in [16], Gonzalez et al. showed that using steerable filters to create rotationally invariant features that include higher-order derivatives, and training a classifier based on these features, allows such irregular structures to be handled. Rodriguez et al. developed an open-source software package, NeuronStudio, to aid the neuroscientist in the task of reconstructing neuronal structures from confocal and multi-photon images [17]; it is a self-contained software package that is free and easy to use. The focus of the previous work mentioned above varies, with some studies focusing on neuronal tracking and segmentation and others on specific situations.
They inspired us to explore 3D tracking, segmentation and extraction of synapses both in 2D and 3D based on the detection results of our automatic detection method. Therefore, it is of interest to explore methods of automatic detection and quantification of synapses, dendrites and axons.
In addition to examining boutons and spines separately by two-photon microscopy, it is also possible to visualize synaptic connections with identified boutons and spines that are in close proximity. Although the resolution of light microscopy is larger than the size of the synaptic cleft, previous studies have shown that over 85% of putative synapses identified in deconvoluted confocal images were true synapses confirmed using electron microscopy [18]. Light microscopy can still provide useful information. Given that boutons and spines originating from different brain regions or different cell types can be labeled using different fluorescent proteins, observation of synaptic connections using two-photon microscopy provides a valuable method for researching long-range and cell-type-specific synaptic plasticity in vivo [19]. Therefore, the automated detection of synapses will be of tremendous help for this kind of data analysis.
In this paper, we focus on the detection of axonal boutons, dendritic spines, and synapses in in vivo two-photon image stacks. As described above, a synapse typically consists of one axonal bouton and one dendritic spine, with the exception of multi-bouton and multi-spine synapses. A reasonable strategy to locate the synapses is to first detect axonal boutons and dendritic spines, and then to search for synaptic contacts composed of bouton and spine pairs. A robust Gaussian model was used to enhance the morphology of axonal boutons and dendrites, respectively, while effectively suppressing noise. Before the enhancement operation, we performed deconvolution on the axon images as a preprocessing step for noise reduction. Regions with relatively higher values are regarded as likely axonal boutons. For the detection of dendritic spines, we performed one-threshold segmentation to obtain the structure of the dendrites based on the enhanced dendrite images, which was followed by an efficient thinning algorithm. After we extracted the centerlines of the dendrites, the dendritic spines were determined by finding the bifurcation points and endpoints.
Material and method
Figure 1 illustrates the workflow of our proposed approach for detection of synapses. We will give a detailed description of each procedure of the method after the introduction to the image stack in this paper.
Workflow of detecting synapses on in vivo two-photon images of mouse
The image data used in this study was obtained from the Institute of Neuroscience, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Center for Excellence in Brain Science and Intelligence Technology. The transgenic mice (YFP-H line), both male and female, were imaged using a two-photon microscope (Sutter), controlled by Scanimage (Janelia). The auditory cortex of mice was exposed surgically and covered with a glass cranial window for repeated two-photon imaging in vivo. For surgical details, refer to Y. Yang [19]. Image stacks were acquired from the cortical surface to 100–150 μm depth with 0.7 μm intervals. A 25 × objective with 1.05 numerical aperture was used (Olympus). A Ti:sapphire laser (Spectra-Physics) was used as the light source, and tuned to 920 nm for imaging. YFP (Yellow Fluorescent Protein) and GFP (Green Fluorescent Protein) signals were collected using filters 495/40 and 535/50 (Chroma). The 535/50 filter (Channel 1) collected both GFP and YFP signals, and the 495/40 filter (Channel 2) collected GFP-only signals. By subtracting the GFP signals from the Channel 1 signals, the YFP-only images were obtained [19].
The dual-color images, as shown in Fig. 2, are the two-photon images, where the red section and green section represent the YFP images (containing dendrites and axons) and the GFP images (containing long-range projecting axons only), respectively, and the spine-bouton pairs are considered putative synapses. The x-y resolution and the z resolution of the image data are 137 nm/pixel and 700 nm/pixel, respectively, and the image size (x-y) is 512-by-512.
Two-photon images of mouse
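The channel arithmetic described above reduces to a per-pixel subtraction. The sketch below uses synthetic arrays and assumes unit gain between channels; real data would need cross-channel calibration before subtraction.

```python
import numpy as np

rng = np.random.default_rng(0)
channel1 = rng.integers(0, 4096, (512, 512))      # 535/50 filter: GFP + YFP
channel2 = rng.integers(0, 2048, (512, 512))      # 495/40 filter: GFP only
yfp_only = np.clip(channel1 - channel2, 0, None)  # YFP-only image, clipped at 0
```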
Detection of axonal boutons
In this section, we provide algorithmic details for axonal bouton detection. The proposed algorithm is divided into three parts. First, a 3D deconvolution operation is required due to the noise in the original image stacks. Next, we enhanced the bright swellings in the deconvolved images and segmented them. Finally, we identified true axonal boutons based on a series of criteria. The whole workflow for detecting axonal boutons is shown in Fig. 3.
Workflow of detecting axonal boutons. a Axon image stacks from the two-photon image stacks. b One deconvolved axon image after 3D deconvolution. c Magnified bouton from the area indicated by the red rectangle shown in panel (b). The image is shown in two different states with the deconvolved one on the top and the original one on the bottom. d The enhanced image. e The final detected axonal boutons
3D deconvolution
Although confocal microscopy images are known to be sharper than standard epifluorescence ones, they are still inevitably degraded by Poisson noise and residual out-of-focus light due to photon-limited detection [20]. Thus, several deconvolution methods have been proposed. In this study, we adopt the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS), which decouples the array design and processing influence from the noise being measured, using a simple and robust algorithm [21]. The details of the 3D deconvolution operation implemented in ImageJ [22] are given in the Appendix.
One deconvolved axon image is depicted in Fig. 3 b. To demonstrate the performance of the 3D deconvolution operation, we show an axonal bouton indicated by the red rectangle in Fig. 3 b. We then show two different states of this image in Fig. 3 c, with the deconvolved one on the top and the original one on the bottom. A significant difference can be seen from the detailed comparison, showing that the 3D deconvolution operation helps to identify the axonal boutons.
Enhancing bright swellings
Thresholding a deconvolved image does not necessarily ensure perfect segmentations, or even good ones, because the intensity ranges of different axonal boutons can vary dramatically. A low threshold will retain the bright axon shaft, but a high threshold will eliminate weak axonal boutons. To avoid the loss of data, we first enhanced the bright swellings.
According to statistics derived from the corresponding electron microscopy data, the average diameter of terminal boutons is 1.0 μm. Given the pixel size of 137 nm, the average radius of an axonal bouton is about 4 pixels. We randomly selected an axonal bouton, shown in Fig. 4 a, and plot its corresponding intensity image in Fig. 4 b. Note that the axonal bouton has a "rounded" profile. The image in Fig. 4 b looks very similar to the three-dimensional Gaussian surface plotted in Fig. 4 c, suggesting it is reasonable to model the intensity of an axonal bouton using a three-dimensional Gaussian surface,
$$ R(x,y) = C\exp\left(-\frac{{{{\left(x - {x_{0}}\right)}^{2}} + {{\left(y - {y_{0}}\right)}^{2}}}}{{2{\delta^{2}}}}\right), $$
Axonal bouton intensity modeling. a A randomly selected axonal bouton. b The intensity image of the bouton shown in (a). c The three-dimensional Gaussian surface with a variance of 4/\(\sqrt 3\)
where C is a constant corresponding to the coordinate of the maximum magnitude point (x 0,y 0), and δ is the variance of the Gaussian surface. A very small part of the axonal boutons can be approximated by a ridge, so we construct a Hessian-based ridge detector. Let m=(x−x 0)2+(y−y 0)2. The intensity of the enhanced image is set to the additive inverse of the eigenvalue with the minimum absolute value, i.e. [Appendix A],
$$ \lambda (m) = \left\{{\begin{array}{c} { - \exp\left(- \frac{m}{{2{\delta^{2}}}}\right)\left(m - 2{\delta^{2}}+\sqrt {{{\left(m + 2{\delta^{2}}\right)}^{2}} - 4{\delta^{4}}} \right)/2{\delta^{5}},m \le 2{\delta^{2}}}\\ { - \exp\left(- \frac{m}{{2{\delta^{2}}}}\right)\left(m - 2{\delta^{2}} - \sqrt {{{\left(m + 2{\delta^{2}}\right)}^{2}} - 4{\delta^{4}}} \right)/2{\delta^{5}},m > 2{\delta^{2}}.} \end{array}} \right. $$
Here we analyze three cases:
Case 1: \(m = 0\), \(\lambda(m) = 1/\delta^{3}\);
Case 2: \(m = 2{\delta^{2}},\lambda (m) = - \sqrt 3 /({\delta ^{3}} \times e)\), where e is Euler's number;
Case 3: \(m \to \infty\), \(\lambda(m) \to 0\).
For the parabolic line profile, the magnitude of the second derivative of the extracted position is always maximum at the line position [23]. We conclude that the relationship between the variance δ and the radius r of the axonal bouton is \(\delta =r/\sqrt 3\) [Appendix B]. With the average bouton radius of r = 4 pixels, we therefore set the variance to \(\delta =4/\sqrt 3\) in this study.
To allow visual interpretation, we plot the chosen eigenvalue of model (1) in Fig. 5, from which we can see that the central region is enhanced while the surrounding region weakens gradually. This provides the theoretical basis for image enhancement and segmentation. Inspired by [23–25], we select the above variance. Figure 6 depicts the enhanced image of one bright swelling, whose variation tendency conforms to that of Fig. 5 almost everywhere, supporting the correctness of our theoretical analysis. Compared to the image in Fig. 3 b, the enhanced image shown in Fig. 3 d has an advantage for weaker axonal boutons because of their more obvious profiles. The following work is based on the enhanced image, and the detail is stated in Algorithm 2.
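A minimal sketch of this enhancement step, assuming scikit-image is available: compute the Hessian at scale δ = r/√3 and take the additive inverse of the eigenvalue with the minimum absolute value; clipping negative responses to zero is an added assumption for display purposes, not something stated in the text.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def enhance_boutons(image, radius=4.0):
    """Hessian-based enhancement of disk-like bright swellings (model (1))."""
    delta = radius / np.sqrt(3.0)                     # delta = r / sqrt(3)
    H = hessian_matrix(image.astype(float), sigma=delta, order='rc')
    e1, e2 = hessian_matrix_eigvals(H)                # e1 >= e2 at each pixel
    lam = np.where(np.abs(e1) <= np.abs(e2), e1, e2)  # min-absolute eigenvalue
    return np.maximum(-lam, 0.0)                      # additive inverse, clipped
```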
Eigenvalue of a matrix satisfying the distribution of model (1)
The enhanced bright swelling
Obtaining axonal boutons
As discussed in the last section, applying a relatively low threshold will inevitably generate false positives. Fortunately, the shapes of the axonal boutons are homogeneous and each has a sole maximum point. Therefore, we first find local maximum points as candidate points for axonal boutons, a simple but effective strategy. The detail is stated in Algorithm 1.
We then evaluate whether each region in the resulting segmentation contains a local maximum point, and we delete the regions lacking one. On this basis, we compute some statistical characteristics, including the eccentricity, major axis, and minor axis, and retain the regions whose statistics resemble those of a disk. Finally, we record the locations of the retained regions and determine whether the peak intensity of each region is more than three times brighter than its axon shaft in the original image [19]. The final result of the axonal bouton analysis is shown in Fig. 3 e.
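The screening just described might look like the following sketch, assuming scipy and scikit-image; the local-maximum window size and the eccentricity bound are illustrative assumptions, and the three-times-brighter-than-shaft test is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from skimage.measure import label, regionprops

def bouton_candidates(enhanced, binary, max_ecc=0.85, win=9):
    """Keep segmented regions that contain a local maximum and are disk-like."""
    local_max = (enhanced == maximum_filter(enhanced, size=win)) & binary
    centroids = []
    for region in regionprops(label(binary)):
        rows, cols = region.coords[:, 0], region.coords[:, 1]
        if not local_max[rows, cols].any():
            continue                                  # no local maximum: drop
        if region.eccentricity <= max_ecc:            # roughly disk-like
            centroids.append(region.centroid)
    return centroids
```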
Detection of dendritic spines
In this section, we explain the details of our method for the detection of dendritic spines. Dendritic spines are small, with spine head volumes ranging from 0.01 μm3 to 0.8 μm3. According to their shapes, dendritic spines can be classified into the following types: thin, mushroom, and stubby, as shown in Fig. 7 [26]. The variable shape of these spines is related to the strength and maturity of the synapses [27]. Thus, based on these forms, it is reasonable to locate the dendritic spines by looking for the spur pixels that are connected to the bifurcation points.
Common types of dendritic spines. a thin b stubby c mushroom
The proposed algorithm consists of three parts: enhancement of the line structure in the images after pretreatment; segmentation of dendrites and extraction of their skeletons; and identification of the dendritic spines based on the dendritic skeletons. The workflow is shown in Fig. 8 and Algorithm 4.
Workflow of detection of dendritic spines. (1) Normalized image of dendrite; (2) Region specified by red rectangle in (1); (3) Corresponding enhanced image; (4) Segmentation result; (5) Skeletons of dendrite; (6) Branch points on skeletons; (7) The finally detected result of dendritic spines; (8) Result of dendrite
Enhancing line structure
Before performing other operations, we first normalize the images to reduce the impact of noise by using the following formula:
$$ I(x,y) = \frac{I(x,y) - I_{min}}{I_{max} - I_{min}}, $$
where I(x,y) is the intensity value of I at (x,y), and \(I_{max}\) and \(I_{min}\) represent the maximum and minimum intensity values of the image, respectively.
Next, we enhance the linear structure. As shown in Fig. 9, the intensity value of each section of the dendritic linear structure can be modeled as a Gaussian curve [23], which can be written as
$$ I(x') = {C_{den}}\exp\left(- \frac{{x'^{2}}}{{2{\sigma^{2}}}}\right) = {C_{den}}\exp\left(- \frac{{{{\left(x\cos \theta - y\sin \theta \right)}^{2}}}}{{2{\sigma^{2}}}}\right), $$
Dendrites intensity modeling. (a) Part of a dendrite, a section of which is marked with red. The x-y coordinates are marked with green and the Y axis in the Cartesian coordinate system marked with blue is the principle direction of the line structure; (b) Intensity value of the dendrite section marked by red line in (a) and the Gaussian curve with a variance of 2, marked with red and blue respectively
where x′ is the abscissa in the Cartesian coordinate system X′-Y′; x and y are the abscissa and ordinate in the Cartesian coordinate system X-Y, respectively; C den is the maximum pixel value of the cross section; σ is the variance of the Gaussian curve; and θ is the angle between the cross section and the main direction of the linear structure, as shown in Fig. 9 a. According to [23], we can obtain the relationship between the variance σ and the radius w of the line structure [Appendix C]: \(\sigma = w/\sqrt 3\).
The average diameter of the dendrites in this dataset is less than 0.9 μm, while the x-y resolution is 137 nm/pixel, so the average radius w is approximately 3 pixels.
As in previous part, we construct a Hessian-based ridge detector and take the additive inverse of the eigenvalue with maximum absolute value as the intensity of the enhanced image [Appendix D]:
$$ I_{enh}(x,y) = \left\{ {\begin{array}{c} { - {\sigma^{2}}\lambda (x,y),~\text{if}~\lambda (x,y) < 0}\\ {0,~\text{otherwise}} \end{array}} \right. $$
The approach for enhancing the line structure can be summarized in the sketch below:
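This is a minimal sketch of Eq. (5), assuming scikit-image: the eigenvalue with the maximum absolute value is taken from the Hessian at scale σ = w/√3 and scaled by σ2, as described above.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def enhance_lines(image, w=3.0):
    """Hessian ridge detector for bright line structures, per Eq. (5)."""
    sigma = w / np.sqrt(3.0)                          # sigma = w / sqrt(3)
    H = hessian_matrix(image.astype(float), sigma=sigma, order='rc')
    lam = hessian_matrix_eigvals(H)[1]                # most negative eigenvalue
    return np.where(lam < 0, -sigma**2 * lam, 0.0)    # I_enh, as in Eq. (5)
```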
Extracting skeleton and finding branch points
We use the following procedure (Algorithm 6) to obtain the dendritic skeleton C (Fig. 8 (5)) on the basis of the binary image B (Fig. 8 (4)), which is obtained by segmenting the enhanced image I (Fig. 8 (3)) using a suitable threshold [28]:
In the first sub-iteration, delete pixel p if and only if the condition (a), (b), (c) are all satisfied.
In the second sub-iteration, delete pixel p if and only if the condition (a), (b), (d) are all satisfied.
Condition (a): X H (p)=1
where \({X_{H}}(p) = \sum \limits _{i = 1}^{4}{b_{i}}\), with \(b_{i} = 1\) if \(x_{2i-1} = 0\) and (\(x_{2i} = 1\) or \(x_{2i+1} = 1\)), and \(b_{i} = 0\) otherwise. Here x 1,x 2,…,x 8 are the values of the eight neighbors of p, starting from the east neighbor and numbered in counter-clockwise order, with x 9 = x 1 closing the cycle.
Condition (b): 2≤ min{n 1(p),n 2(p)}≤3,
where \({n_{1}}(p) = \sum \limits _{k = 1}^{4} {{x_{2k - 1}} \cup {x_{2k}}}\), \({n_{2}}(p) = \sum \limits _{k = 1}^{4} {{x_{2k}} \cup {x_{2k + 1}}}\).
Condition (c): \(({x_{2}} \cup {x_{3}} \cup {\overline x_{8}}) \cap {x_{1}} = 0\)
Condition (d): \(({x_{6}} \cup {x_{7}} \cup {\overline x_{4}}) \cap {x_{5}} = 0\)
The two sub-iterations together make up one iteration of the algorithm, and the iterations are repeated until the resulting image stops changing. The approach for extracting skeletons can be summarized in the sketch below:
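A direct, unoptimized implementation of the two sub-iterations under the stated conditions might look as follows; this is a sketch, and production code would vectorize the neighborhood tests.

```python
import numpy as np

def _neighbors(img, r, c):
    # x1..x8: eight neighbours of p, starting from the east neighbour and
    # numbered counter-clockwise, matching the convention in the text.
    return [img[r, c + 1], img[r - 1, c + 1], img[r - 1, c], img[r - 1, c - 1],
            img[r, c - 1], img[r + 1, c - 1], img[r + 1, c], img[r + 1, c + 1]]

def _deletable(x, first_pass):
    x9 = x + [x[0]]                       # x9 = x1, closing the cycle
    # Condition (a): X_H(p) = 1
    xh = sum(1 for i in range(1, 5)
             if x9[2 * i - 2] == 0 and (x9[2 * i - 1] == 1 or x9[2 * i] == 1))
    if xh != 1:
        return False
    # Condition (b): 2 <= min(n1, n2) <= 3
    n1 = sum(x9[2 * k - 2] | x9[2 * k - 1] for k in range(1, 5))
    n2 = sum(x9[2 * k - 1] | x9[2 * k] for k in range(1, 5))
    if not 2 <= min(n1, n2) <= 3:
        return False
    if first_pass:                        # Condition (c)
        return ((x[1] | x[2] | (1 - x[7])) & x[0]) == 0
    return ((x[5] | x[6] | (1 - x[3])) & x[4]) == 0   # Condition (d)

def thin(binary):
    """Two-sub-iteration thinning of a binary image, per the conditions above."""
    img = np.pad(binary.astype(np.uint8), 1)      # pad so neighbor access is safe
    changed = True
    while changed:
        changed = False
        for first_pass in (True, False):
            deletions = [(r, c) for r, c in zip(*np.nonzero(img))
                         if _deletable(_neighbors(img, r, c), first_pass)]
            for r, c in deletions:                # delete after the full scan
                img[r, c] = 0
            changed = changed or bool(deletions)
    return img[1:-1, 1:-1]
```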
In this study, the operation for finding branch points is a two-dimensional convolution of the binary skeleton image with a 3-by-3 filter whose intensity value is 0 at the 4 vertices (corners) and 1 at the remaining positions. The points with a resulting value equal to or greater than 4 are considered branch points. Figure 8 (6) illustrates the branch point detection results.
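The convolution-based branch-point test described above can be sketched as follows, assuming scipy; masking the result by the skeleton itself is an added assumption, since off-skeleton pixels could otherwise reach the threshold.

```python
import numpy as np
from scipy.ndimage import convolve

def branch_points(skeleton):
    """Branch points: 3x3 kernel with 0 at the four corners, 1 elsewhere."""
    kernel = np.array([[0, 1, 0],
                       [1, 1, 1],
                       [0, 1, 0]], dtype=np.uint8)
    conv = convolve(skeleton.astype(np.uint8), kernel, mode='constant', cval=0)
    return (conv >= 4) & (skeleton > 0)   # value >= 4, restricted to the skeleton
```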
Locating dendritic spines
We locate the suspected dendritic spines as follows: (1) remove the spur pixels of the dendritic skeletons; the removed pixels are the putative locations of the dendritic spines. (2) In a bifurcation-centered and properly sized region of the skeleton, if there is a candidate point connected with the branch point, we consider the candidate point to indicate the position of a dendritic spine. This process is illustrated in Fig. 10 and the detail is stated in Algorithm 4.
Illustration of locating dendritic spines. From left to right: Skeleton, skeleton after removing spur pixels, the overlapped image. The red point is the position of dendritic spine (within the red square) and the structure marked by yellow is the overlapped section of the skeleton and skeleton after removing spur pixels
Filtering points on axons
The transgenic mouse line used in this study is YFP-H, in which a subset of layer 5 cortical neurons express YFP. Therefore, the YFP signals in these images contain both dendrites and axons. When searching for dendritic spines, it is essential to determine whether the detected points lie on an axon. For each structure of a proper size centered on a suspected spine, illustrated in Fig. 11, we take the ratio of its area to its perimeter and its average intensity as the judging criteria.
Illustration of filtering points on axons. a binary image of dendrites (left); b structure centered on a suspected spine, marked by a red circle (right)
As shown in Fig. 12, the positions marked with red circles are the results before screening and the positions marked with green plus signs are the results after screening. The positions marked only with a red circle are likely locations on axons, rather than spines. The detail is presented in Algorithm 7.
Comparison of results before and after screening on three different layers
Detection of synapses
Through the discussion in the previous two sections, we obtained the positions of the axonal boutons and dendritic spines in the two-photon image stacks. As mentioned above, in most synapses the presynaptic part is located on an axon and the postsynaptic part is located on a dendrite. It is therefore reasonable to obtain the 2D locations of the synapses by integrating the locations of the axonal boutons and dendritic spines. Specifically, we calculate the distance between the axonal bouton and the dendritic spine to determine whether the two overlap. Furthermore, we can count the synapses in 3D based on the detections in consecutive 2D images. As shown in Algorithm 8, for each synapse in a 2D image, we find its nearest synapse in the next layer. If this synapse is also the nearest neighbor of the synapse in the next layer, and the distance between them is small enough, the two detections correspond to the same synapse in the 3D view.
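The mutual-nearest-neighbor rule of Algorithm 8 can be sketched with a k-d tree, assuming scipy; the distance cutoff max_dist is an illustrative parameter, not a value from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def link_synapses(layer_a, layer_b, max_dist=5.0):
    """Match 2D synapse detections in consecutive layers.

    layer_a, layer_b: (N, 2) arrays of (row, col) positions. Returns index
    pairs (i, j) that are mutual nearest neighbours within max_dist pixels,
    i.e. the same synapse viewed in 3D.
    """
    tree_a, tree_b = cKDTree(layer_a), cKDTree(layer_b)
    d_ab, nn_ab = tree_b.query(layer_a)   # nearest point in B for each A
    _, nn_ba = tree_a.query(layer_b)      # nearest point in A for each B
    return [(i, j) for i, (j, d) in enumerate(zip(nn_ab, d_ab))
            if d <= max_dist and nn_ba[j] == i]
```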
Experimental results
In order to demonstrate the effectiveness of the proposed algorithm, we show two axon images corresponding to layer 1 and layer 20, with the axonal boutons marked by experienced neurobiologists indicated by red circles in Fig. 13 a and b. The corresponding results detected by our algorithm are shown in Fig. 13 c and d. The ground truth of synapses, axons, and dendrites was redundantly marked by three students, and disagreements were resolved by another biologist.
Illustration of true axonal boutons and detected axonal boutons. a, b True axonal boutons in Layer 1 and Layer 20, respectively. c, d Detected axonal boutons in Layer 1 and Layer 20, respectively
The manual annotation process lasted about 2 days. Round-like structures and the structures shown in Fig. 7 were labeled as axonal boutons and dendritic spines, respectively, while spine-bouton pairs were marked as synapses.
We conducted experiments on other layers and recorded the number of axonal boutons in the ground truth. By comparing the detected results with the corresponding ground truth, we determined the numbers of redundant and missing axonal boutons in different layers, as listed in Table 1.
Table 1 The numerical analysis of experimental result on detected axonal boutons in each layer
To illustrate the effectiveness of the proposed algorithm, we show partial results of the dendritic spine detection from Layer 1 to Layer 25 in Fig. 14.
Partial results of dendritic spine detection in Layer 1 to Layer 26. The red arrows point to the locations of false positives and the green arrows point to the locations of false negatives
As shown in Table 2, for several layers we recorded the number of dendritic spines in the ground truth and the numbers of false positive and missing dendritic spines, obtained by comparing the detected results with the corresponding ground truth.
Table 2 Results of detection dendritic spines on several layers
In Fig. 15, we can see that one axonal bouton, indicated by green rectangles, appears in layers 5-10 but is only marked in layer 5. Analogously, two axonal boutons, indicated by red rectangles and yellow rectangles respectively, are solely marked in layer 8. This method can count the axonal boutons precisely in 3D because it considers multi-layer information. A specific example in Fig. 16 illustrates this.
Axonal boutons marked in 3D view. Three different axonal boutons indicated by the red, blue and yellow rectangles with random indexes and colors in layers 5-10 and are solely marked by the blue arrows
The same axonal bouton appears in the image stack. The axonal boutons in the red rectangles with random indexes and colors are the same bouton that appears in layer 12 to layer 17. It is omitted in layer 15 but is marked solely in layer 14
The partial results of synapse detection from Layer 1 to Layer 26 are shown in Fig. 17 and the experimental results of synapse detection are shown in Table 3. The green ellipses mark the location of false negatives and the red arrows point to the location of false positives.
Partial results of synapse detection in Layer 1 to Layer 26. The red arrows point to the locations of false positives and the green ellipses mark the locations of false negatives
Table 3 Results of detection synapse on several layers
We have integrated the proposed method for identifying axonal boutons, dendritic spines, and synapses with TrakEM, a plugin of ImageJ. This automates the synapse analysis process. The left subgraph in Fig. 18 shows the 2D synapse positions, in which synapses are marked by yellow circles. It also provides an interactive function, which makes it easy to proofread the detection results. Furthermore, we marked the positions located by the automatic method and by manual annotation with blue triangles and a yellow triangle, respectively. The right subgraph of Fig. 18, extracted from the left subgraph, shows the manually marked position (marked by a yellow triangle) with a value of -1.
GUI presentation. left: The positions located by the automatic method (marked by blue triangles) and manually marked position (marked by a yellow triangle); right: Corresponding enlarged view of the manually marking position
In vivo two-photon microscopy has been widely used to study structural plasticity of axonal boutons and dendritic spines in live animals. Recently, Yang et al. [19] simultaneously labeled and imaged long-range projecting axons and local dendrites, and studied the turnover dynamics of boutons, spines, and synaptic contacts. This dual-color two-photon imaging method allows in vivo examination of synaptic dynamics in specific neural pathways. However, manual annotation of synaptic contacts is time-consuming and prone to bias. The efficiency of synapse detection will be greatly improved by replacing the manual method with automatic method. The automated method can also be used for bouton and spine detection.
As can be seen from the original image in Fig. 1, the structures of axons and dendrites are not salient enough, so they may be confused with the ambient noise. Therefore, it is necessary to carry out image enhancement to improve the accuracy of detection.
There are 140 two-photon images in total, each 512-by-512 in size with an x-y-z resolution of 137 × 137 × 700 nm/pixel. The times spent on manually checking the results of the automatic algorithm and on manual annotation are shown in the bar graph in Fig. 19. Our approach is much more efficient than manual annotation, and is especially advantageous when the data volume is large.
The time spent on manually checking the results of the automatic algorithm and manual annotation
In addition, we tested our method on another dataset provided by Beijing Normal University (referred to as Data B) and obtained satisfactory detection results. This dataset provides two-photon image data from neurons in the basal ganglia of Taeniopygia guttata. The volume of the dataset is 53.3 μm × 53.3 μm × 5.6 μm and the slice thickness is 0.2 μm. The size of each 2D image is 1024 × 1024 pixels. Some of the detection results are shown in Fig. 20, in which the green parts are axons and the red parts are dendrites. The positions of the candidate synapses detected using our pipeline are denoted by blue circles, while probable missing synapses are indicated by yellow arrows. We detected all 12 synapses in 3D precisely.
Synapses detection results on Data B
Applying our method to a new dataset requires determining the parameters of image enhancement, i.e., the radius of the axonal boutons and the radius of the line structure of the dendrites.
In [29], Yi Zuo et al. found, using in vivo two-photon imaging, that experience-dependent elimination of postsynaptic dendritic spines in the cortex was accelerated in ephrin-A2 knockout mice, and that ephrin-A2 regulates experience-dependent, N-methyl-D-aspartate (NMDA) receptor-mediated synaptic pruning through glial glutamate transport during maturation of the mouse cortex. In [30], Ajmal Zemmar et al. tested the effects of Nogo-A neutralization on synaptic plasticity in the motor cortex and on motor learning in the uninjured mature central nervous system (CNS). Based on a series of statistics, such as the numbers of dendrites, spines, and axons, they concluded that anti-Nogo-A-mediated enhancement of structural and functional synaptic plasticity enlarges the memory capacity per synapse, leading to improved motor learning in vivo. Data analysis in such studies can benefit from our proposed method, which will greatly facilitate data analysis related to dendrite, axon, and synapse imaging.
We presented a novel strategy for identifying axonal boutons, dendritic spines, and synapses in in vivo two-photon images. For a continuous sequence image stack, we can also count them in 3D by analyzing the context cues of the detected synapses. This approach will help neuroscientists automatically analyze and quantify the formation, elimination, and destabilization of axonal boutons, dendritic spines, and synapses. However, it is not yet possible to extract the morphology of synapses; obtaining synaptic shapes in 3D is one of our future directions.
Appendix 3D deconvolution
The 3D deconvolution operation implemented in ImageJ [22] consists of the following steps:
Download the software ImageJ. Then download the following files: Diffraction_PSF_3D.class, Diffraction_PSF_3D.java, Iterative_Deconvolve_3D.class, and Iterative_Deconvolve_3D.java. Next, put the files in the plugins folder;
Run ImageJ and load the original axon image stacks;
Open the Diffraction PSF 3D plugin. Fill the form with the related parameters and compute the point-spread function (PSF);
Open the Iterative Deconvolve 3D plugin. Select the generated PSF and the original axon image stacks, then input the number of iterations and generate the deconvolved axon image stacks.
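For readers working outside ImageJ, an iterative deconvolution of the same flavor can be sketched in Python with scikit-image's Richardson-Lucy routine; the Gaussian PSF below is a stand-in assumption, whereas the pipeline above uses a diffraction PSF from the Diffraction PSF 3D plugin.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import restoration

# stand-in anisotropic Gaussian PSF (broader along z, as in two-photon stacks)
psf = np.zeros((9, 9, 9))
psf[4, 4, 4] = 1.0
psf = gaussian_filter(psf, sigma=(2.0, 1.0, 1.0))
psf /= psf.sum()

stack = np.random.rand(16, 64, 64)                         # placeholder image stack
deconvolved = restoration.richardson_lucy(stack, psf, 30)  # 30 iterations
```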
Model the intensity of axonal bouton using a three-dimensional Gaussian surface:
$$R(x,y) = C\exp\left(- \frac{{{{\left(x - {x_{0}}\right)}^{2}} + {{\left(y - {y_{0}}\right)}^{2}}}}{{2{\delta^{2}}}}\right). $$
The partial derivatives R xx ,R xy ,R yx ,R yy can be computed as follows:
$$ \begin{array}{l} {R_{xx}}(x,y) = R(x,y)\left(\frac{{{{\left(x - {x_{0}}\right)}^{2}}}}{{{\delta^{4}}}} - \frac{1}{{{\delta^{2}}}}\right)\\ {R_{xy}}(x,y) = {R_{yx}}(x,y) = R(x,y)\frac{{\left(x - {x_{0}}\right)\left(y - {y_{0}}\right)}}{{{\delta^{4}}}}\\ {R_{yy}}(x,y) = R(x,y)\left(\frac{{{{\left(y - {y_{0}}\right)}^{2}}}}{{{\delta^{4}}}} - \frac{1}{{{\delta^{2}}}}\right). \end{array} $$
Then the eigenvalues λ a (x,y) of the Hessian matrix are solved as follows:
$$ \lambda_{a}(x,y) = R(x,y)\left(\left(x-x_{0}\right)^{2}+\left(y-y_{0}\right)^{2}-2\delta^{2} \pm \sqrt{\left(\left(x-x_{0}\right)^{2}+\left(y-y_{0}\right)^{2}+2\delta^{2}\right)^{2}-4\delta^{4}}\right)\Big/2\delta^{4}, $$
For clarity of presentation, we choose the cross section of y=y 0. The Gaussian curve corresponding to the pixel value of cross section y=y 0 is
$$ R(x) = C\exp\left(- \frac{{{{\left(x - {x_{0}}\right)}^{2}}}}{{2{\delta^{2}}}}\right). $$
And the edge point (x ∗,y 0) satisfies
$$ \left(x^{\ast}-x_{0}\right)^{2}+\left(y_{0}-y_{0}\right)^{2}=r^{2} $$
Additionally by the definition in [23], the edge point (x ∗,y 0) in (8) also satisfies the equation R ′′′(x ∗,y 0)=0. After some lengthy calculations, we have
$$ R\left(x^{\ast}\right)\left({\left(x^{\ast} - {x_{0}}\right)^{3}} - 3{\delta^{2}}\left(x^{\ast} - {x_{0}}\right)\right) = 0. $$
A suitable solution is
$$ x^{\ast} = {x_{0}} + \sqrt 3 \delta. $$
According to Eqs. (9) and (11), we conclude that \(\delta =r/\sqrt 3 \) is a good choice to identify the axonal boutons.
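The edge-point solution in Eq. (11) can be verified symbolically; a small sketch assuming sympy:

```python
import sympy as sp

x, x0, delta, C = sp.symbols('x x_0 delta C', positive=True)
R = C * sp.exp(-(x - x0) ** 2 / (2 * delta ** 2))
roots = sp.solve(sp.diff(R, x, 3), x)   # zeros of the third derivative
print(roots)  # expect x_0 and x_0 +/- sqrt(3)*delta, matching Eq. (11)
```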
According to [23], the magnitude of the second derivative of the extracted position is always maximum at the line position. Then, for a fixed y=y 0, the third derivative of Formula(4)
$$I(x') = {C_{den}}exp\left(- \frac{{x'^{2}}}{{2{\sigma^{2}}}}\right) = {C_{den}}exp\left(- \frac{{{{\left(x\cos \theta - y\sin \theta \right)}^{2}}}}{{2{\sigma^{2}}}}\right) $$
can be written as:
$$ - \frac{{I\left(x_{0}\cos \theta - y_{0}\sin \theta\right)}}{{{\sigma^{4}}}}\left(x_{0}\cos \theta - {y_{0}}\sin \theta \right)\left[{\left(x_{0}\cos \theta - {y_{0}}\sin \theta \right)^{2}} - 3{\sigma^{2}}\right] = 0. $$
Then we can obtain
$$x_{0}\cos \theta - {y_{0}}\sin \theta = \sqrt 3 \sigma. $$
With additional efforts, as illustrated in Fig. 9(a), we can obtain the relationship between the variance σ and the radius of the linear structure w:
$$\sigma = w/\sqrt 3. $$
For I as shown in Formula(4), the partial derivatives I xx ,I xy ,I yx , and I yy can be computed as follows:
$$ \begin{array}{l} {I_{xx}}(x,y) = I(x,y)\left[ \frac{{{{\cos }^{2}}\theta }}{{{\sigma^{4}}}}{\left(x\cos \theta - y\sin \theta \right)^{2}} - \frac{{{{\cos }^{2}}\theta }}{{{\sigma^{2}}}}\right]\\ {I_{yy}}(x,y) = I(x,y)\left[ \frac{{{{\sin }^{2}}\theta }}{{{\sigma^{4}}}}{\left(x\cos \theta - y\sin \theta \right)^{2}} - \frac{{{{\sin }^{2}}\theta }}{{{\sigma^{2}}}}\right] \end{array} $$
$$ {I_{xy}}(x,y) = {I_{yx}}(x,y) = I(x,y)\left[ \frac{{\sin \theta \cos \theta }}{{{\sigma^{2}}}}+\frac{{\sin \theta \cos \theta }}{{{\sigma^{4}}}}{\left(x\cos \theta - y\sin \theta \right)^{2}}\right]. $$
Then we can get the eigenvalues of \(H(x,y) = \left ({\begin {array}{cc} {{I_{xx}}(x,y)}&{{I_{xy}}(x,y)}\\ {{I_{yx}}(x,y)}&{{I_{yy}}(x,y)} \end {array}} \right)\):
$$ \lambda_{d}(x,y) = - \frac{1}{{{\sigma^{2}}}}\exp \left(- \frac{{{{\left(x\cos \theta - y\sin \theta \right)}^{2}}}}{{2{\sigma^{2}}}}\right) $$
Comparison of segmentation with and without image region enhancement
Process of finding branch points. Left: kernel; Middle: skeleton of dendrite; Right: Convolution result
Appendix E. Comparison of segmentation with and without image region enhancement
To justify the use of the image region enhancement on boutons, some experiments were conducted. We used three different thresholds, 1000, 2000, and 3000, for direct segmentation. The upper figures provide direct segmentation results without image region enhancement on boutons. The lower figures retain the final segmentation regions containing a local maximum value. From these figures, we conclude that a small threshold retains the bright axon shaft, whereas a large threshold eliminates the weak axonal boutons. For these reasons, we propose the image region enhancement method to extract the disk-like structures. The experimental results demonstrate the effectiveness of our proposed method.
Appendix F. Process of finding branch points
We show the process of finding branch points in the following figure. The kernel is a 3-by-3 filter with an intensity value of 0 for the 4 vertices and 1 for the remaining positions. The points on the convolution result with an intensity value equal to or greater than 4 are considered branch points.
CNS:
DAMAS:
Deconvolution approach for the mapping of acoustic sources
GFP:
Green fluorescent protein
NMDA:
N-methyl-Daspartate
PSF:
Point-spread function
YFP:
Yellow fluorescent protein
Sherrington C. The integrative action of the nervous system. J Nerv Ment Dis. 1907; 34(12):801.
Tohka J, Ruotsalainen U. Imaging brain change across different time scales. Front Neuroinformatics. 2012; 6:29.
De Paola V, Svoboda K, et al. Cell type-specific structural plasticity of axonal branches and boutons in the adult neocortex. Neuron. 2006; 49(6):861–75.
Becker N, Nägerl UV, et al. Ltd induction causes morphological changes of presynaptic boutons and reduces their contacts with spines. Neuron. 2008; 60(4):590–7.
Karube F, Kubota Y, Kawaguchi Y. Axon branching and synaptic bouton phenotypes in gabaergic nonpyramidal cell subtypes. J Neurosci Off J Soc Neurosci. 2004; 24(12):2853–65.
Grillo FW, Song S, et al. Increased axonal bouton dynamics in the aging mouse cortex. Proc Natl Acad Sci. 2013; 110(16):1514–23.
LeDoux JE. Emotion circuits in the brain. Focus. 2009; 7(7):274–4.
Bourne JN, Harris KM. Balancing structure and function at hippocampal dendritic spines. Ann Rev Neurosci. 2008; 31:47.
Sherrington C. Estructura de los centros nerviosos de las aves. Revista Trimestral de Histología Normal y Patológica. 1888; 1:1–10.
Segal M. Dendritic spines and long-term plasticity. Nat Rev Neurosci. 2005; 6(4):277–84.
Fan J, Zhou X, et al. An automated pipeline for dendrite spine detection and tracking of 3d optical microscopy neuron images of in vivo mouse models. Neuroinformatics. 2009; 7(2):113–30.
Xie J, Zhao T, et al. Anisotropic path searching for automatic neuron reconstruction. Med Image Anal. 2011; 15(5):680–9.
Mukherjee S, Condron B, Acton ST. Tubularity flow field–a technique for automatic neuron segmentation. IEEE Trans Image Process. 2015; 24(1):374–89.
Basu S, Condron B, Aksel A, Acton ST. Segmentation and tracing of single neurons from 3d confocal microscope images. IEEE J Biomed Health Inform. 2013; 17(2):319–35.
González G, Türetken E, Fua P, et al. Delineating trees in noisy 2d images and 3d image-stacks. In: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference On. San Francisco: IEEE;2010. p. 2799–806.
González G, Aguet F, Fleuret F, Unser M, Fua P. Steerable features for statistical 3d dendrite detection. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2009. 2009;625–32.
Rodriguez A, Ehlenberger DB, Dickstein DL, Hof PR, Wearne SL. Automated three-dimensional detection and shape classification of dendritic spines from fluorescence microscopy images. PLoS ONE. 2008; 3(4):1997.
Gabriele R, Qi G, Dirk F. Synaptic microcircuits in the barrel cortex. 2015;59–108.
Yang Y, Liu D-Q, Huang W, Deng J, Sun Y, Zuo Y, Poo M-M. Selective synaptic remodeling of amygdalocortical connections associated with fear memory. Nat Neurosci. 2016; 19(10):1348–1355.
Li W, Zhang D, Xie Q, Chen X, Han H. An automated detection for axonal boutons in vivo two-photon imaging of mouse. In: Eighth International Conference on Graphic and Image Processing. Tokoyo: SPIE (the international society for optics and photonics);2017. p. 102250Q.
Dougherty RP. Extensions of damas and benefits and limitations of deconvolution in beamforming. AIAA paper. 2013; 11:2961.
Schmid B, Schindelin J, et al. A high-level 3d visualization api for java and imagej. Bmc Bioinformatics. 2010; 11(1):1–7.
Steger C. An unbiased detector of curvilinear structures. IEEE Trans Pattern Anal Mach Intell. 1998; 20(2):113–25.
Lindeberg T. Edge detection and ridge detection with automatic scale selection. Int J Comput Vis. 1998; 30(2):117–54.
Steger C. Extracting curvilinear structures: A differential geometric approach. In: 4th European Conference on Computer Vision. Cambridge: Springer;1996. p. 630–41.
Wang S, Chen M, et al. Detection of dendritic spines using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks. Comput Math Meth Med. 2015; 2015(2):1–12.
Yoshihara Y, Roo MD, Muller D. Dendritic spine formation and stabilization. Curr Opin Neurobiol. 2009; 19(2):146–53.
Lam L, Lee SW, Suen CY. Thinning methodologies: a comprehensive survey. IEEE Trans Pattern Anal Mach Intell. 1992; 14(9):869–85.
Yu X, Wang G, Gilmore A, Yee AX, Li X, Xu T, Smith SJ, Chen L, Zuo Y. Accelerated experience-dependent pruning of cortical synapses in ephrin-a2 knockout mice. Neuron. 2013; 80(1):64–71.
Zemmar A, Weinmann O, Kellner Y, Yu X, Vicente R, Gullo M, Kasper H, Lussi K, Ristic Z, Luft AR, et al. Neutralization of nogo-a enhances synaptic plasticity in the rodent motor cortex and improves motor learning in vivo. J Neurosci. 2014; 34(26):8685–98.
The financial support of Special Program of Beijing Municipal Science & Technology Commission (NO. Z161100000216146), Science and Technology Development Fund of Macau (044/2015/A2), Scientific research instrument and equipment development project of Chinese Academy of Sciences (YZ201671), Strategic Priority Research Program of the CAS (NO. XDB02060001), Institute of Automation, CAS, for the 3D Reconstruction of Brain Tissue at Synaptic Level (NO. Y3J2031DZ1) and National Natural Science Foundation of China (NO. 61673381, NO. 61201050, NO. 61306070, No. 31472001) is appreciated.
The data and source code in this paper is available upon request.
Research Base of Beijing Modern Manufacturing Development, No.100, Pingleyuan, Beijing, 100124, China
Qiwei Xie
Data Mining Lab, School of Management, Beijing University of Technology, No.100, Pingleyuan, Beijing, 100124, China
Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Beijing, 100190, China
Qiwei Xie, Xi Chen & Hua Han
Faculty of Information Technology, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau, China
Hao Deng
Institute of Neuroscience, Chinese Academy of Sciences, 320 Yue Yang Road, Shanghai, 200031, China
Danqian Liu & Yang Yang
Beijing Normal University, No. 19, Waida Jie, Xinjie Kou, Beijing, 100875, China
Yingyu Sun & Xiaojuan Zhou
Center for Excellence in Brain Science and Intelligence Technology Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, 320 Yue Yang Road, Shanghai, 200031, China
Yang Yang & Hua Han
University of Chinese Academy of Sciences, School of Future Technology, No.19(A) Yuquan Road, Beijing, 100049, China
Hua Han
Danqian Liu
Yingyu Sun
Xiaojuan Zhou
Conceived and designed the experiments: QX, XC, HH, YY. Performed the experiments: QX, HD. Analyzed the data: QX. Contributed materials: DL, YY, YS, XZ. All authors read and approved the final manuscript.
Correspondence to Yang Yang or Hua Han.
Xie, Q., Chen, X., Deng, H. et al. An automated pipeline for bouton, spine, and synapse detection of in vivo two-photon images. BioData Mining 10, 40 (2017). https://doi.org/10.1186/s13040-017-0161-5
in vivo two-photon imaging
Genetic Analysis Workshop 19: Sequence, Blood Pressure and Expression Data. Proceedings.
Multipoint association mapping for longitudinal family data: an application to hypertension phenotypes
Yen-Feng Chiu1,
Chun-Yi Lee1 and
Fang-Chi Hsu2
It is essential to develop adequate statistical methods to fully utilize information from longitudinal family studies. We extend our previous multipoint linkage disequilibrium approach—simultaneously accounting for correlations between markers and repeat measurements within subjects, and the correlations between subjects in families—to detect loci relevant to disease through gene-based analysis. Estimates of disease loci and their genetic effects along with their 95 % confidence intervals (or significance levels) are reported. Four different phenotypes—ever having hypertension at 4 visits, incidence of hypertension, hypertension status at baseline only, and hypertension status at 4 visits—are studied using the proposed approach. The efficiency of estimates of disease locus positions (inverse of standard error) improves when using the phenotypes from 4 visits rather than using baseline only.
Genetic Effect
Disease Locus
Association Mapping
Hypertension Status
Linkage Disequilibrium Mapping
Approaches for analyzing longitudinal family data have been categorized into 2 groups [1]: (a) first summarizing repeated measurements into 1 statistic (eg, a mean or slope per subject) and then using the summarized statistic as a standard outcome for genetic analysis; or (b) simultaneous modeling of genetic and longitudinal parameters. In general, joint modeling is appealing because (a) all parameter estimates are mutually adjusted, and (b) within- and between-individual variability at the levels of gene markers, repeat measurements, and family characteristics are correctly accounted for [1].
The semiparametric linkage disequilibrium mapping for the hybrid family design we developed previously [2] uses all markers simultaneously to localize the disease locus without making an assumption about genetic mechanism, except that only 1 disease gene lies in the region under study. The advantages of this approach are (a) it does not require the specification of an underlying genetic model, so estimating the position of a disease locus and its standard error is robust to a wide variety of genetic mechanisms; (b) it provides estimates of disease locus positions, along with a confidence interval for further fine mapping; and (c) it uses linkage disequilibrium between markers to localize the disease locus, which may not have been typed. We extended this approach to map susceptibility genes using longitudinal nuclear family data with an application to hypertension. Four different outcomes were used based on the proposed method: (I) ever having hypertension ("Ever"), (II) incidence event with status changed from unaffected to affected ("Progression"), (III) first available visit as baseline only ("Baseline"), and (IV) all available time points ("Longitudinal"). We compared the estimates of the disease locus positions, their standard errors, the genetic effect estimate at the disease loci, and their significance for the 4 phenotypes to examine the efficiency gained from using repeated longitudinal phenotypes.
Genome-wide genotypes and phenotype data
Association mapping was conducted on chromosome 3 of the genome-wide association study (GWAS) data. A total of 65,519 single-nucleotide polymorphisms (SNPs) included in 1095 genes were genotyped on chromosome 3 for 959 individuals from 20 original pedigrees in Genetic Analysis Workshop 19 (GAW19). Of these individuals, there were 178 (38 %) affected offspring out of 469 offspring for phenotype (I) "Ever"; 130 (31 %) out of 421 offspring for phenotype (II) "Progression"; 64 (11 %) out of 600 offspring for phenotype (III) "Baseline"; and 60 (11 %) out of 565 offspring to approximately 85 (45 %) out of 189 offspring across the 4 visits (or 87 [21.63 %] out of 402 offspring on average) for phenotype (IV) "Longitudinal" (Table 1). To compare phenotypes (I) and (II), only individuals with at least 2 measurements were included in the "Ever" phenotype. PedCut [3] was used to split large pedigrees with more than 20 members into nuclear pedigrees. Consequently, we analyzed a total of 138 pedigrees with 1,495 individuals (the IDs for missing parents were added to form trios). In the divided pedigrees, the nuclear families contained between 3 and 25 individuals. Five SNPs were removed because they failed the test of Hardy-Weinberg equilibrium (HWE) (p value < 10−4). The HWE test was performed using PLINK 1.07 [4] based on 56 unrelated subjects. (For information on PLINK, see http://pngu.mgh.harvard.edu/purcell/plink/.) A total of 22,056 genotypes from various SNPs with genotyping errors (genotyping error rate was around 3.51 × 10−4) were further excluded using the MERLIN 1.1.2 computing package (see http://www.sph.umich.edu/csg/abecasis/merlin/tour/linkage.html). None of the covariates was adjusted for in this approach.
Table 1 Number of offspring for different phenotypes [table not reproduced; columns: visit, affected offspring, all offspring, and number of nuclear families]
Multipoint linkage disequilibrium mapping
Suppose M markers were genotyped in the region R at locations \(0 \le t_1 < t_2 < \dots < t_M \le T\). We assume there are 2 alleles per marker. With H(t) being the target allele at marker position t, and h(t) being the nontarget allele, we define
\( \begin{array}{l}{Y}_1^{D_{k_{il}}}(t)=\left\{\begin{array}{l}1\kern0.36em \mathrm{if}\;\mathrm{the}\;\mathrm{transmitted}\;\mathrm{paternal}\;\mathrm{allele}\;\mathrm{at}\;t\;\mathrm{is}\;H\;(t)\\ {}0\kern0.24em \mathrm{if}\;\mathrm{the}\;\mathrm{transmitted}\;\mathrm{paternal}\;\mathrm{allele}\;\mathrm{at}\;t\;\mathrm{is}\;h\;(t)\end{array}\right.,\;\\ {}{Y}_2^{D_{k_{il}}}(t)=\left\{\begin{array}{l}1\kern0.24em \mathrm{if}\;\mathrm{the}\;\mathrm{nontransmitted}\;\mathrm{paternal}\;\mathrm{allele}\;\mathrm{at}\;t\ \mathrm{is}\;H(t)\\ {}0\kern0.24em \mathrm{if}\;\mathrm{the}\;\mathrm{nontransmitted}\;\mathrm{paternal}\;\mathrm{allele}\;\mathrm{at}\;t\ \mathrm{is}\;h(t)\end{array}\right.,\end{array} \) for the affected offspring \( {D}_{k_{il}} \),
and \( \begin{array}{l}{Y}_1^{{\overline{D}}_{k_{il}}}(t)=\left\{\begin{array}{l}\hbox{-} 1\kern0.36em \mathrm{if}\;\mathrm{the}\;\mathrm{transmitted}\;\mathrm{paternal}\;\mathrm{allele}\;\mathrm{at}\;t\;\mathrm{is}\;H\;(t)\\ {}0\kern0.24em \mathrm{if}\;\mathrm{the}\;\mathrm{transmitted}\;\mathrm{paternal}\;\mathrm{allele}\;\mathrm{at}\;t\;\mathrm{is}\;h\;(t)\end{array}\right.,\;\\ {}{Y}_2^{{\overline{D}}_{k_{il}}}(t)=\left\{\begin{array}{l}\hbox{-} 1\kern0.24em \mathrm{if}\;\mathrm{the}\;\mathrm{nontransmitted}\;\mathrm{paternal}\;\mathrm{allele}\;\mathrm{at}\;t\ \mathrm{is}\;H(t)\\ {}0\kern0.24em \mathrm{if}\;\mathrm{the}\;\mathrm{nontransmitted}\;\mathrm{paternal}\;\mathrm{allele}\;\mathrm{at}\;t\ \mathrm{is}\;h(t)\end{array}\right.,\end{array} \) for the unaffected offspring \( {\overline{D}}_{k_{il}} \). Then, we define the preferential transmission statistic \( {Y}_{T_{k_{il}}}(t)={Y}_1^{D_{k_{il}}}(t)-{Y}_2^{D_{k_{il}}}(t) \) for the paternal side and \( {X}_{T_{k_{il}}}(t)={X}_1^{D_{k_{il}}}(t)-{X}_2^{D_{k_{il}}}(t) \) for the maternal side for an affected trio; similarly, the preferential transmission statistics \( {Y}_{U_{k_{il}}}(t)={Y}_1^{{\overline{D}}_{k_{il}}}(t)-{Y}_2^{{\overline{D}}_{k_{il}}}(t) \) and \( {X}_{U_{k_{il}}}(t)={X}_1^{{\overline{D}}_{k_{il}}}(t)-{X}_2^{{\overline{D}}_{k_{il}}}(t) \) for an unaffected trio on the paternal and maternal sides, respectively, where \(k_{il} = 1, \dots, N_{1il}\) (\(N_{2il}\) for unaffected), \(N_{1il}\) (\(N_{2il}\)) is the number of affected (unaffected) offspring in family i at the l-th time point, \(i = 1, \dots, n\), \(l = 1, \dots, L\) (L = 1 or 4 in this study).
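To make these definitions concrete, the sketch below (my illustration, not code from the paper) computes the preferential transmission statistic for one parental side of an affected trio, with alleles coded as the strings "H" and "h".

```python
def transmission_statistic(transmitted, nontransmitted, target="H"):
    """Preferential transmission statistic Y_T = Y1 - Y2 for an affected trio:
    +1 if only the transmitted allele is the target allele H(t),
    -1 if only the nontransmitted allele is, and 0 otherwise.
    For an unaffected trio the indicators are negated, so Y_U is the
    negative of this value."""
    y1 = 1 if transmitted == target else 0
    y2 = 1 if nontransmitted == target else 0
    return y1 - y2

# Paternal and maternal statistics are summed into Z_1 for an affected trio.
y_t = transmission_statistic("H", "h")  # paternal side -> +1
x_t = transmission_statistic("h", "h")  # maternal side -> 0
z_1 = x_t + y_t                          # Z_1 = X_T + Y_T = 1
```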
The expectation of the statistic is \( {\mu}_{1{k}_{il}j}\left(\delta,\pi \right)=E\left[{Y}_{T_{k_{il}}}\left({t}_j\right)\left|{\varPhi}_1\right.\right]=\left(1-2{\theta}_{t_j,\tau}\right)C{\left(1-{\theta}_{t_j,\tau}\right)}^N{\pi}_j \) for case-parent trios and \( {\mu}_{2{k}_{il}j}\left(\delta,\pi \right)=E\left[{Y}_{U_{k_{il}}}\left({t}_j\right)\left|{\varPhi}_2\right.\right]=\left(1-2{\theta}_{t_j,\tau}\right){C}^{\ast}{\left(1-{\theta}_{t_j,\tau}\right)}^N{\pi}_j \) for control-parent trios, where \( {\theta}_{t_j,\tau} \) is the recombination fraction between marker position \(t_j\) and disease locus position τ; the recombination fraction is a parametric function of the parameter of primary interest (τ, the physical position of the functional variant); N is the number of generations since the initiation of the disease variant; Φ1 denotes the event that the offspring is affected; Φ2 represents the event that the offspring is unaffected; \( C=E\left[{Y}_{T_{k_{il}}}\left(\tau \right)\left|{\varPhi}_1\right.\right]=E\left[{X}_{T_{k_{il}}}\left(\tau \right)\left|{\varPhi}_1\right.\right] \); \( {C}^{\ast}=E\left[{Y}_{U_{k_{il}}}\left(\tau \right)\left|{\varPhi}_2\right.\right]=E\left[{X}_{U_{k_{il}}}\left(\tau \right)\left|{\varPhi}_2\right.\right] \); \(\delta =\left(\tau, N, C, {C}^{\ast}\right)\) is the vector of parameters; and \({\pi}_j = \Pr\left[h({t}_j)\,|\,h(\tau)\right]\). \( {\mu}_{1{k}_{il}j} \) is the probability for an affected offspring to receive a target allele, and \( -{\mu}_{2{k}_{il}j} \) is the probability for an unaffected offspring to receive a target allele. The statistics \( {Z}_{1{k}_{il}j}={X}_{T{k}_{il}j}+{Y}_{T{k}_{il}j} \) and \( {Z}_{2{k}_{il}j}={X}_{U{k}_{il}j}+{Y}_{U{k}_{il}j} \) were used to estimate the parameters. The estimating equations used to solve for parameters δ are:
$$ {S}_1\left(\delta \right)={\displaystyle \sum_{i=1}^n{\displaystyle \sum_{l=1}^L{\displaystyle \sum_{k_{il}=1}^{N_{1il}}{\displaystyle \sum_{j=1}^M\frac{\partial {\mu}_{1{k}_{il}j\;}\left(\delta,\;{\widehat{\pi}}_j\right)}{\partial \delta }}}}}\;Co{v}^{-1}\left({Z}_{1{k}_{il}j}\right)\;\left\{{Z}_{1{k}_{il}j}-2{\mu}_{1{k}_{il}j}\;\left(\delta, {\widehat{\pi}}_j\right)\right\}, $$
where \( {\widehat{\pi}}_j \) is the average of nontransmitted parental alleles in the sample.
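As a numerical illustration of the mean model, the sketch below evaluates \( {\mu}_{1{k}_{il}j} \) at a trial parameter vector. The Haldane map function used to convert map distance into a recombination fraction, and all parameter values, are my assumptions for illustration; the paper does not specify them here.

```python
import numpy as np

def haldane_theta(t_j, tau):
    """Recombination fraction between marker position t_j and disease locus
    position tau (both in cM), under the Haldane map function (assumed)."""
    return 0.5 * (1.0 - np.exp(-2.0 * abs(t_j - tau) / 100.0))

def mu1(t_j, tau, N, C, pi_j):
    """mu_1 = (1 - 2*theta) * C * (1 - theta)**N * pi_j for a case-parent trio."""
    theta = haldane_theta(t_j, tau)
    return (1.0 - 2.0 * theta) * C * (1.0 - theta) ** N * pi_j

# Marker 1 cM away from the trial disease locus, 20 generations, C = 0.6.
print(mu1(t_j=14.5, tau=13.5, N=20, C=0.6, pi_j=0.3))
```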
The estimating equations were solved iteratively for parameters τ, N, C, and C*, where τ and C are the 2 parameters of interest. The variance of the disease locus position estimate was estimated to make inferences about the disease locus position (τ) and its genetic effect (C) [2]. Theoretically, the genetic effect of τ, characterized by C, is the transmission probability that the affected offspring will carry the disease allele, H, at τ. Detailed derivations for case-parent trios in a cross-sectional design can be found in Chiu et al. [2, 5]. We will present the details of this proposed methodology elsewhere.
Gene-based association mapping was conducted for all SNPs on chromosome 3. This approach accounts for correlations between markers and repeated phenotypes within subjects, and correlations between subjects per family. The consistent estimates of hypertension locus position using "Ever" and "Progression" are shown in Table 2 and Fig. 1, while the consistent estimates of hypertension locus position using baseline and longitudinal data (at all 4 visits) are listed in Table 3 and Fig. 2.
Table 2 Significant and consistent estimates of disease locus positions and their genetic effects using "Ever" and "Progression" phenotypes [table not reproduced; columns: gene*, previous hits, \( \widehat{\tau} \) ± SE, and p value. Genes listed: FBLN2, OSBPL10, CMTM8, RFT1, ADAMTS9, EPHA3, SIDT1, IFT122, RBP1, PLOD2, LEKR1, RSRC1, ECT2, PEX5L, and OSTN. Recoverable position estimates include 13.6464 ± 0.00026, 14.6802 ± 0.0010, 113.3097 ± 0.0026, 114.7743 ± 0.00039, and 181.0145 ± 0.013 cM; recoverable p values include 8.85 × 10−7, 1.61 × 10−12, and 7.3 × 10−7]
Ĉ, the genetic effect estimate; G, previous GWAS hits; L, previous linkage hits; \( \widehat{\tau} \), the disease locus position estimate in cM
*Because of space limitations, we list only the 2 phenotypes with consistent estimates for the disease locus positions (the difference between the 2 \( \widehat{\tau} \) for both phenotypes is less than 10−2 cM) and significant estimates for the genetic effects (both with P < 4.57 × 10−5, Bonferroni)
Fig. 1 Length of 95 % confidence intervals (CIs) for the estimate of the disease locus position for "Ever" and "Progression" phenotypes
Table 3 Significant and consistent estimates of disease locus positions and their genetic effects using "Baseline" and "Longitudinal" phenotypes [table not reproduced; columns as in Table 2. Genes listed: GRM7†, SLC4A7, SCN10A, AC092058.3, LTF, FAM116A, LRIG1, TBC1D23, PLCXD2, LSAMP, ILDR1, PDIA5, HPS3, CASRL1, IGF2BP2, FETUB‡, IL1RAP‡, and C3orf21‡. Recoverable entries include GRM7 (7.4917 ± 0.00048 and 7.4871 ± 0.0015 cM), SLC4A7 (27.4521 ± 0.000045 cM), and one p value <10−18]
*Displayed are all genes where p ≤ 0.05
†The gene is significant with the Bonferroni correction (P < 4.57 × 10−5) and its P values are 2.31 × 10−6 and 0.00044 for "Ever" and "Progression," respectively
‡The same genes for the "Ever" and "Progression" phenotypes had P values <0.05 but >4.57 × 10−5 for the genetic effect estimate
Fig. 2 Length of 95 % confidence intervals (CIs) for the estimate of the disease locus position for "Baseline" and "Longitudinal" phenotypes
A total of 119 (11 %), 79 (7 %), 49 (4 %), and 42 (4 %) of 1095 genes had a significant genetic effect (P < 4.57 × 10−5 with Bonferroni correction) based on hypertension status for "Ever," "Progression," "Baseline," and "Longitudinal," respectively. Only 3 significantly associated genes (P ≤ 0.05) for the baseline and longitudinal phenotypes overlapped with the significantly associated genes for the "Ever" and "Progression" outcomes: FETUB, IL1RAP, and C3orf21. Several hits identified here have been reported from linkage or GWAS studies. Table 2 shows genes with a significant genetic effect (P < 4.57 × 10−5). Table 3 presents the genes that are significant at a significance level of 0.05; only 1 gene, GRM7, is significant at the level of P < 4.57 × 10−5.
Figures 1 and 2 display the 95 % confidence intervals for the estimate of the hypertension locus position for the 4 phenotypes, centered at the estimated disease locus position. The comparison is shown for the genes listed in Tables 2 and 3. The standard errors of the disease locus position estimates were smaller for 64 % of the genes when based on longitudinal data (Table 3) than when based on baseline data. The results from "Progression" and "Ever" were similar because the incident cases included in "Progression" were also included in the analysis of "Ever"; only prevalent cases, a relatively small proportion, were additionally included in the analysis of "Ever."
Methods of genetic analysis rely heavily on correlations among family members' outcomes to infer genetic effects, whereas longitudinal studies allow investigators to study factors' effects on outcomes and changes over time [1]. To retrieve full information from longitudinal family data, appropriate statistical approaches are crucial. We proposed a multipoint linkage disequilibrium approach accounting for multilevel correlations between markers per subject, within-subject longitudinal observations, and subjects within families, aiming to correctly localize the disease locus and assess its genetic effects. This approach has several advantages: it allows us to estimate the disease locus position, its genetic effect, and the 95 % confidence intervals without specifying a genetic disease model, while making full use of the markers and repeated measurements. In addition, this approach treats the genotype data as random conditional on the phenotype, eliminating the problem of ascertainment bias. We applied this approach to the baseline and longitudinal prevalence/incidence of hypertension events. The efficiency of parameter estimates was similar for the "Ever" and "Progression" categories, but improved with repeated longitudinal outcomes compared to the use of "Baseline" only. This difference between analyses might largely result from the different total sample sizes and proportions of hypertensive subjects for the different phenotypes. Several identified genes on chromosome 3 for hypertension were consistent with findings from previous linkage and association studies. Despite its advantages, this proposed approach also has limitations; for example, covariate adjustment is not available.
We deeply appreciate the reviewers' thorough reviews and constructive suggestions, which greatly improved the quality of this manuscript. This project was supported by a grant from the Ministry of Science and Technology, Taiwan (MOST102-2118-M-400-005) and a grant from the National Health Research Institutes, Taiwan (PH-103-pp-04). We thank Ms. Karen Klein (Biomedical Research Services and Administration, Wake Forest School of Medicine) for her editorial contributions to this manuscript.
This article has been published as part of BMC Proceedings Volume 10 Supplement 7, 2016: Genetic Analysis Workshop 19: Sequence, Blood Pressure and Expression Data. Summary articles. The full contents of the supplement are available online at http://bmcproc.biomedcentral.com/articles/supplements/volume-10-supplement-7. Publication of the proceedings of Genetic Analysis Workshop 19 was supported by National Institutes of Health grant R01 GM031575.
YFC designed the overall study; CYL conducted statistical analyses; YFC and FCH drafted the manuscript. All authors read and approved the final manuscript.
Institute of Population Health Sciences, National Health Research Institutes, Miaoli, 35053, Taiwan, Republic of China
Department of Biostatistical Sciences, Division of Public Health Sciences, Wake Forest School of Medicine, Winston-Salem, 27157, USA
Gauderman WJ, Macgregor S, Briollais L, Scurrah K, Tobin M, Park T, et al. Longitudinal data analysis in pedigree studies. Genet Epidemiol. 2003;25 Suppl 1:S18–28.
Chiu YF, Lee CY, Kao HY, Pan WH, Hsu FC. Analysis of family- and population-based samples using multiple linkage disequilibrium mapping. Ann Hum Genet. 2013;77(3):251–67.
Liu F, Kirichenko A, Axenovich TI, van Duijn CM, Aulchenko YS. An approach for cutting large and complex pedigrees for linkage analysis. Eur J Hum Genet. 2008;16(7):854–60.
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MA, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75.
Chiu YF, Liang KY, Pan WH. Incorporating covariates into multipoint association mapping in the case-parent design. Hum Hered. 2010;69(4):229–41.
Using mixed effects logistic regression models for complex survey data on malaria rapid diagnostic test results
Chigozie Louisa J. Ugwu1 &
Temesgen T. Zewotir1
The effect of malaria in Nigeria is still worrisome, and malaria has remained a leading public health issue in the country. In 2016, Nigeria had the highest malaria burden among the 15 sub-Saharan African countries that together accounted for 80% of global malaria cases. The purpose of this study is to utilize appropriate statistical models to identify socio-economic, demographic and geographic risk factors that influence malaria transmission in Nigeria, based on malaria rapid diagnostic test survey results. This study contributes towards re-designing intervention strategies to achieve the target of the Sustainable Development Goals 2030 Agenda for total malaria elimination.
This study adopted the generalized linear mixed models approach, which accounts for the complexity of the sample survey design associated with the data. The 2015 Nigeria Malaria Indicator Survey data on children aged 6 to 59 months are used in the study.
The findings of this study show that the cluster effect is significant \((P<0.0001)\), which suggests heterogeneity among the clusters. The vulnerability of a child to malaria infection was found to increase as the child advances in age. Other major significant factors were the presence of anaemia in a child, the area where a child resides (urban or rural), the mother's level of education, poverty level, number of household members, sanitation, age of the head of household, availability of electricity, and the type of roofing material. Moreover, children from the Northern and South-West regions were also found to be at higher risk of malaria disease and re-infection.
Improvement of socio-economic development and quality of life is paramount to achieving a malaria-free Nigeria. Malaria risk is strongly linked with poverty, under-development, and the mother's educational level.
Almost half of the world's population is at risk of malaria, but in terms of mortality and morbidity attributed to the disease, African children aged under 5 years are the most affected. According to the World Health Organization (WHO), 2016 alone recorded 216 million cases of malaria infection and 445,000 deaths worldwide, of which 91% occurred in African countries [1].
Of the fifteen countries in sub-Saharan Africa that accounted for 91% of the global malaria cases, Nigeria bears the largest burden of about 40%, which includes 25% of infant mortality, close to 31% of under-five mortality, and nearly 11% of maternal mortality on an annual basis [2]. Similarly, there are more than 100 million clinically diagnosed malaria cases in Nigeria, with approximately 300,000 malaria-associated childhood deaths occurring yearly [3]. The effect of malaria disease in Nigeria is worrisome and has remained a leading public health issue in the country; it is a major cause of about 60% of unscheduled hospital visits and more than 30% of hospitalizations of children and pregnant women in Nigeria [4]. Malaria parasitaemia is mainly observed during the first pregnancy but decreases afterwards; pregnancy in turn suppresses the normal immune response to the infection and, as such, may cause severe cases among pregnant women [5]. Malaria infection of the mother increases the risk of abortion and stillbirth, and also the odds of congenital malaria transmission to newborns, which eventually reduces the infant's survival chances.
In Nigeria, malaria is endemic and has contributed to huge economic losses to the nation through its debilitating impact on the workforce and the drain on national resources for disease control and treatment [6]. Moreover, malaria mostly affects agricultural regions; the infection weakens its victims' strength by making them succumb to other infectious diseases, and as such affects the country's agricultural efficiency [7].
The Nigerian government, through the National Malaria Control Programme (NMCP), together with several non-governmental partners such as Roll Back Malaria (RBM), has made, and is still making, substantial efforts to reduce malaria transmission and associated child deaths through the implementation of the 2009–2013 malaria control strategic plan and the wide dissemination of malaria knowledge through mass distribution of long-lasting insecticide-impregnated nets (LLINs) within selected states of the country. These efforts yielded substantial results within 2010–2015, reducing malaria prevalence from 52 to 45% [2]. The NMIS outcomes between 2010 and 2015 indicated an improvement of about 5% in malaria prevalence reduction, though some regions still lag behind with high numbers of malaria cases [2]. Malaria has been, and remains, a leading cause of death among children aged 6 months to 5 years in Nigeria, mostly among poor and rural communities [1, 2].
Recent research on malaria prevalence in other malaria-endemic countries [8,9,10,11,12,13,14] and in Nigeria [5,6,7, 15,16,17,18,19,20,21] has identified major factors such as unavailability of LLINs, presence of other infections, illiteracy on the part of parents or caregivers, poverty, and inadequate dissemination of malaria knowledge as being highly associated with malaria transmission.
Most studies in Nigeria have been largely limited to community- and hospital-based simple random sample surveys among pregnant mothers [5, 17,18,19]; very few have studied clinical malaria cases among children [15, 20, 22, 23]. Using data from the 2010 Nigeria Malaria Indicator Survey and the Mapping Malaria Risk in Africa (MARA) dataset, [5, 24] employed standard logistic regression and Bayesian geostatistical modelling; their results showed that environmental and climatic factors are major predictors of malaria parasite infection. Also, [25] used the 2008 Nigeria Demographic Health Survey (NDHS) data to study the relationship between children's fever reports and poverty in Nigeria, and found that fewer fevers were reported in households that possess mosquito bed nets. However, no study has examined under-five malaria risk indicators in Nigeria using national-level data.
The world is presently in the post-MDG era, and the WHO Global Technical Strategy for Malaria 2016–2030 was recently endorsed with the objectives of drastically reducing global malaria occurrence by at least 90%, reducing malaria-related deaths by at least 90%, eliminating malaria in at least 35 countries, and preventing the re-emergence of malaria in all malaria-free countries [26]. To meet the SDG 2030 target of total malaria elimination, and to achieve Nigeria's own 2014–2020 agenda of reducing malaria-related deaths to zero, investigation into the individual and household (socio-economic, geographic, demographic and environmental) determinants of malaria prevalence and associated child mortality is paramount for the best strategic interventions. To achieve success in re-strategizing policy measures and policy implementation that will extensively lower the malaria burden in the country, consistent investigation into the epidemiology and the major risk factors associated with malaria infection is paramount [5, 15, 24].
In this paper, the 2015 Nigeria Malaria Indicator Survey (NMIS) data were utilized to investigate the factors associated with the malaria RDT status of children aged under 5 years in Nigeria; hence, this study contributes to highlighting measures that may be implemented towards re-designing intervention strategies to achieve the SDG 2016–2030 agenda for total malaria elimination in Nigeria.
The 2015 Nigeria Malaria Indicator Survey (NMIS) was conducted by the National Population Commission (NPopC), the National Bureau of Statistics (NBS), the National Malaria Elimination Programme (NMEP) and the malaria partnerships in Nigeria, with support from PMSI-USAID, GFATM, UNICEF, WHO and the United Kingdom Department for International Development (DFID), and was carried out from October through December 2015 [2]. This was the second and more comprehensive malaria indicator survey, implemented five years after the first survey in 2010 and one year after the development of the new national malaria strategic plan that covers 2014–2020 [2]. This is an internationally recognized household survey, periodically conducted in highly malaria-endemic countries during the malaria season to provide national-level information on malaria indicators and prevalence. The NMIS captured a number of individual and household characteristics. A sample of 8148 households was selected from 333 clusters across the country, of which 138 clusters were in urban areas and about 195 clusters in rural areas [2].
Children aged 6–59 months born to women in the 8148 sampled households were tested for malaria and anaemia using blood samples. A total of 5236 children participated in the 2015 NMIS; hence, children aged 6–59 months were used in this study.
Response variable
Malaria rapid diagnostic tests (RDTs) are immunochromatographic tests which detect the presence of malaria antigens discharged from parasitized red blood cells.
The World Health Organization (WHO) supports the use of both microscopy and rapid diagnostic testing for malaria diagnosis. Microscopy, being the older method, has been recognized as the standard approach for malaria diagnosis, but its application is tedious: it requires an experienced microscopist (laboratory specialist), a controlled environment, time, a degree of operational expertise, and cost [27]. In remote rural communities, microscopy may be subject to false negative results, because results are highly subject to human error attributable to the loss of parasites during the staining procedure. Conversely, RDTs do not require specialized equipment, lengthy processing, or highly skilled personnel. The introduction of RDTs has been very fruitful for early detection, prompt treatment and the reduction of severe cases, supporting the effective 'test and treat' strategy recommended by WHO [28]. The RDT method has gained popularity in every setting and has been widely applied in population-based surveys for immediate intervention, because it gives a rapid result within 15–30 min [29, 30]. Moreover, systematic reviews have shown that the RDT approach provides a reliable diagnosis of malaria infection [31, 32].
Therefore, for the purpose of this study, the dependent variable is the binary response from each child's RDT outcome, where 1 signifies the presence of malaria infection and 0 no malaria infection.
Explanatory variables
The explanatory variables were selected to address the study objective and were based on previous studies so that results could be critically compared. These include:
Child's characteristics: sex of child (female, male); age of child (6–59 months; the anemic status of a child (yes, no); child treated fever before malaria test (yes, no).
Geospatial: sampling enumeration clusters; region (North central, North East, North West, South East, South South, South West); type of place of residence (rural, urban).
Mother's characteristics: mother's educational level (no education, primary, secondary and higher education)
Head of household's characteristics: age of head of household (continuous), gender of head of household (female, male).
Socio-economic characteristics of the household: wealth index (poorer, middle-range, richer, richest); number of household members (continuous); availability of some critical household possessions such as radio (yes, no); television (yes, no); electricity (yes, no); household wall material (mud-wood-others), roof (thatched-wood-others, zinc-metal-roof), main floor (cement-wood-other, palm-sand-others); source of drinking water (protected water, tap-piped water, unprotected water).
Environmental and sanitation characteristics: Use of mosquito indoor residual spray (yes, no); use of mosquito net (yes, no); total number of nets used (continuous); toilet facility (flush toilet, no toilet, pit-latrine); distance from water source (< 30 min, 31–49 min, 50–90 min, > 90 min, on premises).
Under a complex survey design with unequal weighting, ordinary logistic regression estimates are inappropriate [33,34,35,36]. Accordingly, this study employed the mixed effects logistic regression model approach under the generalized linear mixed models (GLMMs) framework, which accounts for the complexity of the sampling design. Moreover, the GLMM accommodates both random and fixed effects in the model [37,38,39].
Let \(y_{ikt}\) be the binary response variable of the ith child in the kth household within the tth sampling clusters. Let \(\pi _{ikt}=P(y_{ikt}=1)\) denote the probability that an ith child RDT outcome in the kth household, within the tth cluster is positive. Suppose \(\mathbf{x }^{\prime }_{ikt}\) is the row vector of covariates, which corresponds to the ith child in the kth household and the tth cluster and \(\beta\) is the vector of unknown model parameters. Then, following [14, 40,41,42], the generalized linear mixed models (GLMMs) framework of the mixed effect logistic regression models formulates the logit of \(\pi _{ikt}\) as a function of the covariates \(\mathbf{x }^{\prime }_{ikt}\) and the random cluster effect \(\gamma _{t}\), as:
$$ \mathrm{logit}(\pi _{ikt})= \log \left[\frac{\pi _{ikt}}{1-\pi _{ikt}}\right]=\mathbf{x}^{\prime }_{ikt}\beta + \gamma _{t}. $$
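A minimal simulation of this model structure, assuming Gaussian cluster random intercepts; the dimensions and coefficient values below are illustrative, not estimates from the NMIS data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_per_cluster, n_covariates = 333, 16, 3

beta = np.array([-1.0, 0.8, 0.5])              # fixed effects (illustrative)
gamma = rng.normal(0.0, 1.2, size=n_clusters)  # cluster random intercepts

x = rng.normal(size=(n_clusters, n_per_cluster, n_covariates))  # x'_ikt
eta = x @ beta + gamma[:, None]                # linear predictor per child
pi = 1.0 / (1.0 + np.exp(-eta))                # inverse logit: P(RDT positive)
y = rng.binomial(1, pi)                        # simulated binary RDT outcomes
```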
A weighted mixed effects logistic regression model was fitted with the explanatory variables. The weights were the sampling weights used in the NMIS complex survey design. To avert the influence of confounding variables, all the main effects were retained in the model. We then assessed whether any interaction terms needed to be incorporated into the model by fitting each of the two-way interaction terms formed from all the explanatory variables, one at a time, to the model that contained all the main effects. Interactions which markedly improved the goodness of fit and were significant (P < 0.10) were sequentially added to the model until there was no further significant interaction effect to include.
Accordingly, only four interaction effects were retained, namely: region and type of place of residence, wealth index and type of place of residence, age and gender of the head of household, and age of the head of household and the number of household members. Consequently, the final model included all the main effects and these four two-way interaction effects; a sketch of the selection procedure is given below.
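The selection procedure just described can be sketched as the loop below. The `fit_glmm` wrapper and its `pvalue_of` method are hypothetical stand-ins for the SAS GLIMMIX fits actually used in the paper.

```python
def select_interactions(main_effects, candidate_pairs, data, alpha=0.10):
    """Sequentially add two-way interactions that are significant (P < alpha)
    when fitted one at a time on top of the current model."""
    kept = []
    improved = True
    while improved:
        improved = False
        for a, b in candidate_pairs:
            term = f"{a}:{b}"
            if term in kept:
                continue
            model = fit_glmm(main_effects + kept + [term], data)  # hypothetical fit
            if model.pvalue_of(term) < alpha:                     # hypothetical API
                kept.append(term)
                improved = True
    return kept
```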
Table 1 Covariance parameter estimates
All the model fits and estimates were obtained using the SAS GLIMMIX procedure [43]. The model fit was assessed using the ratio of the generalized chi-square statistic to its degrees of freedom, which yielded 0.90. This result indicated a good model fit with no residual over-dispersion. The cluster random effect, which accounted for the complexity of the sampling design, is significant, as shown in Table 1. The result shows that there is heterogeneity between clusters: the cluster variability accounts for about 50% of the total variability in under-five children's RDT outcomes.
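On the latent logistic scale, the share of total variability attributable to clusters is the intraclass correlation \(\sigma^2_{cluster} / (\sigma^2_{cluster} + \pi^2/3)\). The quick check below uses a placeholder cluster variance, since the Table 1 values are not reproduced here; a cluster variance comparable to \(\pi^2/3 \approx 3.29\) gives the roughly 50% share reported above.

```python
import math

sigma2_cluster = 3.3               # placeholder for the Table 1 estimate
sigma2_residual = math.pi**2 / 3   # logistic residual variance, latent scale

icc = sigma2_cluster / (sigma2_cluster + sigma2_residual)
print(f"ICC = {icc:.2f}")          # ~0.50 when the two variances are comparable
```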
Table 2 Type III tests for fixed effects
The Type III tests for the fixed effects in Table 2 show that region, mother's level of education, child's anaemia level, age of the child, age of the head of household, toilet facility, number of household members, cluster altitude in meters, availability of electricity, type of place of residence (urban or rural), and the child's fever report two weeks prior to the survey, as well as the interactions between number of household members and age of head of household, gender and age of head of household, and region and type of place of residence, were significantly associated with the child's malaria RDT outcome.
Table 3 Parameter estimates of odds ratio for the main effects
In this study, the main effect parameter estimates, the odds ratios (OR), the 95% confidence intervals and the P-values are shown in Table 3. Some of the results from Table 3 are highlighted below.
The age effect shows that as a child gets older, the odds of a positive malaria RDT outcome increase. The risk of anaemia was found to be associated with the malaria status of under-five children: the odds of a positive RDT outcome for anaemic under-five children are 3.16 times those of non-anaemic, but otherwise identical, children.
The mother's educational level was significantly associated with the risk of malaria. A positive malaria RDT outcome became more likely with a decreasing level of the mother's education: a child with an uneducated mother is \(2.0454\) times \((P\text{-}value = 0.0006)\) more likely to have a positive malaria RDT outcome than an otherwise identical child whose mother has a higher level of education.
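Odds ratios such as the 3.16 and 2.0454 quoted above are obtained by exponentiating logit-scale coefficients; the generic sketch below shows the conversion, with an illustrative standard error rather than the one actually estimated.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logit-scale coefficient and its SE into an odds ratio
    with an approximate 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Anaemia effect: coefficient implied by OR = 3.16; the SE of 0.15 is illustrative.
or_hat, lo, hi = odds_ratio_ci(beta=math.log(3.16), se=0.15)
print(f"OR = {or_hat:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```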
Table 4 Parameter estimates of odds ratio for the interaction effects
A summary of the interaction effect estimates is given in Table 4. The interaction effect between region (South East, South South, South West, North Central, North East and North West) and type of place of residence (urban or rural) is presented in Fig. 1, which shows that malaria prevalence is higher in rural areas than in urban areas in all the regions of Nigeria.
Fig. 1 Predicted probabilities for positive malaria RDT by region and type of place of residence with 95% confidence limits
Fig. 2 Predicted probabilities for positive malaria RDT by household wealth index and type of place of residence with 95% confidence limits
Figure 2 presents the interaction effect involving type of place of residence and wealth index (socio-economic status) of households. The prevalence of malaria was significantly higher among the poorer and poorest households, in both urban and rural areas, than among the middle-range, richer and richest households.
The interaction between gender and age of the head of household is presented in Fig. 3, which shows that increasing age of both male and female heads of household increases the odds of malaria among under-five children.
Fig. 3 Predicted probabilities for positive malaria RDT by gender and age of head of household with 95% confidence limits
Fig. 4 Predicted probabilities for positive malaria RDT by age of head of household and number of household members with 95% confidence limits
Finally, Fig. 4 presents the interaction effect between the age of the head of household and household size: the number of household members increases as the age of the head of household increases, which also weighs heavily on the malaria RDT outcomes of children under 5 years in Nigeria.
Understanding the critical risk factors and prevalence of malaria among children in a household is very informative and crucially important for re-designing appropriate intervention strategies for final malaria eradication in Nigeria. This study aimed to investigate the determinants of malaria infection among Nigerian children aged under 5 years using the 2015 NMIS data.
The use of mosquito bed nets had an insignificant effect on the under-five child's malaria RDT outcome. This result is in line with the findings of [9, 14, 44], but contrary to the results obtained from studies in Ethiopia [8], Burkina Faso [13] and Rwanda [45], which observed a significant relationship between those predictor variables and malaria prevalence among children under 5 years. The Roll Back Malaria partners, the WHO and many other private donors have contributed tremendously to mosquito bed net distribution in many regions of Nigeria, which might explain the insignificant effect of mosquito bed nets on under-five children's RDT outcomes.
In this study, it was observed that as a child gets older, the odds of malaria infection increase. Children between the ages of 6 and 24 months were found to be less affected by the malaria parasite than older children between 49 and 59 months. This result is consistent with many recent studies of under-five children, which observed that a child's vulnerability to malaria infection increases with age, older children being more at risk [9, 14]; malaria positivity increasing with age is also evident from other recent studies on under-five children [4, 8, 12, 16, 17]. A child between 0 and 13 months might still be protected by maternal antibodies; mothers give more attention to children under one year; and as children get older, outdoor activities expose them to more malaria risk [9, 15, 16]. Similarly, the results have shown that a child's gender has no association with malaria infection, which is similar to the results obtained by [8, 13, 14].
This study observed a result similar to that of [25]: the malaria RDT status of under-five children in Nigeria was positively associated with anaemia risk. This means that for anaemic children the RDT outcome tends to be positive, which may require further investigation to ascertain whether the result reflects RDT sensitivity issues.
Maternal education plays a very important role in children's health in a household. The results of this study show a significant association between the educational level of the child's mother and malaria prevalence. This finding is similar to the studies carried out by [10, 14]. Since mothers are at the centre of family well-being, their exposure through education is paramount to understanding health-related issues and preventive measures against malaria infection in their children.
Regarding the geographical impact on malaria prevalence, the findings show significant geographical variation in malaria prevalence among Nigerian children. Children living in the North West, North Central, North East, and South West were at substantially higher malaria risk than those residing in the South East and South South regions. This result is consistent with similar results from published studies [4, 24].
In this paper, a GLMM was fitted and the complexity of the survey design was incorporated in the model. The heterogeneity among clusters was found to be significant, and these effects were accounted for in the analysis of the factor effects.
The level of under-development in Nigeria presents a serious challenge for malaria eradication. The findings from this study have also provided insight into the roles of socio-economic status and the mother's educational level, which was found to influence her children's vulnerability to malaria infection. Having better-educated mothers is human capital for the nation and the family at large. Therefore, child malaria eradication and information strategies should incorporate the enhancement of mothers' education.
The significant association between under-five children's RDT outcomes and their anaemia test results is one of the more alarming findings about the RDT diagnostic method, as anaemic children's RDT outcomes tend to be positive, or vice versa. Therefore, one future direction of this research is to investigate the joint distribution of anaemia test status and RDT outcome in under-five children.
OR: odds ratios
EAs: enumeration areas
PSU: primary sampling unit
GLMMs: generalized linear mixed models
MDGs: Millennium Development Goals
SDGs: Sustainable Development Goals
DFID: Department for International Development (United Kingdom)
NPopC: National Population Commission
NPHC: National Population and Housing Census
UNICEF: The United Nations Children's Emergency Fund
USAID: The United States Agency for International Development
GFATM: Global Fund to Fight AIDS, Tuberculosis and Malaria
NMEP: National Malaria Eradication Program
WHO. World malaria report. Geneva: World Health Organization; 2017.
National Malaria Control Programme. Nigeria MIS final report. Abuja: Federal Republic of Nigeria; 2015.
WHO and UN Partners. Country statistics and global health estimates. Abuja: WHO Statistics Profile; 2015.
Adigun AB, Gajere EN, Oresanya O, Vounatsou P. Malaria risk in Nigeria: Bayesian geostatistical modelling of 2010 malaria indicator survey data. Malar J. 2015;14:156.
Gunn JKL, Ehiri JE, Jacobs ET, Ernst KC, Pettygrove S, Kohler LN. Population-based prevalence of malaria among pregnant women in Enugu State, Nigeria: the Healthy Beginning Initiative. Malar J. 2015;14:438.
Onwujekwe O, Uguru N, Etiaba E, Chikezie I, Uzochukwu B, Adjagba A. The economic burden of malaria on households and the health system in Enugu State Southeast Nigeria. PloS ONE. 2013;8:e78362.
Oladepo O, Tona GO, Oshiname FO, Titiloye MA. Malaria knowledge and agricultural practices that promote mosquito breeding in two rural farming communities in Oyo State, Nigeria. Malar J. 2010;9:91.
Ayele D, Zewotir T, Mwambi H. Prevalence and risk factors of malaria in Ethiopia. Malar J. 2012;11:195.
Gahutu JB, Steininger C, Shyirambere C, Zeile I, Cwinya-Ay N, Danquah I, et al. Prevalence and risk factors of malaria among children in southern highland Rwanda. Malar J. 2011;10:134.
Sultana M, Sheikh N, Mahumud RA, Jahir T, Islam Z, Sarker AR. Prevalence and associated determinants of malaria parasites among Kenyan children. Trop Med Int Health. 2017;45:25.
Lowe R, Chirombo J, Tompkins AM. Relative importance of climatic, geographic and socio-economic determinants of malaria in Malawi. Malar J. 2013;12:416.
Clark TD, Greenhouse B, Njama-Meya D, Nzarubara B, Maiteki-Sebuguzi C, Staedke SG. Factors determining the heterogeneity of malaria incidence in children in Kampala, Uganda. J Infect Dis. 2008;198:393–400.
Baragetti M, Fournet F, Henry M, Assi S, Ouedraogo H, Rogier C. Social and environmental malaria risk factors in urban areas of Ouaga-dougou, Burkina Faso. Malar J. 2009;8:13.
Roberts D, Matthews G. Risk factors of malaria in children under the age of five years old in Uganda. Malar J. 2016;15:246.
Uzochukwu BSC, Onwujekwe EO, Onoka CA, Ughasoro MD. Rural–urban differences in maternal responses to childhood fever in South East Nigeria. PloS ONE. 2008;3:e1788.
Dawaki S, Al-Mekhlafi HM, Ithoi I, Ibrahim J, Atroosh WM, Abdulsalam AM, et al. Is Nigeria winning the battle against malaria? Prevalence, risk factors and KAP assessment among Hausa communities in Kano State. Malar J. 2016;15:351.
Dogara MM, Ocheje AJ. Prevalence of malaria and risk factors among patients attending Dutse General Hospital, Jigawa State, Nigeria. Int J Pub Environ Health. 2016;11:270–7.
Agomo CO, Oyibo WA. Factors associated with risk of malaria infection among pregnant women in Lagos, Nigeria. Infect Dis Poverty. 2013;2:19.
Fana SA, Bunza MDA, Anka SA, Imam AU, Nataala SU. Prevalence and risk factors associated with malaria infection among pregnant women in a semi-urban community of north-western Nigeria. Inf Dis Poverty. 2015;4:24.
Olasehinde GI, Ajay AA, Taiwo SO, Adekeye BT, Adeyeba OA. Prevalence and management of falciparum malaria among infants and children in Ota, Ogun State, Southwestern Nigeria. Afr J Clin Exper Microbiol. 2010;11:159–63.
Okonko IO, Soleye FA, Amusan TA, Ogun AA, Udeze AO, Nkang AO. Prevalence of malaria plasmodium in Abeokuta, Nigeria. Malays J Microbiol. 2009;5:113–8.
Nwaorgu OC, Orajaka BN. Prevalence of malaria among children 1–10 years old in communities in Awka North Local Government Area, Anambra State South East Nigeria. Ethiopia Int J Multidiscip Res. 2011;5:264–81.
Ejezie GC, Ezedinachi ENU, Usanga EA, Gemade EII, Ikpatt NW, Alaribe AAA. Malaria and its treatment in rural villages of Aboh Mbaise, Imo State, Nigeria. Acta Trop. 1990;48:17–24.
Onyiri N. Estimating malaria burden in Nigeria: a geostatistical modelling approach. Geospat Health. 2015;10:306.
Yusuf OB, Adeoye BW, Oladepo OO, Peters DH, Bishai D. Poverty and fever vulnerability in Nigeria: a multilevel analysis. Malar J. 2010;9:235.
WHO. Global technical strategy for malaria 2016–2030. Geneva: World Health Organization; 2015.
Maltha J, Gillet P, Jacobs J. Malaria rapid diagnostic tests in endemic settings. Clin Microbiol Inf. 2013;19:399–407.
WHO. Guildelines for the treatment of malaria. 2nd ed. Geneva: World Health Organization; 2010.
WHO. Malaria rapid diagnostic test performance: summary results of WHO product testing of malaria RDTs: round 1–7 (2008–2016). Geneva: World Health Organization; 2017.
Wongsrichnalai C, Barcus MJ, Muth S, Sutamihardja A, Wernsdorfer WH. A review of malaria diagnostic tools: microscopy and rapid diagnostic test (RDT). Am J Trop Med Hyg. 2007;6:119–27.
Abba K, Deeks JJ, Olliaro PL, Naing C, Jackson SM, Takwoingi Y, et al. Rapid diagnostic tests for diagnosing uncomplicated P. falciparum malaria in endemic countries. Cochrane Database Syst Rev. 2011;7:CD008122.
Boyce MR, Menya D, Turner EL, Laktabai J, Prudhomme O'MW. Evaluation of malaria rapid diagnostic test (RDT) use by community health workers: a longitudinal study in western Kenya. Malar J. 2018;17:2016.
Heeringa SG, West BT, Berglund PA. Applied survey data analysis. 2nd ed. New York: Chapman & Hall/CRC; 2017.
Rao JNK, Scott AJ. The analysis of categorical data from complex sample surveys: Chi-squared tests for goodness of fit and independence in two-way tables. J Am Stat Assoc. 1998;76:221–30.
Wilson JR, Koehler KJ. Hierarchical models for cross-classified overdispersed multinomial data. J Bus Econ Stat. 1991;9:103–10.
Rozi S, Mahmud S, Lancaster G, Hadden W, Pappas G. Multilevel modeling of binary outcomes with three-level complex health survey data. Open J Epidemiol. 2017;7:27.
McCullagh P, Nelder JA. Generalized linear models. 2nd ed. New York: Chapman & Hall; 1989.
Breslow NE, Clayton DG. Approximate inference in generalized linear mixed models. J Am Stat Assoc. 1993;88:9–25.
Neuhaus JM, Kalbfleisch JD, Hauck WW. A comparison of cluster-specific and population-averaged approaches for analyzing correlated binary data. Int Stat Rev. 1991;59:25–35.
Pendergast JF, Gange SJ, Newton MA, Lindstrom MJ, Palta M, Fisher MR. A survey of methods for analyzing clustered binary response data. Int Stat Rev. 1996;89:118.
Capanu M, Gönen M, Begg CB. An assessment of estimation methods for generalized linear mixed models with binary outcomes. Stat Med. 2013;32:4550–66.
Agresti A. An introduction to categorical data analysis. 2nd ed. New Jersey: Wiley; 1996.
SAS. PHREG and Procedures, REGRESSION. SAS/STAT 9.4 User's Guide. Cary: SAS Institute Inc; 2017.
Okebe J, Mwesigwa J, Kama EL, Ceesy SJ, Njie F. A comparative case control study of the determination of the clinical malaria in the Gambia. Malar J. 2014;13:306.
Kateera F, Mens PF, Hakizimana E, Ingabire CM, Muragijemariya L, Karinda P, Grobusch MP. Malaria parasite carriage and risk determinants in a rural population: a malariometric survey in Rwanda. Malar J. 2015;14:16.
CLJU and TTZ conceptualized the modeling idea; CLJU performed the analysis; both CLJU and TTZ jointly drafted and revised the manuscript. All authors read and approved the final manuscript.
The authors appreciate Measure DHS, Calverton Macro, USA, the National Bureau of Statistics, the National Malaria Control Programme of the Federal Ministry of Health, Abuja, and other contributors for granting the authors access to the 2015 NMIS data. The first author also appreciates the study leave and the opportunity granted by the University of Nigeria, Nsukka, Nigeria.
The data set for this study was obtained by request from Measure Demographic Health Survey (DHS) website: http://www.dhsprogram.com/data.
The ethical clearance for the survey was obtained from measure DHS and the ethical committee of ICF Macro (Calverton, MD, USA).
Chigozie Louisa J. Ugwu and Temesgen T. Zewotir contributed equally to this work
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Westville Campus, Durban, South Africa
Chigozie Louisa J. Ugwu
& Temesgen T. Zewotir
Correspondence to Chigozie Louisa J. Ugwu.
Generalized Chi-square statistic
Interaction effect
Link function
Odds ratios
Random effects
Growing Annuity
A growing annuity is a finite stream of cash flows that occur at equal intervals of time and grow at a constant rate. It is also called an increasing annuity. It differs from an ordinary annuity and an annuity due in that the periodic cash flows in a growing annuity grow at a constant rate, whereas they stay constant in an annuity.
Many cash flow streams constitute growing annuities. For example, rental contracts may stipulate an increase in annual rent at a constant rate. The multi-stage dividend growth model might include a stage in which a company's dividend is expected to grow at a constant rate over a certain period.
Present Value of a Growing Annuity
The present value of a growing annuity can be calculated by (a) finding each cash flow by growing the first cash flow at the given constant rate, (b) individually discounting each cash flow to time 0 and (c) summing up the component present values.
It can also be worked out directly by using the following formula:
$$ {\rm \text{PV}} _ {\text{GA}}=\frac{\text{C}}{\text{r}-\text{g}}\times\left(\text{1}-\left(\frac{\text{1}+\text{g}}{\text{1}+\text{r}}\right)^\text{n}\right) $$
The present value of a growing annuity due can be worked out by multiplying the above equation by (1 + r).
$$ {\rm \text{PV}} _ {\text{GAD}}=\frac{\text{C}}{\text{r}-\text{g}}\times\left(\text{1}-\left(\frac{\text{1}+\text{g}}{\text{1}+\text{r}}\right)^\text{n}\right)\times(\text{1}+\text{r}) $$
Where PVGA is the present value of growing annuity, PVGAD is the present value of annuity due, C is the periodic cash flow, r is the periodic discount rate, g is the periodic growth rate and n is the total number of cash flows.
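The formula translates directly into code. The sketch below is a minimal implementation assuming r ≠ g (when r = g the formula degenerates: each discounted cash flow equals C/(1 + r), so the present value is simply n·C/(1 + r)).

```python
def pv_growing_annuity(c, r, g, n, due=False):
    """Present value of a growing annuity.

    c: first periodic cash flow, r: periodic discount rate,
    g: periodic growth rate, n: number of cash flows. Assumes r != g.
    Set due=True for a growing annuity due (multiplies by 1 + r).
    """
    pv = c / (r - g) * (1.0 - ((1.0 + g) / (1.0 + r)) ** n)
    return pv * (1.0 + r) if due else pv
```

Setting g = 0 recovers the ordinary annuity formula, which is a quick sanity check.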
Future Value of a Growing Annuity
The future value of a growing annuity can be calculated by (a) growing the initial cash flow at g to obtain each individual cash flow, (b) finding the future value of each cash flow at the interest rate r, and (c) summing up all the component future values.
The future value of a growing annuity can also be calculated by growing the present value of the growing annuity at the interest rate r for n periods. This can be expressed mathematically as follows:
$$ {\rm \text{FV}} _ {\text{GA}}={\rm \text{PV}} _ {\text{GA}}\times{(\text{1}+\text{r})}^\text{n} $$
Where FVGA is the future value of growing annuity, PVGA is the present value of growing annuity, r is the periodic discount rate and n is the number of cash flows.
We have effectively moved a single value at time 0, i.e. PVGA, n periods into the future at the interest rate r.
Substituting the PVGA formula in the above equation, we get the following direct formula:
$$ {\rm \text{FV}} _ {\text{GA}}=\frac{\text{C}}{\text{r}-\text{g}}\times\left(\text{1}-\left(\frac{\text{1}+\text{g}}{\text{1}+\text{r}}\right)^\text{n}\right)\times{(\text{1}+\text{r})}^\text{n} $$
This can be simplified as follows:
$$ {\rm \text{FV}} _ {\text{GA}}=\frac{\text{C}}{\text{r}-\text{g}}\times\left({(\text{1}+\text{r})}^\text{n}-{(\text{1}+\text{g})}^\text{n}\right) $$
Where FVGA is the future value of a growing annuity.
The future value of a growing annuity due can be worked out by multiplying the above expression by (1 + r).
$$ {\rm \text{FV}} _ {\text{GAD}}=\frac{\text{C}}{\text{r}-\text{g}}\times\left({(\text{1}+\text{r})}^\text{n}-{(\text{1}+\text{g})}^\text{n}\right)\times(\text{1}+\text{r}) $$
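Because the future value is just the present value compounded for n periods, the code can reuse the function sketched above:

```python
def fv_growing_annuity(c, r, g, n, due=False):
    """Future value of a growing annuity: the present value grown at r for n
    periods, i.e. C/(r - g) * ((1 + r)**n - (1 + g)**n), times (1 + r) if due."""
    return pv_growing_annuity(c, r, g, n, due=due) * (1.0 + r) ** n
```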
Your parents want to set up a college fund to finance your 4-year bachelor's program. The tuition fee is $40,000 per semester, payable in advance. The tuition fee is expected to grow at 4% and the college fund will earn 8% interest per annum. Calculate the amount the college fund must hold when you start college.
You need to work out the present value of a growing annuity due in this case. There are two semesters in a year, so the periodic growth rate and periodic rate of return are 2% and 4%, respectively, and there are 8 semesters in total.
The following calculation works out the balance needed:
$$ {\rm \text{PV}} _ {\text{GAD}}=\frac{\text{\$40,000}}{\text{4%}-\text{2%}}\times\left(\text{1}-\left(\frac{\text{1}+\text{2%}}{\text{1}+\text{4%}}\right)^\text{8}\right)\times(\text{1}+\text{4%})=\text{\$299,270} $$
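As a check of the arithmetic, the sketch below reuses the pv_growing_annuity function defined earlier and reproduces the $299,270 figure.

```python
balance = pv_growing_annuity(c=40_000, r=0.04, g=0.02, n=8, due=True)
print(round(balance))  # 299270
```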
Modified home range kernel density estimators that take environmental interactions into account
Guillaume Péron (ORCID: orcid.org/0000-0002-6311-4377)1
Kernel density estimation (KDE) is a major tool in the movement ecologist's toolbox that is used to delineate where geo-tracked animals spend their time. Because KDE bandwidth optimizers are sensitive to temporal autocorrelation, statistically robust alternatives have been advocated: first, data-thinning procedures, and more recently, autocorrelated kernel density estimation (AKDE). These yield asymptotically consistent, but very smoothed, distributions, which may feature biologically unrealistic aspects such as spilling beyond impassable borders.
I introduce a semi-parametric variant of AKDE designed to extrapolate more realistic home range shapes by incorporating movement mechanisms into the bandwidth optimizer and into the base kernels. I implement a first approximate version based on the step selection framework. This method accommodates land cover selection, permeability of linear features, and attraction to select landscape features when delineating home ranges.
In a plains zebra (Equus quagga), the reluctance to cross a railway, the avoidance of dense woodland, and the preference for grassland when foraging created significant differences between the estimated home range contours by the new and by previous methods.
There is a tradeoff between fully parametric density estimators, which can be very realistic but need to be provided with a good model and adequate environmental data, and non-parametric density estimators, which are more widely applicable and asymptotically consistent, but whose details are bandwidth-limited. The proposed semi-parametric approach attempts to strike this balance, but I outline a few areas of future improvement. I expect the approach to find its use in studies that compare extrapolated resource availability and interpolated resource use, in order to discover the movement mechanisms that we need to improve the extrapolations.
Many researchers use kernel density estimators (KDE) to extrapolate where a geo-tracked animal spends its time, often using the 95% extrapolated isopleth as the home range contour [1,2,3]. KDEs work by approximating the stationary utilization distribution p(r) of the animal, i.e., its time budget with respect to location r, with a sum of "kernels", i.e., unimodal distributions κ centered on each recorded location {ri} [4, 5].
$$ \widehat{p}\left(r,\left\{{\boldsymbol{r}}_{\boldsymbol{i}}\right\},{\boldsymbol{\upsigma}}_{\boldsymbol{B}}\right)=\frac{1}{N}\sum \limits_{i=1}^N\kappa \left(\boldsymbol{r},{\boldsymbol{r}}_{\boldsymbol{i}},{\boldsymbol{\upsigma}}_{\boldsymbol{B}}\right) $$
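As an illustration (not part of the original analysis), a minimal Python sketch of Eq. 1 with isotropic Gaussian kernels and a scalar bandwidth might look as follows; the function name and array layout are hypothetical:

```python
import numpy as np

def kde_utilization(grid, locations, sigma_b):
    """Eq. 1 with isotropic Gaussian kernels: average of one kernel per
    recorded location, evaluated at a grid of candidate points.
    grid: (M, 2) query points; locations: (N, 2) recorded fixes."""
    diff = grid[:, None, :] - locations[None, :, :]   # (M, N, 2)
    sq_dist = (diff ** 2).sum(axis=-1)                # (M, N)
    kernels = np.exp(-sq_dist / (2 * sigma_b ** 2))
    kernels /= 2 * np.pi * sigma_b ** 2               # bivariate Gaussian norm
    return kernels.mean(axis=1)                       # (M,) density estimate
```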
The parameter σB, termed the bandwidth, controls the spread of each kernel around each recorded location, and therefore ultimately the degree of smoothing of the resulting distribution [4, 6, 7]. A small bandwidth yields a distribution with numerous peaks around each cluster of recorded locations; a large bandwidth smooths out these peaks and yields a more spread-out distribution [2, 5]. The choice of an appropriate bandwidth is therefore critical, and indeed usually trumps the influence of the actual shape of the kernels, i.e., the analytical form of function κ [4]. At least three categories of optimal bandwidth selection routines have been developed to inform and automatize this decision [8], but, because of temporal autocorrelation in movement data [9,10,11], only one, the reference function approximation, is recommended for animal tracking applications [12]. In the reference function approximation approach, one optimizes the bandwidth by minimizing an approximated mean integrated square error criterion (MISE).
$$ \mathrm{MISE}\left({\boldsymbol{\sigma}}_{\boldsymbol{B}}\right)=\left\langle {\int}_{\Omega}{\left|{p}_{REF}\left(\boldsymbol{r}\right)-\widehat{p}\left(\boldsymbol{r},\left\{{\boldsymbol{r}}_{\boldsymbol{i}}\right\},{\boldsymbol{\upsigma}}_{\boldsymbol{B}}\right)\right|}^2\ d\boldsymbol{r}\right\rangle $$
pREF(r) is the reference function, usually chosen for its mathematical properties. In a non-approximate MISE, there would be a p(r) term instead of pREF(r), but, since p(r) is the unknown we want to estimate, we replace it with the reference function. Ω represents the domain of possible locations, the ∫ ∙ dr notation denotes integration over space for one realization of the stochastic movement process, and the 〈∙〉 notation denotes integration over all realizations of the movement process.
If Eq. 2 did not account for the temporal autocorrelation structure of the movement process, it would introduce a bias in the bandwidth estimate that can severely impede comparative inference [7, 9, 11,12,13,14,15]. For example, the optimizer would converge towards a zero bandwidth as the sampling resolution increased or as the amount of temporal autocorrelation in the animal movement increased, yielding increasingly smaller home ranges [12]. The amount of bias depends on both the sampling resolution and the movement rates of the animal. There are two known ways to deal with temporal autocorrelation in Eq. 2. First, one may subsample the data so that successive records are approximately independent [9, 16]. The recommended best practice is to keep one record every 3τ, where τ is the autocorrelation time (the rate at which the proximity between two records declines with the time lag between them). I hereafter refer to this as "robust KDE" (KDEr). The other way to deal with temporal autocorrelation is to keep all the data in, but specify a temporal autocorrelation model in the kernels that make up \( \widehat{p} \) and in the reference function, yielding a different MISE that is minimized by a different value of the bandwidth [12]. This approach is termed "autocorrelated kernel density estimation" (AKDE). In both cases, the approximation of p(r) by pREF(r) still leads to a "reference function approximation bias" [17]. This bias is usually positive. It can be corrected a posteriori [17], in which case I use the recommended notation "c", e.g., AKDEc.
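As a concrete illustration of the data-thinning option, the sketch below keeps roughly one fix every 3τ; it assumes sorted timestamps in the same unit as τ and is not drawn from any existing package:

```python
import numpy as np

def thin_track(times, locations, tau):
    """KDEr-style thinning: keep roughly one fix every 3*tau.
    times: (N,) sorted timestamps; locations: (N, 2) fixes;
    tau: autocorrelation time in the same unit as times."""
    kept = [0]
    for i in range(1, len(times)):
        if times[i] - times[kept[-1]] >= 3 * tau:
            kept.append(i)
    idx = np.asarray(kept)
    return times[idx], locations[idx]
```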
Both of these robustizing protocols (KDEr and AKDE) increase the bandwidth compared to standard or naive KDE. The KDEr option also requires the user to discard potentially massive amounts of data. As a consequence, the resulting distribution, although statistically robust and asymptotically consistent, may look oversmoothed and biologically irrelevant [16, 18]. For example, the 95% isopleth might intersect impassable barriers such as coastlines [19]. The issue of boundaries is indeed recurrent in KDE applications, including in fields other than animal tracking data analysis (review in [20, 21]). For home range estimation, the most common response is to clip the distributions at known barriers [19]. This intuitive practice is equivalent to introducing a dose of mechanism in the otherwise fully nonparametric KDE methodology. By specifying where to clip and how to redistribute the weight, we in effect inform the model that the barrier was impassable. However, instead of providing that information in an ad hoc way at the very end of the process, we could take it up from the start. The MISE would then become less sensitive to an increase in bandwidth that would otherwise have caused the kernel distributions to spill beyond the barrier, yielding a different optimal bandwidth. We would also incorporate the barrier into the reference function, making it more realistic, and thereby suppressing the reference function approximation bias. Lastly, we would apply the effect of the barrier to each kernel, yielding an estimated utilization distribution that is truncated at the barrier by construct. More generally, in addition to barriers, we can incorporate in this way any movement mechanism that can be formalized using a step selection function [22, 23], e.g., land cover selection [22] and permeability of linear features like roads [24].
Step 1: fitting a mechanistic movement model
At time t, the position rt of the animal was assumed to be drawn from a step selection kernel gu, defined over the movement domain Ω as the product of an availability function ga, conditioned on the movement path prior to t, Rt − 1, and especially on the last known position, rt − 1, and of a weight function W [22, 23, 25].
$$ {g}_u\left({\boldsymbol{r}}_t|{\boldsymbol{R}}_{t-1}\right)={K_t}^{-1}W\left({\boldsymbol{r}}_t|{\boldsymbol{R}}_{t-1},t\right){g}_a\left({\boldsymbol{r}}_t|{\boldsymbol{R}}_{t-1}\right) $$
Kt is a scaling constant so that gu sums to one.
The availability function ga was modelled using the Ornstein-Uhlenbeck process (OU), a continuous-time stochastic movement model that represents home range behavior as a tendency to revert back to a mean location following random deviations from that mean [26]. The weight function W described environmental interactions: selection of some land covers over others, attraction or repulsion towards fixed landscape features such as human settlements, and barrier permeability, i.e., the rate at which animals avoid crossing linear features such as roads and rivers [25]. Following established practice [22, 27, 28], the analytical form of the weight function W was:
$$ \log\ W\left({\boldsymbol{r}}_t|{\boldsymbol{R}}_{t-1},t\right)=\boldsymbol{x}{\left({\boldsymbol{r}}_t\right)}^{\mathrm{T}}\cdot \boldsymbol{\alpha} +\boldsymbol{\delta} {\left({\boldsymbol{r}}_t|{\boldsymbol{r}}_{t-1}\right)}^{\mathrm{T}}\cdot \boldsymbol{\lambda} $$
x(rt) describes the environment at location rt. The kth element of x(rt) (k = 1, … K1) contains a 0 or 1 coding for the presence of land cover type k at location rt. The next K2 elements contain the distances from rt to fixed landscape features (or the value of continuous environmental covariates such as climate or vegetation density). δ(rt| rt − 1) is a vector containing a 1 in the lth cell if a barrier of type l is crossed when going straight from rt − 1 to rt. Note that this straight-line permeability model is valid only for small steps that make longer detours extremely unlikely.
The model parameters are included in vectors α, the selection coefficients, and λ, the permeability coefficients. The kth element of α (1 < k ≤ K1) quantifies how much land cover type k is preferred over land cover type 1. The lth element of λ quantifies the reluctance to cross linear features of type l, zero meaning that the linear feature has no effect on movement. Both α and λ were considered constant through time and space in the present application.
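To make Eq. 4 concrete, here is a hypothetical sketch of the log-weight computation; the covariate layout and all coefficient values below are invented for illustration only:

```python
import numpy as np

def log_weight(x_r, delta_r, alpha, lam):
    """Eq. 4: log W = x(r)^T alpha + delta(r | r_prev)^T lambda.
    x_r: land cover indicators plus distances to fixed features;
    delta_r: 0/1 barrier-crossing indicators for the straight-line step;
    alpha: selection coefficients; lam: permeability coefficients
    (negative values penalize crossing)."""
    return np.dot(x_r, alpha) + np.dot(delta_r, lam)

# Hypothetical step into grassland (type 2 of 4) that crosses the railway.
x_r = np.array([0, 1, 0, 0, 1.2])   # one-hot land cover + distance to water (km)
delta_r = np.array([1.0])           # railway crossed
alpha = np.array([0.0, 0.8, -0.5, -1.1, -0.2])  # made-up coefficients
lam = np.array([-3.0])                          # made-up permeability
w = np.exp(log_weight(x_r, delta_r, alpha, lam))
```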
For the sake of simplicity, and because my focus here was on the post-fitting treatment of α and λ estimates, rather than on the estimation itself, I used a relatively fast but approximate procedure to estimate the parameters in W. Following Johnson, Hooten & Kuhn [29], I reformulated the movement model as a space-time point process [30, 31]. This meant that the availability function ga was approximated by a Brownian availability window at each time step. However, in the post-fitting treatment, I used the Ornstein-Uhlenbeck model for ga as announced above. This means that different contradictory models were in practice used to estimate the weight function and the availability function. The detail of the space-time point process implementation is provided in Additional file 2.
Step 2: bandwidth optimization
To incorporate environmental interactions, I replaced the kernels of Eq. 1 with weighted multivariate Gaussian distributions [20, 21].
$$ \widehat{p}\left(\boldsymbol{r}\right)=\frac{1}{N}\sum \limits_{i=1}^N{K_i^B}^{-1}W\left(\boldsymbol{r}|{\boldsymbol{R}}_i\right)\varphi \left(\boldsymbol{r},{\boldsymbol{r}}_{\boldsymbol{i}},{\boldsymbol{\upsigma}}_{\boldsymbol{B}}\right) $$
φ(r, ri, σB) denotes the multivariate Gaussian distribution with mean ri and variance-covariance matrix σB, and \( {K}_i^B \) is a scaling constant so that each kernel sums to one (more details in Additional file 1). Following Fleming et al. [12], I simplified Eq. 5 using σB = σB ∙ σ0, where the scalar σB scales σ0, the variance-covariance matrix of the availability function. This means that the direction of the smoothing was driven by the anisotropy of the movement process.
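A minimal sketch of Eq. 5 on a discrete evaluation grid might look as follows, assuming a user-supplied weight_fn that returns W(r | R_i) on the grid; the discrete renormalization stands in for the constant \( {K}_i^B \):

```python
import numpy as np

def weighted_kde(grid, locations, sigma_b, weight_fn):
    """Eq. 5: each Gaussian kernel is multiplied by the environmental
    weight W(r | R_i), then renormalized so it still sums to one.
    grid: (M, 2) points; weight_fn(grid, i) -> (M,) weights for kernel i."""
    density = np.zeros(len(grid))
    for i, loc in enumerate(locations):
        sq_dist = ((grid - loc) ** 2).sum(axis=-1)
        kernel = np.exp(-sq_dist / (2 * sigma_b ** 2))
        kernel *= weight_fn(grid, i)      # apply W(r | R_i)
        kernel /= kernel.sum()            # discrete stand-in for K_i^B
        density += kernel
    return density / len(locations)
```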
Importantly, the kernels of Eq. 5 feature a simplistic permeation model that implies a straight-line move from r to ri (Eq. 4). In the zebra case study below, the violation of that straight-line assumption was almost without consequence, because the σB value was moderate and the linear feature was essentially straight, meaning that even if the path from r to ri was not straight, the chance of crossing the linear feature was similar to that of a straight move. However, in other applications, researchers may need to consider alternative approaches, e.g., diffusive permeation kernels [32]. In addition, with Eq. 5 we assume a uniform redistribution of the discounted weight across the whole availability domain. In applications where the animals remain near the linear features when they encounter them [19], alternative redistribution rules may be warranted, e.g., reflected kernels [20].
Next, for the reference function, I also used weighted multivariate Gaussian distributions, summed over recorded locations to represent the expected equilibrium distribution of the movement process.
$$ {p}_{REF}\left(\boldsymbol{r}\right)=\frac{1}{N}\sum \limits_{i=1}^N{K_i^0}^{-1}W\left(\boldsymbol{r}|{\boldsymbol{R}}_{\boldsymbol{i}}\right)\varphi \left(\boldsymbol{r},{\boldsymbol{\mu}}_{\mathbf{0}},{\boldsymbol{\upsigma}}_{\mathbf{0}}\right) $$
\( {K}_i^0 \) is a scaling constant so that each element of the reference function sums to one.
Like Eq. 5, Eq. 6 features a simplistic permeation model that assumes a straight-line move from μ0 to r. However, that simplistic permeation model is now applied to the whole home range. Any movement bottleneck, e.g., a constrained corridor between two sections of the home range, would be overly discounted in the resulting reference function. In the zebra case study, we did not have to deal with any such feature. However, in other applications, users may have to consider alternative formulations. Individual-based simulations (IBS; see "Zebra case study" below) could in this case prove particularly useful to generate a "pilot estimate" to use instead of Eq. 6. In addition to offering a straightforward way to incorporate the above-mentioned movement bottlenecks into the reference function, IBS can be set up so that the simulation step length is short enough that the straight-line permeability model remains realistic at all stages of the simulations. However, this option is not implemented yet, and probably warrants further investigation pertaining to sensitivity to simulation parameters (simulated duration, step lengths, etc.).
When using Eqs. 5 and 6, there is an analytical form for the MISE, derived in Additional file 1. These details expand on the proof of Fleming et al. [12] demonstrating how temporal autocorrelation is incorporated into the KDE framework. The resulting MISE optimization algorithm is, however, prohibitively time-consuming. Thus, pending algorithmic improvement, the new approach remains mostly theoretical and exploratory. I have, however, developed a faster-running simplified version, described in the following section.
Simplified version
Because the MISE optimization was so prohibitively time-consuming, I simplified the protocol by bypassing Step 2 entirely, or more precisely, replacing Step 2 with the corresponding step of the AKDE analytical protocol [33]. In other words, the bandwidth is optimized while taking temporal autocorrelation into account, but without taking environmental interactions into account. The weights from Step 1 are then only applied when eventually computing the distribution (Step 3). I propose the notation E-AKDE for the full version where environmental interactions are incorporated in Step 2, and SE-AKDE for the simplified version where environmental interactions are not incorporated at Step 2. Importantly, because of the assumption of the straight-line permeability model, E-AKDE is not necessarily more reliable than SE-AKDE.
Step 3: computing the kernel density and correcting for the remaining reference function approximation bias
From the estimated σB, I computed isopleths of \( \widehat{p}\left(\boldsymbol{r}\right) \) using Eq. 5. I then applied the reference function bias correction routine of Fleming and Calabrese [17] to the isopleths, but only when implementing SE-AKDE, hence the SE-AKDEc notation hereafter. When implementing E-AKDE, I considered that by changing the reference function (Eq. 6), I got rid of the reference function approximation bias. This is admittedly a strong assumption, but it was supported by the empirical results.
Zebra case study and comparison with alternatives to KDE
I analyzed data collected from a plains zebra (Equus quagga) in and near Hwange national park, Zimbabwe (26.861E, − 18.624 N). The study individual (individual local identifier: Ganda) was monitored between Jan 2011 and Sept 2012 with a collar-mounted GPS that recorded one location every hour [34]. I rescaled the recent Hwange vegetation map [35] at a 150 m resolution and pooled vegetation classes into 4 categories to reduce computing time for this illustration case. Other landscape features known to influence zebra space use included water holes, a railway with adjacent road that marks the northern border of the park, and a town. These were all included in the step selection model (Eq. 3).
I compared four variants of the KDE methodology: a naïve reference function-based bandwidth optimizer that did not account for temporal autocorrelation (KDE), the robustized approach where the data were subsampled before analysis (KDEr), the AKDE approach where the autocorrelation structure was incorporated in the reference function (AKDEc), and finally the new methods (E-AKDE and SE-AKDEc). For KDE and KDEr I used the kde2d function in R-package MASS. The position autocorrelation time τ required to subsample the data for KDEr was estimated from the ctmm.fit routine in R-package ctmm [33]. For AKDEc, I used the akde function in ctmm within the recommended analytical protocol [33]. For E-AKDE, the algorithm and mathematical justification are described (with words) in Additional file 1. For SE-AKDEc, I used the AKDE bandwidth but then applied the environmental interaction weights when computing the distribution.
I also implemented three non-KDE methodologies. First, I used the asymptotic distribution of the fitted Ornstein-Uhlenbeck model to draw the ellipses that most closely approximated the home range and core area. Second, I implemented the movement-based kernel density estimator (MKDE), in which the kernels are replaced by step selection functions, i.e., the gu(ri + 1| Ri) estimated at Step 1 [19, 36]. MKDE, like the Brownian bridge, computes the probability that a location was used between records [36]. The method focuses on one realization of the movement path and on the process uncertainty around the interpolated path between records [37]. By contrast, KDE extrapolators average the utilization distribution across realizations of the movement path. Lastly, I implemented an individual-based simulation procedure (IBS) [38, 39] to generate 1000 1-month-long tracks with one record per hour, each track starting from a randomly selected recorded location, and moving stochastically according to the model described in Eq. 1. This yielded a cloud of 720,000 simulated records, from which I computed the density of records per pixel, thereby obtaining a rasterized cumulative utilization distribution that quantified the time budget under the fitted mechanistic movement model.
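As an indication of how such an IBS can be set up, the sketch below simulates the Ornstein-Uhlenbeck availability component with an Euler discretization; parameter values and the omitted resampling of candidate steps by the weight function W (Eq. 3) would need to be supplied:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ou_track(r0, mu, tau, sigma, dt, n_steps):
    """Discretized Ornstein-Uhlenbeck availability model: mean reversion
    towards mu at rate 1/tau with diffusion sigma. A full IBS would also
    resample each candidate step with probability proportional to W."""
    track = [np.asarray(r0, dtype=float)]
    for _ in range(n_steps):
        r = track[-1]
        drift = (mu - r) * dt / tau
        noise = sigma * np.sqrt(dt) * rng.standard_normal(2)
        track.append(r + drift + noise)
    return np.array(track)
```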
For each estimator, I computed the home range area (95% isopleth) and the core area (50% isopleth), as well as the home range scale computed as the root mean square distance of the distribution to its centroid. I also computed the "amplitude" of the core area and of the home range as the longest distance between two points on the isopleth. These represent different ways to measure the home range. In particular, the home range area of the asymptotic OU distribution is proportional to the movement variance or home range scale.
Comparison between KDE variants in the zebra case study
The standard KDE yielded the smallest home range area and amplitude among all KDE variants (Fig. 1 and Table 1). As reviewed in the introduction, this small estimated home range size is partly caused by an unwanted property of the standard KDE in the presence of temporal autocorrelation [7, 9, 11,12,13,14,15]. The KDEr version indeed yielded a much larger estimated home range. AKDEc yielded a smaller home range estimate than KDEr, with a notably smaller core area leading to a large estimated home range scale. Neither AKDEc nor KDEr provided any information about land cover selection or reluctance to cross the railway (Fig. 1), as expected by construction. The most visually compelling effect of using SE-AKDEc and E-AKDE was that the space to the east of the railway was weighted down compared to AKDEc and KDEr. In addition, the core area was markedly irregular in shape, reflecting the avoidance of the densest woodland cover type and the preference for pure grassland. The composition of the home range remained similar across all methodologies. In particular, while c.30% of the raw data was recorded in grassland, only 1–6% of the estimated home range consisted of grassland. This result stems from the scarcity and patchiness of grassland in the area, meaning that any extrapolation was bound to incorporate more non-grassland than grassland. In addition, bushland and woodland are sometimes actively selected by zebras, e.g., after a predation event [34], meaning that the fitted land cover selection model did not strictly discount these land cover types and that it might be interesting to fit a time-varying land cover selection model in this context. Lastly, the estimated E-AKDE bandwidth (0.20) was smaller than that of AKDE (0.31), yielding a smaller estimated home range from E-AKDE than from SE-AKDEc. The decrease in bandwidth suggests that the methodology successfully took up the information that some of the movement variance was caused by resource selection, not stochastic diffusion. These empirical results overall suggest that the new reference function suppressed the reference function approximation bias.
Fig. 1 The 50% (dark grey) and 95% (light grey) isopleths of the utilization distribution of a plains zebra, as estimated by the 8 estimators in this study. The dashed diagonal line represents a railway with adjacent road that marks the border of the national park where the zebra was captured. KDE, KDEr and AKDEc represent three strategies to choose the bandwidth of the kernel density estimator. MKDE is a bridge-based interpolation of the movement path. SE-AKDEc and E-AKDE are the result of the new developments in this study, incorporating step-selection functions into the AKDE framework. IBS depicts a cloud of 720,000 simulated locations from a fitted step-selection model. The asymptotic OU distribution represents the spread of a simple Ornstein-Uhlenbeck advection-diffusion model fitted to the data
Table 1 Home range area, home range scale, and home range composition for the same zebra study individual, using the various home range estimation methods. The home range scale is the root mean squared distance to the distribution centroid. The amplitude is computed as the longest distance between two points on a contour. IBS stands for individual-based simulation; other notation as in the main text
Extrapolation/interpolation, parametric/non-parametric
As outlined by several authors previously, MKDE does not measure the same thing as KDE [33, 36]. MKDE implements an interpolation. The potential for confusion has led some authors to recommend against using the terminology of "utilization distribution" for MKDE and other interpolative methods, and to restrict the use of that phrase to extrapolated distributions [33]. Accordingly, in the zebra case study, the "home range" area was much smaller when estimated with MKDE than with other methods (Table 1, Fig. 1). The isopleth of the MKDE distribution quantifies the process uncertainty around the interpolated path, and the interpolated time budget during the study. By contrast, KDEr and AKDEc are designed to delineate asymptotically consistent, statistically robust, conservative buffers around activity centers.
Like KDEr and AKDEc, SE-AKDEc and E-AKDE extrapolate the utilization probability. But, contrary to KDEr and AKDEc, they include a dose of mechanism (Eq. 1) into the base kernels of the extrapolation (Eq. 5). This yields what can be termed a semi-parametric extrapolation. Compared to AKDEc, the dose of mechanism modifies what the method considers a plausible realization of the movement process. The objective is to combine the asymptotic consistency of AKDE with the biological realism of fully parametric methodologies. One of the main criticisms of fully parametric extrapolations [40] is indeed their sensitivity to the goodness of fit of the underlying mechanistic movement model, which the semi-parametric approach partly relaxes. In terms of biological inference, SE-AKDEc and E-AKDE extrapolate the potentially accessible resources under a known set of movement rules and under the constraint that the movement paths must all pass through the recorded locations. Comparing extrapolations from different models can help infer new movement mechanisms or assess model parsimony. Finally, compared to the IBS approach, SE-AKDEc and E-AKDE provide three key advantages: no tuning parameters, full conditioning on the recorded locations, and asymptotic consistency. As highlighted above, a way to better articulate the IBS approach with E-AKDE would be to use IBS to generate the pilot estimate upon which to base the reference function.
The key message is that it should soon be possible to make the statistically robust, asymptotically consistent alternatives to KDE less bandwidth-limited than they currently are, and make them yield more realistic, less ovoid home range shapes. I introduced new semi-parametric methodologies, based on the step-selection framework [23]. I outlined several avenues for future improvement. I expect E-AKDE to function as part of an iterative process by which semi-mechanistic extrapolations are compared to realized resource use until no significant improvement can be made by adding new movement rules in the extrapolation process. The point would be to give less importance to the time spent at a given location, and more importance to the ratio between availability and use.
AKDE: autocorrelated kernel density estimator
E-AKDE: autocorrelated kernel density estimator with environmental interactions
KDE: kernel density estimator
MKDE: movement-based kernel density estimator
Fieberg J, Börger L. Could you please phrase "home range" as a question? J Mammal. 2012;93:890–902.
Kie JG, Matthiopoulos J, Fieberg J, Powell RA, Cagnacci F, Mitchell MS, et al. The home-range concept: are traditional estimators still relevant with modern telemetry technology? Philos Trans R Soc B Biol Sci. 2010;365:2221–31.
Powell RA, Mitchell MS. What is a home range? J Mammal. 2012;93:948–58.
Silverman BW. Density estimation for statistics and data analysis. London: Chapman and Hall; 1986.
Worton B. Kernel methods for estimating the utilization distribution in home-range studies. Ecology. 1989;70:164–8.
Kie JG. A rule-based ad hoc method for selecting a bandwidth in kernel home-range analyses. Anim Biotelemetry. 2013;1:13.
Börger L, Franconi N, De Michele G, Gantz A, Meschi F, Manica A, et al. Effects of sampling regime on the mean and variance of home range size estimates. J Anim Ecol. 2006;75:1393–405.
Silverman BW. Spline smoothing: the equivalent variable kernel method. Ann Stat. 1984;12:898–916.
Hansteen TL, Andreassen HP, Ims RA. Effects of spatiotemporal scale on autocorrelation and home range estimators. J Wildl Manag. 1997;61:280–90.
Dray S, Royer-Carenzi M, Calenge C. The exploratory analysis of autocorrelation in animal-movement studies. Ecol Res. 2010;25:673–81.
Swihart RK, Slade NA. Testing for Independence of observations in animal movements. Ecology. 1985;66:1176–84.
Fleming CH, Fagan WF, Mueller T, Olson KA, Leimgruber P, Calabrese JM. Rigorous home-range estimation with movement data: a new autocorrelated kernel-density estimator. Ecology. 2015;96:1182–8.
Girard I, Ouellet J-P, Courtois R, Dussault C, Breton L. Effects of sampling effort based on GPS telemetry on home-range size estimations. J Wildl Manag. 2002;66:1290–300.
Laver PN, Kelly MJ. A critical review of home range studies. J Wildl Manag. 2008;72:290–8.
Seaman DE, Millspaugh JJ, Kernohan BJ, Brundige GC, Raedeke KJ, Gitzen RA. Effects of sample size on kernel home range estimates. J Wildl Manag. 1999;63:739.
De Solla SR, Bonduriansky R, Brooks RJ. Eliminating autocorrelation reduces biological relevance of home range estimates. J Anim Ecol. 1999;68:221–34.
Fleming CH, Calabrese JM. A new kernel density estimator for accurate home-range and species-range area estimation. Methods Ecol Evol. 2017;8:571–9.
Slaght JC, Horne JS, Surmach SG, Gutiérrez RJ. Home range and resource selection by animals constrained by linear habitat features: an example of Blakiston's fish owl. J Appl Ecol. 2013;50:1350–7.
Benhamou S, Cornélis D. Incorporating movement behavior and barriers to improve kernel home range space use estimates. J Wildl Manag. 2010;74:1353–60.
Jones MC. Simple boundary correction for kernel density estimation. Stat Comput. 1993;3:135–46.
Tenreiro C. Boundary kernels for distribution function estimation. Revstat Stat J. 2013;11:169–90.
Forester JD, Im HK, Rathouz PJ. Accounting for animal movement in estimation of resource selection functions: sampling and data analysis. Ecology. 2009;90:3554–65.
Lele SR, Keim JL. Weighted distributions and estimation of resource selection probability functions. Ecology. 2006;87:3021–8.
Bischof R, Steyaert SMJG, Kindberg J. Caught in the mesh: roads and their network-scale impediment to animal movement. Ecography. 2017;40:1369–80.
Beyer HL, Gurarie E, Börger L, Panzacchi M, Basille M, Herfindal I, et al. "You shall not pass!": quantifying barrier permeability and proximity avoidance by animals. J Anim Ecol. 2016;85:43–53.
Dunn JE, Gipson PS. Analysis of radio telemetry data in studies of home range. Biometrics. 1977;33:85–101.
Johnson DS, Thomas DL, Ver Hoef JM, Christ A. A general framework for the analysis of animal resource selection from telemetry data. Biometrics. 2008;64:968–76.
Fieberg J, Matthiopoulos J, Hebblewhite M, Boyce MS, Frair JL. Correlation and studies of habitat selection: problem, red herring or opportunity? Philos Trans R Soc B Biol Sci. 2010;365:2233–44.
Johnson DS, Hooten MB, Kuhn CE. Estimating animal resource selection from telemetry data using point process models. J Anim Ecol. 2013;82:1155–64.
Cressie NAC. Statistics for spatial data. New York: Wiley; 1993.
Berman M, Turner TR. Approximating point process likelihoods with GLIM. J R Stat Soc C Appl Stat. 1992;41:31–8.
Robb WL. Thin silicone membranes. Their permeation properties and some applications. Ann N Y Acad Sci. 1968;146:119–37.
Calabrese JM, Fleming CH, Gurarie E. Ctmm: an r package for analyzing animal relocation data as a continuous-time stochastic process. Methods Ecol Evol. 2016;7:1124–32.
Courbin N, Loveridge AJ, Macdonald DW, Fritz H, Valeix M, Makuwe ET, et al. Reactive responses of zebras to lion encounters shape their predator-prey space game at large scale. Oikos. 2016;125:829–38.
Arraut EM, Loveridge AJ, Chamaillé-Jammes S, Fox HV, Macdonald DW. The 2013-2014 vegetation structure map of Hwange National Park, Zimbabwe, produced using free images and software. KOEDOE - African Prot Area Conserv Sci. 2018;60:a1497.
Benhamou S. Dynamic approach to space and habitat use based on biased random bridges. PLoS One. 2011;6:e14592.
Fleming CH, Fagan WF, Mueller T, Olson KA, Leimgruber P, Calabrese JM. Estimating where and how animals travel: an optimal framework for path reconstruction from autocorrelated tracking data. Ecology. 2016;97:576–82.
Signer J, Fieberg J, Avgar T. Estimating utilization distributions from fitted step-selection functions. Ecosphere. 2017;8:e01771.
Wang M, Grimm V. Home range dynamics and population regulation: an individual-based model of the common shrew Sorex araneus. Ecol Model. 2007;205:397–409.
Moorcroft PR, Lewis MA. Mechanistic home range analysis. Princeton: Princeton University Press; 2004.
S. Chamaillé-Jammes provided the zebra data and constructive comments on a previous draft. I thank C. Fleming for a detailed and helpful review, J. Calabrese for insightful discussions of previous drafts, and an anonymous reviewer. V. Miele advised on the incorporation of C++ code.
The zebra project was funded by ANR-08-BLAN-0022 and ANR-16-CE02–0001-01.
The data have been uploaded to MoveBank (www.movebank.org) under the study name "Plains zebra Chamaillé-Jammes Hwange NP".
Univ Lyon, Université Lyon 1, CNRS, Laboratoire de Biométrie et Biologie Evolutive UMR5558, F-69622, Villeurbanne, France
Guillaume Péron
The author read and approved the final manuscript.
Correspondence to Guillaume Péron.
The zebra project operates under Zimbabwe Parks and Wildlife Management Authority permit # 23(1)(c)(ii)03/2009.
Appendix A: Description of the E-AKDE bandwidth optimizer. (PDF 271 kb)
Appendix B: Additional method elements: The space-time point process likelihood and Approximate routine to determine whether a linear feature is crossed. (PDF 241 kb)
Péron, G. Modified home range kernel density estimators that take environmental interactions into account. Mov Ecol 7, 16 (2019). https://doi.org/10.1186/s40462-019-0161-9
AKDE
Temporal autocorrelation
Step selection function
Point process pattern
Semiparametric | CommonCrawl |
Electronic Journal of Statistics
Electron. J. Statist.
Optimal exponential bounds for aggregation of estimators for the Kullback-Leibler loss
Cristina Butucea, Jean-François Delmas, Anne Dutfoy, and Richard Fischer
We study the problem of aggregation of estimators with respect to the Kullback-Leibler divergence for various probabilistic models. Rather than considering a convex combination of the initial estimators $f_{1},\ldots,f_{N}$, our aggregation procedures rely on the convex combination of the logarithms of these functions. The first method is designed for probability density estimation as it gives an aggregate estimator that is also a proper density function, whereas the second method concerns spectral density estimation and has no such mass-conserving feature. We select the aggregation weights based on a penalized maximum likelihood criterion. We give sharp oracle inequalities that hold with high probability, with a remainder term that is decomposed into a bias and a variance part. We also show the optimality of the remainder terms by providing the corresponding lower bound results.
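As an informal illustration of the idea (not the authors' exact estimator), aggregating on the log scale amounts to a normalized geometric-mean-type combination of the initial estimators; the sketch below assumes strictly positive densities tabulated on a quadrature grid and convex weights:

```python
import numpy as np

def log_aggregate(fs, weights, grid_weights):
    """Convex combination of the logarithms of densities f_1..f_N,
    renormalized to integrate to one (the mass-conserving variant).
    fs: (N, M) density values on a quadrature grid (all > 0);
    weights: (N,) convex weights summing to one;
    grid_weights: (M,) quadrature weights for the grid."""
    log_f = np.log(fs)
    g = np.exp(weights @ log_f)        # geometric-mean-type aggregate
    z = np.sum(g * grid_weights)       # normalizing constant
    return g / z
```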
Electron. J. Statist., Volume 11, Number 1 (2017), 2258-2294.
Received: January 2016
First available in Project Euclid: 23 May 2017
https://projecteuclid.org/euclid.ejs/1495504916
doi:10.1214/17-EJS1269
Primary: 62G07: Density estimation 62M15: Spectral analysis
Secondary: 62G05: Estimation
Aggregation Kullback-Leibler divergence probability density estimation sharp oracle inequality spectral density estimation
Creative Commons Attribution 4.0 International License.
Butucea, Cristina; Delmas, Jean-François; Dutfoy, Anne; Fischer, Richard. Optimal exponential bounds for aggregation of estimators for the Kullback-Leibler loss. Electron. J. Statist. 11 (2017), no. 1, 2258--2294. doi:10.1214/17-EJS1269. https://projecteuclid.org/euclid.ejs/1495504916
Adjudin-preconditioned neural stem cells enhance neuroprotection after ischemia reperfusion in mice
Tingting Zhang1,
Xiao Yang1,
Tengyuan Liu1,
Jiaxiang Shao1,
Ningzhen Fu1,
Aijuan Yan2,
Keyi Geng1 &
Weiliang Xia ORCID: orcid.org/0000-0002-0256-42191,2
Transplantation of neural stem cells (NSCs) has been proposed as a promising therapeutic strategy for the treatment of ischemia/reperfusion (I/R)-induced brain injury. However, existing evidence has also highlighted the limitations of this therapy, such as the difficulty for stem cells to survive after transplantation owing to the unfavorable microenvironment in the ischemic brain. Herein, we have investigated whether preconditioning of NSCs with adjudin, a small-molecule compound, could enhance their survivability and further improve the therapeutic effect of NSC-based stroke therapy.
We aimed to examine the effect of adjudin pretreatment on NSCs by measuring a panel of parameters after their transplantation into the infarct area of the ipsilateral striatum 24 hours after I/R in mice.
We found that pretreatment of NSCs with adjudin could enhance the viability of NSCs after their transplantation into the stroke-induced infarct area. Compared with the untreated NSC group, the adjudin-preconditioned group showed decreased infarct volume and neurobehavioral deficiency through ameliorating blood–brain barrier disruption and promoting the expression and secretion of brain-derived neurotrophic factor. We also employed H2O2-induced cell death model in vitro and found that adjudin preconditioning could promote NSC survival through inhibition of oxidative stress and activation of Akt signaling pathway.
This study showed that adjudin could be used to precondition NSCs to enhance their survivability and improve recovery in the stroke model, unveiling the value of adjudin for stem cell-based stroke therapy.
Ischemic stroke is the most common cause of serious morbidity and mortality and the second major cause of disability worldwide [1]. Few pharmacotherapies are available; one is the recanalization of occluded vessels via thrombolysis using tissue plasminogen activator (tPA), which, owing to its narrow time window, can be applied to only a minority of patients [2, 3]. Because of the limitations and complications of tPA-based treatment, restorative therapies are urgently needed to promote brain remodeling and repair once acute ischemic stroke (AIS) injury has occurred. Fortunately, stem cell-based strategies have emerged as a promising therapeutic approach for AIS and have gained increasing interest in recent years for their unique mode of action, namely the ability to abrogate the subacute and chronic secondary cell death associated with the disease [4, 5]. Currently, different types of stem cells are used for the treatment of ischemic stroke, including neural stem cells (NSCs) [6], mesenchymal stem cells (MSCs) [7], oligodendrocyte progenitor cells (OPCs) [8], embryonic stem cells (ESCs) [9], endothelial progenitor cells (EPCs) [10, 11], induced pluripotent stem cells (iPSCs) [12], vascular progenitor cells (VPCs) [7], and so forth. These stem cells can secrete various neurotrophic factors and cytokines, or differentiate into multiple cell types, to compensate for I/R-induced cell death, strengthen synaptic connections, and establish new neural circuits, thereby attenuating ischemic brain injury and finally improving neurobehavioral recovery [13, 14]. In clinical studies, a number of preliminary trials found that transplanting stem cells into patients between 3 days and 24 months after stroke was feasible and safe [15, 16]. However, recent evidence consistently challenges this therapy on its limitations, especially the hostile microenvironment in the ischemic brain, which presents a significant hurdle to the survival of transplanted cells. Hicks et al. [17] demonstrated that only 1–3% of grafted cells survived in the ischemic brain 28 days after grafting. The massive death of transplanted stem cells hampers the application of cell-based therapy, and may be influenced by the production of reactive oxygen species (ROS) and inflammatory response mediators after I/R injury [18,19,20]. Thus, finding a strategy to overcome this obstacle would potentially be of great value.
In order to resolve the problem of cell survival after transplantation, several remedial approaches have been suggested. Both preconditioning of stem cells and genetic modification have been shown to improve cell viability after transplantation [21,22,23,24]. However, although these methods showed better transplantation outcomes, some challenges remain in using chemical factors to precondition stem cells or in modifying certain genes in stem cells. For example, lipopolysaccharide (LPS), IL-6, minocycline, and melatonin are all available factors for stem cell preconditioning, which could reduce cell death, increase stem cell proliferation and neurotrophic factor secretion, enhance cytoprotection and angiogenesis, and accelerate functional recovery in acute and subacute ischemia [25,26,27,28]. However, LPS can cause neuroinflammation, hypotension, or sepsis in pathological injury [29], and IL-6-pretreated MSCs could promote osteosarcoma growth, suggesting that IL-6 mediates the recruitment of MSCs to facilitate tumor progression [30]. By contrast, minocycline and melatonin are considered low-toxicity, biologically natural agents for cell pretreatment. As for gene modification, uncontrolled expression of introduced genes can have many adverse impacts on the body, such as the leukemia that has been attributed to insertional mutagenesis combined with acquired somatic mutations following gene therapy of SCID-X1 patients [31]. Compared with gene modification, preconditioned stem cell therapy seems more beneficial, simpler, and safer for ischemic stroke therapy [32]. Therefore, we wish to identify safe and effective drugs that could be combined with NSCs for future clinical application.
Adjudin, 1-(2,4-dichlorobenzyl)-1H-indazole-3-carbohydrazide, formerly called AF-2364, is a reversible antispermatogenic compound under development as a potential nonhormonal male contraceptive; it disrupts the adherens junctions between germ cells and Sertoli cells without affecting testosterone production [33]. Adjudin is a small-molecule derivative of indazole and an analog of the chemotherapy drug lonidamine, and has been demonstrated to have no apparent side effects in treated animals [33]. It has also been reported that many indazole derivatives are nonsteroidal anti-inflammatory drugs (NSAIDs), which can suppress prostaglandin E2 (PGE2) synthesis and nitric oxide (NO) production and the release of cytokines and chemokines [34]. Our previous results demonstrated that adjudin, administered by intraperitoneal injection, could protect against cerebral I/R injury by inhibiting neuroinflammation and blood–brain barrier (BBB) disruption [35]. We also found that adjudin could attenuate LPS-induced BV2 activation by suppression of the NF-κB pathway [36], suggesting that adjudin is a promising neuroprotective agent for ischemic stroke therapy. In this study, we aimed to examine whether adjudin pretreatment of NSCs could confer better neuroprotection compared with nonpreconditioned NSCs after I/R injury.
Cell culture and characterization
All animal experimental protocols were approved by the Institutional Animal Care and Use Committee (IACUC) of Shanghai Jiao Tong University, Shanghai, China (Permission number: Bioethics 2012022). NSCs were harvested from the cortex of the E14 green fluorescent protein (GFP)-transgenic mice (Animal Research Center of Nanjing University, Nanjing, China). In brief, bilateral cortex zones from mouse brains were dissected in HBSS and dissociated mechanically. The cells were collected and resuspended in DMEM/F12 (1:1) medium (Gibco, Carlsbad, CA, USA) containing B27 supplement (Gibco), l-glutamine (Sigma-Aldrich), 20 ng/ml mouse basic fibroblast growth factor (Gibco), and 20 ng/ml mouse epidermal growth factor (Gibco). Cells were monolayer cultured on a 60-mm plastic dish (Corning Incorporated, Corning, NY, USA) precoated with poly-l-ornithine hydrobromide (Sigma, St Louis, MO, USA) and laminin (Sigma) at 37 °C with 5% CO2 in an incubator (Thermo Scientific, Barrington, IL, USA). The medium was changed every 2 days and cells were passaged in about 5 days. Cells that had been passaged three to five times were used for the experiments, which strongly maintained their proliferation and differentiation ability.
To characterize cells, NSCs were cultured on poly-l-ornithine hydrobromide (Sigma) and laminin (Sigma)-coated glass coverslips in a 24-well plate (Corning). Cells were then immunostained with mouse anti-Nestin (1:200; Millipore, Billerica, MA, USA), goat anti-Sox2 (1:100; Santa Cruz Technology, Santa Cruz, CA, USA), rabbit anti-glial fibrillary acidic protein (GFAP) (1:200; Millipore), and mouse anti-Doublecortin (1:100; Santa Cruz Technology).
Adjudin pretreatment of NSCs
The NSCs were preconditioned with adjudin before the in-vitro experiments or transplantation. Adjudin was added to the cell culture medium with a final concentration of 0, 5, 10, 30, or 60 μM for 24 hours, followed by drug washout before experiments. Cell death was quantified by a standard measurement of lactate dehydrogenase (LDH) release assay as described previously [36]. Cell viability was assessed by CCK-8 assay kit (Dojindo Laboratories, Kumamoto, Japan). Data were acquired using a microplate reader (Synergy2; BioTek, Winooski, VT, USA).
Cell death and cell survival analysis in vitro
To evaluate NSC viability under oxidative stress, NSCs were seeded at a density of 1 × 105 or 1 × 104 cells per well in 24-well culture plates or 96-well culture plates (Corning) respectively and subjected to different concentrations of H2O2 (0.05, 0.1, 0.3, 0.5 mM; Sigma) for 1 hour. NSCs were then washed three times with phosphate-buffered saline (PBS) and cultured for another 24 hours in high-glucose DMEM with 10% FBS. These NSCs were then examined by LDH assay and CCK-8 assay kit.
To determine the effect of adjudin on NSC viability under oxidative stress, NSCs were pretreated by adjudin with the concentration of 10 or 30 μM for 24 hours. The cells were then washed three times with PBS and subjected to 0.1 mM H2O2 for 1 hour followed by LDH assay and CCK-8 assay 24 hours later.
ATP assay
ATP levels were quantified using the Roche ATP Bioluminescence Assay Kit (HS II; Indianapolis, IN, USA) following the standard protocol provided by the vendor. In brief, cells were washed once with PBS and lysed with the Cell Lysis Reagent for 15 min. Then 50 μl of the homogenates were mixed with 150 μl of the Luciferase Reagent, and the luminescence was detected using a microplate reader (Synergy2). The protein concentrations of the samples were quantified with bicinchoninic acid (BCA) protein assay (Pierce, Rockford, IL, USA). The ATP concentrations of the sample were calculated using an ATP standard, and normalized against the protein of the samples.
Cell proliferation and differentiation in vitro
To evaluate NSC proliferation and differentiation after treatment with adjudin in vitro, NSCs were monolayer cultured on poly-l-ornithine hydrobromide (Sigma) and laminin (Sigma)-coated glass cover slips in a 24-well plate (Corning). After pretreatment with adjudin at concentrations of 10 or 30 μM for 24 hours, NSCs were washed with fresh medium to remove the drug. Then 3 days later, cells were immunostained with mouse anti-Nestin (Millipore), goat anti-Sox2 (Santa Cruz Technology), rabbit anti-glial fibrillary acidic protein (GFAP) (Millipore), mouse anti-Doublecortin (Santa Cruz Technology), and rabbit anti-Ki67 (1:200; Abcam, Cambridge, MA, USA).
Transient middle cerebral artery occlusion model
Focal cerebral ischemia in mice was performed as described previously [35]. In brief, adult male ICR mice weighing 25–30 g were anesthetized with ketamine/xylazine (100 mg/10 mg/kg; Sigma) intraperitoneally. Body temperature was maintained at 37 ± 0.5 °C using a heating pad (RWD Life Science, Shenzhen, China). Under the surgical microscope (Leica, Solms, Germany), the left common carotid artery (CCA), the external carotid artery (ECA), and the internal carotid artery (ICA) were isolated. Then a 6-0 suture (Dermalon, 1741-11; Covidien, OH, USA) with a round tip and coated with silicone was inserted from the ECA into the ICA and reached the circle of Willis to occlude the origin of the middle cerebral artery (MCA) until a slight resistance was felt. The distance from the furcation of the ECA/ICA to the opening of the MCA was 9 ± 0.5 mm. The success of occlusion was determined by monitoring the decrease in surface cerebral blood flow to 80% of baseline, which was verified by a laser Doppler flow-meter (Moor LAB; Moor Instruments, Devon, UK). Reperfusion was performed by withdrawing the suture 2 hours after middle cerebral artery occlusion (MCAO). To confirm successful occlusion/reperfusion, cerebral blood flow was tested again. The sham operated mice were subjected to the same procedure except for the suture insertion.
NSC transplantation
Twenty-four hours after transient middle cerebral artery occlusion (tMCAO), mice were divided randomly into three groups for NSC or vehicle injection: PBS group, NSC group, and adjudin-pretreated group. The animals were anesthetized with ketamine/xylazine intraperitoneally, and received stereotaxic transplantation. Adjudin-pretreated or untreated NSC suspension with 1 × 106 cells in 5–15 μl PBS was injected into the striatum of the ipsilateral hemisphere in mice, with the following coordinates: M–L, −1.5 mm; D–V, −3.25 mm. The same amount of PBS was injected as control. Deposits were delivered at 0.5 μl/min and the needle was left in situ for 5 min post injection before being removed slowly. The wound was then closed and the animal was returned to the cage for follow-up experiments.
Behavioral assessment
Three days after tMCAO, modified neurological severity scores (mNSS) were assessed by an investigator who was blind to the treatment regimen to assess the neurological status of the animals, which is a composite of motor, reflex, and balance tests (normal score, 0; maximal deficit score, 14) as described previously [37]. Total neurological score was calculated as the sum of scores on limb flexion (range 0–3), walking gait (range 0–3), beam balance (range 0–6), and reflex absence (range 0–2).
The rotarod test required mice to balance on a rotating rod. Mice were given a 1-min adaptation period on the rod, which was then accelerated up to 40 rpm over 2 min. The time each mouse remained on the rotating rod was recorded. Mice were examined at various time points (≤35 days) after NSC transplantation.
Measurement of infarct volume
Mice from each group were sacrificed 3 days after cell transplantation. Following perfusion with PBS and then 4% paraformaldehyde (PFA), brains were immediately removed, frozen in prechilled isopentane, and stored at −80 °C. The tissues were then cut into a series of 20-μm-thick coronal sections spanning the infarct area from beginning to end, and one section out of every 10 was collected on the same slide, giving a representative sampling of the cerebral injury with 200 μm between adjacent sections. The entire set of brain sections was immersed in 0.1% cresyl violet (Sinopharm Chemical Reagent Co., Shanghai, China) for 30 min and then rinsed in distilled water for 10 min. The infarct area in each section was calculated using NIH ImageJ software by the following formula:
$$ \text{Infarct area}\ (\text{mm}^2) = \text{contralateral hemisphere area}\ (\text{mm}^2) - \text{ipsilateral undamaged area}\ (\text{mm}^2). $$
The infarct volume between two adjacent sections was calculated by the following equation:
$$ V = \frac{1}{3}\, h \left( S_1 + S_2 + \sqrt{S_1 S_2} \right), $$
where S1 and S2 are the infarct areas of the two sections and h is the distance between them. The total infarct volume was calculated as the sum of the infarct volumes of all pairs of adjacent sections [38].
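As a worked example of the two formulas above, the sketch below (Python; the per-section areas are hypothetical) sums the frustum volumes over consecutive sampled sections spaced h = 0.2 mm apart, mirroring the one-in-ten sampling scheme described above:

```python
import math

# Hypothetical infarct areas (mm^2) of consecutive sampled sections, each
# computed as contralateral hemisphere area - ipsilateral undamaged area
areas = [2.1, 3.4, 4.0, 3.2, 1.8]
h = 0.2  # distance between adjacent sampled sections (mm)

# Frustum formula per adjacent pair: V = 1/3 * h * (S1 + S2 + sqrt(S1 * S2))
total_volume = sum(
    h * (s1 + s2 + math.sqrt(s1 * s2)) / 3.0
    for s1, s2 in zip(areas, areas[1:])
)
print(f"Total infarct volume: {total_volume:.2f} mm^3")
```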
Immunohistological staining
Cultured NSCs or brain sections (20 μm in thickness) were fixed with absolute methanol in a −20 °C freezer for about 10 min and then washed three times in PBS, and the slices were blocked in 10% normal donkey serum (Jackson ImmunoResearch, West Grove, PA, USA) for 30 min at RT. Cryosections were incubated with one of the following primary antibodies in 1% blocking serum at 4 °C overnight: mouse anti-CD11b (1:100; BD Biosciences, San Jose, CA, USA), rabbit anti-Occludin (1:100; Invitrogen, Carlsbad, CA, USA), rabbit anti-ZO-1 (1:100; Invitrogen), and goat anti-CD31 (1:100; R&D Systems, Tustin, CA, USA). After being washed three times with PBS, sections were incubated with Alexa-488-conjugated secondary antibody (1:500 dilution; Life Technologies, CA, USA) containing 1% normal donkey serum at RT for 1 hour in darkness, and nuclei were stained with 4,6-diamidino-2-phenylindole (DAPI) (1:500 dilution; Beyotime Institute of Biotechnology, China) for 10 min. After washing with PBS, slides were mounted with antifade mounting medium (Beyotime) and images were acquired under a Leica upright microscope (Leica DM2500) or a confocal laser-scanning microscope (Leica TCS SP5 II). IgG detection in the brain parenchyma was used as an indicator of BBB integrity: these brain sections were incubated with biotin-conjugated donkey anti-mouse IgG (1:500; Life Technologies) and visualized with avidin-Alexa Fluor 488.
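The fluorescence intensity quantifications reported later (e.g., for CD11b and IgG) reduce to measuring signal per imaged field. One plausible approach is sketched below (Python with scikit-image; the file name and the use of Otsu thresholding are assumptions for illustration, not the authors' exact pipeline):

```python
import numpy as np
from skimage import io, filters

# Load one channel of a fluorescence micrograph (hypothetical file name)
img = io.imread("cd11b_section_01.tif").astype(float)

# Otsu's method separates stained pixels from background automatically
thresh = filters.threshold_otsu(img)
mask = img > thresh

# Report mean intensity of positive pixels and the positive-area fraction
mean_intensity = img[mask].mean() if mask.any() else 0.0
area_fraction = mask.mean()
print(f"Mean intensity: {mean_intensity:.1f}; positive area fraction: {area_fraction:.3f}")
```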
Western blot analysis
Tissue samples were collected from the striatum and cortex of the ipsilateral hemisphere, sheared, briefly processed ultrasonically, and lysed in lysis buffer (Thermo Scientific, Rockford, IL, USA) containing Complete Protease Inhibitor Cocktail, Phosphatase Inhibitor Cocktail, and 2 mM phenylmethylsulfonyl fluoride (PMSF). The lysates were centrifuged at 12,000 rpm for 20 min at 4 °C, and the supernatants were collected. Immunoblotting was carried out as described previously [39]. A BCA assay kit (Pierce) was used for total protein quantification. Total proteins (40 μg) were denatured at 95 °C for 5 min, resolved on 10% (or 6% for ZO-1) SDS-PAGE gels, and electrotransferred to 0.45-μm nitrocellulose membranes (Whatman, Piscataway, NJ, USA). Membranes were then blocked with 5% skim milk for 1 hour at RT and incubated with the respective primary antibody solutions at 4 °C overnight. After four washes in TBST, the membranes were incubated with the appropriate HRP-conjugated secondary antibody (1:5000; Jackson) for 1 hour at RT and washed four times with TBST again. Detection was performed using enhanced chemiluminescence (ECL) (Thermo Scientific, Rockford, IL, USA), and images were captured using the ChemiDoc XRS system (BioRad, Hercules, CA, USA). Loading differences were normalized using an anti-actin antibody at 1:1000 dilution (Santa Cruz Biotechnology, Santa Cruz, CA, USA). The primary antibodies used were as follows: p-AKT/AKT (1:2000; Epitomics, Burlingame, CA, USA); p-p38/p38, p-JNK/JNK, and p-ERK/ERK (1:1000; Cell Signaling Technology, Danvers, USA); iNOS (1:1000; Abcam); catalase and SOD2 (1:1000; Santa Cruz); BDNF (1:500; Bioworld Technology, USA); β-tubulin (1:2000; Sigma); and β-actin (1:1000; Santa Cruz). Band intensity analysis was carried out using the Gel-Pro Analyzer (Media Cybernetics, Silver Spring, MD, USA).
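Densitometric quantification of this kind conventionally divides each band's intensity by the loading control of its lane and then expresses the result relative to a reference group. A minimal sketch under those assumptions (Python; the band intensities are hypothetical):

```python
import numpy as np

# Hypothetical band intensities exported from Gel-Pro Analyzer (arbitrary units)
groups  = ["sham", "PBS", "NSC", "adjudin-NSC"]
inos    = np.array([1200.0, 5200.0, 3600.0, 2100.0])  # target protein bands
b_actin = np.array([9800.0, 9500.0, 9700.0, 9600.0])  # loading control bands

ratio = inos / b_actin        # normalize to the loading control
relative = ratio / ratio[0]   # express relative to the sham group
for group, value in zip(groups, relative):
    print(f"{group}: {value:.2f}x sham")
```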
RNA isolation and quantitative real-time PCR
Total RNA from NSCs and brain tissue samples was isolated using Trizol Reagent (TaKaRa, Dalian, China). RNA concentration was measured with a spectrophotometer (NanoDrop 1000; Thermo, Wilmington, DE, USA), followed by reverse transcription using the PrimeScript RT reagent kit (TaKaRa). Quantitative real-time PCR was performed on an ABI 7900HT using SYBR Premix Ex Taq (TaKaRa) with the following primer pairs: iNOS, sense 5′-GTTCTCAGCCCAACAATACAAGA-3′ and anti-sense 5′-GTGGACGGGTCGATGTCAC-3′; catalase, sense 5′-ACGCAATTCACACCTACACG-3′ and anti-sense 5′-TCCAGCGTTGATTACAGGTG-3′; SOD2, sense 5′-GCGGTCTAAACCTCAAT-3′ and anti-sense 5′-TAGGGCTCAGGTTTGTCCAG-3′; IL-6, sense 5′-TAGTCCTTCCTACCCCAATTTCC-3′ and anti-sense 5′-TTGGTCCTTAGCCACTCCTTC-3′; IL-1β, sense 5′-GCAACTGTTCCTGAACTCAACT-3′ and anti-sense 5′-ATCTTTTGGGGCGTCAACT-3′; TNF-α, sense 5′-CCCTCACACTCAGATCATCTTCT-3′ and anti-sense 5′-GCTACGACGTGGGCTACAG-3′; BDNF, sense 5′-TCATACTTCGGTTGCATGAAGG-3′ and anti-sense 5′-AGACCTCTCGAACCTGCCC-3′; NGF, sense 5′-TGATCGGCGTACAGGCAGA-3′ and anti-sense 5′-GCTGAAGTTTAGTCCAGTGGG-3′; GDNF, sense 5′-CCAGTGACTCCAATATGCCTG-3′ and anti-sense 5′-CTCTGCGACCTTTCCCTCTG-3′; Arg-1, sense 5′-GAACACGGCAGTGGCTTTAAC-3′ and anti-sense 5′-TGCTTAGCTCTGTCTGCTTTGC-3′; CD16, sense 5′-TTTGGACACCCAGATGTTTCAG-3′ and anti-sense 5′-GTCTTCCTTGAGCACCTGGATC-3′; and Rplp0, sense 5′-AGATTCGGGATATGCTGTTGGC-3′ and anti-sense 5′-TCGGGTCCTAGACCAGTGTTC-3′. PCR was performed under the following conditions: denaturation at 95 °C for 10 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 30 s. Data were analyzed using the comparative threshold cycle (Ct) method, and results were expressed as fold difference normalized to Rplp0.
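The comparative threshold cycle analysis can be written out explicitly. The sketch below (Python; the Ct values are hypothetical) computes the fold change of a target gene normalized to Rplp0 and expressed relative to a control group, using the standard 2^(−ΔΔCt) form:

```python
# Hypothetical mean Ct values from the ABI 7900HT
ct_target_ctrl,  ct_rplp0_ctrl  = 24.8, 18.2  # control group
ct_target_treat, ct_rplp0_treat = 23.1, 18.3  # treated group

# Comparative Ct: dCt = Ct(target) - Ct(reference);
# ddCt = dCt(treated) - dCt(control)
d_ct_ctrl  = ct_target_ctrl  - ct_rplp0_ctrl
d_ct_treat = ct_target_treat - ct_rplp0_treat
dd_ct = d_ct_treat - d_ct_ctrl

fold_change = 2.0 ** (-dd_ct)  # assumes ~100% amplification efficiency
print(f"Fold change vs control: {fold_change:.2f}")
```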
Evans Blue extravasation
Mice were anesthetized with ketamine/xylazine, and 4 ml/kg of 2% Evans Blue (EB; Sigma) in normal saline was injected through the left jugular vein 3 days after tMCAO. After 2 hours of circulation, the mice were anesthetized and perfused with normal saline. The ipsilateral and contralateral hemispheres were removed and weighed. EB was then extracted by homogenizing the samples in 1 ml of 50% trichloroacetic acid solution and centrifuging at 12,000 rpm for 20 min. The supernatant was diluted with 100% ethanol at a ratio of 1:3. The amount of EB was determined quantitatively by measuring the absorbance of the supernatant at 610 nm (BioTek, Winooski, VT, USA).
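Converting the 610-nm absorbance readings into an EB amount per gram of tissue typically runs through a standard curve prepared in the same solvent matrix. A minimal sketch under those assumptions (Python; the standards, sample reading, dilution handling, and tissue weight are all hypothetical):

```python
import numpy as np

# Hypothetical EB standard curve in the same TCA/ethanol matrix
std_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])      # μg/ml EB
std_a610 = np.array([0.01, 0.11, 0.22, 0.45, 0.90])
slope, intercept = np.polyfit(std_conc, std_a610, 1)

a610_sample = 0.34        # absorbance of the diluted supernatant
dilution = 4.0            # assumes 1 part supernatant : 3 parts ethanol
extract_volume_ml = 1.0   # homogenization volume
tissue_weight_g = 0.18    # hemisphere wet weight

conc_ug_per_ml = (a610_sample - intercept) / slope * dilution
eb_per_g = conc_ug_per_ml * extract_volume_ml / tissue_weight_g
print(f"{eb_per_g:.1f} μg EB per g tissue")
```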
CD31/BrdU double immunostaining
Brains were post-fixed for 24 hours, immersed for 48 hours in 30% sucrose in PBS, frozen, and then sectioned using a freezing microtome (Leica, Solms, Germany). Coronal sections 20 μm thick were cut. Floating coronal sections were collected in antigen protective solution consisting of 20% glycol, 30% glycerol, and 50% PBS. Sections were first treated with 2 mol/L HCl for 20 min at 37 °C and then neutralized twice with sodium borate for 10 min each. Sections were then treated with 0.3% Triton X-100 in PBS for 15 min, blocked with 10% BSA, and incubated with anti-CD31 (1:200; R&D) and anti-BrdU (1:50; Santa Cruz) antibodies at 4 °C overnight. Finally, the sections were incubated with secondary antibodies (1:500; Thermo Fisher) for 60 min at room temperature. Stained sections were rinsed and mounted.
Statistical analysis
Each experiment was repeated at least three times. All data are presented as mean ± SEM. Data were analyzed by one-way ANOVA followed by Tukey's honestly significant difference (HSD) test using GraphPad InStat (GraphPad Software Inc., La Jolla, CA, USA). P < 0.05 was considered statistically significant.
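The same analysis can be reproduced with open-source tools. A minimal sketch (Python with scipy and statsmodels; the group values are hypothetical) of a one-way ANOVA followed by Tukey's HSD post-hoc comparisons:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical infarct volumes (mm^3) per treatment group
pbs     = np.array([38.1, 41.5, 36.9, 40.2, 39.4])
nsc     = np.array([28.3, 30.1, 26.7, 29.5, 27.8])
adjudin = np.array([19.2, 21.4, 18.5, 20.8, 19.9])

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(pbs, nsc, adjudin)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for all pairwise comparisons
values = np.concatenate([pbs, nsc, adjudin])
labels = ["PBS"] * 5 + ["NSC"] * 5 + ["adjudin-NSC"] * 5
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```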
NSC culture and characterization
Neural stem cells were generated from the cortex of E14 mice and characterized by immunocytochemistry. A small proportion of the primary cells generated neurospheres after 7 days of initial culture (Fig. 1a). When NSCs were cultured on poly-l-ornithine hydrobromide and laminin-coated plates, they were able to grow as monolayers with adherence to the plate (Fig. 1b). Immunostaining analysis showed that cells were Nestin+ and Sox2+ while GFAP– and DCX– (Fig. 1c–f), suggesting that the majority of the cells in the culture maintained a stem cell phenotype.
NSC culture and characterization. Morphological analysis of NSCs from mouse cortex. Phase-contrast photomicrographs of suspension neurospheres (a) and monolayer culture cells (b). Scale bar = 100 μm. Identification of cultured NSCs. Fluorescent photomicrographs of NSCs for Nestin (c), SOX2 (d), GFAP (e), and DCX (f). Nuclei were stained with DAPI. Scale bar = 100 μm. Quantifications for GFAP+, Nestin+, SOX2+, and DCX+ cells (g)
Differentiation and proliferation of NSCs after pretreatment with adjudin
In order to explore whether adjudin affects the differentiation and proliferation of NSCs, cells were cultured in a monolayer and pretreated with adjudin at a concentration of 10 or 30 μM. Immunostaining indicated that NSCs pretreated at either concentration remained positive for Nestin and Sox2 and negative for DCX, whereas NSCs were GFAP-negative after 10 μM pretreatment but GFAP-positive after 30 μM pretreatment (Additional file 1: Figure S1a). Fluorescent photomicrographs of Ki67 showed that 10 μM adjudin did not affect the proliferation of NSCs, whereas 30 μM adjudin markedly inhibited it (Additional file 1: Figure S1b). Together, these results indicate that 10 μM adjudin had no effect on the differentiation or proliferation of NSCs.
Adjudin preconditioning improved the survival of and maintained the ATP level of NSCs under H2O2 stress
To evaluate whether adjudin preconditioning could reduce NSC death under stress in vitro, we used a hydrogen peroxide (H2O2) oxidative stress model. We first examined the effects of different concentrations of adjudin and H2O2 on NSC viability in order to establish working concentrations. After pretreatment with adjudin for 24 hours, the LDH assay revealed that adjudin did not induce cell death even at 60 μM (Additional file 2: Figure S2a), but the CCK-8 assay showed that 30 and 60 μM adjudin significantly decreased cell viability, whereas 5 and 10 μM adjudin had no effect (Additional file 2: Figure S2b). Combined with the Ki67 immunostaining results (Additional file 1: Figure S1b), we inferred that high concentrations of adjudin inhibit cell proliferation rather than decrease NSC viability. As shown in Additional file 2: Figure S2b, treatment with H2O2 reduced NSC viability significantly in a concentration-dependent manner. The optimal concentration of H2O2 for subsequent experiments was determined to be 0.1 mM, because cell viability was 40–50% at this concentration (Additional file 2: Figure S2c, d).
After 1 hour of 0.1 mM H2O2 stimulation, cells were replenished with fresh medium and cultured for another 24 hours before the LDH and CCK-8 assays, which revealed that adjudin-preconditioned NSCs (10 and 30 μM) showed a significant reduction in death and an increase in survival compared with nonpreconditioned NSCs (Fig. 2a, b). This cytoprotective effect was supported by the ATP assay, as adjudin pretreatment maintained the ATP level of NSCs after H2O2 stimulation (Fig. 2c). The serine/threonine kinase Akt, a member of a conserved family of signal transduction enzymes, not only plays a pivotal role in the cell death/survival pathway [40, 41] but also participates in regulating inflammatory responses and apoptosis [42]. Here we used western blot analysis to assess Akt signaling activity. Compared with nonpreconditioned NSCs, adjudin pretreatment dramatically increased the p-Akt/Akt ratio after H2O2 stimulation (Fig. 2d, e).
Adjudin pretreatment attenuated cell death and maintained the ATP level of NSCs after H2O2 stimulation. Assays to evaluate whether adjudin pretreatment could attenuate cell death of NSCs after 0.1 mM H2O2 exposure. Cell death and cell survival measured by LDH (a) and CCK-8 assay (b). ATP level of NSCs after H2O2 stimulation with/without adjudin pretreatment (c). Levels of p-Akt and Akt detected by western blot analysis after H2O2 treatment (d). Quantification of the densitometric value of protein bands normalized to total Akt (e). Bars represent mean ± SEM from three independent experiments. *P < 0.05, **P < 0.01, ***P < 0.001. LDH lactate dehydrogenase
Adjudin preconditioning upregulated antioxidant genes and reduced oxidative stress in vitro
We next sought to elucidate the underlying mechanism of adjudin-induced cytoprotection. As exogenous H2O2 induces a strong increase in intracellular ROS levels within 1 hour of cell treatment [43], we investigated the expression of iNOS and several antioxidant genes using RT-PCR and western blot analysis. Real-time RT-PCR assays showed that adjudin preconditioning significantly inhibited iNOS expression (Fig. 3a) and upregulated expression of catalase (Fig. 3b), SOD2 (Fig. 3c), and GCLC (Additional file 3: Figure S3a) after 1 hour of H2O2 stimulation followed by 12 hours of reculture, whereas it did not change NOX4, HO-1, NQO1, or Nrf2 levels (Additional file 3: Figure S3b–e). This was also supported by western blot analysis of whole cell lysates from the NSCs, showing that adjudin significantly lowered iNOS protein expression and induced higher levels of catalase and SOD2 after 1 hour of H2O2 stimulation followed by 24 hours of culture under normal conditions (Fig. 3d–g). These findings suggest that resistance to oxidative stress is one mechanism of adjudin-induced cytoprotection.
Adjudin-pretreated NSCs inhibited H2O2-induced oxidative stress. Bar graphs show mRNA levels of iNOS, catalase, and SOD2: relative mRNA expression of iNOS (a), catalase (b), and SOD2 (c) normalized to Rplp0. Adjudin-pretreated NSCs inhibited H2O2-induced oxidative stress at protein levels. Representative western blot analysis showed that adjudin inhibited H2O2-induced iNOS upregulation, and increased catalase and SOD2 protein levels in the presence of 0.1 mM H2O2 (d). Quantification of densitometric value of the protein bands normalized to the respective β-tubulin (e–g). Bars represent mean ± SEM from three independent experiments. *P < 0.05, **P < 0.01, ***P < 0.001
Adjudin preconditioning promoted expression of neurotrophic factors in vitro
Because NSCs can secrete many neurotrophic factors and other soluble molecules that modify the release of inflammatory mediators and the oxidative reaction [13, 27, 44], we tested whether adjudin changed their expression in NSCs in vitro. Significantly higher gene expression of BDNF, nerve growth factor (NGF), and glial cell-derived neurotrophic factor (GDNF) was detected in the adjudin-preconditioned NSC group after 1 hour of H2O2 stimulation and 12 hours of reculture, compared with the nonpreconditioned NSC group (Fig. 4a–c).
Induction of neurotrophic factors with adjudin preconditioning in vitro. Real-time RT-PCR assays of NSCs. Relative mRNA expressions of BDNF, NGF, and GDNF normalized to Rplp0 (a–c). Bars represent mean ± SEM from three independent experiments. *P < 0.05. BDNF brain-derived neurotrophic factor, GDNF glial cell-derived neurotrophic factor, NGF nerve growth factor
Adjudin preconditioning reduced brain infarct volume and improved neurobehavioral outcome after ischemia/reperfusion
Twenty-four hours after tMCAO, mice were divided randomly into three groups for NSC or vehicle injection: PBS group, NSC group, and adjudin-pretreated NSC group. NSCs (1 × 10⁶ cells suspended in PBS), either adjudin-pretreated or untreated, were injected into the striatum of the ipsilateral hemisphere. Brain infarct volume was determined by cresyl violet staining 3 days after cell transplantation (Fig. 5a). Adjudin-pretreated NSCs reduced infarct volume by as much as 50% compared with the PBS group, whereas untreated NSCs produced only a ~30% reduction (Fig. 5b). Meanwhile, adjudin preconditioning improved behavioral performance, with the neuroscore decreasing by approximately 50% compared with the PBS group, whereas untreated NSCs resulted in only a 25% decrease (Fig. 5c). These findings illustrate that adjudin pretreatment significantly attenuated I/R-induced cerebral injury. Moreover, compared with the untreated NSC, PBS, and sham groups, the adjudin preconditioning group showed a considerably increased p-Akt/Akt ratio in both the cortex and the striatum (Fig. 5d–g).
Adjudin-pretreated NSCs reduced brain infarct volume and improved neurobehavioral outcome after I/R. Representative sets of cresyl violet staining of brain sections from mice treated with PBS, untreated NSCs, and adjudin-pretreated NSCs 3 days following tMCAO. Dashed line shows the border of the infarct area (a). Quantification of infarct volumes (b). n = 8 in each group. Adjudin-pretreated NSCs significantly ameliorated neurological deficits 3 days after transplantation when compared to the PBS or NSC group. n = 14 for PBS and untreated NSC group, and n = 19 for adjudin-pretreated NSC group (c). Adjudin-pretreated NSCs promoted the phosphorylation of Akt in ipsilateral cortex (d) and striatum (e) after tMCAO. Representative western blot assay showing that adjudin increased the p-Akt protein level 3 days after tMCAO compared with sham, PBS, and NSC groups. Quantification of densitometric value of the protein bands of cortex and striatum normalized to total Akt (f, g). n = 6 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning reduced cytokine production and attenuated microglial activation after ischemia/reperfusion
To investigate whether adjudin-pretreated NSCs exert a stronger immunomodulatory effect in the acute phase of cerebral ischemia, we first examined IL-6, IL-1β, and TNF-α mRNA expression in both the cortex and the striatum. IL-6, IL-1β, and TNF-α mRNA increased dramatically at day 3 following tMCAO. Expression of all three cytokines decreased significantly in the untreated NSC group compared with the PBS group, and was reduced further in the adjudin-pretreated NSC group (Fig. 6a–f). As the resident immune cells in the central nervous system (CNS), microglia are activated by I/R injury and regulate the primary events of the neuroinflammatory response [45]. We then investigated whether adjudin preconditioning also affected microglia in the tMCAO model. CD11b, an indicator of activated microglia, was examined by fluorescence microscopy (Fig. 6g, h). In the sham group, no obvious microglial activation or CD11b signal was detected (Fig. 6g top left panel). In the PBS group, strong CD11b staining was widespread in the ipsilateral hemisphere (Fig. 6g top right panel). In contrast, stereotactic injection of nonpreconditioned NSCs after reperfusion significantly inhibited microglial activation (Fig. 6g bottom left panel), and adjudin-pretreated NSCs inhibited it further, with much less CD11b signal detected (Fig. 6g bottom right panel). Statistical analysis of the CD11b signal from brain sections indicated that adjudin preconditioning significantly attenuated microglial activation in the ipsilateral region after I/R injury (Fig. 6h).
Adjudin-pretreated NSCs inhibited cytokine production and activation of microglia after I/R. Relative mRNA expression of IL-6, IL-1β, and TNF-α normalized to Rplp0 detected 3 days following cell transplantation. Expression of IL-6 (a, d), IL-1β (b, e), and TNF-α (c, f) in ipsilateral cortex and striatum shown in the NSC and adjudin-pretreated NSC groups. n = 6 in each group. Immunofluorescence staining for CD11b (green) in the sham group, and tMCAO groups with either PBS injection, NSC injection, or adjudin-pretreated NSC injection. Samples were acquired 3 days after cell transplantation, with DAPI staining for contrast (g). Scale bar = 100 μm. Quantification of CD11b immunofluorescence intensity in each group (h). n = 8 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. DAPI 4,6-diamidino-2-phenylindole, NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning suppressed M1 microglia activation but promoted M2 polarization after ischemia/reperfusion
As the principal immune cells in the brain, microglia share certain characteristics with macrophages and mount immune responses to local CNS injury [7]. M1-polarized macrophages produce proinflammatory cytokines, such as IL-6, IL-1β, and TNF-α, and express markers such as CD16 [46]. Activated M2-polarized microglia express arginase-1 (Arg-1), CD163, and Ym1, produce anti-inflammatory cytokines, and restore homeostasis [20, 24, 47].
It has been reported that dynamic changes in M1/M2 macrophage activation are involved in CNS damage and regeneration, and M1/M2 macrophage polarization plays an important role in controlling the balance between promoting and suppressing inflammation [20, 48]. Here we stained for CD16 and Arg-1 to assess M1/M2 microglial activation. The results revealed that ischemic brain damage prominently activated both M1 and M2 microglia compared with the sham group (Fig. 7a, d). Furthermore, comparing the adjudin-pretreated NSC group with the nonpretreated NSC group, we found that adjudin pretreatment significantly suppressed M1 microglial activation and promoted M2 microglial polarization (Fig. 7a–f).
Adjudin-pretreated NSCs inhibited M1 activation but promoted M2 expression after I/R. Immunofluorescence staining for CD16 and Arg-1 showed activation of M1 and M2 macrophages in sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups at 3 days after cell transplantation (a, d). Scale bar = 50 μm. Quantification of M1/M2 positive cells in each group (b, e). Relative mRNA expression level of CD16 and Arg-1 checked using RT-PCR normalized to Rplp0 (c, f). n = 5 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01. NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning attenuated oxidative stress after ischemia/reperfusion
Since ROS also play an important role in cerebral I/R injury, we investigated the effect of adjudin preconditioning on resistance to oxidative stress. Compared with the PBS and NSC groups, the iNOS mRNA level was significantly decreased in the adjudin preconditioning group in both the cortex and the striatum (Fig. 8a, d), while expression of the antioxidant genes catalase (Fig. 8b, e) and SOD2 (Fig. 8c, f) was markedly increased after I/R injury. Western blot analysis of whole cell lysates from the ipsilateral cortex and striatum supported these results, showing that adjudin preconditioning dramatically decreased iNOS protein expression and promoted higher levels of catalase and SOD2 at 3 days after I/R (Fig. 8g–n).
Adjudin-pretreated NSCs inhibited oxidative stress after tMCAO. Modulation of oxidative stress gene expression in vivo. Relative mRNA expression of iNOS, catalase, and SOD2 in ipsilateral cortex (a–c) and striatum (d–f) normalized to Rplp0 detected 3 days after cell transplantation. Western blot analysis of iNOS, catalase, and SOD2 protein levels in ipsilateral cortex and striatum 3 days after cell transplantation (g, h). Quantification of densitometric value of the protein bands of cortex (i–k) and striatum (l–n) normalized to the respective β-tubulin. n = 6 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning enhanced neuroprotection after tMCAO via the p38 and JNK but not the ERK signaling pathways
Western blot analysis was used to assess the phosphorylation status of the MAPK signaling pathways. I/R significantly increased p38, JNK, and ERK1/2 phosphorylation levels in the cortex and striatum compared with sham, and this induction was inhibited after transplantation of NSCs (Fig. 9a, c). Compared with the nonpretreated NSC group, the adjudin preconditioning group showed more profound inhibition of p38 and JNK phosphorylation in the cortex (Fig. 9a, b, d, e), whereas ERK1/2 phosphorylation (Fig. 9c, f) showed no detectable difference between the transplantation groups. No significant differences were observed in the expression of total ERK1/2, total JNK1/2, or total p38 MAPK among the experimental groups. These results indicate that I/R induces inflammatory cytokines and oxidative stress by activating the p38 and JNK pathways but not the ERK signaling pathway.
Adjudin-pretreated NSCs inhibited phosphorylation of p38 and JNK after tMCAO. p-p38, p38, p-JNK, JNK, p-ERK, and ERK levels in sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups at 3 days after cell transplantation in the ipsilateral cortex (a–f) and striatum (g–l). Quantification of densitometric values of the protein bands normalized to total p38, JNK, and ERK1/2 (d–f, j–l). n = 6 in each group. Data are mean ± SEM. *P < 0.05, ***P < 0.001. NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning attenuated ischemia/reperfusion-induced blood–brain barrier leakage
BBB permeability after ischemic brain injury was assessed by measuring the extravasation of EB and IgG, neither of which crosses the BBB into the brain parenchyma under normal physiological conditions. Substantial amounts of EB and IgG were detected in the ipsilateral hemisphere of the PBS group, whereas NSCs remarkably reduced EB and IgG leakage and adjudin-pretreated NSCs decreased the leakage further, indicating that BBB integrity was better protected by adjudin-pretreated NSCs (Fig. 10a–d). In the sham group, no EB dye or IgG signal was detected in the same brain regions (Fig. 10a–d). To investigate the mechanism of BBB disruption, we analyzed the localization of the tight junction (TJ)-related proteins ZO-1 and occludin in cerebral vascular structures by immunofluorescence microscopy, in conjunction with CD31, an endothelial marker that also localizes to the BBB, and determined changes in protein levels by western blot analysis. Confocal microscopy showed that ZO-1 and occludin staining was continuous along the endothelial cell margins of cerebral microvessels in the sham group, whereas this continuity was disrupted after I/R injury, with many gaps forming along the microvessels (Fig. 10e). This process was reversed by stereotactic injection of NSCs, and compared with the nonpretreated NSC group, adjudin preconditioning further lessened gap formation after tMCAO (Fig. 10e). To corroborate this result, western blot analysis of lysates from the ipsilateral region was performed. The significant reduction of ZO-1 and occludin levels after I/R (PBS versus sham) was rescued by NSC transplantation, and adjudin preconditioning had a stronger protective effect against the reduction of ZO-1 and occludin after I/R injury (Fig. 10f). Together, these results further demonstrate that BBB destruction after I/R injury can be effectively rescued by adjudin-pretreated NSCs.
Adjudin-pretreated NSCs lessened Evans Blue and IgG extravasation and inhibited ZO-1 and occludin degradation. Photographs represent the perfused brains after EB injection (a). Quantification of extravasated EB dye. The dye was analyzed by a spectrophotometer at 610 nm (b). n = 14 for PBS and untreated NSC groups, and n = 19 for adjudin-pretreated NSC group. Immunofluorescence staining for IgG (red) in sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups at 3 days after cell transplantation, with DAPI staining for contrast (c). Scale bar = 100 μm. Quantification of the IgG fluorescent intensity in each group (d). n = 8 in each group. Sections from the ischemic penumbra were stained for ZO-1 (green) and occludin (green), and then costained with the endothelial marker CD31 (red) (e). Nuclei were stained with DAPI. Scale bar = 100 μm. Representative western blot analysis for ZO-1 and occludin protein levels in the ischemic penumbra from sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups at 3 days after cell transplantation (f). Quantification of densitometric values of the protein bands normalized to the respective β-tubulin and actin (g). n = 6 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01, ***P < 0.001. NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning enhanced the secretion of neurotrophic factors after ischemia/reperfusion
To evaluate the ability of NSCs to secrete neurotrophic factors, we measured BDNF levels in both the cortex and striatum of the ipsilateral hemisphere using RT-PCR and western blot analysis 3 days after ischemia and transplantation. Real-time RT-PCR assays showed that these paracrine factors were significantly increased in the adjudin-pretreated NSC group compared with the nonpretreated NSC and PBS groups (Fig. 11a, b), a finding also confirmed by western blot analysis (Fig. 11c–f).
Adjudin-pretreated NSCs upregulated expression of neurotrophic factors. Relative mRNA expression of BDNF, NGF, and GDNF normalized to Rplp0 in cortex (a) and striatum (b) from sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups at 3 days after cell transplantation. Western blot analysis of BDNF in cortex (c, d) and striatum (e, f) from sham, PBS, nonpretreated NSC, and adjudin-pretreated NSC groups at 3 days after cell transplantation. Quantification of densitometric values of the protein bands normalized to β-tubulin. n = 6 in each group. Data are mean ± SEM. *P < 0.05, ***P < 0.001. BDNF brain-derived neurotrophic factor, NSC neural stem cell, PBS phosphate-buffered saline
Adjudin preconditioning promoted angiogenesis and enhanced neurobehavioral recovery after ischemia/reperfusion
Ischemic angiogenesis directly relates to reestablishment of the microcirculation within the I/R-damaged area and represents a vital process for poststroke functional recovery [49, 50]. Because angiogenesis modulates the endogenous angiogenic response to generate new vessels and thereby increases the blood supply necessary for new neuronal survival and development, it is directly linked to neurogenesis [51, 52]. Here we measured cells double-positive for the endothelial marker CD31 and 5-bromo-2′-deoxyuridine (BrdU) to evaluate angiogenesis 35 days after transplantation (Fig. 12a, b). The staining results showed that nonpreconditioned NSCs significantly increased new vessel generation compared with the PBS group (Fig. 12a top panel, b), while adjudin-pretreated NSCs had an even more pronounced effect on angiogenesis (Fig. 12a bottom left panel, b). To evaluate the effect of adjudin pretreatment on functional recovery, the rotarod test was performed at different time points (≤5 weeks) after cell transplantation.
Immunostaining for CD31 (red) and BrdU (green) showing the number of newly generated vessels in PBS, nonpretreated NSC, and adjudin-pretreated NSC groups at 35 days after cell transplantation (a). Scale bar = 50 μm. Quantification of CD31+/BrdU+ cell number in each group (b). n = 5 in each group. Bar graphs summarizing the rotarod maintenance time in each group (c). n = 5 in each group. Data are mean ± SEM. *P < 0.05, **P < 0.01. NSC neural stem cell, PBS phosphate-buffered saline
The time maintained on the rotarod declined sharply after tMCAO surgery compared with the nonoperated group (Fig. 12c), and was significantly prolonged in the surgery groups at 7, 14, and 35 days after cell transplantation (Fig. 12c). Across the tMCAO groups, the functional recovery effects were in accordance with the angiogenesis results: NSC transplantation significantly increased the rotarod maintenance time compared with the PBS group, and adjudin-pretreated NSCs showed even better effects (Fig. 12c).
In this study, we showed that, compared with nonpreconditioned NSCs, adjudin preconditioning not only enhanced the survival rate of NSCs under H2O2 oxidative stress in vitro, but also was more effective at decreasing infarct volume, improving behavioral outcome, inhibiting neuroinflammation and oxidative stress, maintaining BBB integrity, and increasing expression of neurotrophic factors, resulting in stronger therapeutic effects in I/R-induced brain injury. This neuroprotective effect was mediated by inhibiting activation of the p38 and JNK MAPK signaling pathways. Together, our results suggest the potential of using adjudin for NSC transplantation and provide preclinical experimental evidence for combination therapy with adjudin and NSCs after stroke.
Because of the complexity of the ischemic cascade, which includes excitotoxicity (glutamate release and receptor activation), calcium influx, ROS generation, NO production, inflammatory reactions, and apoptosis, numerous molecular targets have been tackled in pursuit of neuroprotection [53, 54]. Since the majority of patients continue to exhibit neurological deficits even after successful thrombolysis and therapy, restorative therapies are urgently needed to promote brain remodeling and repair once stroke injury has occurred. Stem cell transplantation has emerged as a promising regenerative medicine for ischemic stroke that could promote tissue repair and functional recovery via potent immunomodulatory actions, trophic support, and cell replacement mechanisms [13, 55]. However, a number of issues remain unresolved and need specific attention before clinical treatments can be developed successfully. These include an appropriate cell source in consideration of therapeutic value and ethical concerns, cell type-specific differentiation, and survival of transplanted cells in the harsh pathological microenvironment [16]. Massive death of donor cells in the infarcted area during the acute phase greatly lowers the efficacy of the procedure [17]. To improve the effect of stem cell-based therapy, various strategies have been adopted to develop and optimize protocols that enhance donor stem cell survival after transplantation, with special focus on the preconditioning approach [56]. To date, a number of preconditioning triggers have been tested in stem cell-based therapy, such as ischemia, hypoxia, H2O2, erythropoietin (EPO), insulin-like growth factor-1 (IGF-1), and pharmacological agents; these studies have shown that exposure of stem cells to sublethal hypoxia or other preconditioning insults increases their tolerance to multiple injurious insults and thus protects them against the harsh environment after transplantation [27, 57,58,59,60,61].
Many studies have already illustrated that NSC therapy has great potential to restore neurological function after ischemic brain injury [6, 14], and here we likewise demonstrated the neuroprotective effect of NSCs, which attenuated infarct volume and improved behavioral recovery after stroke and transplantation. In our study, we found that the MAPK signaling pathway, one of the mechanisms underlying stem cell function, was dramatically inhibited 3 days after NSC transplantation. Our results showed that NSC transplantation inhibited the activation of p-ERK1/2, p-JNK1/2, and p-p38 MAPKs, all of which increased significantly after I/R injury in comparison with sham-operated animals. MAPK signaling pathways are not only implicated in the inflammatory and apoptotic processes of cerebral I/R injury, but are also involved in the proliferation, survival, and cell fate determination (neurogenesis vs gliogenesis) of NSCs, depending on extrinsic factors regulated by different MAPK-activated transcription factors or on interactions with other signaling pathways [62, 63]. MAPKs are activated after focal cerebral I/R and mainly function as mediators of cellular stress by phosphorylating intracellular enzymes, transcription factors, and cytosolic proteins involved in cell survival, inflammatory mediator production, and apoptosis [64, 65]. Kyriakis and Avruch showed that JNK and p38 MAPKs contribute to cell injury, unlike ERK signaling, which is part of the survival route [64]. Cumulative experimental evidence has shown that p38 and JNK MAPKs can be activated in neurons, microglia, and astrocytes after various types of ischemia [66,67,68,69], and their activation is associated with the production of proinflammatory cytokines, such as TNF-α and IL-1β, which tend to act as perpetrators of CNS injury [70, 71]. A growing body of evidence shows that inhibiting p38 or JNK MAPK activation using inhibitors or knockout mice provides protection in a variety of brain injury models [72,73,74,75]. However, phosphorylation of ERK occurs at different time intervals after I/R injury, and whether ERK activation is associated with neuronal protection or damage in the ischemic brain remains to be determined unequivocally [76]. In our experiments, adjudin preconditioning further decreased the levels of p-JNK1/2 and p-p38 MAPKs, but had no additional effect on the increased p-ERK1/2 levels compared with the nonpreconditioned NSC group. These findings support the involvement of the JNK1/2 and p38 MAPK pathways in adjudin preconditioning-mediated neuroprotection. Notably, the failure of adjudin to attenuate the increased p-ERK1/2 levels was consistent with our observation that adjudin treatment did not change p-ERK1/2 levels in H2O2-induced NSC injury in vitro (Additional file 4: Figure S4).
Adjudin preconditioning increased the expression of p-Akt both in vitro and in vivo. Akt belongs to a conserved family of signal transduction enzymes downstream of phosphoinositide 3-kinase (PI3K); it not only plays an important part in regulating cellular activation and inflammatory responses, but also participates in cell growth, survival, metabolism, and apoptosis [77, 78]. In the initial hours of cerebral ischemia, the p-Akt protein level transiently rises in neurons, and this increase is thought to be a neuroprotective response [79]. Phosphorylated Akt phosphorylates downstream targets such as Bcl-2-associated death protein (BAD) and caspase 9, thereby inhibiting the Bax-dependent apoptosis pathway and blocking cytochrome c-mediated caspase 9 activation [78, 80]. In our study, the level of p-Akt was elevated in the adjudin-preconditioned NSC group compared with the nonpreconditioned NSC group both in vivo and in vitro. We therefore conclude that the positive effect of adjudin preconditioning is mediated partially through a PI3K/Akt-dependent mechanism.
In ischemic brain injury, energy metabolism dysfunction and glutamate excitotoxicity trigger massive cell death within hours to days, with additional injury resulting from increased free radicals and inflammatory responses [35]. In this study, adjudin preconditioning enhanced the resistance of NSCs to these insults by modulating the MAPK and Akt signaling pathways, inhibiting microglial activation, downregulating IL-6, IL-1β, TNF-α, and iNOS, and upregulating antioxidant genes such as SOD2, catalase, and GCLC. Microglial cells are brain macrophages that serve important functions in many CNS diseases. Our previous work showed that adjudin significantly attenuates microglial activation and decreases proinflammatory cytokine release through inhibition of NF-κB activity in BV2 microglia [36], and here we also demonstrated that adjudin pretreatment dramatically decreased H2O2-induced phosphorylation of p65 in NSCs (Additional file 5: Figure S5). Mitochondria play an important role in cytoprotection and preconditioning: generation of ROS in mitochondria is one of the main triggers that induce ischemic tolerance in the brain [81]. Madhavan et al. [82] demonstrated that NSCs resist oxidative stress better than neurons because of their higher steady-state expression of antioxidant enzymes and faster upregulation following oxidative stress stimulation. In this study, we showed that adjudin pretreatment significantly increased SOD2 and catalase expression and decreased iNOS levels in the ischemic penumbra and in H2O2-injured NSCs, compared with nonpreconditioned NSCs. Thus, our results provide evidence for the superior antioxidative activity of preconditioned NSCs after focal cerebral I/R injury.
Besides neuroinflammation and oxidative stress, we also focused on the protective effects of adjudin-preconditioned NSCs on BBB permeability, since maintaining BBB integrity is critical for reducing secondary brain injury following cerebral ischemia. As the core of the BBB, tight junction proteins such as JAM-A, claudin-5, occludin, and ZO-1 are located in the tightly sealed monolayer of brain endothelial cells (BEC) and confer barrier function, precluding blood-borne substances from permeating into the brain parenchyma [35]. Many brain injuries, such as ischemia and trauma, lead to disruption and remodeling of tight junction proteins. In the present study, we demonstrated that, compared with nonpreconditioned NSCs, adjudin preconditioning further reduced the leakage of IgG and EB by maintaining the levels of the tight junction proteins ZO-1 and occludin, leading to better outcomes in tMCAO mice. This protective effect is likely due to attenuation of the neuroinflammatory response and oxidative stress, both of which can disrupt the barrier by decreasing tight junction protein expression [83].
A better understanding of the molecules that mediate neuroprotection might illuminate further treatment strategies for neurological disorders [84]. Transplanted NSCs exert beneficial effects not only via structural replacement, but also via neurotrophic actions [85, 86]. An interesting finding of this study was the induction of neurotrophic factors by adjudin preconditioning. Numerous studies have demonstrated that grafted stem cells adapt to the ischemic microenvironment and facilitate homeostasis via the secretion of numerous tissue trophic factors that act beneficially on endogenous brain cells and modulate both innate and adaptive immune responses [13]. Our work illustrated that, compared with the nonpreconditioned group, adjudin preconditioning significantly increased the expression of BDNF in the ipsilateral brain 3 days after transplantation. Concomitantly, the heightened expression of BDNF, GDNF, and NGF in adjudin-pretreated NSCs in vitro was consistent with our in vivo observations, further demonstrating the neuroprotection of adjudin-preconditioned NSCs. BDNF counteracts cerebral ischemic injury by upregulating antioxidant enzymes and mainly by interfering with apoptotic pathways [87]. Greenberg et al. [88] found that the Akt pathway is an important downstream signaling pathway of BDNF, through which BDNF protects tissue from injury and fosters neuronal plasticity. Meanwhile, Lu et al. illustrated that the role of BDNF in hippocampal neurogenesis is mediated by ERK1/2 signaling [89]. Moreover, Almeida et al. revealed that exposure of neurons to BDNF stimulates CREB phosphorylation and activation via both the MAPK and PI3K/Akt pathways; CREB in turn directly regulates BDNF gene transcription, suggesting that a positive-feedback loop may operate in cell populations that are resistant to brain injury [90]. These findings, together with our results, indicate that the neuroprotective effects of NSCs and adjudin-preconditioned NSCs are not mediated by a single pathway; rather, multiple pathways crosstalk with one another.
Although our work showed a stronger neuroprotective effect of adjudin-preconditioned NSCs on I/R-induced brain injury, and adjudin may become a promising drug for clinical use in combination with stem cell-based therapy, further research is required before clinical application. The advantages of stem cell-based therapy are that grafted cells not only secrete a plethora of soluble molecules that modulate the activation of host microglia/macrophages, thereby modifying the release of inflammatory mediators, inhibiting oxidative stress, and stabilizing the BBB, but are also capable of directly increasing cell proliferation within the SVZ, potentiating neuroblast migration, augmenting peri-ischemic angiogenesis, and positively affecting the differentiation of endogenous neuroblasts and plasticity within the ischemic tissue. In addition, they can directly differentiate into postmitotic neurons, astrocytes, or oligodendrocytes to establish new neural circuits, ultimately attenuating ischemic brain injury and improving neurobehavioral recovery [13, 15]. To examine whether adjudin preconditioning can achieve a better therapeutic effect and to promote the translation of adjudin to clinical use, more long-term experiments should be carried out. This study included a 35-day experiment that showed encouraging results, but limitations remain. A previous study showed that NSCs can survive and differentiate into functional neurons, attenuate infarction, and improve neurobehavioral recovery after stroke [91]. To further confirm the role of adjudin and to study the mechanism by which adjudin-pretreated NSCs protect the brain from ischemic injury, long-term experiments observing the number, localization, and differentiation status of transplanted cells in the ischemic brain are needed. Furthermore, adjudin has been demonstrated to have no apparent side effects in treated animals [33], but long-term safety remains a concern for clinical use when combined with cell sources. Although Lindvall and Kokaia [92] reported that no tumors were detected in five patients with Batten disease 2 years after transplantation of human fetal NSCs, the harsh microenvironment after I/R brain injury might influence the tumorigenesis and differentiation profiles of grafted NSCs [93]. Larger cohorts and longer observation will be required before more definite conclusions regarding the safety of stem cell treatment can be made.
In summary, our study demonstrated that adjudin preconditioning promoted NSC survival under H2O2 stimulation in vitro, reprogrammed NSCs to tolerate neuroinflammation and oxidative stress, and increased their expression of neurotrophic factors, thereby augmenting the therapeutic efficiency of NSCs in transient focal ischemia in vivo. The protective effect of adjudin was achieved through activation of the Akt pathway and inhibition of the p38 and JNK MAPK pathways. The beneficial effects of adjudin preconditioning may represent a safe approach for future clinical applications.
AIS:
Acute ischemic stroke
BBB:
Blood–brain barrier
BCA:
Bicinchoninic acid
BEC:
Brain endothelial cells
CCA:
Common carotid artery
DAPI:
4,6-Diamidino-2-phenylindole
EB:
Evans blue
ECA:
External carotid artery
ECL:
Enhanced chemiluminescence
EPC:
Endothelial progenitor cell
ESC:
Embryonic stem cell
GFP:
Green fluorescent protein
I/R:
Ischemia/reperfusion
ICA:
Internal carotid artery
iPSC:
Induced pluripotent stem cell
LDH:
Lactate dehydrogenase
LPS:
Lipopolysaccharide
MAPK:
Mitogen-activated protein kinase
MCA:
Middle cerebral artery
mNSS:
Modified neurological severity score
NSC:
Neural stem cell
OPC:
Oligodendrocyte progenitor cell
PBS:
Phosphate-buffered saline
PCR:
Polymerase chain reaction
PGE2:
Prostaglandin E2
PMSF:
Phenylmethylsulfonyl fluoride
ROS:
Reactive oxygen species
TJ:
Tight junction
tMCAO:
Transient middle cerebral artery occlusion
tPA:
Tissue plasminogen activator
Donnan GA, et al. Stroke. Lancet. 2008;371(9624):1612–23.
Balami JS, et al. The exact science of stroke thrombolysis and the quiet art of patient selection. Brain. 2013;136:3528–53.
Graham GD. Tissue plasminogen activator for acute ischemic stroke in clinical practice—a meta-analysis of safety data. Stroke. 2003;34(12):2847–50.
Borlongan CV, et al. Neural transplantation as an experimental treatment modality for cerebral ischemia. Neurosci Biobehav Rev. 1997;21(1):79–90.
Nishino H, Borlongan CV. Restoration of function by neural transplantation in the ischemic brain. Prog Brain Res. 2000;127:461–76.
Hou B, et al. Exogenous neural stem cells transplantation as a potential therapy for photothrombotic ischemia stroke in kunming mice model. Mol Neurobiol. 2017;54(2):1254–62.
Kettenmann H, et al. Physiology of microglia. Physiol Rev. 2011;91(2):461–553.
Seo JH, et al. Oligodendrocyte precursor cells support blood-brain barrier integrity via TGF-beta signaling. PLoS One. 2014;9(7):e103174.
Drury-Stewart D, et al. Highly efficient differentiation of neural precursors from human embryonic stem cells and benefits of transplantation after ischemic stroke in mice. Stem Cell Res Ther. 2013;4(4):93.
Marti-Fabregas J, et al. Endothelial progenitor cells in acute ischemic stroke. Brain Behav. 2013;3(6):649–55.
Geng J, et al. Endothelial progenitor cells transplantation attenuated blood-brain barrier damage after ischemia in diabetic mice via HIF-1alpha. Stem Cell Res Ther. 2017;8(1):163.
Chau MJ, et al. iPSC Transplantation increases regeneration and functional recovery after ischemic stroke in neonatal rats. Stem Cells. 2014;32(12):3075–87.
Hermann DM, et al. Neural precursor cells in the ischemic brain—integration, cellular crosstalk, and consequences for stroke recovery. Front Cell Neurosci. 2014;8:291.
De Feo D, et al. Neural stem cell transplantation in central nervous system disorders: from cell replacement to neuroprotection. Curr Opin Neurol. 2012;25(3):322–33.
Balami JS, Fricker RA, Chen RL. Stem cell therapy for ischaemic stroke: translation from preclinical studies to clinical treatment. CNS Neurol Disord Drug Targets. 2013;12(2):209–19.
Kalladka D, Muir KW. Brain repair: cell therapy in stroke. Stem Cells Cloning. 2014;7:31–44.
Hicks AU, et al. Transplantation of human embryonic stem cell-derived neural precursor cells and enriched environment after cortical stroke in rats: cell survival and functional recovery. Eur J Neurosci. 2009;29(3):562–74.
Lo EH, Dalkara T, Moskowitz MA. Mechanisms, challenges and opportunities in stroke. Nat Rev Neurosci. 2003;4(5):399–415.
Savitz SI, et al. Cell transplantation for stroke. Ann Neurol. 2002;52(3):266–75.
Shechter R, et al. Recruitment of beneficial M2 macrophages to injured spinal cord is orchestrated by remote brain choroid plexus. Immunity. 2013;38(3):555–69.
Mottaghi S, Larijani B, Sharifi AM. Apelin 13: a novel approach to enhance efficacy of hypoxic preconditioned mesenchymal stem cells for cell therapy of diabetes. Med Hypotheses. 2012;79(6):717–8.
Wei L, et al. Transplantation of embryonic stem cells overexpressing Bcl-2 promotes functional recovery after transient cerebral ischemia. Neurobiol Dis. 2005;19(1–2):183–93.
Liu H, et al. Neuroprotection by PlGF gene-modified human mesenchymal stem cells after cerebral ischaemia. Brain. 2006;129:2734–45.
Mosser DM, Edwards JP. Exploring the full spectrum of macrophage activation. Nat Rev Immunol. 2008;8(12):958–69.
Gomi M, et al. Single and local blockade of interleukin-6 signaling promotes neuronal differentiation from transplanted embryonic stem cell-derived neural precursor cells. J Neurosci Res. 2011;89(9):1388–99.
McPherson CA, Aoyama M, Harry GJ. Interleukin (IL)-1 and IL-6 regulation of neural progenitor cell proliferation with hippocampal injury: differential regulatory pathways in the subgranular zone (SGZ) of the adolescent and mature mouse brain. Brain Behav Immun. 2011;25(5):850–62.
This study was supported by grants from the Ministry of Science & Technology (2013CB945604), the National Key Grant (2016YFC0906400), the National Natural Science Foundation of China (31270032, 81773115), and SJTU funding (YG2012ZD05).
School of Biomedical Engineering & Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
Tingting Zhang, Xiao Yang, Tengyuan Liu, Jiaxiang Shao, Ningzhen Fu, Keyi Geng & Weiliang Xia
Department of Neurology & Institute of Neurology, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Room 211, Med-X Research Institute, 1954 Huashan Road, Shanghai, 200030, China
Aijuan Yan & Weiliang Xia
WX and TZ conceived the project, coordinated the study, analyzed the data, and drafted the manuscript. TZ and XY designed the experiments, generated tMCAO models, carried out analyses involving behavioral assessment, real-time PCR, western blot analyses, and immunohistological staining, analyzed the data, and drafted the manuscript. TZ, TL, and JS performed cell line and primary cell culture experiments. NF performed real-time PCR analysis. TL, AY, and KG participated in the animal experimental procedures. All authors read and approved the final manuscript.
Correspondence to Weiliang Xia.
This procedure was supported by grant NSFC #81471178 and was approved by the Bioethics Committee of the School of Biomedical Engineering, Shanghai Jiao Tong University, as #2014008. All participants gave written informed consent.
The effect of adjudin on differentiation and proliferation of NSCs. Fluorescent photomicrographs indicate that the two concentrations of adjudin-pretreated NSCs were Nestin+, SOX2+, DCX–, and GFAP– for 10 μM and GFAP+ for 30 μM pretreated NSCs (a). Nuclei stained with DAPI. Scale bar = 100 μm. Role of adjudin in NSC proliferation detected by immunostaining of Ki67 (b). Nuclei stained with DAPI. Scale bar = 100 μm (PNG 5980 kb)
Cell viability of NSCs after pretreatment with adjudin and stimulation with H2O2 in vitro. Cell death and cell survival measured by LDH (a) and CCK-8 (b) assays after pretreatment with the indicated concentrations of adjudin for 24 hours. Cell death and cell survival measured by LDH (c) and CCK-8 (d) assays after exposure to various concentrations of H2O2 (mM) for 1 hour. Bars represent mean ± SEM from three independent experiments. *P < 0.05, ***P < 0.001 (PNG 176 kb)
Expression of antioxidant genes in NSCs with adjudin preconditioning in vitro. GCLC mRNA expression in adjudin-pretreated NSCs after H2O2 stimulation (a). mRNA expression levels of NOX4, HO-1, NQO1, and Nrf2 in adjudin-pretreated NSCs after H2O2 stimulation (b–e). Bars represent mean ± SEM from three independent experiments. *P < 0.05 (PNG 220 kb)
Adjudin-pretreated NSCs inhibited phosphorylation of p38 and JNK in vitro. Changes in p-p38, p-JNK, and p-ERK levels after 0.1 mM H2O2 stimulation in vitro. Representative western blot analysis showing phosphorylation levels of p38, JNK, and ERK in adjudin-pretreated NSCs stimulated with 0.1 mM H2O2 (a, c, e). Quantification of the densitometric values of the protein bands normalized to total p38, JNK, and ERK1/2 (b, d, f). Bars represent mean ± SEM from three independent experiments. **P < 0.01, ***P < 0.001 (PNG 359 kb)
Adjudin-pretreated NSCs inhibited phosphorylation of p65 in vitro. NSCs were pretreated with adjudin for 24 hours and then stimulated with H2O2 for 1 hour. Cell lysates were analyzed by western blot analysis with antibodies specific to phospho-p65 and p65 (PNG 116 kb)
Zhang, T., Yang, X., Liu, T. et al. Adjudin-preconditioned neural stem cells enhance neuroprotection after ischemia reperfusion in mice. Stem Cell Res Ther 8, 248 (2017). https://doi.org/10.1186/s13287-017-0677-0
Revised: 30 August 2017
Behavioral modifications by a large-northern herbivore to mitigate warming conditions
Jyoti S. Jennewein, Mark Hebblewhite, Peter Mahoney, Sophie Gilbert, Arjan J. H. Meddens, Natalie T. Boelman, Kyle Joly, Kimberly Jones, Kalin A. Kellie, Scott Brainerd, Lee A. Vierling & Jan U. H. Eitel
Temperatures in arctic-boreal regions are increasing rapidly and pose significant challenges to moose (Alces alces), a heat-sensitive, large-bodied mammal. Moose act as ecosystem engineers by regulating forest carbon and structure, below ground nitrogen cycling processes, and predator-prey dynamics. Previous studies showed that during hotter periods, moose displayed stronger selection for wetland habitats and for taller and denser forest canopies, and minimized exposure to solar radiation. However, previous studies regarding moose behavioral thermoregulation occurred in Europe or the southern extent of moose range in North America. Understanding whether ambient temperature elicits a behavioral response in high-northern latitude moose populations in North America may be increasingly important as these arctic-boreal systems have been warming at a rate two to three times the global mean.
We assessed how Alaska moose habitat selection changed as a function of ambient temperature using a step-selection function approach to identify habitat features important for behavioral thermoregulation in summer (June–August). We used Global Positioning System telemetry locations from four populations of Alaska moose (n = 169) from 2008 to 2016. We assessed model fit using the quasi-likelihood under independence criterion and conducted a leave-one-out cross validation.
Both male and female moose in all populations increasingly, and nonlinearly, selected for denser canopy cover as ambient temperature increased during summer: increases in the conditional probability of selection were initially sharp and then leveled out as canopy density increased above ~ 50%. However, the magnitude of the selection response varied by population and sex. In two of the three populations containing both sexes, females demonstrated a stronger selection response for denser canopy at higher temperatures than males. We also observed a stronger selection response in the most southerly and northerly populations compared to populations in west and central Alaska.
The impacts of climate change in arctic-boreal regions increase landscape heterogeneity through processes such as increased wildfire intensity and annual area burned, which may significantly alter the thermal environment available to an animal. Understanding habitat selection related to behavioral thermoregulation is a first step toward identifying areas capable of providing thermal relief for moose and other species impacted by climate change in arctic-boreal regions.
Global temperatures are increasing drastically [36], which directly affects animal behavior and fitness [9, 88, 91]. When ambient temperatures rise above an animal's thermal neutral zone, animals use physiological and behavioral mechanisms to dissipate heat and mitigate thermal stress. For instance, additional energy may be spent to augment the cardiovascular and respiratory systems, which enables evaporative cooling but may also lead to dehydration [16, 54, 73]. Consequently, increases in ambient temperature may contribute to a negative energy balance within an animal [5, 85, 87]. Energetic requirements of mammals vary by season and traits (e.g., body mass, lactation). Summer is an important season for mammals as they need to recover from winter food deficits, lactate and rear young, and store fat [14, 75, 85]. Climate change puts further stress on these important activities, which may, in turn, limit the ability of mammals to meet energetic requirements for reproduction and survival [25, 50, 90]. Recent work suggests that large-bodied mammals respond more strongly to climate change than smaller-bodied mammals, through contraction or expansion of elevational ranges, and also experience increased extinction risk [53].
Moose (Alces alces) are an important, large-bodied mammal vulnerable to increasing temperatures because they are well-adapted to cold climates [73, 76]. Moose also act as ecosystem engineers by regulating forest carbon and structure, below ground nitrogen cycling processes, and predator-prey dynamics [12, 15, 48, 55]. According to the seminal physiological study by Renecker and Hudson [73], moose reached their upper critical temperature threshold at 14 °C in summer, at which point they increased their heart and respiration rates, while open-mouthed panting began at 20 °C. However, recent work calls these thresholds into question and suggests there is no static temperature threshold at which free-ranging moose become heat stressed [83, 84]. Similarly, behavioral changes are often observed at temperatures that exceed the upper critical summer threshold proposed by Renecker and Hudson [73] [11, 56].
Behavioral alterations elicited by changes in temperature influence both resource selection patterns and movement rates. For example, previous studies showed that during hotter periods, moose displayed stronger selection for riparian or wetland habitats [74, 80], taller and denser forest canopies that provide thermal cover [20, 56, 88], and minimized exposure to solar radiation [54]. Additionally, moose may also decrease their activity and movement rates in response to warmer daytime temperatures [58, 80].
Moose thermoregulatory behaviors are indeed a 'hot topic' in applied ecology because of rising temperatures related to climate change and their important ecosystem role (e.g., [56, 58, 80]). However, most previous studies occurred in Europe or the southern end of moose range in North America [50, 56, 88]. Understanding whether ambient temperature elicits a behavioral response in high-northern latitude (i.e., ≥ 60°N) moose populations in North America may be increasingly important as these arctic-boreal systems have been warming at a rate two to three times the global mean [2, 36, 77, 95] and current projections anticipate continued increases in temperature [36, 51]. Thus, it is important to explore how movement patterns of moose, a heat-sensitive large-bodied mammal, are influenced by changes in temperature at the northern extent of their range.
Accordingly, our study objective was to assess Alaska moose (Alces alces gigas) habitat selection as a function of ambient temperature. We tested the hypothesis that moose modified resource selection in response to ambient temperature as predicted by physiological models. To accomplish this, we used Global Positioning System (GPS) -telemetry locations from four Alaska moose populations (n = 169 moose; Fig. 1 & Table 1) from 2008 to 2016 that were located in four unique ecoregions [65]. We combined moose GPS locations with remotely sensed products important to thermoregulatory behaviors. We analyzed only summer months (June–August) because of their importance in moose life history and because thermal stress is most likely to occur in summer [23, 88]. Each population was analyzed independently and separated into male and female subsets because fine-scale movements vary by sex and local habitat characteristics [41, 43, 49]. We predicted that Alaska moose exhibit a detectable behavioral response to increasing summer temperatures, and, that as temperature increased, moose would select for cooler locations, such as thermal refugia provided through increased canopy cover, areas closer to water, and/or low exposure to solar radiation.
Moose (Alces alces gigas) study area locations in four distinct ecoregions of Alaska, USA. In total, 169 moose were included in these analyses (111 females; 58 males)
Table 1 Summaries of Alaska moose (Alces alces gigas) Global Positioning System (GPS) datasets by study area. Information on the number of fixes and the fix success rate are specific to summer (June 1 – August 31). The number of clusters for each population-sex partition refer to the unique combination of individual-year, which were used in our conditional logistic regression models as a clustering variable for estimating robust variance estimates using generalized estimating equations
All four study areas span a mixture of subarctic and arctic boreal forest vegetation including black spruce (Picea mariana), alders (Alnus spp.), willows (Salix spp.), Alaska birch (Betula neoalaskana), white spruce (Picea glauca), quaking aspen (Populus tremuloides), and balsam poplar (Populus balsamifera). The upper Koyukuk region, located in the Brooks Mountain Range (Fig. 1), is rugged and varies from 500 to 2600 m above sea level [1]. Wildfire is common in this region, which experiences strongly continental climate patterns where summers are short but temperatures can exceed 30 °C [41]. Average daily summer (June–August) temperature ranged from 7.5 °C to 15 °C from 1986 to 2016 [64]. The Tanana Flats region is located south of Fairbanks, where the alluvial plain of the Alaska Mountain Range slopes northward, making meandering rivers and oxbow lakes common [1]. Elevation ranges from 0 to 700 m; however, the highest elevations occur in the northern portion of the Alaska Mountain Range [1]. The Tanana region experiences a dry-continental climate, and average daily summer temperature ranged from 11 °C to 19.5 °C from 1986 to 2016 [64]. The Innoko region lies in southwest Alaska and includes a portion of the lower Yukon River. Meandering waterways, oxbow lakes, and floods are common in the lowlands, while upland areas experience more wildfire disturbance [67]. Elevation varies little (30–850 m), and average daily summer temperatures ranged from 9.5 °C to 17.5 °C from 1989 to 2016 [64]. The Susitna moose range lies south of the Alaska Mountain Range and is characterized by numerous wetlands, hilly moraines, black spruce woodlands, and mountains. Elevation varies widely from 400 to 3500 m. This region is primarily located in a temperate-continental climate, with some exposure to temperate coastal climates in the southern portion of the range [1]. Average daily summer temperatures ranged from 11.5 °C to 19 °C from 1988 to 2016 in this region [64].
Moose data
All capture and handling protocols adhered to the Alaska Animal Care and Use Committee approval process (#07–11) as well as the Institutional Animal Care and Use Committee Protocol (#09–01). Moose in all regions were darted from a helicopter (Robinson R-44) and injected with carfentanil citrate (Wildnil®; Wildlife Pharmaceuticals, Incorporated, Fort Collins, CO) and xylazine hydrochloride (AnaSed®; Lloyd Laboratories, Shenandoah, IA). Moose were instrumented with GPS radio-collars with three-and-a-half- to eight-hour fix rates (Table 1). Specifically, moose were fitted with the following collars from Telonics Inc. (Telonics, Mesa, AZ): Koyukuk – GW-4780, Tanana – TGW-4780-3, Susitna – TGW-4780-2, Innoko – CLM-340.
We used a step-selection function (SSF) to assess moose behavioral responses to changing temperatures. SSFs model habitat selection in a used-available design that accounts for the changing availability of resources at any point in time [27, 86]. We aggregated moose datasets to a near eight-hour fix rate to enable regional comparisons of behavior (Table 1). We chose this modeling framework because it allows for assessments of fine-scale habitat selection, and the effects of temperature on large herbivore movement behavior are most pronounced at fine to intermediate spatial and temporal scales [89]. To sample availability, we generated ten paired available locations based on empirical distributions of an individual's step lengths and turning angles between sampling intervals, which were estimated using the "ABoVE-NASA" R package [29]. We used conditional-logistic regression (CLR, [35]) in the "survival" R package [82] to compare each used location with the concurrent available locations at the same point in time and space (i.e., one stratum contained one used point and ten randomly generated available points). The equation can be written as:
$$ w^{*}(\mathbf{x}) = \frac{\exp\left(\beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n + e\right)}{1 + \exp\left(\beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n + e\right)} $$
where w*(x), the relative probability of selection, is a function of habitat covariates x1 through xn and their estimated regression coefficients β1 to βn, respectively. Steps with higher w*(x) have a greater chance of selection. CLR compares strata (i.e., one used point and ten available points) individually, which enabled us to assess selection of fine-scale habitat features rather than broader-scale landscape characteristics [6]. We did not directly incorporate random effects into our SSF models because the analytical techniques for doing so are sparse and often computationally prohibitive for complex model sets [61]. In our models, we would have needed to incorporate a random effect of individual for each covariate in the model – the equivalent of random slopes – which would likely have led to convergence issues given the complexity of our models (see the section on temperature interaction terms). Instead, we fit our CLR models with generalized estimating equations (GEE) using a clustering variable of "animal-year" to split the data into statistically independent clusters. This allowed us to account for the lack of independence between steps within an individual for a given summer, and it provided unbiased (i.e., robust) variance estimates when there are at least 20 independent clusters, and preferably 30 [71]. All of our data partitions had at least 20 unique animal-year clusters, and all but one had more than 30 (Table 1).
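As a concrete illustration of this fitting step, below is a minimal sketch in R, assuming a data frame steps that holds one used location (case = 1) and its ten paired available locations (case = 0) per stratum; the column names (stratum_id, animal_year, canopy, dist_water, sri, elev) are hypothetical stand-ins rather than the study's actual variable names.

```r
# Minimal sketch of the CLR step described above; names are hypothetical.
library(survival)

ssf_base <- clogit(
  case ~ canopy + dist_water + sri + elev +
    strata(stratum_id) +   # each used step is compared only with its own available steps
    cluster(animal_year),  # robust (GEE-type) variances over animal-year clusters
  method = "approximate",  # the exact conditional likelihood is unavailable with cluster()
  data   = steps
)
summary(ssf_base)          # coefficients with robust standard errors
```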
Habitat covariates
We obtained temperature estimates from the North American Regional Reanalysis (NARR) as opposed to weather stations. NARR provides a suite of highly temporally dynamic (eight times daily; 32 km resolution) meteorological variables [57]. We annotated NARR temperature estimates using the environmental-data automated track annotation (Env-DATA) system available from Movebank [21]. To ensure the accuracy of these temperature estimates, we performed a validation exercise on the two populations of moose whose collars included temperature sensors (Innoko and Koyukuk). We found a moderate relationship between the two (Supplementary material (S)1; R2 = 0.47–0.58, RMSE = 3.88–4.43 °C). NARR temperature estimates represent an ambient, neighborhood temperature, allowing us to investigate how moose respond to ambient variation in temperature via fine-scale selection for environmental characteristics that are likely to create cooler microclimates. We excluded ambient temperature as a main effect within CLR models because it did not vary within strata, and only included it as an interaction term with other covariates.
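The validation statistics reported above can be computed along the following lines, a hedged sketch assuming a data frame temps of paired collar and NARR temperatures (both column names are hypothetical).

```r
# Sketch of the collar-vs-NARR temperature check; column names are hypothetical.
fit  <- lm(collar_temp ~ narr_temp, data = temps)
r2   <- summary(fit)$r.squared                    # reported range: 0.47-0.58
rmse <- sqrt(mean((temps$collar_temp - temps$narr_temp)^2,
                  na.rm = TRUE))                  # reported range: 3.88-4.43 degrees C
```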
Moose may move to areas that provide thermal cover, such as denser-canopied forests, when temperatures increase [56]. In our models, a United States Geological Survey (USGS) percent canopy product for 2010 (30 m cell size, [31]) was used as an index of thermal cover. Moose use canopy cover for purposes other than thermoregulation, such as predator avoidance [85]. However, by considering the interaction between temperature and canopy cover, it is likely that we captured behavioral thermoregulation in our models.
We assessed the importance of water habitats in behavioral thermoregulation using a distance-to-water covariate. We estimated this covariate from Pekel et al.'s [68] percent global surface water map, which quantified global surface water from 1984 to 2015. We used the R "raster" package [34] to estimate the Euclidian distance of the nearest water pixel (30 m cell size) from a given moose location. Elevation estimates (in meters) were extracted from the ArcticDEM (version 6, 5 m cell size) [69]. The solar radiation index (SRI; [46]) was estimated mathematically as a function of latitude, aspect, and slope using the "RSAGA" package [8] – with aspect and slope derived from the ArcticDEM – and the resultant values represent the hourly extraterrestrial radiation striking an arbitrarily oriented surface [46].
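The distance-to-water annotation could be sketched as follows, assuming water is a 30-m RasterLayer that is NA on land and non-NA where the Pekel et al. layer indicates surface water; raster::distance() then returns, for every NA cell, the Euclidean distance to the nearest non-NA (water) cell. The data frame steps and its coordinate columns are hypothetical.

```r
# Sketch of the distance-to-water covariate; 'water' is an assumed input.
library(raster)

dist_water_r <- distance(water)                       # metres, on the 30-m grid
steps$dist_water <- extract(dist_water_r,
                            cbind(steps$x, steps$y))  # annotate used/available points
```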
We chose to consider only continuous covariates as predictors to represent habitat as dynamic and continuous (sensu [17]). Covariates were standardized by dividing them by two times their standard deviation [28], allowing coefficients to be directly comparable across models. Collinearity was assessed using Pearson correlation coefficients; if correlation coefficients between predictors exceeded 0.70, we excluded collinear metrics from being present in the same model [22].
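Both screening steps are straightforward in R; the sketch below follows the procedure described in the text (dividing by two standard deviations, flagging |r| > 0.70), again with hypothetical column names.

```r
# Sketch of covariate standardization and collinearity screening.
covs <- c("canopy", "dist_water", "sri", "elev")   # hypothetical column names

steps[covs] <- lapply(steps[covs],
                      function(x) x / (2 * sd(x, na.rm = TRUE)))

cor_mat <- cor(steps[covs], use = "pairwise.complete.obs", method = "pearson")
which(abs(cor_mat) > 0.70 & upper.tri(cor_mat), arr.ind = TRUE)  # collinear pairs
```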
Two-way temperature interactions
We considered both linear and nonlinear interactions between habitat covariates and ambient temperature, as nonlinear processes are widespread in ecology, particularly in response to climate change [13, 92]. In total, three model variants for each population-sex partition were considered: (1) a base model that included habitat covariates as described above with no interaction terms or consideration of temperature, (2) linear interaction models where habitat covariates sequentially interacted with temperature linearly, and (3) spline interaction models where habitat covariates sequentially interacted nonlinearly with temperature using natural cubic splines. Because nonlinear terms risk overfitting models, we constrained any nonlinear relationships explored in the spline interactions to two or three knots in CLR models using the "splines" package [72].
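Under the same hypothetical naming as the earlier sketches, the three variants for one population-sex partition might look as follows; ns(canopy, df = 2) stands in for the two-to-three-knot constraint described above, and temperature enters only through interactions because it does not vary within a stratum.

```r
# Sketch of the three model variants; 'ambient_temp' is the NARR annotation.
library(survival)
library(splines)

base_m   <- clogit(case ~ canopy + dist_water + sri + elev +
                     strata(stratum_id) + cluster(animal_year),
                   method = "approximate", data = steps)

linear_m <- clogit(case ~ canopy + canopy:ambient_temp +
                     dist_water + sri + elev +
                     strata(stratum_id) + cluster(animal_year),
                   method = "approximate", data = steps)

spline_m <- clogit(case ~ ns(canopy, df = 2) + ns(canopy, df = 2):ambient_temp +
                     dist_water + sri + elev +
                     strata(stratum_id) + cluster(animal_year),
                   method = "approximate", data = steps)
```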
Habitat selection model evaluation and validation
We evaluated model fit for each population-sex partition using the quasi-likelihood under independence criterion (QIC; [66]) because it is well suited for case-control models [19]. Finally, the predictive ability of model variants was assessed using leave-one-out cross validation (LOOCV), a k-fold cross validation variant [7] in which each individual animal is sequentially left out and predicted from the remaining data. Mean Spearman rank coefficients were used to determine the predictive ability of model variants. For each population-sex partition, the model with the highest correlation coefficients from LOOCV and the lowest QIC was considered the best. All spatial processing and statistical analyses were conducted in the statistical software R version 3.6.1 [72].
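Because QIC [66] has no single canonical implementation for clogit fits, the sketch below covers only the LOOCV half: withhold one animal, refit the model, score the withheld animal's steps, and correlate ranked score bins with the count of used points per bin (after [7]). The binning scheme and all names are illustrative, and the quantile breaks may need de-duplication in practice.

```r
# Hedged sketch of leave-one-out cross validation with Spearman rank scoring.
loocv_rho <- sapply(unique(steps$animal_id), function(id) {
  train <- subset(steps, animal_id != id)
  test  <- subset(steps, animal_id == id)

  fit <- clogit(case ~ ns(canopy, df = 2) + ns(canopy, df = 2):ambient_temp +
                  dist_water + sri + elev + strata(stratum_id),
                method = "approximate", data = train)

  # Linear predictor centered on the sample mean, so new strata are allowed
  test$score <- predict(fit, newdata = test, type = "lp", reference = "sample")
  bins <- cut(test$score,
              breaks = quantile(test$score, probs = seq(0, 1, 0.1)),
              include.lowest = TRUE, labels = FALSE)
  used_per_bin <- tapply(test$case, factor(bins, levels = 1:10), sum)
  cor(1:10, used_per_bin, method = "spearman", use = "complete.obs")
})
mean(loocv_rho, na.rm = TRUE)  # mean Spearman rank coefficient across animals
```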
In total, seven base, 28 linear interaction, and 28 spline interaction models were estimated. For the sake of parsimony, only the most biologically significant results are presented, summarized by sex and population. Elevation was collinear with distance-to-water in the Innoko population; we retained the latter because of its known importance in moose ecology [74, 80]. In all but one case (Koyukuk males, S2), spline-based models where percent canopy interacted with temperature outperformed linear interaction and base models and are thus the only models discussed (Tables 2 and 3). In contrast to the strong habitat selection responses of moose to canopy cover, we did not find evidence for other behavioral thermoregulation strategies. For example, we found no support that Alaska moose altered resource selection with increasing summer temperatures in response to topography (i.e., more northerly, cooler slopes), elevation (with the exception of one population, S2), or hydrology (i.e., by selecting to be closer to water).
Table 2 Model evaluation (QIC) and cross validation (LOOCV) for female moose organized by population. Base models contain no temperature covariates, while spline models incorporate nonlinear interactions between a given covariate and ambient temperature. In this case, "Spline %can2" refers to percent canopy interacted with ambient temperature with two spline segments, while "Spline %can3" refers to percent canopy interacted with ambient temperature with three spline segments. Decreases in QIC indicate a better model fit while increases in LOOCV indicate more predictive ability
Table 3 Model evaluation (QIC) and cross validation (LOOCV) for male moose organized by population. See additional descriptors in Table 2
The best fit spline models across all four populations occurred when percent canopy interacted with temperature using two to three knots. These spline interaction models had significant improvements in model fit compared to both the base models (ΔQIC = − 108 to − 284; Table 2) and the linear interaction models (not shown). Cross validation scores for spline interaction models experienced small to moderate improvements when compared to the base model (ΔLOOCV = + 1% to + 10%; Table 3).
In summer, female moose in all four regions selected for increased canopy cover nonlinearly as temperature increased (Fig. 2; S3). However, the magnitude of the selection response to thermal cover was most pronounced in the most southerly region (Susitna; β%canopy1 = 33.90, p < 0.001; β%canopy2 = 20.09, p < 0.001; Table 4) as well as the most northerly region (Koyukuk; β%canopy1 = 24.91, p < 0.001; β%canopy2 = 20.03, p < 0.001). Although the effect of canopy cover was reduced in both the Innoko moose (β%canopy1 = 14.82, p < 0.001; β%canopy2 = 9.01, p < 0.001) and the Tanana moose (β%canopy1 = 4.71, p < 0.001; β%canopy2 = 8.97, β%canopy3 = 7.70, p < 0.001), both populations still revealed highly statistically significant results indicating female moose selected nonlinearly for increased canopy cover as temperature increased.
Conditional probability of selection of spline-based thermal cover as a function of temperature for Alaskan female moose by region in summer months (June–August). We used natural splines with two to three degrees of freedom to represent the relationship between canopy cover and temperature. The probability of selection of denser canopy increased significantly with temperature during summer for all four regions; red lines indicate the 90th percentile and blue lines the 10th percentile of experienced temperature by region. Shaded bands represent a 95% confidence interval. Plots were created in the 'ggplot2' R package [94]
Table 4 Best habitat selection models by population for female moose (Alces alces gigas) in Alaska from the step-selection function analysis. The best models across all four populations occurred when percent canopy interacted with temperature nonlinearly and are presented here. Natural spline (sp) predictors, where percent canopy interacted with temperature, have coefficients estimated for each line segment. Therefore, numbers one through three in the spline predictor terms represent an individual line segment. Only one of four populations (Tanana) has a third set of coefficients. In the Innoko population, elevation was collinear with distance-to-water and was thus excluded. All predictors were standardized by dividing by two times their standard deviation, making coefficients directly comparable. Robust standard errors are reported
Female moose in the Koyukuk and Susitna regions also showed an increased affinity for water, as demonstrated by the significant negative beta coefficients for the "distance-to-water" predictor (Table 4), suggesting that moose in these regions preferred to be closer to water. We also observed other selection behaviors in the Innoko and Susitna female moose. Female moose in the Innoko population showed an avoidance of areas of high solar radiation (βSRI = − 0.18, p < 0.001), while females in the Susitna population showed an avoidance of higher elevation locations (βelevation = − 1.21, p < 0.001), but these results were independent of temperature.
For males, the best fit spline models in the Susitna and Innoko populations were also from percent canopy interacted with temperature (ΔQIC = −142 and − 97 respectively; Table 3). For the Koyukuk males, the best fit spline model came from elevation interacted with temperature (S2), but males in this region also saw improved model fit from percent canopy interacted with temperature (ΔQIC = − 54). Cross validation scores for spline interaction models (percent canopy interacted with temperature) in all three male populations experienced small to moderate increases when compared to the base model (ΔLOOCV = + 3% to + 6%).
Male moose in all three populations (no males were collared in the Tanana population, see Table 1) selected for increased canopy cover as temperature increased (Fig. 3; S3). However, as with the females, the selection response to thermal cover was most pronounced in the most northerly region (Koyukuk; β%canopy1 = 27.84, p < 0.001; β%canopy2 = 24.30, p < 0.001; Table 5) as well as the most southerly region (Susitna; β%canopy1 = 22.51, p < 0.001; β%canopy2 = 14.71, p < 0.001). The effect of canopy cover was reduced in the Innoko males (β%canopy1 = 13.02, p < 0.001; β%canopy2 = 8.50, p < 0.001), yet the results were still highly statistically significant, indicating moose selected for increased canopy cover as temperature increased.
Conditional probability of selection of spline-based thermal cover as a function of temperature for Alaskan male moose by region in summer months (June–August). We used natural splines with two to three degrees of freedom to represent the relationship between canopy cover and temperature. The probability of selection of denser canopy increased significantly with temperature during summer for all three regions; red lines indicate the 90th percentile and blue lines the 10th percentile of experienced temperature by region. Shaded bands represent a 95% confidence interval. Plots were created in the 'ggplot2' R package [94]
Table 5 Best habitat selection models for male Alaska moose from the step-selection function analysis. Natural spline (sp) predictors, where percent canopy interacted with temperature, have coefficients estimated for each line segment. Numbers one and two in the spline predictors represent an individual line segment. All three populations had temperature-canopy interactions with two-line segments. In the Innoko population, elevation was collinear with distance-to-water and was thus excluded. All predictors were standardized by dividing by two times their standard deviation. Robust standard errors are reported
Additionally, male moose in the Susitna population showed increased selection of locations closer to water and, like their female counterparts, avoided areas of higher elevation (βelevation = − 1.11, p < 0.001). Similarly, Innoko males showed avoidance for areas with increased topographical solar radiation exposure (βSRI = − 0.12, p < 0.001), but these selection behaviors were independent of temperature.
Our results demonstrate that moose at the northern extent of their range altered habitat selection patterns in response to temperature. Across all populations and sexes, moose selected for denser canopy cover as temperature increased, which is consistent with previous studies [20, 56, 88], and our prediction that moose would select cooler locations as ambient temperature increased.
Magnitude of selection response to temperature varied by sex and population
Our habitat selection results also demonstrated that the magnitude of moose selection for dense canopy cover at higher temperatures varied between populations and sexes (Figs. 2 and 3; S2 and S3; Tables 4 and 5). In two (Innoko and Susitna) of the three populations containing both male and female moose, females demonstrated a stronger selection response for denser canopy at higher temperatures than males. This may be linked to calving and nursing demands on female moose [79] who may more strongly select for denser canopy cover to avoid spending calories to thermoregulate using physiological mechanisms. However, we were unable to distinguish between females with and without calves in this study. This likely influenced our results as females accompanied by their calves tend to increase selection for areas that provide cover for predator avoidance [24, 43] and drastically change their movements both before and after parturition [81].
We also considered whether population differences in selection strength may be related to the availability of thermal cover between regions (i.e., a functional response), where animals alter their habitat selection based on habitat availability [3, 63]. However, our results cannot entirely be explained by a functional response in habitat selection for thermal cover. For example, the Koyukuk moose showed strong selection for thermal cover as temperature increased but also had the second lowest available canopy cover regionally (37.6%; S4). Thus, we do not think a functional response per se explains regional differences in selection strength; rather, we anticipate that a combination of environmental factors interact in complex ways to create a suite of unique habitat differences across regions (S5). However, to fully understand functional responses in habitat selection, one must also consider the different spatial scales of selection [38, 63], as such responses are often evaluated at the landscape or home range scale [30, 32, 33, 60]. Thus, the lack of a functional response of moose to canopy cover in our study may be related to the fine-scale nature of our analytical framework and not an absence of a functional response of moose to thermal cover.
Implications of habitat selection results within a changing climate
The consistent patterns of resource selection for thermal refugia under increasing temperatures found in this study may have important implications for moose resilience in arctic-boreal landscapes responding to increased temperatures from global climate change. For instance, landscape changes associated with wildfire are generally reducing canopy cover from coniferous species, and annual area burned in North American boreal systems doubled in the last half century [44], a change strongly linked to climate and annual weather patterns [37, 45]. Vegetation in interior Alaska now has less old spruce forest, the thermal refugia most commonly used by moose, and a greater proportion of early successional vegetation than before 1990 [51]. Burn severity also plays a major role in how boreal forests recover after wildfire [26]: areas of low burn severity in black spruce stands tend to undergo self-replacement succession [39], while areas of high burn severity favor relay succession of deciduous species over black spruce because of increased exposure of mineral soil and reduced seedbank availability [40, 78]. For moose, such changes in habitat structure may provide new forage resources [4, 47], but may also limit the thermal refugia available for behavioral thermoregulation immediately after disturbance events, prior to vegetation regeneration, or in late spring (March–April) prior to budburst when moose have not yet shed their winter coats.
Limitations and future work
Our results showed moose did not select for areas closer to water as temperature increased, which differs from previous observations where moose sought wetland or riparian areas to thermoregulate [76, 80]. We believe our results differed due to the spatial resolution (30 m grid cell size) used to represent this behavioral strategy, which restricted detection of smaller aquatic microhabitats important to moose. Unfortunately, no finer-scale map currently exists, which limited our ability to study selection for aquatic microhabitats that may be especially relevant in flatter, more swamp-like areas such as the Tanana and Innoko regions.
Based on our results and the limitations encountered, we make three broad recommendations for future work regarding animal behavioral thermoregulation. First, future work should investigate the vulnerability and resilience of arctic-boreal animals to structural habitat changes as forage resources increase and thermal cover decreases (e.g., [52, 88]). For example, recent work on Alpine ibex (Capra ibex) – another heat-sensitive ungulate – indicates that male ibex minimize heat stress at the expense of optimal foraging [9]. Unfortunately, we did not have a detailed forage quality or biomass model calibrated for our study areas, and we hesitated to use categorical land cover maps because of criticisms regarding their use [17]. In Alaska, there is not a wide enough distinction between shrub classes in land cover maps to determine whether selected shrub habitats correspond to palatable species and foraging behavior. For instance, "shrub" in most vegetative classifications does not distinguish between species that provide both shade and forage (Salicaceae, Betula neoalaskana) and those that provide shade only (Alnus, B. nana), which is critical for parsing selection behavior. Moose maximize energy intake in the hottest parts of summer, so selection for forage biomass and quality plausibly overrides thermal stress and predation risk for a time. However, we were unable to directly assess this tradeoff due to data limitations.
Second, we suggest testing for differences in female selection and movement relative to presence or absence of offspring. Such a distinction would connect nicely to calls to link behavior and movement to population outcomes [10, 59], especially when considering the thermal environment as survival and fitness often depend on the availability of suitable habitat to buffer against thermal extremes in a landscape [25].
Finally, a critical next step is to evaluate how habitat selection under thermal stress impacts individual fitness and population dynamics, as temperature plays an important role in limiting fecundity in other mammals [18, 93] including moose [50, 62]. This is especially important as population responses to climate change can vary dramatically. For instance, Joly et al. [42] found the influence of climate on caribou herds in Alaska was not uniform; instead, western populations increased in size while northwestern populations declined as a result of intensity changes in the Pacific Decadal Oscillation. Similarly, using detailed demographic information for caribou (Rangifer tarandus), red deer (Cervus elaphus), and elk (C. canadensis) across the Northern Hemisphere, Post et al. [70] showed that different population responses to climate varied in both direction and magnitude.
The impacts of climate change in arctic-boreal regions increase landscape heterogeneity through processes such as increased wildfire intensity and area burned, which can significantly alter the thermal environment available to an animal. Despite recognizing the importance of thermal conditions to animals, there is a distinct lack of research on how animals might respond to climate driven changes in thermal refugia. Our regional assessment provides insight on how Alaska moose may respond to changes in ambient temperature, where statewide annual temperatures are averaging an increase of 0.4 °C per decade and summer temperatures are projected to increase 2–5 °C by midcentury [51]. Understanding habitat selection and movement patterns related to behavioral thermoregulation is a first step toward identifying areas capable of providing thermal relief for moose and other species impacted by climate change.
The GPS-telemetry data that support the findings of this study are owned by the Alaska Department of Fish and Game, the National Park Service, and the Bureau of Land Management, but restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available.
ABoVE:
Arctic-boreal vulnerability experiment
ADFG:
Alaska Department of Fish and Game
AMAP:
Arctic Monitoring and Assessment Programme
CLR:
Conditional-logistic regression
Env-DATA:
Environmental-data automated track annotation
GEE:
Generalized estimating equations
IPCC:
Intergovernmental Panel on Climate Change
LOOCV:
Leave-one-out cross validation
NARR:
North American regional reanalysis
NASA:
National Aeronautics and Space Administration
NOAA:
National Oceanic and Atmospheric Administration
QIC:
Quasi-likelihood under independence criterion
USGS:
United States Geological Survey
RMSE:
Root mean square error
RSAGA:
R System for Automated Geospatial Analysis
Sp:
Natural spline
SRI:
Solar radiation index
SSF:
Step selection function
Alaska Department of Fish and Game (ADFG). Our wealth maintained: a strategy for conserving Alaska's diverse wildlife and fish resources. Juneau: Alaska Department of Fish and Game; 2006. p. xviii+824. https://www.adfg.alaska.gov/static/species/wildlife_action_plan/cwcs_full_document.pdf.
Arctic Monitoring and Assessment Programme (AMAP). Snow, water, ice, and permafrost in the Arctic: summary for policy-makers. Oslo; 2017. Retrieved from www.amap.no/swipa.
Arthur SM, Manly BFJ, Garner GW. Assessing habitat selection when availability changes. Ecology. 1996;77(1):215–27.
Beck PSA, Goetz SJ, Mack MC, Alexander HD, Jin Y, Randerson JT, et al. The impacts and implications of an intensifying fire regime on Alaskan boreal forest composition and albedo. Glob Chang Biol. 2011;17(9):2853–66. https://doi.org/10.1111/j.1365-2486.2011.02412.x.
Bourgoin G, Garel M, Blanchard P, Dubray D, Maillard D, Gaillard JM. Daily responses of mouflon (Ovis gmelini musimon × Ovis sp.) activity to summer climatic conditions. Can J Zool. 2011;89(9):765–73. https://doi.org/10.1139/Z11-046.
Boyce MS. Scale for resource selection functions. Divers Distrib. 2006;12(3):269–76. https://doi.org/10.1111/j.1366-9516.2006.00243.x.
Boyce MS, Vernier PR, Nielsen SE, Schmiegelow FKA. Evaluating resource selection functions. Ecol Model. 2002;157(2–3):281–300.
Brenning A. Statistical geocomputing combining R and SAGA: the example of landslide susceptibility analysis with generalized additive models. In: Boehner J, Blaschke T, Montanarella L, editors. SAGA - seconds out (= hamburger Beitraege zur Physischen Geographie und Landschaftsoekologie), vol. 19; 2008. p. 23–32.
Brivio F, Zurmühl M, Grignolio S, Von Hardenberg J, Apollonio M, Ciuti S. Forecasting the response to global warming in a heat-sensitive species. Sci Rep. 2019;9(3048):1–16. https://doi.org/10.1038/s41598-019-39450-5.
Brodie JF, Post ES, Doak DF. Wildlife conservation in a changing climate. Chicago: University of Chicago Press; 2012.
Broders HG, Coombs AB, Mccarron JR. Ecothermic responses of moose (Alces alces) to thermoregulatory stress on mainland Nova Scotia. Alces. 2012;48:53–61.
Bump JK, Webster CR, Vucetich JA, Rolf O, Shields JM, Powers MD. Ungulate carcasses perforate ecological filters and create biogeochemical hotspots in forest herbaceous layers allowing trees a competitive advantage. Ecosystems. 2009;12(6):996–1007. https://doi.org/10.1007/s10021-009-9274-0.
Burkett VR, Wilcox DA, Stottlemyer R, Barrow W, Fagre D, Baron J, et al. Nonlinear dynamics in ecosystem response to climatic change: case studies and policy implications. Ecol Complex. 2005;2(4):357–94. https://doi.org/10.1016/j.ecocom.2005.04.010.
Cameron RD, Smith T, Fancy SG, Gerhart KL, White RG. Calving success of female caribou in relation to body weight. Can J Zool. 1993;71(3):480–6.
Christie KS, Ruess RW, Lindberg MS, Mulder CP. Herbivores influence the growth, reproduction, and morphology of a widespread Arctic willow. PLoS One. 2014;9(7):1–9. https://doi.org/10.1371/journal.pone.0101716.
Clarke A, Rothery P. Scaling of body temperature in mammals and birds. Funct Ecol. 2008;22(1):58–67. https://doi.org/10.1111/j.1365-2435.2007.01341.x.
Coops NC, Wulder MA. Breaking the habit(at). Trends Ecol Evol. 2019;34(7):585–7. https://doi.org/10.1016/j.tree.2019.04.013.
Corlatti L, Gugiatti A, Ferrari N, Formenti N, Trogu T, Pedrotti L. The cooler the better? Indirect effect of spring–summer temperature on fecundity in a capital breeder. Ecosphere. 2018;9(6):1–13. https://doi.org/10.1002/ecs2.2326.
Craiu RV, Duchesne T, Fortin D. Inference methods for the conditional logistic regression model with longitudinal data. Biom J. 2008;50(1):97–109.
Demarchi MW, Bunnell FL. Forest cover selection and activity of cow moose in summer. Acta Theriol. 1995;4(1):23–36.
Dodge S, Bohrer G, Weinzierl R, Davidson S, Kays R, Douglas D, et al. The environmental-DATA automated track annotation (Env-DATA) system: linking animal tracks with environmental data. Movement Ecology. 2013;1(1):3. https://doi.org/10.1186/2051-3933-1-3.
Dormann CF, Elith J, Bacher S, Buchmann C, Carl G, Carr G, et al. Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography. 2013;36(1):27–46. https://doi.org/10.1111/j.1600-0587.2012.07348.x.
Dussault C, Ouellet J-P, Courtois R, Huot J, Breton L, Larochelle J. Behavioural responses of moose to thermal conditions in the boreal forest. Ecoscience. 2004;11(3):321–8.
Dussault C, Ouellet J, Courtois R, Huot J, Breton L, Jolicoeur H. Linking moose habitat selection to limiting factors. Ecography. 2005;28(5):619–28.
Elmore RD, Carroll JM, Tanner EP, Hovick TJ, Grisham BA, Fuhlendorf SD, et al. Implications of the thermal environment for terrestrial wildlife management. Wildl Soc Bull. 2017;41(2):183–93. https://doi.org/10.1002/wsb.772.
Epting J, Verbyla D. Landscape-level interactions of prefire vegetation, burn severity, and postfire vegetation over a 16-year period in interior Alaska. Can J For Res. 2005;35(6):1367–77. https://doi.org/10.1139/X05-060.
Fortin D, Beyer HL, Boyce MS, Smith DW, Duchesne T, Mao JS. Wolves influence elk movements: behavior shapes a trophic cascade in Yellowstone National Park. Ecology. 2005;86(5):1320–30.
Gelman A. Scaling regression inputs by dividing by two standard deviations. Stat Med. 2008;27(15):2865–73. https://doi.org/10.1002/sim.3107.
Gurarie E, Mahoney P, LaPoint S, Davidson S. Above: functions and methods for the animals on the move project of the Arctic boreal vulnerability experiment (ABoVE - NASA). R package version 0.11; 2018.
Hansen BB, Herfindal I, Aanes R, Sæther B-E, Henriksen S. Functional response in habitat selection and the tradeoffs between foraging niche components in a large herbivore. Oikos. 2009;118(6):859–72.
Hansen MC, Potapov PV, Moore R, Hancher M, Turubanova SA, Tyukavina A, et al. High-resolution global maps of forest cover change. Science. 2013;342(6160):850–3. https://doi.org/10.1126/science.1244693.
Hayes RD, Harestad AS. Wolf functional response and regulation of moose in the Yukon. Can J Zool. 2000;78(1):60–6.
Hebblewhite M, Merrill E. Modelling wildlife-human relationships for social species with mixed-effects resource selection models. J Appl Ecol. 2008;45(3):834–44. https://doi.org/10.1111/j.1365-2664.2008.01466.x.
Hijmans RJ. Raster: geographic data analysis and modeling. R package version 3.0–2; 2019. https://CRAN.R-project.org/package=raster.
Hosmer DW, Lemeshow S. Applied logistic regression. 2nd ed. New York: Wiley; 2000.
Intergovernmental Panel on Climate Change (IPCC). In: Core Writing Team, Pachauri RK, Meyer LA, editors. Climate Change 2014: Synthesis Report. Contribution of working groups I, II and III to the fifth assessment report of the intergovernmental panel on climate change. Geneva: IPCC; 2014. p. 151.
Johnson EA. Fire and vegetation dynamics: studies from the north American boreal forest. New York: Cambridge University Press; 1996.
Johnson DH. The comparison of usage and availability measurements for evaluating resource preference. Ecology. 1980;61(1):65–71.
Johnstone JF, Chapin FS III. Fire interval effects on successional trajectory in boreal forests of Northwest Canada. Ecosystems. 2006;9(2):268–77. https://doi.org/10.1007/S10021-005-0061-2.
Johnstone JF, Hollingsworth TN, Chapin FS III, Mack MC. Changes in fire regime break the legacy lock on successional trajectories in Alaskan boreal forest. Glob Chang Biol. 2010;16(4):1281–95. https://doi.org/10.1111/j.1365-2486.2009.02051.x.
Joly K, Craig T, Sorum MS, McMillan JS, Spindler MA. Variation in fine-scale movements of moose in the upper Koyukuk River drainage, northcentral Alaska. Alces. 2015;51:97–105.
Joly K, Klein DR, Verbyla DL, Rupp TS, Chapin FS III. Linkages between large-scale climate patterns and the dynamics of Arctic caribou populations. Ecography. 2011;34(2):345–52. https://doi.org/10.1111/j.1600-0587.2010.06377.x.
Joly K, Sorum MS, Craig T, Julianus EL. The effects of sex, terrain, wildfire, winter severity, and maternal status on habitat selection by moose in north-Central Alaska. Alces. 2016;52:101–15.
Kasischke ES, Turetsky MR. Recent changes in the fire regime across the north American boreal region — spatial and temporal patterns of burning across Canada and Alaska. Geophys Res Lett. 2006;33(9). https://doi.org/10.1029/2006GL025677.
Kasischke ES, Verbyla DL, Rupp TS, McGuire AD, Murphy KA, Jandt R, et al. Alaska's changing fire regime — implications for the vulnerability of its boreal forests 1. Candian J Forest Res. 2010;40(7):1313–24. https://doi.org/10.1139/X10-098.
Keating KA, Gogan PJP, Vore JM, Irby L. A simple solar radiation index for wildlife habitat studies. J Wildl Manag. 2007;71(4):1344–8. https://doi.org/10.2193/2006-359.
Kelly R, Chipman ML, Higuera PE, Stefanova I, Brubaker LB, Sheng F. Recent burning of boreal forests exceeds fire regime limits of the past 10,000 years. Proc Natl Acad Sci. 2013;110(32):13055–60. https://doi.org/10.1073/pnas.1305069110.
Kielland K, Bryant JP. Moose herbivory in taiga: effects on biogeochemistry and vegetation dynamics in primary succession. Oikos. 1998;82(2):377–83.
Leblond M, Dussault C, Ouellet JP. What drives fine-scale movements of large herbivores? A case study using moose. Ecography. 2010;33(6):1102–12. https://doi.org/10.1111/j.1600-0587.2009.06104.x.
Lenarz MS, Nelson ME, Schrage MW, Edwards AJ. Temperature mediated moose survival in northeastern Minnesota. J Wildl Manag. 2009;73(4):503–10. https://doi.org/10.2193/2008-265.
Markon C, Gray S, Berman M, Eerkes-Medrano L, Hennessy T, Huntington H, et al. Alaska. In: Reidmiller DR, Avery CW, Easterling DR, Kunkel KE, Lewis KLM, Maycock TK, Stewart BC, editors. Impacts, risks, and adaptation in the United States: fourth National Climate Assessment, volume II. Washington, DC: US Global Change Research Program; 2018. p. 11–85–1241.
Mason TH, Brivio F, Stephens PA, Apollonio M, Grignolio S. The behavioral trade-off between thermoregulation and foraging in a heatsensitive species. Behav Ecol. 2017;28(3):908–18.
McCain CM, King SRB. Body size and activity times mediate mammalian responses to climate change. Glob Chang Biol. 2014;20(6):1760–9. https://doi.org/10.1111/gcb.12499.
McCann NP, Moen RA, Harris TR. Warm-season heat stress in moose (Alces alces). Can J Zool. 2013;91(12):893–8 Retrieved from http://www.nrcresearchpress.com/doi/abs/10.1139/cjz-2013-0175.
McLaren BE, Peterson RO. Wolves, moose, and tree rings on isle Royale. Science. 1994;266(5190):1555–8.
Melin M, Matala J, Mehtätalo L, Tiilikainen R, Tikkanen OP, Maltamo M, et al. Moose (Alces alces) reacts to high summer temperatures by utilizing thermal shelters in boreal forests - an analysis based on airborne laser scanning of the canopy structure at moose locations. Glob Chang Biol. 2014;20(4):1115–25. https://doi.org/10.1111/gcb.12405.
Mesinger FM, DiMego G, Kalnay E, Mitchell K, Shafran PC, Ebiuzaki W, et al. North american regional reanalysis. Am Meterological Soc. 2006;87(3):343–60. https://doi.org/10.1175/BAMS-87-3-343.
Montgomery RA, Redilla KM, Moll RJ, Van Moorter B, Rolandsen CM, Millspaugh JJ, et al. Movement modeling reveals the complex nature of the response of moose to ambient temperatures during summer. J Mammal. 2019;100(1):169–77. https://doi.org/10.1093/jmammal/gyy185.
Morales JM, Moorcroft PR, Matthiopoulos J, Frair JL, Kie JG, Powell RA, et al. Building the bridge between animal movement and population dynamics. Philos Transact Royal Society B: Biol Sci. 2010;365(1550):2289–301. https://doi.org/10.1098/rstb.2010.0082.
Moreau G, Fortin D, Couturier S, Duchesne T. Multi-level functional responses for wildlife conservation: the case of threatened caribou in managed boreal forests. J Appl Ecol. 2012;49(3):611–20. https://doi.org/10.1111/j.1365-2664.2012.02134.x.
Muff S, Signer J, Fieberg J. Accounting for individual-specific variation in habitat-selection studies: efficient estimation of mixed-effects models using Bayesian or frequentist computation. J Anim Ecol. 2020;89(1):80–92. https://doi.org/10.1111/1365-2656.13087.
Murray DL, Cox EW, Ballard WB, Whitlaw HA, Lenarz MS, Custer TW, et al. Pathogens, nutritional deficiency, and climate influences on a declining moose population. Wildl Monogr. 2006;166:1), 1–30.
Mysterud A, Ims R. Functional responses in habitat use: availability influences relative use in trade-off situations. Ecology. 1998;79(4):1435–41. https://doi.org/10.2307/176754.
National Oceanic and Atmospheric Administration (NOAA). National Centers for environmental information, temperature summaries; 2019. [FIPS:02]. Retrieved from https://www.ncdc.noaa.gov/cdo-web/search, [Accessed 1/6/2020].
Nowacki GJ, Spencer P, Fleming M, Jorgenson T. Unified ecoregions of Alaska, U.S. Geol Surv Open File Rep. 2003. p. 02–297 (map). https://pubs.er.usgs.gov/publication/ofr2002297.
Pan W. Akaike's information criterion in generalized estimating equations. Biometrics. 2001;57(1):120–5.
Paragi TF, Kellie KA, Peirce JM, Warren MJ. Movements and Sightability of moose in game management unit 21E. Juneau: Alaska Department of Fish and Game; 2017.
Pekel JF, Cottam A, Gorelick N, Belward AS. High-resolution mapping of global surface water and its long-term changes. Nature. 2016;540(7633):418–22. https://doi.org/10.1038/nature20584.
Porter, Claire, Morin, Paul; Howat, Ian; Noh, Myoung-Jon; Bates, Brian; Peterman, Kenneth; Keesey, Scott; Schlenk, Matthew; Gardiner, Judith; Tomko, Karen; Willis, Michael; Kelleher, Cole; Cloutier, Michael; Husby, Eric; Foga, Steven; Nakamura, Hitomi; Platson, Melisa; Wethington, Michael, Jr.; Williamson, Cathleen; Bauer, Gregory; Enos, Jeremy; Arnold, Galen; Kramer, William; Becker, Peter; Doshi, Abhijit; D'Souza, Cristelle; Cummens, Pat; Laurier, Fabien; Bojesen, Mikkel, 2018, "ArcticDEM", https://doi.org/10.7910/DVN/OHHUKH, Harvard Dataverse, V1, 2018, [Accessed 10/1/2018].
Post E, Brodie J, Hebblewhite M, Anders AD, Maier JAK, Wilmers CC. Global population dynamics and hot spots of response to climate change. Bioscience. 2009;59(6):489–97. https://doi.org/10.1525/bio.2009.59.6.7.
Prima MC, Duchesne T, Fortin D. Robust inference from conditional logistic regression applied to movement and habitat selection analysis. PLoS One. 2017;12(1):1–13. https://doi.org/10.1371/journal.pone.0169779.
R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2019. URL https://www.R-project.org/.
Renecker LA, Hudson RJ. Seasonal energy expenditures and thermoregulatory responses of moose. Can J Zool. 1986;64(2):322–7.
Renecker LA, Schwartz CC. Food habits and feeding behavior. In: Franzmann, Schwartz CC, editors. Ecology and Management of the North American Moose. 2nd ed. Washington, D.C.: Wildlife Management Institutions; 2007. p. 403–39.
Rönnegård L, Forslund P, Danell Ö. Lifetime patterns in adult female mass, reproduction, and offspring mass in semidomestic reindeer (Rangifer tarandus tarandus). Can J Zool. 2002;80(12):2047–55. https://doi.org/10.1139/Z02-192.
Schwartz CC, Renecker LA. Nutrition and energetics. In: Franzmann, Schwartz CC, editors. Ecology and Management of the North American Moose. 2nd ed. Washington, D.C.: Wildlife Management Institutions; 2007. p. 441–78.
Screen JA. Arctic amplification decreases temperature variance in northern mid- to high-latitudes. Nat Clim Chang. 2014;4(7):577–82. https://doi.org/10.1038/NCLIMATE2268.
Shenoy A, Johnstone JF, Kasischke ES, Kielland K. Persistent effects of fire severity on early successional forests in interior Alaska. For Ecol Manage. 2011;261(3):381–90. https://doi.org/10.1016/j.foreco.2010.10.021.
Speakman JR, Król E. Maximal heat dissipation capacity and hyperthermia risk: neglected key factors in the ecology of endotherms. J Anim Ecol. 2010;79(4):726–46. https://doi.org/10.1111/j.1365-2656.2010.01689.x.
Street GM, Rodgers AR, Fryxell JM. Mid-day temperature variation influences seasonal habitat selection by moose. J Wildl Manag. 2015;79(3):505–12. https://doi.org/10.1002/jwmg.859.
Testa JW, Becker EF, Lee GR. Movements of female moose in relation to birth and death of calves. Alces. 2000;36:155–62.
Therneau T. A package for survival analysis in S. version 2.38; 2015. https://CRAN.R-project.org/package=survival.
Thompson DP, Barboza PS, Crouse JA, McDonough TJ, Badajos OH, Herberg AM. Body temperature patterns vary with day, season, and body condition of moose (Alces alces). J Mammal. 2019;100(5):1466–78.
Thompson DP, Crouse JA, Jaques S, Barboza PS. Redefining physiological responses of moose (Alces alces) to warm environmental conditions. J Therm Biol. 2020;102581.
Timmermann HR, McNicol JG. Moose habitat needs. For Chron. 1988;64(3):238–45.
Thurfjell H, Ciuti S, Boyce MS. Applications of step-selection functions in ecology and conservation. Movement Ecology. 2014;2(4):1–12. https://doi.org/10.1186/2051-3933-2-4.
van Beest FM, Milner JM. Behavioural responses to thermal conditions affect seasonal mass change in a heat-sensitive northern ungulate. PLoS One. 2013;8(6). https://doi.org/10.1371/journal.pone.0065972.
van Beest FM, Van Moorter B, Milner JM. Temperature-mediated habitat use and selection by a heat-sensitive northern ungulate. Anim Behav. 2012;84(3):723–35. https://doi.org/10.1016/j.anbehav.2012.06.032.
van Beest FM, Rivrud IM, Loe LE, Milner JM, Mysterud A. What determines variation in home range size across spatiotemporal scales in a large browsing herbivore? J Anim Ecol. 2011;80(4):771–85. https://doi.org/10.1111/j.1365-2656.2011.01829.x.
Vors LS, Boyce MS. Global declines of caribou and reindeer. Glob Chang Biol. 2009;15(11):2626–33. https://doi.org/10.1111/j.1365-2486.2009.01974.x.
Walker WH, Meléndez-Fernández OH, Nelson RJ, Reiter RJ. Global climate change and invariable photoperiods: a mismatch that jeopardizes animal fitness. Ecol Evol. 2019;9(17):10044–54. https://doi.org/10.1002/ece3.5537.
Walther GR. Community and ecosystem responses to recent climate change. Philos Transact Royal Society B: Biol Sci. 2010;365(1549):2019–24. https://doi.org/10.1098/rstb.2010.0021.
Wells K, O'Hara RB, Cooke BD, Mutze GJ, Prowse TAA, Fordham DA. Environmental effects and individual body condition drive seasonal fecundity of rabbits: identifying acute and lagged processes. Oecologia. 2016;181(3):853–64. https://doi.org/10.1007/s00442-016-3617-2.
Wickham H. ggplot2: elegant graphics for data analysis. New York: Springer-Verlag; 2016.
Wolken JM, Hollingsworth TN, Rupp TS, Chapin FS, Trainor SF, Barrett TM, et al. Evidence and implications of recent and projected climate change in Alaska's forest ecosystems. Ecosphere. 2011;2(11):1–35. https://doi.org/10.1890/ES11-00288.1.
We sincerely thank data owners who supplied the moose GPS-telemetry data used in this study as well as biologists whose edits contributed significantly to the clarity of this paper (specifically Tom Paragi, Graham Frye, Glenn Stout, Jeffrey Stetz, and Erin Julianus).
Funding for this work was provided by the National Aeronautics and Space Administration's (NASA) Arctic Boreal Vulnerability Experiment (ABoVE) grant numbers: NNX15AT89A, NNX15AW71A, NNX15AU20A, NNX15AV92A. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Department of Natural Resources and Society, University of Idaho, Moscow, ID, USA
Jyoti S. Jennewein, Lee A. Vierling & Jan U. H. Eitel
Wildlife Biology Program, Department of Ecosystem and Conservation Science, W.A. Franke College of Forestry and Conservation, University of Montana, Missoula, MT, USA
Mark Hebblewhite
College of the Environment, University of Washington, Seattle, WA, USA
Peter Mahoney
Department of Fish and Wildlife Sciences, University of Idaho, Moscow, ID, USA
Sophie Gilbert
School of the Environment, Washington State University, Pullman, WA, USA
Arjan J. H. Meddens
Lamont-Doherty Earth Observatory, Columbia University, Palisades, NY, USA
Natalie T. Boelman
National Park Service, Gates of the Arctic National Park and Preserve, Fairbanks, AK, USA
Kyle Joly
Alaska Department of Fish and Game, 1800 Glenn Hwy #2, Palmer, AK, USA
Kimberly Jones
Alaska Department of Fish and Game, Division of Wildlife Conservation, 1300 College Rd, Fairbanks, Alaska, USA
Kalin A. Kellie
Department of Forestry and Wildlife Management, Inland Norway University of Applied Sciences, Evenstad, Norway
Scott Brainerd
McCall Outdoor Science School, University of Idaho, McCall, ID, USA
Jan U. H. Eitel
JSJ was the primary analyst and author; MH, PM, SG, AJHM, NB, KJ, KAK, SB, LAV, and JUHE all contributed to and advised on methodology, writing, and manuscript edits. The authors read and approved the final manuscript.
Correspondence to Jyoti S. Jennewein.
All capture and handling protocols for moose in this study adhered to the Alaska Animal Care and Use Committee approval process (#07–11) as well as the Institutional Animal Care and Use Committee Protocol (#09–01).
Additional file 1: Supplementary 1: Temperature Validation. Supplementary 2: Koyukuk males spline model results for elevation and temperature interaction. Supplementary 3: Interactive 3D plots of the interaction between ambient temperature and canopy cover. Supplementary 4: Used-Available Tables of Covariates. Supplementary 5: Regional Habitat Features. Figure 1e: Regional variation in elevation; ANOVA results show that all regions differ from each other statistically (F = 2705, p < 0.001). Figure 2e: Regional variation in ambient temperature; ANOVA results show that all regions differ from each other statistically (F = 2705, p < 0.001), with Tanana showing the highest temperatures, Innoko second, Koyukuk third, and Susitna fourth. Figure 3: Regional variation in cloud cover; ANOVA results show all regions differ from each other statistically (F = 1472, p < 0.001), except Koyukuk and Susitna. Table 1E: Regional variation in fixes occurring in the rain; percent estimated by proportionally comparing the number of fixes in the rain to the total number of fixes regionally.
Jennewein, J.S., Hebblewhite, M., Mahoney, P. et al. Behavioral modifications by a large-northern herbivore to mitigate warming conditions. Mov Ecol 8, 39 (2020). https://doi.org/10.1186/s40462-020-00223-9
Behavioral thermoregulation
Thermal stress
What is the highest possible expanded octet?
Often called "hypervalent", chemicals like phosphorous pentachloride and sulfur hexafluoride are possible due to the fact that their central atoms form covalent bonds with more than four other atoms, giving rise to uncommon arrangements.
In the case of $\ce{PCl5}$, the phosphorus atom forms 5 bonds, giving rise to a trigonal bipyramidal shape according to the VSEPR model. With $\ce{SF6}$, the sulfur atom forms 6 bonds, giving rise to an octahedral shape.
My question is, which central atom can achieve the highest number of bonds through the use of an expanded octet, and what is its shape?
The greatest I can think of is iodine heptafluoride, whose pentagonal bipyramidal shape arises from the central iodine atom forming 7 bonds to fluorine.
molecular-structure valence-bond-theory vsepr-theory
Nerdatope
$\begingroup$ What you really want to know is what is the highest possible coordination number. It's possible to go quite high; take a look at this Wikipedia article. Also, though it's not completely related, I used to have a great link with all the (theoretical) lowest energy coordination geometries up to massive coordination numbers like 30. I'll see if I can find it. $\endgroup$
– Nicolau Saker Neto
$\begingroup$ Interesting, I never knew about those. It would seem strange to have one equal to 30, as it would seem that such an arrangement would require the use of electrons from lower principle energy levels, if I'm reading that article correctly, that is. $\endgroup$
– Nerdatope
$\begingroup$ Oh the thing I said about a coordination number of 30 is purely theoretical, just a mathematical investigation for the fun of it! The highest I know is actually 16 though, from uranocene. Though perhaps one should establish a difference between coordination number for single-atom ligands (i.e. hapticity 1) and for multiple-atom ligands (hapticity greater than 1). $\endgroup$
$\begingroup$ as for doubtful octet exp. concept: chemistry.stackexchange.com/questions/13949/…, chemistry.stackexchange.com/questions/19433/… $\endgroup$
– Mithoron
$\begingroup$ also chemistry.stackexchange.com/questions/444/… $\endgroup$
I took an interest in this question because it's something I recently wondered myself. First of all, I should clarify that while you mention hypervalency, what you seem interested in is hypercoordination, or even more generally, just compounds with high coordination numbers (hypercoordination is used specifically when the number of ligands in a compound is larger than "normal"). Hypercoordination and high coordination numbers are entirely independent of hypervalency or VSEPR theory altogether. Regardless of the precise electronic structure in a compound, coordination numbers can often be determined far more directly, especially when the compounds can create crystal structures for x-ray crystallography or neutron diffraction.
Usually there is little focus given to compounds with more than six ligands, as the vast majority of compounds will have atoms surrounded by six or less ligands. However, there are some representatives for higher coordination numbers. Some books will mention iodine heptafluoride, $\ce{IF7}$, as a seven-coordinate compound, with its pentagonal bipyramidal structure. It is still possible to add a fluoride and obtain an example of coordination number 8 in the octafluoroiodate(VII) anion, $\ce{IF8^{-}}$, which has an interesting square antiprismatic geometry (take a cube and twist a face by 45°).
For coordination number 9, representatives can be found in the transition metal hydrides, such as the nonahydridorhenate anion, $\ce{ReH_9^{2-}}$, and the lowest energy configuration in this case is a curious tricapped trigonal prism.
Is it possible to go higher than coordination number 9 in coordination compounds? While there are examples, at present it seems that none of them contain solely monodentate ligands, that is, individual ligands which only bind to the centre once. This is because either there would have to be a lot of crowding over the central atom, creating repulsions between the ligands, or, to allow enough space, the ligands would have to stay relatively far from the central atom, making their bonds weak.
However, if a single ligand is allowed to bond to the centre through more than one atom simultaneously, then the coordination number can keep increasing without requiring the presence of too many ligands. Actinides have a very rich coordination chemistry and are capable of generating several impressive compounds, such as uranocene with coordination number 16, but it seems the current record is held by actinide elements surrounded by four cyclopentadienyl rings, each with five carbon atoms, reaching an amazing coordination number of 20 in tetrakis(cyclopentadienyl)thorium(IV) ($\ce{Th}\mathrm{(\eta^5-}\ce{C5H5}\mathrm{)_4}$) or its uranium analogue.
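As a quick arithmetic check on those last two numbers (using the common convention that an $\mathrm{\eta^n}$ ring contributes $n$ contacts to the coordination number, which is one way of counting rather than a universal definition):
$$\mathrm{CN}=\sum_{\text{ligands}} \eta_i \quad\Rightarrow\quad \text{uranocene: } 2 \times 8 = 16, \qquad \ce{Th(C5H5)4}\text{: } 4 \times 5 = 20$$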
To finalize, it's interesting to note that though there are no examples yet of compounds with coordination number 10 or above containing only monodentate ligands, their expected geometries can be calculated even for much higher coordination numbers (under certain assumptions). For very high coordination numbers, calculations and physical reality will likely diverge, but perhaps a few more coordination numbers with the expected geometry will be unlocked by the study of ultraheavy element chemistry. Of course, the calculations linked here don't help much in studying the geometry of coordination compounds with polydentate ligands, as they rely significantly on the geometry of the ligand itself.
Edit: This very relevant article raises some interesting points. For example, endohedral fullerene compounds could be thought of as a central atom surrounded by a single ligand in the shape of a cage, so one could possibly make a case for structures with coordination numbers of 60, 70, 80 or even more. The article also calculates the possible existence of a compound containing 15 monodentate ligands, the cation $\ce{PbHe_{15}^{2+}}$, though it would be very weakly bound (as you might expect from a helium compound) and possibly restricted to the gas phase.
Nicolau Saker Neto
$\begingroup$ Additional content very related to this answer (and partially superseding it) can be found in this more recent question. In particular, the coordination number for cyclopentadienide and other cyclopolyene ions arguably is different from what I wrote here. $\endgroup$
Online videos indicate human and dog behaviour preceding dog bites and the context in which bites occur
Sara C. Owczarczak-Garstecka1,2,
Francine Watkins3,
Rob Christley1 &
Carri Westgarth ORCID: orcid.org/0000-0003-0471-27611,2
Scientific Reports volume 8, Article number: 7147 (2018) Cite this article
YouTube videos of dog bites present an unexplored opportunity to observe dog bites directly. We recorded the context of bites, bite severity, victim and dog characteristics for 143 videos, and for 56 videos we coded human and dog behaviour before the bite. Perceived bite severity was derived from visual aspects of the bite. Associations between bite severity and victim, dog and context characteristics were analysed using a Bayesian hierarchical regression model. Human and dog behaviour before the bite were summarised with descriptive statistics. No significant differences in bite severity were observed between contexts. Only the age of the victim was predictive of bite severity: adults were bitten more severely than infants, and infants more severely than children. Non-neutral codes describing dog body posture and some displacement and appeasement behaviours increased from approximately 20 seconds before the bite, and humans made more tactile contact with dogs from 21 seconds before the bite. This analysis can help to improve understanding of the contexts in which bites occur and improve bite prevention by highlighting observable human and dog behaviours that occur before the bite.
Dog bites are a global public health problem resulting in substantial costs to health care systems1,2,3 and businesses through time off work and human physical and mental health impacts4,5,6,7. They also affect dog welfare, since dogs that bite are likely to be relinquished to shelters8 and/or euthanised9. Human population-level risk factors associated with dog bites include young age of the victim1,10,11,12,13,14 (but see15,16) and male sex11 (but see12,15,16). The breed, neuter status and sex of dogs have also been highlighted17, although the links between these factors and bite risk are contested11,15. Additionally, the physical environment where the interaction takes place and the dog's history are suggested risk factors for the occurrence of a bite5,18. Most bites to adults are to limbs, whereas children receive more bites to the face and neck1, regardless of dog size19, suggesting that children interact with dogs differently than adults.
As well as the risk factors for the occurrence of a bite, studies have scrutinised risk factors for the severity of a bite. The severity of a bite tends to be greater among older victims, when the victim is not the owner of the biting dog, when the bite takes place in a public area and when it occurs outside of the play context15. A link between severity and breed has also been suggested20,21 (but see22); however, the lack of clear guidelines for breed identification and small sample sizes make this finding unreliable and inconclusive23. Improving understanding of what changes the severity of bites is important because, whilst some bites may be difficult to prevent, reducing their severity may be more achievable.
Understanding of the contexts in which dog bites occur is crucial for bite prevention. Interactions that are often discussed as preceding bites at a population level include those that are likely to be painful or uncomfortable to dogs, such as medical procedures and physical abuse16,18, teasing10, interacting with dogs over resources (e.g. food or toys) or on a dog's perceived territory24, playing with or near a dog25, and mundane, daily occurrences such as petting or reaching towards a dog13,24,26,27. However, a qualitative study illustrated that some bite victims could not explain why they were bitten or were not aware of the dog's presence before the bite26, which suggests that identification of interactions before the bite may not be very accurate.
Dog bites cannot be studied experimentally, as exposing a volunteer to a bite or provoking a dog to bite would be unethical. As bite incidents are relatively rare, collecting data through real-time observations is not feasible. Therefore, dog bite data are gathered through general population surveys e.g.28, veterinary caseloads e.g.17, hospital admissions e.g.1 and interviews with dog bite victims e.g.26. Hospital admission datasets are often large, but the data do not systematically include information about the circumstances of the bite29. Some of the data, e.g. regarding the dog's breed, may also be unreliable or not recorded23,30. Moreover, in UK hospitals, dog bites are coded under the code "Bitten or struck by dog", which means that other dog-related incidents, such as falling over because of a dog, may be included within these statistics1. As only a fraction of bites warrant a visit to a hospital31,32,33, hospital-derived data do not represent all types of bites, and bites that do not warrant medical attention have been under-studied32. Data collected by reviewing veterinary referral cases are also biased towards those who are willing to pay for behavioural referral, and it is plausible that such data over-represent large dogs, as owners tolerate aggression in smaller dogs for longer34. Surveys and questionnaires regarding being bitten often rely on convenience sampling, which may lead to a self-selection bias. Detailed interviews with dog bite victims or witnesses of dog bites are an alternative to the above methods26,35; however, the sample size is typically small.
Video sharing platforms, such as YouTube, offer an opportunity to address some of the above issues. YouTube has been used to study sequential behaviours and human-dog interactions within the context in which they occur (e.g. during dog training) and to collect a more diverse sample of behaviours than veterinary caseload data permit36,37. YouTube provides a chance to observe the interactions leading to a bite directly, in a naturalistic context. This is important as bite education strategies are often structured around the ladder of aggression38. This theory proposes that dog behaviours before a bite escalate gradually (in the time immediately before the bite or over the years), with some behaviours (like lip licking or head turning) being shown earlier in time than other behaviours (like growling or teeth-baring38).
This study has the following aims: 1) to summarise the contexts in which dog bites occur and to describe victim and dog characteristics using YouTube videos of bites, 2) to describe human and dog behaviour preceding a bite, 3) to examine factors that predict the perceived severity of a bite using variables extracted from YouTube videos, and 4) to evaluate YouTube as a novel method of collecting data about dog bites.
Between January 2016 and March 2017, YouTube videos were searched using the following terms: "dog bite", "dog attack", "dog bites man/woman", "dog bites child" and "kid gets bitten". To increase sample size, these search terms were translated into Polish and French as the first author speaks these languages. Dog bite was defined as a dog holding a person's body part in their mouth and applying pressure, which could be reflected by a bite mark and/or the victim's vocalisations (e.g. screaming) or facial expressions indicative of pain (e.g. grimacing). We excluded videos under 5 seconds, that showed bite compilations, 'bite work' (i.e. any activity involving teaching a dog to bite or biting on command), or where a bite was not visible (due to quality or content of a video). All identified videos (N = 653) were watched by the first author and, after excluding duplicates, 143 videos were included in the final sample. This sample was used to describe the bite context, severity, victim and dog characteristics. Fifty-six videos from this sample showed the behaviour of a dog and a person in detail from the beginning of an interaction until a bite and were included in analysis of pre-bite behaviour.
Details of victim, dog, context and bite characteristics were extracted from each video (Table 1). Bite severity is usually approximated by asking if a bite required medical attention or by inspecting the wound39,40. As this was not possible for all videos, we developed a scale expressing 'perceived severity', which incorporated elements of bites that could more easily be discerned from the videos. When constructing this measure, the importance of puncture wounds was emphasised, because bites that result in a puncture have been the basis of previous bite severity scales39,40. We assumed that a puncture did not occur when it was not possible to ascertain whether a bite broke the skin. Dog head shaking whilst biting was highlighted as it can lead to further laceration of existing wounds40. The duration of the bite was included as longer bites could be more traumatic. A cut-off point for bite duration was set at one second because most bites observed here were shorter than that. Where a video showed multiple bites of different severity, the most extreme scores for variables a, b and c were included to calculate the total score. Perceived severity is defined as (1):
$$\text{perceived severity} = n \times a + b + c \qquad (1)$$
where
$$n = \text{number of bites to the victim observed in the video}$$
$$a=\begin{cases}3 & \text{puncture wound is visible}\\1 & \text{puncture wound is not present or not visible}\end{cases}$$
$$b=\begin{cases}1 & \text{dog shook its head whilst biting}\\0 & \text{dog did not shake its head whilst biting}\end{cases}$$
$$c=\begin{cases}1 & \text{dog held on for} > 1\ \text{s}\\0 & \text{dog held on for} \le 1\ \text{s}\end{cases}$$
Table 1 Variables describing dog, victim, context and bite characteristics.
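For illustration only, the score in Eq. (1) can be computed with a few lines of R; the function name and example values below are hypothetical, not taken from the study's code:

# Minimal sketch of the perceived-severity score (Eq. 1)
perceived_severity <- function(n_bites, puncture_visible, head_shake, held_over_1s) {
  a <- ifelse(puncture_visible, 3, 1)  # puncture wound visible vs. absent/not visible
  b <- ifelse(head_shake, 1, 0)        # dog shook its head whilst biting
  c <- ifelse(held_over_1s, 1, 0)      # dog held on for more than 1 second
  n_bites * a + b + c
}

# Example: two bites with a visible puncture and head shaking, each held <= 1 s
perceived_severity(n_bites = 2, puncture_visible = TRUE,
                   head_shake = TRUE, held_over_1s = FALSE)  # returns 7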
To describe the interaction preceding the bite without over-relying on inferring the dog's motivation or emotions, we adapted a classification used by Reisner et al.24 (Table 2) and each bite was assigned to one context.
Table 2 Interactions preceding the bite adapted from Reisner et al.24.
Ethogram development and behaviour coding
Human and dog behaviour ethograms that describe behaviour and movement patterns before the bite were developed. The dog behaviour ethogram (Supplement 1) was constructed to include behaviours described as preceding a dog bite and often taught as a part of a dog bite prevention initiatives38, which include displacement activities (such as a shake off, yawning) and appeasement gestures (like head turning or nose licking). In addition, the following behaviours were included: locomotory behaviours (direction in relation to the person and pace), body, tail and ear posture (as these are associated with negative affect in dogs42), body position, vocalisation and the type of contact that a dog made with a person (gentle or intensive).
To describe human behaviour preceding bites, the following behaviours were included: macro-movements near the dog (i.e. head and body turns, standing over a dog, moving legs, arms or objects towards the dog), types of tactile contact with a dog (i.e. petting, hugging, hitting, restraining, pushing, pulling, holding a body part and lifting), vocalisations, body position and locomotory behaviours (direction in relation to the dog and pace). The descriptions of human behaviours are based on previous studies exploring human-dog play interactions43 (see Supplement 2 for the human behaviour ethogram).
We also noted the site of contact on the body and the body part used during contact for both the person and the dog. Dogs were coded as making tactile contact with a person using: the head/neck area, mouth, limbs (including tail) or body, and a person was coded as making contact with a dog using the head/neck area, limbs or body. Both dogs and people were coded as being touched on the head/neck area, limbs or body.
The definitions of the behaviours/behaviour units included in the final ethograms were tested using a sample of the final video dataset, in order to refine the definitions if needed and to ensure that the behaviours selected were described in the narrowest possible way whilst remaining broad enough to be identifiable across different styles of videos. The videos were coded from the beginning of each clip, or from the beginning of a human-dog interaction (if the dog and person were not both in the video at the beginning), until the first bite. The ethograms were applied via scan sampling: the videos were observed in 3-second intervals from the beginning of the video/interaction until the bite, and all the behaviours listed in the ethograms which occurred within each 3-second window were noted.
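A minimal sketch of how such scan sampling could be tabulated in R (the data frame and column names are invented for illustration, not the authors' code):

# Bin behaviour onsets into 3-second windows counted back from the bite (time 0)
obs <- data.frame(
  video            = c(1, 1, 1, 2),
  behaviour        = c("lip_lick", "growl", "petting", "growl"),
  secs_before_bite = c(25.4, 4.1, 2.0, 10.8)
)
obs$window <- cut(obs$secs_before_bite, breaks = seq(0, 36, by = 3))
# Presence/absence of each behaviour in each 3-second window
with(obs, table(behaviour, window) > 0)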
Observer reliability
SCOG and CW, both experienced in analysing dog behaviour, coded a sample of the data independently, compared the results and discussed discrepancies in classification of the interactions where these occurred to reach a consensus. Subsequently, all videos were coded twice by SCOG (in January and March 2016) and intra-rater agreement was calculated using Cohen's kappa. A number of more subjective variables described in Table 1 (dog size, victims' age, dog breed and bite severity score) were checked for inter-observer reliability with an observer naïve to the purpose of the study and dog behaviour literature using 10% of randomly selected dog bite videos and the Cohen's Kappa was calculated. For both intra- and inter-rater reliability a threshold of 0.61–0.80 was considered acceptable.
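Cohen's kappa for two coding passes can be computed in R, for example with the irr package; the ratings below are invented for illustration:

library(irr)  # provides kappa2() for agreement between two raters
pass1 <- c("play", "benign", "territorial", "play", "resources")
pass2 <- c("play", "benign", "territorial", "benign", "resources")
kappa2(cbind(pass1, pass2))  # unweighted Cohen's kappa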
Behaviours preceding a bite
All statistical analyses were conducted using R44. To summarise the behaviour before the bite, videos across all contexts were pooled and the percentage of occurrence within a given time frame before the bite was calculated. We limited the analysis to 35 seconds before the bite as only 5 videos were longer than this.
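The per-window proportions and the 3-point moving averages shown in the figures could be produced along these lines in R (a sketch; the values below are invented):

# Proportion of videos showing a behaviour in each 3-s window before the bite
prop_by_window <- c(0.10, 0.12, 0.15, 0.22, 0.30, 0.41)
# Centred 3-point moving average (NA at the ends)
stats::filter(prop_by_window, rep(1/3, 3), sides = 2)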
Bite severity
To understand the association between bite severity score and context, victim and dog characteristics, we used a hierarchical regression model. The distribution of the bite severity scores was checked and the data were assumed gamma distributed, as on visual inspection the data fit the gamma model better than models for positive integers, e.g. Poisson. Bite severity scores were the dependent variable in these models and were modelled (using a log-link) as a function of: bite context, the duration of the interaction in seconds, dog size, victim sex, victim age, the anatomical location of the bite, and whether the human or dog initiated the interaction. The model was hierarchical because varying intercept parameters were included for different bite contexts, and those intercepts were constrained by a common distribution. This approach reflected that the bite contexts are not completely independent of one another but are a subset of possible categorisations. This allowed partial pooling of bite severity estimates across contexts, which often results in more accurate predictions45, particularly when the number of data points per hierarchical group (e.g. the context of a bite) is highly variable46, as was the case here. For comparison, we also display the sample mean bite severity and 95% confidence interval (CI), derived from non-parametric bootstrapping, for each bite context.
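A minimal sketch of that non-parametric bootstrap in R (the severity scores below are invented for illustration):

set.seed(1)
scores     <- c(4, 5, 7, 12, 3, 6, 9)  # severity scores in one bite context
boot_means <- replicate(10000, mean(sample(scores, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))  # percentile bootstrap 95% CI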
To account for uncertainty in model predictions, the analyses were computed using a Bayesian approach45 utilising Markov chain Monte Carlo (MCMC) with the probabilistic programming language JAGS version 4.245 through the runjags package in R44. We used model selection to assess whether all of the predictor variables were necessary for predicting bite severity. The baseline model included the bite contexts, the duration of the interaction and dog size, since these variables were considered a priori important for predicting bite severity. Thirteen additional models were computed including all combinations of the remaining predictor variables noted above. The best fitting model was recomputed with bite contexts as a fixed effect rather than a varying effect, to assess whether a hierarchical model was necessary. Models were assessed using the widely applicable information criterion (WAIC), a Bayesian information criterion that evaluates the out-of-sample predictive accuracy of a model relative to other possible models. Information criteria are preferable to classical measures of model fit (e.g. R2) because they guard against under- and over-fitting to the data46. The WAIC values were transformed to WAIC weights, giving the relative probability of each model having the best out-of-sample predictive accuracy. Fourteen videos (around 10%) had missing data for who initiated the interaction, so models that included the initiated predictor imputed missing values (see Supplement 3 and Supplement Table 1 for more details).
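The transformation from WAIC values to WAIC weights follows the usual Akaike-weight form; writing \(\Delta_i = \mathrm{WAIC}_i - \min_j \mathrm{WAIC}_j\) over the J candidate models compared,

$$w_i = \frac{\exp(-\Delta_i/2)}{\sum_{j=1}^{J}\exp(-\Delta_j/2)},$$

so the model with the smallest WAIC receives the largest relative probability.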
Prior distributions on regression parameters were broad except for predictor variable coefficients, which had normally distributed priors with means of 0 and standard deviations of 1, further guarding against spurious results in addition to the model selection. The models were run with four MCMC chains, long enough for the effective sample size for each parameter to be >10,000 and Gelman-Rubin statistics to be <1.0146. Parameters were summarised by their mean and 95% highest density interval (HDI), the 95% most probable parameter values (see Supplement 4 for the R script, and Supplement Table 2 for a full dataset).
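A minimal sketch of a gamma regression of this general form in JAGS is shown below; the variable names, the shape/rate parameterisation and the priors on a_bar, sigma_c and shape are illustrative assumptions (only the normal priors with SD 1 on coefficients are stated above), so this is not the authors' code:

library(runjags)

model_string <- "
model {
  for (i in 1:N) {
    severity[i] ~ dgamma(shape, shape / mu[i])   # gamma likelihood with mean mu[i]
    log(mu[i]) <- a_context[context[i]] + b_dur * duration[i] + b_size * size[i]
  }
  for (j in 1:N_context) {
    a_context[j] ~ dnorm(a_bar, 1 / sigma_c^2)   # varying intercepts, partial pooling
  }
  a_bar   ~ dnorm(0, 0.01)   # broad prior on the grand mean (assumed)
  sigma_c ~ dunif(0, 10)     # between-context SD (assumed)
  b_dur   ~ dnorm(0, 1)      # normal prior, mean 0, SD 1 (precision 1)
  b_size  ~ dnorm(0, 1)
  shape   ~ dexp(0.1)        # gamma shape parameter (assumed)
}"
# fit <- run.jags(model_string, monitor = c("a_context", "b_dur", "b_size"),
#                 data = list(...), n.chains = 4)  # data list omitted here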
As all videos were in the public domain, ethical approval from the University Ethics Committee was not required. Videos were used in accordance with YouTube regulations and laws.
Both intra- and inter-rater agreement on coding were high and considered acceptable (κ = 0.91 and κ = 0.73, respectively).
Video characteristics
Three hundred and sixty-two bites were observed in 143 videos. Most of the observed dogs were cross-breeds (n = 47, 32.87%); other common breeds included Chihuahuas (n = 13, 9.09%), German Shepherds and Pit bulls (n = 11, 7.69% each) and Labrador Retrievers (n = 6, 4.2%). Almost half of bites (49.65%) were less than one second long and 74.13% were 3 seconds or less (Supplement Table 2).
The victim, dog, bite and context variables are summarized in Table 3. Male victims were more numerous across all bite contexts and children and infants were more numerous than adults. There were more big dogs compared to medium and small dogs in this sample. The proportion of small dogs that bit in the context of benign interactions was higher than the proportion of medium- and big-sized dogs (43% vs. 17.95% vs. 20.9%). Victims initiated more interactions than dogs (48.95% vs. 41.26%). Bites to limbs were more frequent than bites to any other location. The severity score of most bites did not exceed 5, however 51.67% of all bites that scored over 10 points occurred in the context of territorial interactions (Supplement Table 2).
Table 3 Summary of victim, dog, bite and context variables (%).
Dog behaviour
The proportion of videos where dogs were seen holding their body awkwardly or in a low position and showing a non-neutral ear carriage increased before the bite. The increase in videos where these changes were observed was seen approximately 30 seconds before the bite for "holding body awkwardly", 27 seconds before the bite for "body in low position" and 34 seconds before the bite for "non-neutral ear carriage" (Fig. 1). There was no clear pattern of changes in tail carriage or high body posture before a bite.
Patterns of changes in dog body carriage (ears and body posture), dog behaviour (head/ body turning, staring, stiffening, frowning, snapping, panting, lip licking, paw lifting) preceding the bite. Dots indicate observed proportions, lines represent 3-point moving averages and the shaded area the 95% confidence intervals for the observations.
There was an increase in the proportion of videos where dogs were seen stiffening, snapping and frowning shortly before the bite (from 22, 16 and 22 seconds before the bite respectively). In the lead-up to the bite, the proportion of videos where dogs were seen head turning, staring and panting also increased (from 28, 30 and 16 seconds before the bite respectively), but these behaviours decreased, plateaued or fluctuated shortly before the bite.
Yawning and shake off were observed sporadically and lip licking, paw raises and sniffing did not follow any clear pattern (Fig. 1).
There was an increase in the proportion of dogs growling and a decrease in dogs being silent or barking before the bite. Pain-related vocalisations were rare. Closer in time to the bite, more dogs were coded as restrained and fewer were coded as standing. There was no clear pattern regarding play bows, sitting and lying down. As the bite became closer, there were more fast-paced locomotory behaviours and fewer jumping and slow-paced locomotory behaviours. There was no clear pattern regarding dogs making gentle contact before the bite, and there was a clear spike in the proportion of dogs making intensive contact immediately before the bite, which reflects the moment of the bite. Dogs touched people mainly with their paws and there was no pattern in tactile contact initiated by the dog before a bite (Supplement Table 2).
Human behaviour
Behaviours grouped as 'movements without contact with a dog' were observed more than 50% of the time. Leading up to a bite, there was an increase in codes representing standing over a dog (from approximately 35 seconds before the bite; Fig. 2). There was no clear pattern in the other non-contact behaviours.
Patterns of changes in human behaviour (petting, restraining and standing over the dog) preceding the bite. Dots indicate observed proportions, lines represent 3-point moving averages and the shaded area the 95% confidence intervals for the observations.
All of the contact behaviours increased approximately 21 seconds before the bite with petting and restraining a dog being particularly frequent (Fig. 2). Hugging, hitting, pushing and pulling did not follow any clear pattern. Kissing, hitting with an object, kicking and pulling hair were not observed or were rare.
Until 21 seconds before the bite, there were proportionally more codes for movement towards the dog, and from 9 seconds before the bite more codes for movement away from the dog were noted. There was no clear trend regarding changes in pace of movement in the time before the bite. There was a sharp increase in the proportion of pain-related vocalisations immediately before the bite (which could indicate anticipation of pain rather than the experience of pain) and a slight increase in laughing vocalisations from 15 seconds before the bite. Normal talk and silence were observed proportionally less often closer in time to the bite. Shortly before the bite, standing and crouching behaviours decreased and there were slightly more lying down behaviours (from 18 seconds before the bite). People usually made tactile contact with a dog using their limbs, and there was no clear pattern regarding the part of the dog that was touched (Supplement Table 2).
Model selection
Three models had demonstrably lower WAIC values and higher WAIC weights than the other models assessed (see Supplement 5 for model comparison). The model including varying effects for bite contexts and all predictor variables was ranked first, with a WAIC weight of 45%. The same model with bite contexts included as a fixed rather than a varying effect had a WAIC weight of 20%. The model with varying effects for bite contexts, interaction duration, dog size, the anatomical location of the bite and whether the human or dog initiated the interaction, but excluding victim sex and age, had a WAIC weight of 33%. Thus, all predictors appeared important for predicting severity.
Bite contexts
Across bite contexts, the mean bite severity score was estimated as 5.61 (95% HDI: 4.16, 7.27; see Fig. 3 and Supplement Table 2). Due to the varying numbers of videos in each category, differences among contexts were pooled closer to the overall mean compared to the raw data. The benign and play contexts had the most influence due to having the largest sample sizes. Moreover, the territorial context appeared to have a larger mean estimate, but it only had 14 videos with large variation, as evidenced by its 95% CI, resulting in its estimate being pooled towards the overall mean. The intra-class correlation coefficient was both highly uncertain and included zero (mean = 0.11; 95% HDI: 0.00, 0.38), suggesting greater within- than between-context variation for bite severity. The 95% HDI estimates for differences between bite contexts all overlapped zero (see Supplement 6), i.e. we did not observe any significant differences in bite severity between different contexts of interaction (Fig. 3). However, when considering just the sample means and bootstrap 95% CIs, bites in the public space and territorial contexts and painful interactions were more severe than the mean, whereas bites in the context of resting were less severe than the mean.
Estimated bite severity in each context. Black points and lines represent the regression model mean and 95% HDI estimates. Blue points and lines represent the raw sample means and bootstrap 95% CIs. Sample sizes are shown next to each parameter. Regression model estimates are pooled towards the overall mean (dashed vertical line) when contexts have relatively low sample size (e.g. resting) and/or deviate greatly from the overall mean without enough data to support such a difference (e.g. territorial).
Other predictor variables
Among the fixed-effect predictor variables, bite severity scores increased by an average of 1.25 points with approximately every minute (one SD = 56 s) increase in the duration of the interaction (see Supplement 7). The 95% HDI of this estimate extended from 1.09 to 1.41, indicating it was significant. This is somewhat expected, as longer videos/longer interactions could feature more bites, which is a variable used for calculating bite severity. Videos with bites to multiple locations had significantly higher bite severity scores than videos with bites only to the limbs (\(\beta_{limbs-multiple}\) = −6.82; 95% HDI: −12.15, −2.31; Supplement 6) or only to the face (\(\beta_{face-multiple}\) = −7.76; 95% HDI: −13.37, −2.62), which again could be due to more bites being observed meaning a greater severity score. When the victim was an adult, bites were more severe than when the victim was a child (\(\beta_{child-adult}\) = −1.61; 95% HDI: −3.16, −0.08) and bites to infants were more severe than those to children (\(\beta_{infant-child}\) = 2.17; 95% HDI: 0.14, 4.37). All other predictor variables had 95% HDI estimates overlapping zero, meaning that they were not significant (Fig. 4; Supplement 6).
Estimated differences in bite severity between categorical predictor variables. Points and horizontal lines represent mean and 95% HDI model estimates. Estimates in black exclude zero, indicating a significantly non-zero difference; estimates in grey overlap zero.
In this study, we used YouTube videos to explore dog bites to humans. The most common breeds and types of dogs found in our sample (i.e. German Shepherds, Chihuahuas, Labrador Retrievers and Pit bulls) reflect those previously identified in studies of dog bites10,11,12,13,15,16,17,47. Chihuahuas are rarely mentioned in studies that use hospital admissions, possibly because their small size makes them less likely to cause serious injury. However, a study using the Canine Behavioural Assessment and Research Questionnaire (a validated questionnaire for assessing dogs' behaviour) compared 30 breeds for aggressive behaviour and found that Chihuahuas were higher than average for human- and dog-directed aggression48. In addition, we hypothesise that bites by a small dog may be perceived as comical and thus more often uploaded online. It is also unclear whether the breeds observed here are more likely to bite or are simply more commonly owned13,15,23. In our study, male victims were over-represented. This trend has been noticed in previous publications1,5,12,13,15, but not to the same extent. It is therefore plausible that clips showing men are more often shared online. Our study included a similar proportion of adults to children and infants as those previously reported12,16,17,49, with children and infants being considerably more common victims than adults. Here, most bites were to the limbs, followed by bites to the face and neck area. Bites to the face and neck area were more common among children and infants, which is also consistent with earlier reports1,10,11,27.
A variety of bite contexts were represented in our sample. Bites during play and benign interactions were particularly common, as reported before10,14,16,18,24,50. Play bites as well as behaviours that we labelled as benign interactions may be perceived as a normal part of human-dog behavioural repertoires and thus more frequently permitted and easier to film than other bite contexts. Moreover, bite victims may not see all bites as 'true' bites35, which could lead to bites in contexts such as play being more often uploaded online. In contrast to Reisner et al.24, we found that bites in the context of resources and resting were rare. The dissimilarity could arise due to differences in studied population: our sample consisted of all age groups, whilst Reisner et al.24 included only children. Alternatively, it could be due to these contexts being unlikely to be filmed.
Here we followed the classification used by Reisner et al.24 to aid comparison and because it permits categorisation based on observed behaviours that can easily be recognised by an untrained observer. This terminology can, however, be misleading. For instance, previous research suggests that 'benign' interactions, as perceived by victims, may not be pleasant for dogs. Dogs may dislike being petted on top of their heads49,51, although we did not see a clear increase in tactile contact with head and neck areas before the bite.
Displacement and appeasement behaviours as well as postural changes and vocalisations were included in this analysis as they are often discussed as preceding a bite and taught as part of bite-prevention education38. Closer to the time of the bite, dogs were more often coded as holding their body low or in an awkward position, and their ears were more often observed to be in a non-neutral position. These postural changes have been linked with dogs experiencing acute distress in response to fear-inducing stimuli42, and changes in ear carriage have been observed during training that involved painful stimuli52. It is plausible that these changes were detected here as some interactions leading to a bite may be painful or distressing to a dog. However, not all dogs in the videos showed these changes, and we also did not observe any clear changes in tail carriage patterns. Nonetheless, as postural changes may be easier to spot than some of the more subtle distance-increasing behaviours, and as an increase in these changes was observed from approximately 30 seconds before the bite, bite-prevention messages should emphasise them more.
Following from the ladder of aggression theory38, behaviours such as lip licking and head turning are expected to escalate and be replaced with behaviours like snapping or growling in the time before the bite. Head turning and full body turning, as well as staring, stiffening, snapping, growling and frowning, were observed proportionally more often in the build-up to a bite, with head turning and staring dropping immediately before the bite, as would be expected from the ladder of aggression theory. We observed an increase in these behaviours approximately 20 seconds before the bite, which suggests that a person interacting with a dog does have time to alter their behaviour in response to these signs. However, as the increase in these behaviours is gradual, a person may not recognise their presence until later, if at all. A person may also recognise these signs and carry on interacting, assuming that a bite "would not happen to them"26. Other behaviours included in the ladder of aggression (like lip licking, paw lifting and sniffing) did not follow a clear pattern, and sniffing and paw lifts were rarely observed. However, these behaviours may have escalated over a longer period of time; for instance, a dog may have shown some of the behaviours from the lower steps of the ladder during previous interactions with a person, which would not have been captured in the video studied. Alternatively, these behaviours may not fit the pattern of behaviour progression proposed by the ladder of aggression theory. Overall, the postural changes were observed more often than other behaviours included in the ladder of aggression. Previous studies have linked some of these behaviours (lip licking, paw lifts, head turns and yawning) with acute stress and pain42,53, emotional conflict52 and responses to human facial expressions of negative emotional valence54, which may be specific to some, but not all, contexts in which bites occur.
Standing over a dog, petting and restraining a dog were seen proportionally more frequently closer to the bite, increasing approximately 20-30 seconds before. Other behaviours that did not result in contact, and other tactile behaviours, did not follow any clear pattern. The high prevalence of 'standing over' codes in the time preceding a bite suggests this particular behaviour should be emphasised in bite-prevention training. The high frequency of petting and restraining behaviours makes prevention advice challenging, as these types of contact are likely to occur when a person is familiar with the dog and interacts with the dog on a daily basis, in a routine, habitual fashion. This result shows that dog owner education should emphasise the idea that all interactions with a dog, and in particular tactile interactions like petting, should be mutually consensual, i.e. only initiated by a person after a dog has already made contact or otherwise shown an interest in being petted. In addition, restraining a dog, e.g. in order to medicate it or prevent it from escaping, may be hard to avoid and therefore requires additional care. This indicates the importance of teaching low-stress handling methods55.
The regression model with mean and 95% HDI estimates identified no significant differences in bite severity between bite contexts. This could be because there are more similarities between bite contexts than differences, making the distinction between contexts difficult. However, the analysis of sample means and bootstrap 95% CIs suggested that territorial bites and bites in public spaces were more severe than other bites. Bites in the context of benign and unpleasant interactions and resting were less severe, which reflects previous research27. The bootstrap analysis also indicated that when a dog initiated the interaction (vs. a person), bite severity was greater. Bites in the context of benign and unpleasant interactions may be more inhibited, as the victim involved is likely to be more familiar with the dog. It is also plausible that dog-initiated interactions in general may involve more offensive aggression, whereas human-initiated interactions may reflect more defensive aggression56. Different motivations to aggress could explain differences in severity as, in cases of defensive aggression, a dog may strive to warn the person off, which may result in a less severe bite.
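For readers wanting to reproduce this style of comparison, below is a minimal sketch of a percentile-bootstrap 95% CI for mean bite severity per context. The function name and the severity scores are invented for illustration; they are not values or code from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_mean_ci(scores, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for a mean severity score.

    'scores' would hold per-video severity values for one bite
    context; the arrays below are made up for illustration.
    """
    scores = np.asarray(scores, dtype=float)
    # Resample with replacement and take the mean of each resample
    draws = rng.choice(scores, size=(n_boot, scores.size), replace=True)
    boot_means = draws.mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), (lo, hi)

territorial = [5, 6, 4, 7, 5, 6]   # hypothetical severity scores
resting = [2, 3, 2, 4, 3]
print("territorial:", bootstrap_mean_ci(territorial))
print("resting:    ", bootstrap_mean_ci(resting))
```

Non-overlapping intervals of this kind are what would support the claim that, for example, territorial bites were more severe than bites during resting.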
Overall, other predictor variables – in particular, the location of the bite on the body, the age of the victim and the size of the dog – were better at predicting the severity of a bite than context alone, although even here, the regression model mean and 95% HDI estimates were not always significant. Again, this could be due to the small sample size or the way the severity measure was derived. In general, there may be numerous interactive effects between predictor variables that were not possible to explore in this study due to the limited sample size and differences in the number of videos in each context. The analysis showed that the severity of a bite was correlated with the duration of a video, regardless of the context of interactions or other predictor variables. This could be because a person who was attacked for longer received more serious injuries, or because longer attacks simply receive a higher severity score. Moreover, not all variables that are often cited in the literature as risk factors for bites were used as predictor variables in our model, as it was not always possible to discern them from the video. We also did not include breed as a predictor variable, due to documented problems in recognising breed based on visual characteristics30 and the small number of dogs in each breed category. One approach that could be utilised in the future on larger samples is systems-style modelling, such as network analysis, which would identify interactive effects between variables that result in bites occurring and/or influence bite severity.
Using YouTube to study dog bites enabled us to carry out observations of bites of diverse severity, in naturalistic settings and across a range of contexts. The benefit of this approach is that it permits studying human and dog behaviours preceding bites, which is not possible with other retrospective methods. However, the sample generated through a YouTube search is subject to some biases, as the frequency of bites in a given context and the victim and dog characteristics could reflect the likelihood with which these interactions are filmed and the self-selection bias for uploading videos online. The quality of videos and editing styles varied across the sample, which meant that we could not collect a fine level of detail from each video. The small sample size meant that the analysis of body language had to be restricted to simple descriptive statistics, which is a further limitation of this method.
Moreover, our analysis is limited because the bite severity score reflects the perception of severity as observed in the video, as it is impossible to assess the full extent of each injury. It is plausible that, as puncture wounds may be more difficult to identify in some videos, we sometimes under-estimated the severity. The same severity score could also reflect different elements of the bite: videos in the context of play may have a high severity score due to the number of bites in a video, whereas bites in the context of territorial aggression could have a high score due to puncture wounds and tearing movements of the dog; in reality, the latter would cause more damage.
In summary, this study used a novel approach to analyse human-dog interactions in naturalistic contexts. We found that, despite the potential biases of this sample, the demographic characteristics of the victims and dogs seen in YouTube bite videos reflect those found in previous publications. Although our analysis did not allow exploration of the causal relationship between human behaviour and dog bites, we observed that tactile contact with a dog increases approximately 20 seconds before a bite, as does standing or leaning over a dog. Prevention messages could emphasise the risk of leaning over a dog and simply advise avoiding contact with a dog when possible or in doubt (for instance, when interacting with an unfamiliar dog). In the lead-up to a bite, changes in dog body posture were more obvious than changes in appeasement and displacement behaviours. Some, but not all, appeasement and displacement behaviours described in the "ladder of aggression"38 were also observed.
Winter, J. Admissions caused by dogs and other mammals, https://digital.nhs.uk/media/33701/Provisional-Monthly-HES-for-Admitted-Patient-Care-Outpatient-and-Accident-and-Emergency-Data-April-2014-to-February-2015-Topic-of-Interest-Admissions-caused-by-dogs-and-other-mammals/pdf/Animal_Bites_M12_1415 (2015).
Anon. NHS Hospital Stay, https://digital.nhs.uk/catalogue/PUB19124 (2015).
Benson, L. S., Edwards, S. L., Schiff, A. P., Williams, C. S. & Visotsky, J. L. Dog and cat bites to the hand: treatment and cost assessment. The Journal of hand surgery 31, 468–473 (2006).
Peters, V., Sottiaux, M., Appelboom, J. & Kahn, A. Posttraumatic stress disorder after dog bites in children. The Journal of pediatrics 144, 121–122 (2004).
Overall, K. L. & Love, M. Dog bites to humans – demography, epidemiology, injury, and risk. Journal of the American Veterinary Medical Association 218, 1923–1934 (2001).
Abuabara, A. A review of facial injuries due to dog bites. Medicina Oral, Patología Oral y Cirugía Bucal (Internet) 11, 348–350 (2006).
Knobel, D. L. et al. Re-evaluating the burden of rabies in Africa and Asia. Bulletin of the World Health Organization 83, 360–368 (2005).
Kass, P. H., New, J. J. C., Scarlett, J. M. & Salman, M. D. Understanding Animal Companion Surplus in the United States: Relinquishment of Nonadoptables to Animal Shelters for Euthanasia. Journal of Applied Animal Welfare Science 4, 237–248 (2001).
BVA. (2016).
Touré, G., Angoulangouli, G. & Méningaud, J.-P. Epidemiology and classification of dog bite injuries to the face: A prospective study of 108 patients. Journal of Plastic, Reconstructive & Aesthetic Surgery 68, 654–658 (2015).
Súilleabháin, P. Ó. Human hospitalisations due to dog bites in Ireland (1998–2013): Implications for current breed specific legislation. The Veterinary Journal 204, 357–359 (2015).
Klaassen, B., Buckley, J. & Esmail, A. Does the dangerous dogs act protect against animal attacks: a prospective study of mammalian bites in the accident and emergency department. Injury 27, 89–91 (1996).
Rezac, P., Rezac, K. & Slama, P. Human behavior preceding dog bites to the face. The Veterinary Journal 206, 284–288 (2015).
Mannion, C., Graham, A., Shepherd, K. & Greenberg, D. Dog bites and maxillofacial surgery: what can we do? British journal of oral and maxillofacial surgery 53, 522–525 (2015).
Cornelissen, J. M. & Hopster, H. Dog bites in The Netherlands: A study of victims, injuries, circumstances and aggressors to support evaluation of breed specific legislation. The Veterinary Journal 186, 292–298 (2010).
Rosado, B., García-Belenguer, S., León, M. & Palacio, J. Spanish dangerous animals act: Effect on the epidemiology of dog bites. Journal of Veterinary Behavior: Clinical Applications and Research 2, 166–174 (2007).
Guy, N. C. et al. Risk factors for dog bites to owners in a general veterinary caseload. Applied Animal Behaviour Science 74, 29–42 (2001).
Reisner, I. R. et al. Behavioural characteristics associated with dog bites to children presenting to an urban trauma centre. Injury Prevention, ip.2010.029868 (2011).
Meints, K., Syrnyk, C. & De Keuster, T. Why do children get bitten in the face? Injury Prevention 16, A172–A173 (2010).
Garvey, E. M., Twitchell, D. K., Ragar, R., Egan, J. C. & Jamshidi, R. Morbidity of pediatric dog bites: A case series at a level one pediatric trauma center. Journal of pediatric surgery 50, 343–346 (2015).
O'Brien, D. C., Andre, T. B., Robinson, A. D., Squires, L. D. & Tollefson, T. T. Dog bites of the head and neck: an evaluation of a common pediatric trauma and associated treatment. American journal of otolaryngology 36, 32–38 (2015).
Pédrono, G. et al. 483 Dog bites: severity and sequelae, a multicenter survey, France, September 2010–December 2011. Injury Prevention 22, A175–A175 (2016).
Ó Súilleabháin, P. & Doherty, N. Epidemiology of dog bite injuries: Dog-breed identification and dog-owner interaction. Journal of Plastic, Reconstructive and Aesthetic Surgery 68, 1157–1158, https://doi.org/10.1016/j.bjps.2015.03.025 (2015).
Reisner, I. R., Shofer, F. S. & Nance, M. L. Behavioral assessment of child-directed canine aggression. Injury Prevention 13, 348–351 (2007).
Náhlík, J., Baranyiová, E. & Tyrlík, M. Dog Bites to Children in the Czech Republic: the Risk Situations. Acta Veterinaria Brno 79, 627–636 (2011).
Westgarth, C. & Watkins, F. A qualitative investigation of the perceptions of female dog-bite victims and implications for the prevention of dog bites. Journal of veterinary behavior: Clinical applications and research 10, 479–488 (2015).
Shuler, C. M., DeBess, E. E., Lapidus, J. A. & Hedberg, K. Canine and human factors related to dog bite injuries. Journal of the American Veterinary Medical Association 232, 542–546 (2008).
Casey, R. A., Loftus, B., Bolster, C., Richards, G. J. & Blackwell, E. J. Human directed aggression in domestic dogs (Canis familiaris): Occurrence in different contexts and risk factors. Applied Animal Behaviour Science 152, 52–63 (2014).
Bernardo, L. M. et al. Documentation of Growls and Bites in the Emergency Setting. Journal of Emergency Nursing 28, 5 (2002).
Westgarth, C. Is that dog a pit bull? A cross-country comparison of perceptions of shelter workers regarding breed identification. (2014).
Sacks, J. J., Kresnow, M.-J. & Houston, B. Dog bites: how big a problem? Injury prevention 2, 52–54 (1996).
Gilchrist, J., Sacks, J., White, D. & Kresnow, M. Dog bites: still a problem? Injury prevention 14, 296–301 (2008).
Westgarth, C., Brooke, M. & Christley, R. M. How many people have been bitten by dogs? A cross-sectional survey of prevalence, incidence and factors associated with dog bites in a UK community. Journal of Epidemiology and Community Health 72, 331–336, https://doi.org/10.1136/jech-2017-209330 (2018).
Arhant, C., Landenberger, R., Beetz, A. & Troxler, J. Attitudes of caregivers to supervision of child–family dog interactions in children up to 6 years—An exploratory study. Journal of Veterinary Behavior: Clinical Applications and Research 14, 10–16 (2016).
Orritt, R., Gross, H. & Hogue, T. His bark is worse than his bite: perceptions and rationalization of canine aggressive behavior. Human-Animal Interaction Bulletin 3, 1–20 (2015).
Preston, S. M., Shihab, N. & Volk, H. A. Public perception of epilepsy in dogs is more favorable than in humans. Epilepsy & Behavior 27, 243–246 (2013).
Burn, C. C. A vicious cycle: a cross-sectional study of canine tail-chasing and human responses to it, using a free video-sharing website. PloS one 6, e26553 (2011).
Shepherd, K. Ladder of aggression. BSAVA Manual of Canine and Feline Behavioural Medicine, 13–16 (2009).
Dunbar, I. The Bite Scale. An Objective Assessment of the Severity of Dog Bites Based on Evaluation of Wound Pathology, http://www.dogtalk.com/BiteAssessmentScalesDunbarDTMRoss.pdf (ND).
Lackmann, G.-M., Draf, W., Isselstein, G. & Töllner, U. Surgical treatment of facial dog bite injuries in children. Journal of Cranio-Maxillofacial Surgery 20, 81–86 (1992).
Anon. Breed Information Centre http://www.thekennelclub.org.uk/services/public/breed/ (2016).
Beerda, B., Schilder, M. B., van Hooff, J. A., de Vries, H. W. & Mol, J. A. Behavioural, saliva cortisol and heart rate responses to different types of stimuli in dogs. Applied Animal Behaviour Science 58, 365–381 (1998).
Rooney, N. J., Bradshaw, J. W. & Robinson, I. H. Do dogs respond to play signals given by humans? Animal Behaviour 61, 715–722 (2001).
R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing (2009).
Kruschke, J. Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. (Academic Press, 2014).
Gelman, A. & Hill, J. Data analysis using regression and multilevel/hierarchical models. (Cambridge University Press, Cambridge, 2007).
Wake, A., Minot, E., Stafford, K. & Perry, P. A survey of adult victims of dog bites in New Zealand. New Zealand veterinary journal 57, 364–369 (2009).
Duffy, D. L., Hsu, Y. & Serpell, J. A. Breed differences in canine aggression. Applied Animal Behaviour Science 114, 441–460 (2008).
De Keuster, T., Lamoureux, J. & Kahn, A. Epidemiology of dog bites: a Belgian experience of canine behaviour and public health concerns. The Veterinary Journal 172, 482–487 (2006).
Mannion, C. & Mills, D. Injuries sustained by dog bites. The British journal of oral & maxillofacial surgery 51, 368–369 (2013).
Kuhne, F., Hößler, J. C. & Struwe, R. Behavioral and cardiac responses by dogs to physical human–dog contact. Journal of Veterinary Behavior: Clinical Applications and Research 9, 93–97 (2014).
Schilder, M. B. & van der Borg, J. A. Training dogs with help of the shock collar: short and long term behavioural effects. Applied Animal Behaviour Science 85, 319–334 (2004).
Hekman, J. P., Karas, A. Z. & Dreschel, N. A. Salivary cortisol concentrations and behavior in a population of healthy dogs hospitalized for elective procedures. Applied animal behaviour science 141, 149–157 (2012).
Albuquerque, N., Guo, K., Wilkinson, A., Resende, B. & Mills, D. S. Mouth-licking by dogs as a response to emotional stimuli. Behavioural processes 146, 42–45 (2018).
Yin, S. A. Low stress handling, restraint and behavior modification of dogs & cats. (CattleDog Pub., 2009).
Fatjo, J., Amat, M., Mariotti, V. M., de la Torre, J. L. R. & Manteca, X. Analysis of 1040 cases of canine aggression in a referral practice in Spain. Journal of Veterinary Behavior: Clinical Applications and Research 2, 158–165 (2007).
The authors are grateful to the funders for their support and to Zuzanna Walczak for coding 10% videos for the inter-rater reliability analysis and Chloe O'Sullivan for help in ethogram development.
Department of Epidemiology and Population Health, Institute of Infection and Global Health, University of Liverpool, Liverpool, L69 7BE, UK
Sara C. Owczarczak-Garstecka, Rob Christley & Carri Westgarth
Institute for Risk and Uncertainty, University of Liverpool, Liverpool, L69 7ZF, UK
Sara C. Owczarczak-Garstecka & Carri Westgarth
Department of Public Health and Policy, Institute of Psychology, Health and Society, University of Liverpool, Liverpool, L69 3GL, UK
Francine Watkins
Sara C. Owczarczak-Garstecka
Rob Christley
Carri Westgarth
S.C.O.G. coded the videos, S.C.O.G., C.W., F.W. and R.C. wrote the main manuscript text, S.C.O.G. and R.C. prepared the figures and S.C.O.G. carried out the analysis. All authors reviewed the manuscript.
Correspondence to Sara C. Owczarczak-Garstecka.
SCOG's PhD studentship is funded by the EPSRC and ESRC research councils, the Institute for Risk and Uncertainty and Dogs Trust. The funders had no role in the conceptualization, design, data collection, analysis, decision to publish, or preparation of the manuscript. SCOG, RC and CW have been bitten by dogs.
Supplementary Information File
Supplementary Dataset 1
Owczarczak-Garstecka, S.C., Watkins, F., Christley, R. et al. Online videos indicate human and dog behaviour preceding dog bites and the context in which bites occur. Sci Rep 8, 7147 (2018). https://doi.org/10.1038/s41598-018-25671-7
Genetic Analysis Workshop 19: Sequence, Blood Pressure and Expression Data. Proceedings.
A novel statistical method for rare-variant association studies in general pedigrees
Huanhuan Zhu1, Zhenchuan Wang1, Xuexia Wang2 and Qiuying Sha1 (corresponding author)
Both population-based and family-based designs are commonly used in genetic association studies to identify rare variants that underlie complex diseases. For any type of study design, the statistical power will be improved if rare variants can be enriched in the samples. Family-based designs, with ascertainment based on phenotype, may enrich the sample for causal rare variants and thus can be more powerful than population-based designs. Therefore, it is important to develop family-based statistical methods that can account for ascertainment. In this paper, we develop a novel statistical method for rare-variant association studies in general pedigrees for quantitative traits. This method uses a retrospective view that treats the traits as fixed and the genotypes as random, which allows us to account for complex and undefined ascertainment of families. We then apply the newly developed method to the Genetic Analysis Workshop 19 data set and compare its power with that of two other methods for general pedigrees. The results show that the newly proposed method is more powerful than the other two methods in most of the cases we consider.
Rare Variant
Genetic Analysis Workshop
General Pedigree
Sequence Kernel Association Test
GAW19 Data
There is increasing interest in detecting associations between rare variants and complex traits. Although statistical methods to detect common variant associations are well developed, these variant-by-variant methods may not be optimal for detecting associations with rare variants as a result of allelic heterogeneity as well as the extreme rarity of individual variants [1]. Recently, several statistical methods for detecting associations of rare variants were developed for population-based designs, including the cohort allelic sums test [2], the combined multivariate and collapsing method [1], the weighted sum statistic [3], the variable minor allele frequency threshold method [4], the adaptive sum test [5], the step-up method [6], the sequence kernel association test [7], and the test for optimally weighted combination of variants [8].
Meanwhile, quite a few statistical methods for rare-variant association studies have been developed for family-based designs. For any type of study design, the statistical power will be improved if rare variants can be enriched in the samples. If one parent has a copy of a rare allele, half of the offspring are expected to carry it, and, hence, variants that are rare in the general population could be very common in certain families [9]. Therefore, family-based designs may play an important role in rare-variant association studies. Because of the importance of family-based designs in rare-variant association studies, several family-based rare-variant association methods for quantitative traits [10–12] and for qualitative traits [13–15] have been developed. However, most of these methods were developed under the assumption of random ascertainment and family-based designs with random ascertainment may not yield enrichment of rare variants. To analyze the sequencing data in general pedigrees provided by Genetic Analysis Workshop 19 (GAW19), we proposed a novel method to test rare-variant association in general pedigrees for quantitative traits. Applying the proposed method to the GAW19 data set, we compared the power of the proposed method with that of two popular methods for family-based designs.
Consider a sample of $n$ pedigrees with $n_i$ members in the $i$th pedigree and a genomic region with $M$ variants. Let $y_{ij}$ and $g_{ij} = (g_{ij1}, \dots, g_{ijM})^T$ denote the trait value and genotypes of the $M$ variants in the genomic region for the $j$th individual in the $i$th pedigree. Let $x_{ij} = \sum_{m=1}^M w_m g_{ijm}$ denote the weighted combination of genotypes at the $M$ variants, where $w = (w_1, \dots, w_M)^T$ is a weight function.
For given genotypes, we assume that $y_{ij} \sim N(a + x_{ij}\beta, \sigma^2)$. Using the notation $g_i = (g_{i1}, \dots, g_{in_i})^T$, the retrospective likelihood is given by
$$ RL = \prod_{i=1}^n \Pr(g_i \mid y_{i1}, \dots, y_{in_i}) = \prod_{i=1}^n \frac{\Pr(y_{i1}, \dots, y_{in_i} \mid g_i)\,\Pr(g_i)}{\sum_{g_i^*} \Pr(y_{i1}, \dots, y_{in_i} \mid g_i^*)\,\Pr(g_i^*)} = \prod_{i=1}^n \frac{\exp\!\left(-\sum_{j=1}^{n_i}(y_{ij} - a - x_{ij}\beta)^2 / 2\sigma^2\right)\Pr(g_i)}{\sum_{g_i^*} \exp\!\left(-\sum_{j=1}^{n_i}(y_{ij} - a - x_{ij}^*\beta)^2 / 2\sigma^2\right)\Pr(g_i^*)}, $$
where $\sum_{g_i^*}$ represents the summation over all possible genotypes. Based on $RL$, the score test statistic for testing the null hypothesis $H_0 : \beta = 0$ is given by
$$ T_{score} = U^2 / V, \tag{1} $$
where $U = \sum_{i=1}^n \sum_{j=1}^{n_i} (x_{ij} - \bar{x})(y_{ij} - \bar{y})$, $V = w^T \Sigma w \sum_{i=1}^n y_i^T \Phi_i y_i$, $y_i = (y_{i1}, \dots, y_{in_i})^T$, $\bar{y} = \frac{1}{\sum_{i=1}^n n_i} \sum_{i=1}^n \sum_{j=1}^{n_i} y_{ij}$, $\Phi_i$ is twice the kinship coefficient matrix of the $i$th pedigree, and $\Sigma = \mathrm{cov}(g_{11}, g_{11})$ is the covariance matrix of the multiple-variant genotype of one individual. $\Sigma$ can be estimated by $\hat{\Sigma} = \frac{1}{\sum_{i=1}^n n_i} \sum_{i=1}^n \sum_{j=1}^{n_i} (g_{ij} - \bar{g})(g_{ij} - \bar{g})^T$, where $\bar{g} = \frac{1}{\sum_{i=1}^n n_i} \sum_{i=1}^n \sum_{j=1}^{n_i} g_{ij}$. It is worth pointing out that $T_{score}$ is equivalent to the quantitative version of the retrospective likelihood score statistic proposed by Schaid et al. [16].
Because rare variants are essentially independent, following Pan [17] and Sha et al. [8], we replace $\hat{\Sigma}$ by $\hat{\Sigma}_0 = \mathrm{diag}(\hat{\Sigma})$. Then, the score test statistic $T_{score}$ becomes
$$ T_0(w) = \frac{w^T u u^T w}{w^T \hat{\Sigma}_0 w \sum_{i=1}^n y_i^T \Phi_i y_i}, $$
where $u = \sum_{i=1}^n \sum_{j=1}^{n_i} (g_{ij} - \bar{g})(y_{ij} - \bar{y})$. As a function of $w$, $T_0(w)$ reaches its maximum when $w = \hat{\Sigma}_0^{-1} u$, and the maximum value of $T_0(w)$ is $u^T \hat{\Sigma}_0^{-1} u / \sum_{i=1}^n y_i^T \Phi_i y_i$. We define the statistic of the optimally weighted score test (OW-score) as
$$ T_{OW\text{-}score} = \frac{u^T \hat{\Sigma}_0^{-1} u}{\sum_{i=1}^n y_i^T \Phi_i y_i} = \frac{\sum_{m=1}^M u_m^2 / \sigma_{mm}}{\sum_{i=1}^n y_i^T \Phi_i y_i}, $$
where $\sigma_{mm}$ is the $(m, m)$th element of $\hat{\Sigma}_0$ and $u_m$ is the $m$th element of $u$. Under the null hypothesis, $T_{OW\text{-}score}$ is asymptotically distributed as a mixture of independent $\chi^2$ statistics [18, 19]. Alternatively, the distribution of $T_{OW\text{-}score}$ can be approximated by a Satterthwaite approximation for the distribution of quadratic forms [7, 20, 21] or a scaled $\chi^2$ distribution [16]. We propose to approximate the distribution of $T_{OW\text{-}score}$ by a scaled $\chi^2$ distribution, with the scale $\delta$ and degrees of freedom $d$ estimated from the expectation and variance of $T_{OW\text{-}score}$. Note that $u \sim N(0, \Sigma \sum_{i=1}^n y_i^T \Phi_i y_i)$. We have $\hat{\mu}_T = \hat{E}(T_{OW\text{-}score}) = \mathrm{trace}(\hat{\Sigma} \hat{\Sigma}_0^{-1})$ and $\hat{\sigma}_T^2 = \widehat{\mathrm{var}}(T_{OW\text{-}score}) = 2\,\mathrm{trace}(\hat{\Sigma} \hat{\Sigma}_0^{-1} \hat{\Sigma} \hat{\Sigma}_0^{-1})$. Then, the scale $\delta$ is estimated as $\hat{\delta} = \hat{\sigma}_T^2 / (2 \hat{\mu}_T)$ and the degrees of freedom $d$ as $\hat{d} = 2 \hat{\mu}_T^2 / \hat{\sigma}_T^2$.
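To make the construction concrete, the following is a minimal numerical sketch of the OW-score statistic and its scaled $\chi^2$ p value as defined above. The function name, the data layout (per-pedigree genotype matrices, covariate-adjusted phenotypes and kinship matrices), and the filtering of monomorphic variants are our assumptions for illustration, not part of the original specification.

```python
import numpy as np
from scipy.stats import chi2

def ow_score_test(genos, phenos, kinships):
    """Sketch of the OW-score test following the equations above.

    genos    : list of (n_i, M) genotype matrices, one per pedigree
    phenos   : list of length-n_i phenotype vectors (covariate-adjusted)
    kinships : list of (n_i, n_i) matrices Phi_i (twice the kinship matrix)
    """
    G = np.vstack(genos)
    y = np.concatenate([np.asarray(p, dtype=float) for p in phenos])

    # Drop monomorphic variants so Sigma_0 is invertible (our assumption).
    keep = G.std(axis=0) > 0
    G = G[:, keep]

    Gc = G - G.mean(axis=0)            # centred genotypes
    yc = y - y.mean()                  # centred phenotypes

    u = Gc.T @ yc                      # score vector u
    Sigma = Gc.T @ Gc / G.shape[0]     # Sigma-hat
    s0 = np.diag(Sigma)                # diagonal of Sigma_0

    # Phenotypic quadratic form: sum_i y_i^T Phi_i y_i
    q = sum(np.asarray(yi) @ Ki @ np.asarray(yi)
            for yi, Ki in zip(phenos, kinships))

    T = np.sum(u**2 / s0) / q          # T_OW-score

    # Scaled chi-square approximation: A = Sigma * Sigma_0^{-1}
    A = Sigma / s0                     # scales column m by 1/sigma_mm
    mu_T = np.trace(A)
    var_T = 2.0 * np.trace(A @ A)
    delta = var_T / (2.0 * mu_T)
    d = 2.0 * mu_T**2 / var_T
    p_value = chi2.sf(T / delta, df=d)
    return T, p_value
```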
We compare the performance of our OW-score test with (a) the WS-score, the score test given by equation (1) with the weights given by Madsen and Browning [3], and (b) famSKAT, the family-based sequence kernel association test given by Chen et al. [11].
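For the WS-score comparison, the Madsen–Browning-style weights referenced above can be sketched as follows. The smoothed allele-frequency estimate below is one common form of their weighting scheme adapted to quantitative traits; the exact estimator used in this comparison is an assumption here.

```python
import numpy as np

def madsen_browning_weights(G):
    # G: (n, M) genotype matrix coded 0/1/2.
    # Rarer variants receive larger weights, 1 / sqrt(q_m (1 - q_m)),
    # with q_m a smoothed estimate of the minor-allele frequency.
    n = G.shape[0]
    q = (G.sum(axis=0) + 1.0) / (2.0 * n + 2.0)
    return 1.0 / np.sqrt(q * (1.0 - q))
```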
We applied our proposed method as well as the WS-score test and famSKAT to the simulated data from GAW19. All tests were conducted on 849 individuals, from 20 pedigrees, that had no missing genotypes or phenotypes. Sex, age, blood pressure medication status, and smoking status were considered as covariates in this study. We were aware of the underlying simulation model.
There are two related phenotypes, systolic blood pressure (SBP) and diastolic blood pressure (DBP), at three time points. We considered the average of DBP at the three time points as the phenotype of interest in our analysis. We compared the power of the three tests (OW-score, WS-score, and famSKAT) to detect association between the phenotype of interest and each of the top 14 genes that influence it. We used the variants between the first functional single nucleotide polymorphism (SNP) and the last functional SNP in each gene in our analysis. We did not consider CABP2 because the power of the three tests is essentially the same there, this gene containing only one variant. To adjust for the effects of the covariates on the phenotype of interest, we first applied a linear model regressing the phenotype of interest on the covariates: sex, the average of age, the average of blood pressure medication status, and the average of smoking status. The power comparisons based on the 200 replicated data sets are given in Table 1, with the significance level assessed at 5%. This table shows that the OW-score test identified three genes with power greater than 40%, famSKAT identified one gene with power greater than 40%, and the WS-score test could not identify any genes with power greater than 40%. OW-score and famSKAT have different power mainly because they use different weights. Let $w_m$ and $W_m$ denote the weights, rescaled to the interval (0, 1), of the OW-score test and famSKAT for the $m$th variant. Then $w_m > W_m$ when the minor allele frequency (MAF) is less than 0.01; $w_m \le W_m$ when the MAF is in the interval (0.01, 0.05); and $w_m > W_m$ when the MAF is greater than 0.05. The OW-score test has much higher power than famSKAT for RAI1 and REPIN1 because none of the MAFs of the causal variants in RAI1 and REPIN1 are in the interval (0.01, 0.05).
Table 1. Power comparisons of the three tests ($T_{OW\text{-}score}$, $T_{WS\text{-}score}$, famSKAT) using the average of DBP at three time points as the phenotype (significance level assessed at 5%); rows include the genes LEPR, MTRR, PTTG1IP, REPIN1, SLC35E2 and ZFP37, with powers greater than 40% shown in bold.
We also evaluated the type I error rate of the proposed OW-score test. To evaluate the type I error, we used 1000 blocks (100 variants in each block) from chromosome 5 that are far from the causal variants. In each block, we applied the OW-score test to each of the 200 replicates to test the association between genotypes and the phenotype of interest, obtaining one p value for each replicate and each block. The type I error rates of the proposed test were 0.04887, 0.00921, and 0.00131 at significance levels of 0.05, 0.01, and 0.001, respectively. We also considered the average of SBP at the three time points as the phenotype of interest, which yielded similar results.
Next-generation sequencing technologies make directly testing rare-variant associations possible. However, the development of powerful statistical methods for rare-variant association studies is still underway. In this article, we proposed a novel statistical method for rare-variant association studies based on general pedigrees for quantitative traits. The application to the GAW19 data set showed that the proposed method has a correct type I error rate and is more powerful than the other two methods against which it was compared.
We described our method for quantitative traits. For qualitative traits, we can derive a score test similar to that given by equation (1); however, the performance of the proposed method for qualitative traits requires further investigation. Like many statistical methods for rare-variant association studies, the proposed method can consider phenotype measurements at only one time point. Statistical methods based on sequence data have been developed for unrelated individuals with phenotype measurements at multiple time points [22]. From a statistical standpoint, modeling longitudinal phenotypes is more informative than using phenotypes at a single time point and thus can increase the power of an association test [22, 23]. Our future work includes extension of the proposed method to longitudinal phenotypes.
In this article, we developed a novel statistical method for rare-variant association studies in general pedigrees (randomly ascertained or ascertained pedigrees). Application to the GAW19 data set showed that the newly proposed method is more powerful than the other two methods in most of the cases. Our new method uses a retrospective view, which allows us to account for complex and undefined ascertainment of families. The GAW19 data are based on randomly ascertained pedigrees, and the results of applying our method to them showed that the proposed method has a correct type I error under random ascertainment. When random ascertainment is violated and ascertainment is based on trait values, the proposed method is still expected to have a correct type I error. If pedigrees are ascertained because of extreme trait values, the proposed method is expected to have higher power than methods based on randomly ascertained pedigrees.
The GAW19 whole genome sequence data were provided by the T2D-GENES (Type 2 Diabetes Genetic Exploration by Next-generation sequencing in Ethnic Samples) Consortium, which is supported by National Institutes of Health (NIH) grants U01 DK085524, U01 DK085584, U01 DK085501, U01 DK085526, and U01 DK085545. The other genetic and phenotypic data for GAW19 were provided by the San Antonio Family Heart Study and San Antonio Family Diabetes/Gallbladder Study, which are supported by NIH grants P01 HL045222, R01 DK047482, and R01 DK053889. The GAW is supported by NIH grant R01 GM031575.
This article has been published as part of BMC Proceedings Volume 10 Supplement 7, 2016: Genetic Analysis Workshop 19: Sequence, Blood Pressure and Expression Data. Summary articles. The full contents of the supplement are available online at http://bmcproc.biomedcentral.com/articles/supplements/volume-10-supplement-7. Publication of the proceedings of Genetic Analysis Workshop 19 was supported by National Institutes of Health grant R01 GM031575.
QS designed the overall study, HZ and ZW conducted statistical analyses, and HZ, XW, and QS drafted the manuscript. All authors read and approved the final manuscript.
Department of Mathematical Sciences, Michigan Technological University, 1400 Townsend Drive, Houghton, MI 49931, USA
Department of Mathematics, University of North Texas, 1155 Union Circle #311430, Denton, TX 76203-5017, USA
1. Li B, Leal SM. Methods for detecting associations with rare variants for common diseases: application to analysis of sequence data. Am J Hum Genet. 2008;83(3):311–21.
2. Morgenthaler S, Thilly WG. A strategy to discover genes that carry multiallelic or mono-allelic risk for common diseases: a cohort allelic sums test (CAST). Mutat Res. 2007;615(1-2):28–56.
3. Madsen BE, Browning SR. A groupwise association test for rare mutations using a weighted sum statistic. PLoS Genet. 2009;5(2):e1000384.
4. Price AL, Kryukov GV, de Bakker PI, Purcell SM, Staples J, Wei LJ, Sunyaev SR. Pooled association tests for rare variants in exon-resequencing studies. Am J Hum Genet. 2010;86(6):832–8.
5. Han F, Pan W. A data-adaptive sum test for disease association with multiple common or rare variants. Hum Hered. 2010;70(1):42–54.
6. Hoffmann TJ, Marini NJ, Witte JS. Comprehensive approach to analyzing rare genetic variants. PLoS One. 2010;5(11):e13584.
7. Wu MC, Lee S, Cai T, Li Y, Boehnke M, Lin X. Rare variant association testing for sequencing data with the sequence kernel association test (SKAT). Am J Hum Genet. 2011;89(1):82–93.
8. Sha Q, Wang X, Wang X, Zhang S. Detecting association of rare and common variants by testing an optimally weighted combination of variants. Genet Epidemiol. 2012;36(6):561–71.
9. Shi G, Rao D. Optimum designs for next-generation sequencing to discover rare variants for common complex disease. Genet Epidemiol. 2011;35(6):572–9.
10. Liu D, Leal S. A unified framework for detecting rare variant quantitative trait associations in pedigree and unrelated individuals via sequence data. Hum Hered. 2012;73(2):105–22.
11. Chen H, Meigs JB, Dupuis J. Sequence kernel association test for quantitative traits in family samples. Genet Epidemiol. 2013;37(2):196–204.
12. Svishcheva GR, Belonogova NM, Axenovich TI. FFBSKAT: fast family-based sequence kernel association test. PLoS One. 2014;9(6):e99407.
13. Zhu X, Feng T, Li Y, Lu Q, Elston RC. Detecting rare variants for complex traits using family and unrelated data. Genet Epidemiol. 2010;34(2):171–87.
14. Feng T, Elston R, Zhu X. Detecting rare and common variants for complex traits: sibpair and odds ratio weighted sum statistics (SPWSS, ORWSS). Genet Epidemiol. 2011;35(5):398–409.
15. Zhu Y, Xiong M. Family-based association studies for next-generation sequencing. Am J Hum Genet. 2012;90(6):1028–45.
16. Schaid DJ, McDonnell SK, Sinnwell JP, Thibodeau SN. Multiple genetic variant association testing by collapsing and kernel methods with pedigree or population structured data. Genet Epidemiol. 2013;37(5):409–18.
17. Pan W. Asymptotic tests of association with multiple SNPs in linkage disequilibrium. Genet Epidemiol. 2009;33(6):497–507.
18. Liu D, Lin X, Ghosh D. Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models. Biometrics. 2007;63(4):1079–88.
19. Liu H, Tang Y, Zhang H. A new chi-square approximation to the distribution of non-negative definite quadratic forms in non-central normal variables. Comput Stat Data Anal. 2009;53:853–6.
20. Kwee LC, Liu D, Lin X, Ghosh D, Epstein MP. A powerful and flexible multilocus association test for quantitative traits. Am J Hum Genet. 2008;82(2):386–97.
21. Liu D, Ghosh D, Lin X. Estimation and testing for the effect of a genetic pathway on a disease outcome using logistic kernel machine regression via logistic mixed models. BMC Bioinformatics. 2008;9:292.
22. Wang S, Fang S, Sha Q, Zhang S. Detecting association of rare and common variants by testing an optimally weighted combination of variants with longitudinal data. BMC Proc. 2014;8 Suppl 1:S91.
23. Furlotte N, Eskin E, Eyheramendy S. Genome-wide association mapping with longitudinal data. Genet Epidemiol. 2012;36(5):463–71.
June 2018, 11(3): 379-389. doi: 10.3934/dcdss.2018021
On a new fractional Sobolev space and applications to nonlocal variational problems with variable exponent
Anouar Bahrouni 1 and Vicenţiu D. Rădulescu 2,3
Mathematics Department, University of Monastir, Faculty of Sciences, 5019 Monastir, Tunisia
Department of Mathematics, Faculty of Sciences, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
Department of Mathematics, University of Craiova, Street A.I. Cuza No. 13, 200585 Craiova, Romania
* Corresponding author: Vicenţiu D. Rădulescu
Received May 2017 Revised August 2017 Published October 2017
Fund Project: The second author is supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS-UEFISCDI, project number PN-III-P4-ID-PCE-2016-0130.
The content of this paper lies at the interplay between the function spaces $L^{p(x)}$ and $W^{k, p(x)}$ with variable exponents and the fractional Sobolev spaces $W^{s, p}$. We are concerned with some qualitative properties of the fractional Sobolev space $W^{s, q(x), p(x, y)}$, where $q$ and $p$ are variable exponents and $s∈ (0, 1)$. We also study a related nonlocal operator, which is a fractional version of the nonhomogeneous $p(x)$-Laplace operator. The abstract results established in this paper are applied in the variational analysis of a class of nonlocal fractional problems with several variable exponents.
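For orientation, a standard way of writing the Gagliardo-type seminorm behind $W^{s, q(x), p(x, y)}$ in the variable-exponent literature (e.g. Kaufmann, Rossi and Vidal) is sketched below; the exact normalization used in this paper may differ, so this should be read as indicative rather than as the authors' precise definition:
$$ [u]_{s, p(\cdot,\cdot)} = \inf\Big\{ \lambda > 0 : \int_\Omega \int_\Omega \frac{|u(x) - u(y)|^{p(x,y)}}{\lambda^{p(x,y)} |x - y|^{N + s\,p(x,y)}} \, dx \, dy < 1 \Big\}, \qquad W^{s, q(x), p(x,y)}(\Omega) = \big\{ u \in L^{q(x)}(\Omega) : [u]_{s, p(\cdot,\cdot)} < \infty \big\}. $$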
Keywords: Fractional $p(x)$-Laplace operator, density, integral functional, Gagliardo seminorm, variational method.
Mathematics Subject Classification: Primary: 35J60; Secondary: 35J91, 35S30, 46E35, 58E30.
Citation: Anouar Bahrouni, Vicenţiu D. Rădulescu. On a new fractional Sobolev space and applications to nonlocal variational problems with variable exponent. Discrete & Continuous Dynamical Systems - S, 2018, 11 (3) : 379-389. doi: 10.3934/dcdss.2018021
Pilyugin, Nicolai Nicolaevich
Doctor of physico-mathematical sciences (1990)
Speciality: 01.02.05 (Mechanics of fluids, gases and plasmas)
Birth date: 16.04.1943
Keywords: hypersonic aerodynamics; nonequilibrium flows; inverse problems; variational problems.
A variational problem on the shapes of planar, axisymmetric, and spatial bodies that minimize the convective and radiative heating of their surfaces in hypersonic flow was stated and solved. Analytic and numerical solutions were obtained for diverse variational problems on optimal body shapes and flight trajectories in the atmospheres of the Earth and other planets. A problem of finding the rate constants of nonequilibrium reactions from measurements in the hypersonic flow near a flying body was stated for the first time. Numerous solutions of inverse problems were obtained from measurements in ballistic experiments. New theoretical expressions were obtained for the coefficients of triple recombination with electrons, which essentially complement the well-known formulae of Pitaevsky and Langevin.
Graduated from the Faculty of Theoretical Nuclear Physics of the Moscow Engineering Physics Institute in 1966 (Department of Theoretical Physics). Ph.D. thesis defended in 1973; D.Sci. thesis defended in 1992. My list of works contains more than 195 titles.
Twenty young scientists have defended their Ph.D. theses under my supervision.
In 1986 I was awarded the prize of the Ministry of Higher Education of the USSR for a series of papers on gasdynamic flows in turbulent wakes behind bodies flying at hypersonic velocities (jointly with S. G. Tikhomirov and N. N. Baulin). In 1993 I was awarded the N. E. Zhukovsky prize (with a silver medal) for a series of papers on nonequilibrium flow around bodies (jointly with V. S. Khlebnikov and R. F. Talipov). In 1993 I was awarded the prize of the publishing company "Nauka" for the best publication of the year, for a series of papers on inverse problems in nonequilibrium gasdynamics. In 1998 I was awarded the P. L. Kapitsa medal by the Presidium of the Russian Academy of Natural Sciences for fundamental contributions to the research of nonequilibrium flows. In 1994 I was elected a member of the New York Academy of Sciences.
In 2001, at the VIII All-Russian Congress on Mechanics, I was elected a member of the National Committee on Theoretical and Applied Mechanics.
Main publications:
Arguchintseva M. A., Pilyugin N. N. Extremal Problems of Radiation Gas Dynamics. Moscow: Moscow University Press, 1997, 196 pp.
Pilyugin A. N., Pilyugin N. N. On the reconstruction of rate constants of nonequilibrium reactions involving electrons from ballistic experiments (review) // Fizika Goreniya i Vzryva, 1997, 33(2), 39–51.
Zhuravleva G. S., Pilyugin N. N. The effect of gas injection from the surface of a sphere on friction and heat transfer in nonuniform turbulent hypersonic flow // Teplofizika Vysokikh Temperatur, 1999, 37(3), 427–434.
Pilyugin N. N., Chukin S. S. On electron recombination in the wake behind a body made of an aluminum-magnesium alloy flying in an air-xenon mixture at hypersonic speed // Fizika Goreniya i Vzryva, 2001, 37(3), 45–51.
Pilyugin N. N. Inverse problems of determining the rate constants of nonequilibrium processes from ballistic experiments // Proceedings of the XII Baikal International Conference. Irkutsk: ISEM SO RAN Publishers, 2001, vol. 4, 159–164.
http://www.mathnet.ru/eng/person17572
Publications in Math-Net.Ru
1. N. N. Pilyugin, "A simulation of the shape of a crater in an organic-glass target under high-velocity impact", TVT, 42:3 (2004), 477–483 ; High Temperature, 42:3 (2004), 481–488
2. N. N. Pilyugin, I. K. Ermolaev, Yu. A. Vinogradov, N. N. Baulin, "Experimental Investigation of the Penetration of Solids into a Target of Organic Glass under Impact at Velocities from $0.7$ to $2.1$ km/s", TVT, 40:5 (2002), 732–738 ; High Temperature, 40:5 (2002), 677–683
3. M. A. Arguchintseva, N. N. Pilyugin, "Optimization of the Shape of a Three-Dimensional Body for Radiation Heat Flux", TVT, 40:4 (2002), 603–616 ; High Temperature, 40:4 (2002), 557–570
4. N. N. Pilyugin, V. S. Khlebnikov, "Aerothermodynamic characteristics of an associated body under conditions of supersonic flow", TVT, 39:4 (2001), 620–628 ; High Temperature, 39:4 (2001), 578–585
5. N. N. Pilyugin, S. S. Chukin, "Investigation of electron recombination in the wake behind a body of aluminum/magnesium alloy flying in the air at a hypersonic speed", TVT, 38:4 (2000), 661–666 ; High Temperature, 38:4 (2000), 636–642
6. I. K. Ermolaev, Yu. A. Vinogradov, N. N. Pilyugin, "Destruction of organic glass under high-velocity impact", TVT, 38:2 (2000), 298–303 ; High Temperature, 38:2 (2000), 278–283
7. N. N. Pilyugin, S. S. Chukin, "The effect of magnesium impurities on nonequilibrium processes in the wake behind a model flying at hypersonic speed", TVT, 38:1 (2000), 74–80 ; High Temperature, 38:1 (2000), 69–76
Polymath10, Post 2: Homological Approach
Posted on November 11, 2015 by Gil Kalai
We launched polymath10 a week ago and it is time for the second post. In this post I will remind the readers what the Erdos-Rado Conjecture and the Erdos-Rado theorem are, briefly mention some points made in the previous post and in the comments, make some remarks of administrative nature, and describe in detail a homological plan for attack, some of whose ingredients were mentioned in the first post.
Of course, there are various other avenues that can be explored: In a series of comments (e.g. this thread and that thread and this) Tim Gowers proposed a line of attack related to understanding quasirandom behavior of families of sets in terms of their pairwise intersections. (Update: Tim developed his ideas in further comments. A theme which is common to his approach as well as to the homological approach is to see if we can "improve" certain properties of the family after moving to an exponentially smaller subfamily. Second update: this post was written after 70 or so comments for post 1. There are many further interesting comments. ) Karim Adiprasito described a different topological combinatorics approach. Another clear direction is to try to extend the ideas of Spencer and Kostochka which led to the best known bounds today. Raising more ideas for attacking the conjecture is most welcome. For example, in Erdos Ko Rado theory, besides direct combinatorial arguments (mainly those based on "shifting,") spectral methods are also quite important. Of course, the sunflower conjecture may be false as well and ideas on how to construct large families without sunflowers are also most welcome.
Terry Tao kindly set up a Wiki page for the project and proposed to conduct computer experimentation for small values of $k$ and $n$. Of course, computer experimentation will be most welcome! Some of the suggestions described below can also be tested experimentally for small values.
Let me also mention some surprising connections between the sunflower conjecture and various issues arising in matrix multiplication. As pointed out by Shachar Lovett (in this comment and the following one), a counterexample to a certain structural special case of the sunflower conjecture would imply an almost quadratic algorithm for matrix multiplication!
Dömötör and I hyperoptimistically conjectured that the Erdos-Rado example is optimal for balanced families. But Hao gave a very simple counterexample.
The status of our project at this stage is described very nicely by Tim Gowers who wrote:
At the time of writing, Gil's post has attracted 60 comments, but it is still at what one might call a warming-up stage, so if you are interested in the problem and understand what I have written above, it should still be easy to catch up with the discussion. I strongly recommend contributing — even small remarks can be very helpful for other people, sparking off ideas that they might not have had otherwise. And there's nothing quite like thinking about a problem, writing regular bulletins of the little ideas you've had, and getting feedback on them from other Polymath participants. This problem has the same kind of notoriously hard feel about it that the Erdős discrepancy problem had — it would be wonderful if a Polymath collaboration could contribute to its finally getting solved.
Update to an earlier post. Karim Adiprasito, June Huh, and Eric Katz have now posted their paper "Hodge theory for combinatorial geometries" which contains, among other things, a proof of the Heron-Rota-Welsh conjecture on matroids.
Here is a reminder of the sunflower theorem and conjecture:
The Erdos-Rado sunflower Theorem
A sunflower (a.k.a. Delta-system) of size $r$ is a family of sets $A_1, A_2, \ldots, A_r$ such that every element that belongs to more than one of the sets belongs to all of them. We call the set of elements common to all the sets the head of the sunflower (or the kernel of the sunflower), and the elements that belong to just one among the sets, the petals.
A basic and simple result of Erdos and Rado asserts that
Erdos-Rado sunflower theorem: There is a function $f(k,r)$ so that every family of $k$-sets with more than $f(k,r)$ members contains a sunflower of size $r$.
We denote by $f(k,r)$ the smallest integer that suffices for the assertion of the theorem to be true; the Erdos-Rado proof gives $f(k,r) \le k!\,(r-1)^k$.
The Erdos-Rado Sunflower Conjecture
The Erdos-Rado sunflower conjecture: $f(k,r) \le C_r^k$.
Here, $C_r$ is a constant depending on $r$. It may also be the case that we can take $C_r = Cr$ for some absolute constant $C$. The conjecture is already most interesting for $r=3$. I recommend reading Kostochka's survey paper and also, as we go, it will be useful to learn Spencer's argument and Kostochka's argument which made remarkable improvements over earlier upper bounds.
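For readers who want to experiment alongside the discussion, here is a minimal brute-force sketch (in Python; the function names are mine, not from the post) that tests whether a family contains a sunflower of size $r$, directly from the definition above. It is only practical for the small families discussed in the comments below.

```python
from itertools import combinations

def is_sunflower(sets):
    """True iff the given (distinct) sets form a sunflower: every element
    lying in more than one of them lies in all of them, i.e. all pairwise
    intersections equal the common intersection (the head)."""
    head = frozenset.intersection(*sets)
    return all(a & b == head for a, b in combinations(sets, 2))

def contains_sunflower(family, r):
    """Brute force over all r-subfamilies; fine for small examples,
    hopeless for large families."""
    family = [frozenset(s) for s in family]
    return any(is_sunflower(sub) for sub in combinations(family, r))

# Three 2-sets with common head {1} form a sunflower of size 3:
print(contains_sunflower([{1, 2}, {1, 3}, {1, 4}], 3))  # True
```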
The main purpose of this post is to provide
A Homological attack on the sunflower Conjecture
Part 1: Combinatorial Extensions and Variations
A) The question as an Erdos-Ko-Rado type question
Let $f(k,r,m;n)$ be the maximum size of a family of $k$-subsets of $[n]=\{1,2,\ldots,n\}$ containing no sunflower of size $r$ with head of size at most $m$. (Note: it should be $m<k$.)
Basic Question: Understand the function $f(k,r,m;n)$. Is it true that $f(k,r,m;n) \le C^k\, n^{k-m-1}$, where $C$ is a constant depending on $r$, perhaps even linear in $r$?
A family of $k$-sets satisfies property P(k,r,m) if it contains no sunflower of size $r$ with head of size at most $m$.
B) Stars and links: Given a family $F$ of sets and a set $S$, the star of $S$ is the subfamily of those sets in $F$ containing $S$, and the link of $S$ is obtained from the star of $S$ by deleting the elements of $S$ from every set in the star. Another way to say that $F$ has property P(k,r,m) is that the link of every set of size at most $m$ contains no $r$ pairwise disjoint sets.
C) The balanced case
A family of $k$-sets is balanced (or $k$-colored, or multipartite) if it is possible to divide the ground set into $k$ parts so that every set in the family contains one element from every part.
Let $f^{\mathrm{bal}}(k,r,m;n)$ be the maximum size of a balanced family of $k$-subsets of $[n]=\{1,2,\ldots,n\}$ containing no sunflower of size $r$ with head of size at most $m$. By randomly dividing the ground set into $k$ colors we obtain that $f(k,r,m;n) \le \frac{k^k}{k!}\, f^{\mathrm{bal}}(k,r,m;n)$.
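To spell out the averaging argument behind the last inequality (a routine step, recorded here for completeness): a uniformly random partition of the ground set into $k$ color classes keeps a fixed $k$-set balanced with probability $k!/k^k$, and a balanced subfamily of a sunflower-free family is still sunflower-free, so

```latex
\mathbb{E}\bigl[\,|F_{\mathrm{balanced}}|\,\bigr] \;=\; \frac{k!}{k^k}\,|F|
\quad\Longrightarrow\quad
f^{\mathrm{bal}}(k,r,m;n) \;\ge\; \frac{k!}{k^k}\, f(k,r,m;n)
\quad\Longleftrightarrow\quad
f(k,r,m;n) \;\le\; \frac{k^k}{k!}\, f^{\mathrm{bal}}(k,r,m;n).
```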
D) What we aim for. Below we describe two variations of a homological attack on the sunflower conjecture. If successful, they will lead to the following bounds.
The first variation, based on conjectured homological properties of balanced families, would yield $f(k,r) \le \frac{k^k}{k!}\,(r-1)^k \approx (e(r-1))^k$.
The alternative version would give $f(k,r) \le \binom{rk}{k} \approx (er)^k$.
Part 2: Collections of sets as geometric objects, homology and iterated homology.
E) Simplicial complexes and homology
Starting with a family $F$ of $k$-sets we will consider the collection $K=K(F)$ of sets obtained by adding all subsets of sets in $F$. This is a simplicial complex, and we can regard it as a geometric object if we replace every set of size $i+1$ by a simplex of dimension $i$. (We call the sets in $K$ of cardinality $i+1$ by the name $i$-faces.)
The definition of homology groups only depends on the combinatorial data. For simplicity we assume that all sets in $F$ (and hence in the associated simplicial complex) are subsets of {1,2,…,n}. We choose a field $A$ (we can agree that the field will be the field of real numbers). Next we define, for $i \ge 0$, the vector space $C_i(K)$ of $i$-dimensional chains as the vector space generated by the $i$-faces of $K$. We also define a boundary map $\partial_i: C_i(K) \to C_{i-1}(K)$ for every $i$. The kernel of $\partial_i$ is the space of $i$-cycles, denoted by $Z_i(K)$; the image of $\partial_{i+1}$ is the space of $i$-boundaries, denoted by $B_i(K)$. The crucial property is that applying the boundary twice gives you zero, and this allows us to define homology groups $H_i(K)=Z_i(K)/B_i(K)$. The Betti numbers are defined as $b_i(K)=\dim H_i(K)$. We will give the definition of the boundary operator further below.
F) Acyclic families and intersecting families
A family $F$ of $k$-sets is acyclic if it contains no $(k-1)$-cycle, or equivalently if $Z_{k-1}(K(F))=0$. (For coefficients in $\mathbb{Z}/2\mathbb{Z}$, a $(k-1)$-cycle is a family of $k$-sets such that every set of size $k-1$ is included in an even number of $k$-sets in the family.)
Proposition: An acyclic family of $k$-subsets of [n] contains at most $\binom{n-1}{k-1}$ sets.
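As a concrete illustration (a numerical sketch; it assumes floating-point rank computations are reliable on small instances), acyclicity over the reals amounts to the top boundary matrix — rows indexed by $(k-1)$-subsets, columns by the members of the family — having full column rank:

```python
import numpy as np
from itertools import combinations

def boundary_matrix(family, n, k):
    """Top boundary matrix of a family of k-subsets of {1,...,n}: rows are
    (k-1)-subsets, columns the members of the family; deleting the j-th
    smallest element contributes the sign (-1)**j."""
    rows = {S: i for i, S in enumerate(combinations(range(1, n + 1), k - 1))}
    cols = sorted(tuple(sorted(S)) for S in family)
    M = np.zeros((len(rows), len(cols)))
    for c, T in enumerate(cols):
        for j in range(k):
            M[rows[T[:j] + T[j + 1:]], c] = (-1) ** j
    return M

def is_acyclic(family, n, k):
    """A family supports no non-zero (k-1)-cycle iff its boundary matrix
    has full column rank (a sketch, not exact arithmetic)."""
    return np.linalg.matrix_rank(boundary_matrix(family, n, k)) == len(family)

# The three edges of a triangle form a cycle, so they are not acyclic;
# any two of them are.
print(is_acyclic([(1, 2), (1, 3), (2, 3)], n=3, k=2))  # False
print(is_acyclic([(1, 2), (1, 3)], n=3, k=2))          # True
```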
In the first post we asked: Are there some connections between the property "intersecting" and the property "acyclic?"
Unfortunately, but not surprisingly, intersecting families are not always acyclic, and acyclic families are not always intersecting. (The condition $n \ge 2k$ from the EKR theorem also disappeared.) As we mentioned in the previous post, intersecting balanced families are acyclic! And as we will see, for balanced families Erdos-Ko-Rado properties translate nicely into homological properties.
G) Pushing the analogy further
We made an analogy between "intersecting" and "acyclic". Building on this analogy
1) What could be the "homological" property analogous to "every two sets have at least m elements in common"?
2) What could be the "homological" property analogous to "not having $r$ pairwise disjoint sets"?
I will propose answers below the fold. What is your answer?
H) Weighted homology
For a simplicial complex $K$ on the vertex set [n] the boundary operator is defined by $\partial S = \sum_{j=0}^{i}(-1)^j\,(S\setminus\{s_j\})$, where $S=\{s_0,s_1,\ldots,s_i\}$, $s_0<s_1<\cdots<s_i$, is an $i$-face of $K$.
Given a vector $w=(w_1,w_2,\ldots,w_n)$ of nonzero weights we can define a weighted boundary operator $\partial^w$ by
$\partial^w S = \sum_{j=0}^{i}(-1)^j\, w_{s_j}\,(S\setminus\{s_j\})$, where $S=\{s_0<s_1<\cdots<s_i\}$. It is a simple matter of scaling a matrix to see that still the boundary of the boundary is zero and that (over any field) the homology with respect to this weighted boundary operator does not depend on $w$.
I) Iterated homology
Being acyclic guarantees that $|F| \le \binom{n-1}{k-1}$.
1) What is the global homological property that will give us $|F| \le \binom{n-m}{k-m}$?
2) What is the global homological property that will give us $|F| \le \binom{n}{k}-\binom{n-r+1}{k}$?
Answer for 1: There is no chain supported on $F$ which vanishes when you apply $m$ (generic) weighted boundary operators successively.
Answer for 2: There is no chain supported on $F$ which vanishes when you apply each one out of $r-1$ (generic) weighted boundary operators.
When $m=1$ (and $r=2$) both answers coincide with the top dimensional homology $H_{k-1}$. For larger values those are kind of homology-like spaces which are called "iterated homology."
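Both answers can be tested numerically on small examples. In the sketch below, "generic" is modelled by independent random weights (which agrees with generic behaviour with probability one); the helper names are mine:

```python
import numpy as np
from itertools import combinations

def faces_of(n, i):
    return list(combinations(range(1, n + 1), i))  # lex order

def weighted_boundary(n, i, w):
    """Matrix of the weighted boundary map from i-subsets to (i-1)-subsets
    of {1,...,n}: deleting the j-th smallest element s gives (-1)**j * w[s]."""
    rows = {S: a for a, S in enumerate(faces_of(n, i - 1))}
    M = np.zeros((len(rows), len(faces_of(n, i))))
    for c, T in enumerate(faces_of(n, i)):
        for j, s in enumerate(T):
            M[rows[T[:j] + T[j + 1:]], c] = (-1) ** j * w[s]
    return M

def inclusion(family, n, k):
    """Embeds chains supported on the family into the full chain space."""
    index = {S: a for a, S in enumerate(faces_of(n, k))}
    P = np.zeros((len(index), len(family)))
    for c, S in enumerate(sorted(tuple(sorted(S)) for S in family)):
        P[index[S], c] = 1.0
    return P

def random_weights(n, rng):
    return {v: rng.uniform(1, 2) for v in range(1, n + 1)}

def vanishes_m_successive(family, n, k, m, rng):
    """Answer-1 test: is some chain on the family killed by m successive
    weighted boundary operators? (assumes m <= k)"""
    A = inclusion(family, n, k)
    for step in range(m):
        A = weighted_boundary(n, k - step, random_weights(n, rng)) @ A
    return np.linalg.matrix_rank(A) < len(family)

def vanishes_each_of_b(family, n, k, b, rng):
    """Answer-2 test: is some chain killed by each of b weighted boundaries?"""
    P = inclusion(family, n, k)
    A = np.vstack([weighted_boundary(n, k, random_weights(n, rng)) @ P
                   for _ in range(b)])
    return np.linalg.matrix_rank(A) < len(family)

rng = np.random.default_rng(0)
# m = 1 recovers ordinary (weighted) acyclicity: a triangle carries a cycle.
print(vanishes_m_successive([(1, 2), (1, 3), (2, 3)], n=3, k=2, m=1, rng=rng))  # True
```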
J) m-fold acyclic families (first try)
Iterated homology gives us global properties of a family of sets that we want to relate to Erdos-Ko-Rado-like properties P(m,r). But in order to make such a connection we first need to study the connection between local and global properties. Here, by "local" I refer to properties described in terms of links. Let's go back to ordinary homology and try to understand the situation when we impose (top-dimension-) acyclicity on the family as well as on links. We will start with the simplest case: what about families which are acyclic, and all links of vertices are acyclic? Let us choose the case k=3, m=2.
Is the number of triangles in such a 2-fold acyclic family at most linear in $n$? Perhaps even at most $n-2$?
Here is an example with more than $n-2$ triangles.
But things can get much worse: Consider a Steiner triple system, namely a collection of triangles where every pair of elements is included in one triangle. It is obviously 2-acyclic and the link of every vertex is a matching and thus an acyclic graph. Still, we have a quadratic number of triangles.
K) m-fold acyclic families revisited.
Theorem 1: Let $F$ be a family of $k$-sets and let $K$ be the associated $(k-1)$-dimensional simplicial complex. Suppose that
a) $H_{k-1}(K)=0$,
b) $H_{k-2}(\mathrm{lk}(v,K))=0$ for every vertex $v$,
c) $H_{k-2}(K)=0$.
Then $|F| \le \binom{n-2}{k-2}$.
So we need a new crucial assumption: it is not enough to require that the top homology for the family and its links vanish; we need also that the $(k-2)$th homology of $K$ will vanish.
[One thing to keep in mind: Condition c) is not preserved when we delete sets from the family. We can hope that we can replace this condition by a weaker condition which is a monotone property relating the $(k-2)$th homology of $K$ with the $(k-3)$th homology of links of vertices. For every vertex $v$ there is a map from $H_{k-2}(K)$ to $H_{k-3}(\mathrm{lk}(v,K))$. Perhaps the property is that this map is surjective for every vertex. Update (Dec 13): I am less certain about what the property should be.]
This theorem extends also to every value of m.
Theorem 2: Let $F$ be a family of $k$-sets and let $K$ be the associated $(k-1)$-dimensional simplicial complex. Suppose that for every link $L$ of $K$ (including $K$ itself) $H_i(L)=0$ whenever $i \ge k-m$; then $|F| \le \binom{n-m}{k-m}$.
L) Collapsibility. An easier version of Theorem 2 is for the case that $K$ is "$d$-collapsible," for $d=k-m$. This is a combinatorial property which is stronger than the homology condition. Using a combinatorial strong form (like collapsibility) of the homological conditions may be relevant to our case as well.
M) A working conjecture that may assist an inductive argument.
The following working conjecture may be useful for some inductive arguments:
Working Conjecture: Suppose that $F$ is a family (or just a balanced family) of $k$-subsets of [n]. Suppose that for every element $v$ the star of $v$ contains a sunflower of size $r$ with head of size $m$. Then $F$ contains a sunflower of size $r$ with head of size smaller than $m$.
Update: False as stated for general families: the Fano plane fails it, as Dömötör pointed out. I don't have a counterexample for balanced families.
Update: False also for balanced families, as Dömötör pointed out. I don't have a counterexample (general, or better, balanced) for the case $m=k-1$. (This case might be useful.)
PART III: Moving to the balanced case
N) Acyclicity and Erdos-Ko-Rado properties for balanced families.
Now we consider the various Erdos-Ko-Rado questions for balanced families and revisit the connection to homology. For example, note that for balanced families, if $n<2k$ then one color class has just one element, hence all sets in the family contain this element.
Proposition: For a balanced family $F$ of $k$-sets, if every set of size $k-1$ (in the associated simplicial complex) is included in at least $r$ sets of size $k$, then $F$ contains $r$ pairwise disjoint sets.
Corollary: If $F$ is balanced and intersecting then it is acyclic. If $F$ is balanced and has a $(k-1)$-cycle that vanishes by applying each one of $r-1$ generic boundary operators, then $F$ contains $r$ pairwise disjoint $k$-sets. [corrected]
O) The expected global consequences for balanced families without Delta systems
Conjecture 1 (special case of r=2): If $F$ is a balanced family of $k$-sets from [n] and every two sets in the family share at least $m$ elements, then there is no chain supported on $F$ which vanishes when you apply successively $m$ different (generic) boundary operators.
Conjecture 1 (general case): If $F$ is a balanced family of $k$-sets from [n] without a sunflower of size $r$ with head of less than $m$ elements (in other words, it has no $r$ pairwise disjoint sets in every link of a set of size less than $m$), then there is no chain supported on $F$ which vanishes when you apply any combination of $m$ boundary operators successively out of $m(r-1)$ different (generic) boundary operators.
This last conjecture would give $f(k,r) \le \frac{k^k}{k!}\,(r-1)^k$.
Part IV: An alternative ending to the program
P) Avoiding coloring
We had some difficulty relating intersecting and acyclic families. One (conjectural) proposal was to move to balanced families. But another proposal is to relax the notion of acyclicity (essentially by adding additional boundary operators).
Theorem 4: (1) Let $F$ be an intersecting family of $k$-subsets from [n]. Then there is no chain supported on $F$ which vanishes when you apply each one out of $k$ (generic) boundary operators.
(2) Let $F$ be a family of $k$-subsets from [n] without $r$ pairwise disjoint sets. Then there is no chain supported on $F$ which vanishes when one applies every boundary operator out of $(r-1)k$ (generic) boundary operators.
(3) If $F$ is a family of $k$-sets from [n] and every two members of $F$ share at least $m$ elements, then there is no chain supported on $F$ which vanishes when you apply any combination of $m$ boundary operators successively out of $m+k-1$ different (generic) boundary operators.
These follow directly from the fact that algebraic shifting preserves the property that $F$ is intersecting, the property that $F$ has no $r$ pairwise disjoint sets, and the property that every pairwise intersection has at least $m$ elements.
Unfortunately, as we mentioned, the property of interest to us is not preserved under shifting. We can hope that the effect of the extra boundary operators in an inductive argument will be as follows:
Q) The Conjecture for the alternative direction:
Conjecture 2: If $F$ is a family of $k$-sets from [n] and it has no $r$ pairwise disjoint sets in every link of a set of size less than $m$, then there is no chain supported on $F$ which vanishes when you apply any combination of $m$ boundary operators successively out of $m+(r-1)k$ different (generic) boundary operators.
This would give $f(k,r) \le \binom{rk}{k}$.
Part V: Shifting
R) Shifting
We mentioned the shifting method in this post. A collection F of k-subsets from [n]={1,2,…,n} is shifted (or compressed) if whenever a set S is in the collection and R is obtained from S by lowering the value of an element, then R is also in the collection.
A shifting process is a method to move from an arbitrary family to a shifted one with the same size. Erdos, Ko and Rado described a combinatorial method for shifting in their paper on the Erdos-Ko-Rado theorem. A very basic fact from Erdos-Ko-Rado theory is
(EKR) P(2,m) and P(r,1) are preserved under shifting.
But not having a sunflower is not preserved under shifting. It is still possible that not having a sunflower for the family is translated to a different statement for the family obtained from it by shifting and indeed we will formulate Conjectures 1 and 2 in these terms.
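For concreteness, here is a minimal sketch of the combinatorial shifting process just described (Python; illustrative only). Property (EKR) says that applying it preserves P(2,m) and P(r,1), while — as just noted — the sunflower-free property may be destroyed along the way.

```python
def shift_ij(family, i, j):
    """One Erdos-Ko-Rado (i,j)-shift, i < j: replace j by i in each set that
    contains j, misses i, and whose shifted version is not already present."""
    fam = set(map(frozenset, family))
    out = set()
    for S in fam:
        if j in S and i not in S:
            T = (S - {j}) | {i}
            out.add(T if T not in fam else S)
        else:
            out.add(S)
    return out

def shift(family, n):
    """Iterate (i,j)-shifts until the family is shifted; the size is
    preserved at every step."""
    fam = set(map(frozenset, family))
    changed = True
    while changed:
        changed = False
        for j in range(2, n + 1):
            for i in range(1, j):
                new = shift_ij(fam, i, j)
                if new != fam:
                    fam, changed = new, True
    return fam

print(sorted(map(sorted, shift([{2, 3}, {3, 4}, {1, 4}], n=4))))
# [[1, 2], [1, 3], [2, 3]] -- a shifted family of the same size
```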
S) Algebraic shifting
Algebraic shifting was mentioned in this post. A good reference for it is this 2002 paper.
Here is a quick definition of algebraic shifting:
(1) Start with an $n$ by $n$ generic matrix $X$.
(2) Next consider the $k$th compound matrix $X^{(k)}$ whose entries correspond to the determinants of all $k$ by $k$ minors of $X$. Order the rows and columns of $X^{(k)}$ lexicographically.
(3) Given a family $F$ of $k$-subsets of [n] consider the submatrix $X_F$ of $X^{(k)}$ whose columns are indexed by the sets in $F$.
(4) Now, choose a basis for the rows of $X_F$ greedily, namely, go over the rows of $X_F$ one by one and add a row to the basis if it does not depend on the earlier rows.
(5) The algebraic shifting $\Delta(F)$ of $F$ is the family of indices of the rows in this basis.
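The five steps can be transcribed almost verbatim into code. The sketch below uses a random real matrix as a stand-in for a generic one and floating-point rank tests, so it is only a toy; for serious use one would compute with exact arithmetic.

```python
import numpy as np
from itertools import combinations

def algebraic_shift(family, n, seed=0):
    """Steps (1)-(5) above, with a random matrix standing in for a generic
    one and floating-point rank tests: a toy transcription only."""
    family = sorted(tuple(sorted(S)) for S in family)
    k = len(family[0])
    X = np.random.default_rng(seed).normal(size=(n, n))   # (1)
    ksets = list(combinations(range(1, n + 1), k))        # lex order
    minor = lambda R, C: np.linalg.det(
        X[np.ix_([r - 1 for r in R], [c - 1 for c in C])])
    M = np.array([[minor(R, C) for C in family] for R in ksets])  # (2)+(3)
    chosen, basis = [], np.zeros((0, len(family)))        # (4): greedy rows
    for idx, row in enumerate(M):
        cand = np.vstack([basis, row])
        if np.linalg.matrix_rank(cand) > basis.shape[0]:
            basis, chosen = cand, chosen + [ksets[idx]]
        if len(chosen) == len(family):
            break
    return chosen                                          # (5): Delta(F)

# The edge set of a triangle on {2,3,4} inside [4]:
print(algebraic_shift([{2, 3}, {2, 4}, {3, 4}], n=4))
# [(1, 2), (1, 3), (2, 3)] -- exactly one set avoids '1',
# reflecting the single cycle in the input family (see T below).
```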
Theorem: Property (EKR) continues to hold for algebraic shifting.
T) Algebraic shifting and homology
Algebraic shifting also preserves the Betti numbers as well as the dimension of various iterated homology groups.
For example, $F$ is acyclic, namely there is no $(k-1)$-chain supported on $F$ that vanishes when the boundary operation is applied, if and only if all sets in $\Delta(F)$ contain '1'.
There is no chain supported on $F$ which vanishes when you apply successively $m$ weighted boundary operators if and only if all sets in $\Delta(F)$ contain $\{1,2,\ldots,m\}$.
There is no chain supported on $F$ which vanishes when you apply each one out of $b$ (generic) weighted boundary operators, if and only if every set in $\Delta(F)$ contains an element from $\{1,2,\ldots,b\}$.
U) Our conjectures in terms of algebraic shifting
Conjectures 1 and 2 are equivalent to:
Conjecture 1′: Algebraic shifting of balanced families with property P(r,m) leads to a shifted family so that every set has at least m elements in the set {1,2,…,(r-1)k}.
Conjecture 2′: If is a family of k-subsets of {1,2,…,n} with property P(r,m) then for the algebraic shifting of , every set has at least m elements in the set {1,2,…,m+(r-1)k}.
V) Balanced shifting
Even when dealing with balanced families we considered a shifting operation that does not preserve the balance structure. Variants of algebraic shifting for the balanced case were defined and may be useful. (EKR) is not known for balanced shifting.
Question: Does balanced shifting have property (EKR)?
W) Methods from commutative algebra
Methods from commutative algebra are quite powerful for demonstrating (often in another language) results about algebraic shifting and iterated homology groups.
This entry was posted in Combinatorics, Polymath10 and tagged algebraic shifting, Iterated homology, sunflower conjecture, Topological combinatorics. Bookmark the permalink.
126 Responses to Polymath10, Post 2: Homological Approach
domotorp says:
I think that in M) for your "working conjecture" a finite projective plane gives a counterexample.
Right, I overlooked that. In any case, items L and M call for being on the alert for a "collapsibility-type" statement which may simplify proofs. But what I suggested is too strong.
Dömötör, I still wonder if the conjecture holds for the balanced case:
Working Conjecture (balanced case): Suppose that F is a balanced family of k-subsets of [n]. Suppose that for every element v, the star of v contains a sunflower of size r with head of size m. Then F contains a sunflower of size r with head of size smaller than m.
This is true for k=2, but fails for k=3. Let the set of vertices be three copies of $\{0,1\}$ (one copy for each color class) and let the edges be the balanced triples whose sum is even. Every $v$ has an $r=2$ sized sunflower, but there are no disjoint edges.
Hmm, right! Very nice example! (BTW do you have a quick way to see that the hypergraph is intersecting?) I guess I care most about the case that m=k-1. So I wonder if we can construct a similar example of a family of k-sets (perhaps even a balanced family) so that every star of a vertex contains a sunflower with head of size k-1, but the family itself does not contain a sunflower with head smaller than k-1.
A quick way to see it is that two disjoint triples would together sum up to 3, so both cannot be even.
Even m=k-1 is false for r>2, even for balanced families, so we can construct a balanced family of k-sets so that every star of a vertex contains a sunflower with head of size k-1, but the family itself does not contain a sunflower with head smaller than k-1. Below I sketch this for r=3. Start with a digraph on k vertices where the outdegree of each vertex is 2, and the girth of the underlying multigraph is more than 3. The base set of our hypergraph is $\{0,1,2\}\times[k]$, so each hyperedge is given by a k-tuple with entries from {0,1,2}, and the support of a tuple (its set of non-zero coordinates) has size at most 2. The hypergraph contains the all-zero k-tuple and every k-tuple with one non-zero element. The k-tuples with two non-zeros which are in the hypergraph are as follows. The k-tuple whose i-th element is 1 (resp. 2) and j-th element is 1 or 2 is in the hypergraph if there is a directed edge from the i-th vertex to the j-th vertex with label 1 (resp. 2). This guarantees that all stars have sunflowers with head of size k-1, and the large girth shows (with a simple case analysis) that there are no sunflowers with smaller heads.
Thanks Dömötör, it is useful to know it. I plan to describe further the notions of (b,c)-cycles and relevance to Erdos-Ko-Rado and sunflower statements in the 3rd thread.
It can be useful to think about the simplest cases for the conjectures. One simple case is the assertion that a balanced family of k-subsets from {1,2,…,n} in which every two sets have at least m elements in common contains at most $\binom{n-m}{k-m}$ sets. The other case is to consider balanced families with no sunflower of size three with head of 0 or 1 elements.
The very very simplest case is the following: If $F$ is a balanced family of $k$-subsets from [n] = {1,2,…,n}, and every pair of sets have at least two elements in common, then $|F| \le \binom{n-2}{k-2}$.
Maybe it is known and maybe some example a la Erdos-Ko-Rado (for the non balanced case) shows it false.
If it is correct it may have some simple combinatorial proof. There may be a proof using a combinatorial version of shifting for balanced families. And there may be a homological proof. We know that $H_{k-1}(K)=0$ and for every link of a vertex we have vanishing top homology as well. We need some condition on the $(k-2)$-homology of the entire complex.
The very very very simplest case is for $k=3$.
In the very very very simplest case above, you don't need the condition that F is balanced. It would be interesting to see at which k the need for this condition kicks in. (I try to write out the k = 4 case, but my 4 year old keeps drawing faces on my piece of paper. I will come back to it (hopefully).)
Ok, this is not entirely waterproofed, but I feel that the very very simple conjecture above is true in the unbalanced case, with possibly some exceptions at small n (small definable in terms of k). It seems sensible to distinguish between the case where every two sets A, B from the family have the SAME pair of elements in common (i.e. F is a sunflower and attains the bound from the very very simple conjecture) or the case where there are family A, B, C that do not form a sunflower with head of size 2, in spite of every two of them having at least two elements in common.
If I'm not mistaken, in the latter case we have that when the number of members in the family is at most which, being a polynomial in $n$ of degree $k-3$, will for large enough n be less than the bound from the conjecture, which is a polynomial in n of degree .
(For we can have more than members but I didn't check if and when it beats )
Dear Vincent, when $n$ is large then many of the Erdos-Ko-Rado questions become easier, as observed already by Erdos-Ko-Rado. So the maxima for P(k,2,m) and P(k,r,1) are attained, when n is large, by the families of all k-sets containing a fixed m-element set, and of all k-sets containing one element from a fixed (r-1)-element set, respectively.
'Family A, B, C' should read 'family members A, B, C'. Also, how do I make my Latex look Latexy?
gowers says:
After each opening dollar you write "latex" with no space. It's hard to demonstrate without making it look latexy and thus hiding the demonstration, but if I use a pound sign to stand for the dollar sign, then to make n choose 2 you would for example write £latex \binom n2£.
This isn't to do with the homological approach (which I plan to try to understand, but I have not got there yet). Instead it is a partial answer to a question I asked on the first post. The question was this: if you have a collection of $k$-sets, how many do you need in order to guarantee that either three of them are disjoint or $m$ of them form an intersecting family?
The simple upper bound is around $2km$. This one proves by taking a maximal disjoint subfamily, which contains at most two sets, and using the pigeonhole principle to find at least $m$ sets that all contain the same element from the union of this subfamily. A simple lower bound is obtained by picking two disjoint intersecting families of size $m-1$. Then by the pigeonhole principle you don't have three disjoint sets and you don't have an intersecting family of size $m$. This family has size $2(m-1)$.
The question I asked was whether there is a lower bound that grows with $k$ and not just with $m$. The answer is yes: here's a simple construction that gives a lower bound of around $m\sqrt{k/\log k}$.
Start by picking a graph with about $k$ vertices that has no independent set of size 3 and no clique of size about $\sqrt{k\log k}$. This is known to exist from the best known bounds for $R(3,m)$ — in fact, we can improve it by a log factor. For each vertex $v$ in this graph, let $B_v$ be the set of edges incident to $v$. Finally, add separate points to each $B_v$ to create sets that have size exactly $k$. (The $B_v$ typically have size about $k$, so we don't have to add too many points.)
Note that $uv$ is an edge of the graph if and only if $B_u\cap B_v\neq\emptyset$, and the points we have added do not affect whether the sets intersect, so the Ramsey property of the graph implies that no three of the sets are pairwise disjoint and no $\sqrt{k\log k}$ of them form an intersecting family. Also, there are about $k$ sets in the set system.
Now duplicate each set about $m/\sqrt{k\log k}$ times. Then no $m$ of the sets form an intersecting family, no three are disjoint, and there are about $m\sqrt{k/\log k}$ sets in the set system.
There was a (not very strong) hope that it might be possible to obtain an improved bound by showing that every family with no three pairwise disjoint sets contains either a sunflower or a large intersecting subfamily. The above construction shows that to pass to the intersecting subfamily we would need to divide by at least $\sqrt{k}$ or something like that, and the product of those is $k^{k/2}$, which is not an interesting improvement on $k^k$.
I think that $k^{k/2}$ or even $k^{0.99k}$ would be a spectacular progress. The best we have is $k!\,a_k^{-k}$ where $a_k$ tends to infinity. And this is very ingenious.
Of course comments continuing the older thread are most welcome. The homological suggestions are sort of a last resort. There were some interesting further comments on the group question, in fact I will make one additional comment there but then we can safely move the discussion to here.
Ah — when I wrote that I stupidly didn't check what the best known bound was.
In that case it seems worth thinking about whether the bound of $m\sqrt{k/\log k}$ can be improved further, or matched by an upper bound. In the latter case, one wouldn't immediately get an improvement to $k^{k/2}$ because getting a pairwise intersecting family is not the same as getting a family of sets with a non-empty intersection, so it is far from obvious what the inductive step would be in general.
For that reason it might be interesting to consider the following question: how large an intersecting family of $k$-sets do you need in order to guarantee either three sets that form a sunflower with a head of size 1 or a subfamily of $m$ sets with all pairwise intersections of size at least 2?
If $k$ tends to infinity while $m$ is fixed, then the duplication factor $m/\sqrt{k\log k}$ would tend to zero, and you can't duplicate a set less than once. Unless I misunderstood something in the construction.
The construction is meant to be for fixed $k$ and $m$ tending to infinity. The idea is that one wants to pass from a large set system that contains no three disjoint sets to a slightly smaller one that is intersecting. The construction shows that the intersecting subfamily may have to be smaller by a factor of $\sqrt{k}$. The sizes of the set systems are much bigger than $k$.
Philip Gibbs says:
Are there any best results for small $k$ listed anywhere?
I found 20 sets for $k=3$ (with no sunflower of size 3); is there better known?
The first ten sets are
The other ten sets are the same combinations but with six different numbers
This is sharp, see more on http://mathoverflow.net/questions/163689/what-is-the-best-lower-bound-for-3-sunflowers.
Dear Gil,
I'm sorry but I'm rather confused about the definition of acyclic families. After some staring I understand the situation when working over Z/2Z. We then can naturally identify each family of k-sets with the k-chain that is the formal sum over its members, and it is clear that this chain is a cycle iff it satisfies the condition you describe. Also in that case I can make sense of the statement that a family is acyclic if it does not 'contain' a cycle, because I interpret it as the family not having a subfamily that is a cycle in the above sense. However there are a lot of things that I don't see, notably:
1. What does it mean for a family to contain, or to be, a cycle (or even a chain, really) over the real numbers? Does it mean that no non-zero real linear combination of its members is a cycle?
2. Why do results like the bound $|F| \le \binom{n-1}{k-1}$ not depend on the field A you work over?
3. With the above notion of acylic (over Z/2Z) it is clear that if a family is acyclic, the associated chain complex K as in your post has Z_k(K) = 0 and hence H_k(K) = 0, but how do we know that it is the *only* way to get H_k(K) = 0?
I know this must all be standard material but I would be very happy if you could say something about it (or post a link to some introductory text).
Hi Vincent. Good questions. For every field of coefficients we say that a family of sets of size k is acyclic if it does not support a non-zero cycle w.r.t. the field.
This means that $Z_{k-1}(K(F))=0$, where $K(F)$ is the simplicial complex spanned by $F$. Yes, for the top dimension the only way to have vanishing homology is that the space of cycles vanishes.
The argument for question 2 goes like this, and it is independent of the field of coefficients. Consider the family of all $k$-subsets from $[n]$ and the matrix that represents the boundary operator from $k$-sets to $(k-1)$-sets. Remember that (over every field) the boundary of a $k$-dimensional simplex is a $(k-1)$-cycle. This means that a column that corresponds to a $k$-set $S$ not containing '1' linearly depends on the $k$ columns obtained by adding '1' and deleting an element from $S$. This shows that the rank of the matrix is at most $\binom{n-1}{k-1}$. However, if you consider the rows corresponding to sets not containing '1' and the columns corresponding to sets containing '1', you see a permutation submatrix of size $\binom{n-1}{k-1}$. This gives the result.
Thank you, this is most helpful!
I can also explain with a similar argument why a family of $k$-sets which does not contain a chain that vanishes for each one of $b$ (generic) boundary operators must satisfy $|F| \le \binom{n}{k}-\binom{n-b}{k}$. This time you consider a matrix which, for every $k$-set, describes the $b$ different boundary operators. Now if you take a simplex of dimension $k+b-1$ and apply successively $b$ of the boundary operators, then you get a $(k-1)$-dimensional chain which vanishes when you apply each one of the boundary operators once more. This implies that in this matrix the columns which correspond to sets not containing any elements from {1,2,…,b} depend on those that contain one or more elements from {1,2,…,b}. This gives the inequality.
So in some sense an early case of the conjecture (for the balanced case) is this: you have a balanced collection of k-sets without a sunflower of size 3 and head of size 0 or 1. You want to show that if you take four generic boundary operators, there will not be a chain that will vanish for every two of them. Perhaps the simplest way to go about it is to consider such a (4,2)-cycle and try to find in it a sunflower of the requested kind.
For the second approach we want to argue directly that a general family of k-sets without a sunflower of size 3 with head of size 0 or 1 cannot have a chain which vanishes for all pairs of boundary operators among 2k+2 generic boundary operators. Again the simplest way would be to look at such a (2k+2,2)-cycle and identify in it the required sunflower.
This is a non-comment, but I wanted to say that I am still thinking about the Ramsey-type problem of how many $k$-sets you need before three must be disjoint or $m$ must be intersecting, but don't have anything interesting to say about it.
If you insist on the stronger property that the sets must have non-empty intersection (as opposed to being pairwise intersecting) then it's easy to get a lower bound that's roughly the same as the obvious upper bound of $2km$. For example, let the $k$-sets be the sets of edges incident at the vertices of $K_{k+1}$. No two of these sets are disjoint (since any two vertices have an edge in common), there are $k+1$ of them, and the number of sets containing any given edge is 2. By duplicating each set $(m-1)/2$ times we obtain a family of almost $km/2$ sets that all pairwise intersect but such that no element is contained in $m$ of the sets.
So the question that interests me (even though it may not ultimately be of any help with the sunflower question) is whether the upper bound of around $km$ can be improved when we ask only for the weaker property that the family is pairwise intersecting.
It is perhaps worth noting that an obvious family of $k$-sets with no three disjoint is all $k$-subsets of a set of size $3k-1$. Unfortunately, the proportion of these sets that contain the element 1 is about 1/3, so we don't even get a dependence on $k$ with this example.
Somehow it seems that to get a good example the $k$-sets need to be spread out in a set of size close to $k^2$. But this means that random sets have quite a high chance of being disjoint, so to avoid three disjoint sets you have to do a bit of work to force them to intersect. But then to increase the size of the set system it seems hard to do anything radically different from the duplication I have been using. So it looks as though the extremal examples for the Ramseyish problem will have huge sunflowers. However, I don't have any serious ideas for converting this intuition into a rigorous argument.
A comment regarding the case $k=3$ of the hyperoptimistic (former) conjecture — i.e. the maximum size of a balanced collection of 3-sets without an r-sunflower. Hao gave an example
https://gilkalai.wordpress.com/2015/11/03/polymath10-the-erdos-rado-delta-system-conjecture/#comment-22294
of a balanced family of 9 3-sets without a 3-sunflower, which disproved the hyperoptimistic conjecture. In trying to understand his example I found it useful to think of the link graphs of the vertices 0, 1, and 2 from $V_3$ (which together form a 3-colored bipartite multigraph*) between $V_1$ and $V_2$. For instance
000, 010, 100 corresponds to edges 00, 01, 10 having color 0.
In this 3-colored bipartite multigraph, we are trying to avoid (as subgraphs)
i) monochromatic/rainbow* , and
ii) monochromatic/rainbow* , and
iii) an edge of multiplicity 3
as these correspond to different types of 3-sunflowers in the original set system.
* By a $t$-colored multigraph I mean an edge-colored multigraph having the property that the edges between a pair of vertices have distinct colors; this implies that the multiplicity of any edge is at most $t$.
By $S_r$, I mean one vertex of degree $r$ which is adjacent to $r$ vertices of degree 1.
By $M_r$, I mean a collection of $r$ vertex-disjoint edges (i.e. a matching of size $r$).
A graph is rainbow if all of its edges are different colors.
Using this formulation, it becomes easier to see that Hao's example is maximal (as Sw asked https://gilkalai.wordpress.com/2015/11/03/polymath10-the-erdos-rado-delta-system-conjecture/#comment-22304). Although I have no elegant proof (it would be nice to have one), it seems possible to show (I started to), using some case analysis, that the maximum number of edges in a 3-colored bipartite multigraph which avoids
i) a monochromatic/rainbow $S_3$, and
ii) a monochromatic/rainbow $M_3$, and
iii) an edge of multiplicity 3
is at most 9.
Note that there are no restrictions on the number of colors (which corresponds to the number of vertices in the "third" part whose link graphs we are considering; although 3 is probably a good test case), or on the number of vertices in the bipartite graph (which corresponds to the number of vertices in the other two parts). The same will be true in what follows.
My QUESTION as it relates to the case $k=3$ of the balanced version of the problem:
How many edges can a colored bipartite multigraph have which avoids
i) a monochromatic/rainbow $S_r$, and
ii) a monochromatic/rainbow $M_r$, and
iii) an edge of multiplicity $r$?
If we could determine the maximum such value (over all numbers of colors), then we would have an answer for the case $k=3$ of balanced collections of 3-sets containing no $r$-sunflower.
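One way to experiment with this reformulation is to encode a colored bipartite multigraph as a set of (left vertex, right vertex, color) triples and test i)–iii) by brute force. The sketch below (my own encoding, not from the comment) checks Hao's nine sets via the three link graphs quoted later in this thread:

```python
from itertools import combinations

def violations(edges, r):
    """edges: (left vertex, right vertex, color) triples of a colored
    bipartite multigraph. Reports which forbidden configurations occur."""
    def mono_or_rainbow(es):
        cols = [c for (_, _, c) in es]
        return len(set(cols)) == 1 or len(set(cols)) == len(cols)

    mult = any(sum(1 for (a, b, _) in edges if (a, b) == (u, v)) >= r
               for (u, v, _) in edges)                       # iii)
    star = any(len({e[1 - side] for e in es}) == r and mono_or_rainbow(es)
               for side in (0, 1)
               for x in {e[side] for e in edges}
               for es in combinations([e for e in edges if e[side] == x], r))
    match = any(len({e[0] for e in es}) == r == len({e[1] for e in es})
                and mono_or_rainbow(es)
                for es in combinations(list(edges), r))      # ii)
    return {"S_r": star, "M_r": match, "multiplicity": mult}

# Hao's nine sets, encoded via the three link graphs:
hao = {(0, 0, 0), (0, 1, 0), (1, 0, 0),
       (0, 1, 1), (0, 0, 1), (1, 0, 1),
       (1, 1, 2), (1, 2, 2), (2, 1, 2)}
print(violations(hao, 3))  # all False: no 3-sunflower of any type
```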
I got a bit lost above on how the elements of V1, V2, and V3 relate to your bipartite multigraph.
For 000 are you imagining that the first 0 is a vertex representing an element from V1, the second 0 is a vertex representing an element from V2, and the third 0 (edge color) represents an element from V3?
That's correct. The link graph of the third 0 is 00, 01, 10; the link graph of the third 1 is 01, 00, 10; and the link graph of the third 2 is 11, 12, 21. Since we are looking at all of these link graphs together, we give the link edges colors corresponding to which vertex they came from.
I am looking at cases where there are limits for the number of petals in flowers with a head of a given size — say sunflowers with $r_m$ petals and head of size $m$ are forbidden — starting with small-number cases. This is very simplistic compared to other approaches discussed, but perhaps I am not the only one who finds it too daunting to jump in at the deep end.
The simplest case is $k$-sets with $r_m=2$ for every $m$, i.e. sunflowers with two petals are forbidden for every head size. This means no two sets can have an intersection, not even an empty one, so there can only be one set and $k$ elements. This corresponds to $f(k,2)=1$.
The second case is $r_0=3$ and $r_m=2$ for $m>0$. So up to two sets with an empty intersection are allowed, and the maximal solution is two disjoint $k$-sets.
The third case is a little more interesting: $r_0=2$, $r_1=3$, and $r_m=2$ for $m\ge 2$. Every pair of sets must have a non-empty intersection because $r_0=2$. The intersections can be no larger than 1 because $r_m=2$ for $m\ge 2$. No three sets can have a non-empty common intersection because the intersection would have to be of size 1 but $r_1=3$. Each element that appears in any set will therefore appear in at most two sets. Take the first set in a collection of $N$ such $k$-sets. This set must have an intersection of size 1 with each of the other $N-1$ sets and these intersections must all be distinct. Since the set has $k$ elements we conclude $N \le k+1$. Furthermore this limit can be achieved by distributing elements so that each appears in a different pair of sets. The maximal solution is therefore $N=k+1$.
What about the $k$-sets where pairs must intersect in a fixed size $t>1$, i.e. $r_0=2$, $r_t=3$, and $r_m=2$ otherwise?
The intersection between any two of the sets must be of size $t$ and all such intersections must be distinct. The first $k$-set has $\binom{k}{t}$ subsets of size $t$, so there is an upper bound on the size of the collection of $N \le \binom{k}{t}+1$.
When can this limit be achieved? It can when $t=1$ or $t=k-1$, and sometimes when $t=2$. In the case $t=2$ the limit can be achieved when there is a biplane of order $k-2$ (a biplane is a symmetric 2-design with $\lambda=2$, see https://en.wikipedia.org/wiki/Block_design ). Biplanes are known of order 0,1,2,3,4,7,9 and 11.
Apologies for the Latex errors. The above bound should be $N \le \binom{k}{t}+1$.
Dear Philip, corrected now…(I did not guess correctly)
In the case above, one type of sunflower is allowed, with a head that must be of a given size and with two petals. The next step up is to allow two types of sunflower, with heads of size $s$ and $t$ where $s<t$; in other words $r_s=r_t=3$ and $r_m=2$ for all other sizes of head.
It is more difficult to set a good limit for these cases except in the non-intersecting case, when $s=0$. I.e. the collection of $k$-sets is allowed to contain mutual pairs with no common elements. All other pairs of $k$-sets must have intersections of size $t$, and no triples of sets can have a non-empty common intersection.
We already know there is an upper limit to the number of $k$-sets in such a collection for given $k$, because it would be a special case of a family with no delta-system of size 3. So let's assume that $F$ is a collection of such $k$-sets of the maximum possible size $N$.
Since $F$ is maximal, there must be two sets $A$ and $B$ in $F$ which do not intersect, because otherwise we could add a new set to $F$ all of whose elements are different from those already used, and that would give a bigger collection of $k$-sets with the required properties.
There can't be a third $k$-set which does not intersect $A$ or $B$, since that would give a sunflower with an empty head and three petals. Furthermore any non-empty intersection between two sets is of size $t$. Therefore the collection can be subdivided into four collections $\{A,B\}$, $F_A$, $F_B$, $F_{AB}$, such that
the $k$-sets in $F_A$ all have $t$ elements in common with $A$ but none in common with $B$,
the $k$-sets in $F_B$ all have $t$ elements in common with $B$ but none in common with $A$,
the $k$-sets in $F_{AB}$ all have $t$ elements in common with $A$ and $t$ elements in common with $B$.
All of these intersections must be distinct and there are $\binom{k}{t}$ subsets of size $t$ in each $k$-set, so $N \le 2\binom{k}{t}+2$.
This limit can be achieved when $t=1$ or $t=k-1$, or when $t=2$ and there is a biplane of order $k-2$.
dmoskovich says:
Instead of keeping the same chain groups and playing with boundary maps, I wonder whether it might make sense to associate with sets having m elements in common objects other than simplices intersecting along m-faces — that is, to play with the chain groups.
Consider two sets with 3 elements. If they don't intersect then H_0 will tell us that. Whether they intersect at 3 elements or at 2 or 1 elements will be distinguished by H_1, not of triangles, but of their 1-skeletons (so suppress the 2-faces). To distinguish between 1 and 2-element intersections, count elements in the union (that distinguishes all possibilities in this simple example but of course not in general).
Dear Daniel, it will certainly be very interesting to relate intersection properties to ordinary chain groups and consider more general objects, like skeleta as you suggested or even the full filtration of the simplicial complex by skeleta, or perhaps other objects.
Yes- my intuition is that this might be a good direction. I'm wondering what the logical first step of this direction might be…
Let me make a few comments on the homological approach.
Given b generic boundary operators, let us define a (b,c)-cycle as a (k-1)-chain that vanishes after every successive application of c boundary operators from the b. Both our approaches are based on a conjecture that certain (b,c) cycles must contain certain sunflowers. The first approach is for the balanced case and the second for the general case.
We also want to be on the lookout for a more general "combinatorial" notion of (b,c)-cycle which might already contain the required sunflower.
The very special case of $b=1$ and $c=1$. The claim is that for the balanced case a cycle (which is just a (1,1)-cycle) must contain two disjoint sets. All we need to get an induction to work is that
a) the link of a vertex of a (1,1)-cycle is a (1,1)-cycle
b) every set of size (k-1) which is included in one set of size k in the cycle is included in at least two.
This is true (over every field of coefficients) for a cycle. We can adopt b) as the notion of "combinatorial (1,1) cycle" and b) is enough for the inductive argument to go through.
To contain a "combinatorial (1,1)-cycle" just means that the complex does not collapse to its codimension-one skeleton.
Now everything I said extends to (b,1)-cycles and I think also to (b,b)-cycles. But let me elaborate on it separately as this comment gets long.
The next case to consider is (b,1)-cycles. A (b,1)-cycle is a collection of sets so that some linear combination of them with non-zero coefficients is a chain which vanishes when applying each boundary operator among b generic ones. A (b,1)-cycle satisfies
1) The link of a vertex of a (b,1) cycle is a (b,1)-cycle
2) every set of size k-1 is included in at least b+1 sets of size k.
In the balanced case a simple inductive argument shows that 1) and 2) imply that we must have b+1 pairwise disjoint sets. Again the weaker combinatorial property 2) suffices for that.
Here is the argument: By induction the link of a vertex contains b+1 pairwise disjoint sets. So we have something like this. (For b=2)
The link of $v$ contains three disjoint sets A, B and C. We can get three disjoint triangles as follows: $A\cup\{v\}$ is the first. B is included in an additional triangle $B\cup\{w\}$, where $w$ is colored red (the color of $v$) and hence not in $A\cup\{v\}$. C is included in three triangles and one of them must have a red vertex different than both v and w.
Our more general aim is to show that in the balanced case an (mb,m)-cycle must contain a sunflower of size b+1 with head of size at most m-1. Here are some thoughts, hopes, and concerns about it. Let's just worry about sunflowers of size 3.
A) A possible direction: The results about (2,1)-cycles give some information about links. With some extra results on vanishing of lower dimensional "homology", some techniques used for Theorem 2 may apply.
B) Hope: Using combinatorial notions of (b,c)-cycles may simplify matters.
C) Concern: Maybe inductive arguments like the one described here can be used to reduce the conjectures for "no sunflowers with small heads" to the original conjecture of "no sunflower". Again a good place to check is for "no sunflowers of size 3 with head of size 0 or 1."
D) (hopeful) The inductive argument actually allows one to get a sunflower from a weaker structure. We will return to it later. (Added later: Actually I don't see that it allows such a thing.)
The next case to consider is the case of (b,b)-cycles. Here the statement is this: if F is a balanced family of k-sets which forms a (b,b)-cycle then there are two sets in the family whose intersection has size smaller than b. Again it looks to me that the inductive arguments work (and use basic properties of (b,b)-cycles which apply in greater generality).
Another challenge would be to prove the same conclusion (which we know is correct) using arguments of Theorems 1 and 2. Or to start by considering the case b=2 and trying to guess/verify the additional homological property, in addition to the property that the family and links of vertices have vanishing top dimensional homology.
Some of the insights regarding sunflowers may extend to many-families questions, and such extensions may be useful for some inductive arguments. So for example we can ask: given k, for which 3 numbers a, b, c is it the case that whenever we have three families A, B, C of k-sets of cardinality a, b, c respectively, we can find a sunflower with one set from A, one from B, and one from C? Of course we can also consider families of sets of different sizes.
I might misunderstand something, but if all sets of A and B contain some element x that is not in any set from C, then we will never have a colorful sunflower.
That's right, we do need to make some extra assumptions. I specifically wondered about the following: If A and B are cycles and balanced (all are families of k-sets), do we necessarily have a pair of disjoint sets, one from A and one from B? If A, B and C are cycles for two boundary operators and balanced (all are families of k-sets), do we necessarily have a colorful triple of pairwise disjoint sets?
Actually, I think that our basic argument for cycles indeed shows that.
I feel very ashamed, but the homological notions (like cycle) are still quite alien to me. But if you sketch a proof, I might be able to follow it, or maybe even understand cycles better!
Dear Dömötör, what about the following: we assume further that A, B and C have a k-set in common. Can we find a colorful sunflower then? Actually I care about k=2.
I think the same counterexample works with the addition of one element. Again, all sets of A and B contain some x, except for one set from each, which is the common set of A, B and C. If all sets from C avoid x and intersect this common set, we won't have a rainbow triple.
Let me draw attention to GFP's comment from the first thread. Fix k and n (and r, which we can set to r=3). Consider the following sunflower process: we choose k-subsets of [n] at random until any new k-set leads to a sunflower. The question is what is the expected size of the resulting sunflower-free family. It will be interesting to answer this question!
(We can also start from all the sets and remove sunflowers at random.)
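A minimal sketch of one run of this process (my own transcription: brute-force sunflower checks, and a give-up threshold standing in for true maximality):

```python
import random
from itertools import combinations

def creates_sunflower(family, new, r):
    """Would adding `new` create a sunflower of size r? Only r-subfamilies
    containing `new` need to be checked."""
    for rest in combinations(family, r - 1):
        sets = rest + (new,)
        head = frozenset.intersection(*sets)
        if all(a & b == head for a, b in combinations(sets, 2)):
            return True
    return False

def sunflower_process(k, n, r=3, max_failures=100_000):
    """One run: add uniform random k-subsets of [n] while sunflower-free;
    give up after max_failures consecutive rejections (so 'maximal' only
    in a probabilistic sense)."""
    family, failures = [], 0
    while failures < max_failures:
        new = frozenset(random.sample(range(1, n + 1), k))
        if new not in family and not creates_sunflower(family, new, r):
            family.append(new)
            failures = 0
        else:
            failures += 1
    return family

random.seed(1)
print(len(sunflower_process(k=3, n=10)))  # size of one run's family
```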
I've just replied to that comment without noticing that it was on the first post. Let me briefly repeat here one thing that I said there, which is that I think it would be interesting to have a look at this question experimentally for some small values of $k$ and $n$. How do the set systems produced compare with the best known examples? If I get time, and if nobody else has already done it, I'll write a program to look into this.
I have some code that can do this already. I will post some results when I have them.
I'll look forward to it!
I used and gave up searching after 10 million attempts to find the next set. These were averaged over 100 trials
k=2: 4.52
k=3: 10.13
The last one may go a bit higher if I increase the number of attempts. For k=6 I would need a lot more attempts so it would take a long time. Some improvements in strategy may help. e.g. selecting elements randomly from all previous ones and k new ones could be faster.
Let me know if more info would be useful and I will run again.
Probably it will be useful to let n run from k (or say 2k) upwards and look at all pairs (k,n), trying to see a pattern in the behavior.
I'd be interested to know not just what the averages were but what the best examples were. It's nice that these examples beat $2^k$, though I don't know what the state of the art is.
I meant to say also that I agree with Gil that trying some smaller ground sets would be interesting.
Finally, I would like if possible to stare at some of the maximal examples — maybe there could be a page on the wiki for them. I'd like to know for example how spread out the sets are, whether the intersections tend to have roughly the same size, etc. etc.
Give me a little time and I will follow up these suggestions.
The maximum sizes I am seeing from this process are only a small percentage better than the averages and are a long way short of the best constructions known. For example you can get 20 k-sets for k=3 and 54 for k=4. There must be other random methods that would find better examples if that is one goal.
Here is a collection of 34 sets for k=4. I don't see many bigger than this.
{ 9 1 13 0 } { 9 1 6 10} { 11 5 14 0 }
{ 3 1 0 11 } { 10 1 5 13 } { 3 1 11 12 }
{ 11 9 14 1 } { 10 2 13 9 } { 3 14 9 12 }
{ 9 13 2 0 } { 3 6 7 14 } { 3 4 14 12 }
{ 3 11 5 7 } { 4 9 10 0 } { 8 9 10 0 }
{ 2 3 7 14 } { 11 14 0 15 } { 10 1 9 13 }
{ 3 8 11 7 } { 9 13 2 10 } { 7 9 6 10 }
{ 9 1 13 12 } { 6 10 2 13 } { 3 8 11 5 }
{ 4 11 9 14 } { 7 9 1 10 } { 2 3 6 14 }
{ 11 5 14 15 } { 9 13 0 12 } { 6 10 13 9 }
{ 10 5 9 13 } { 3 4 14 9 } { 4 11 14 1 }
{ 4 8 9 10 }
All 16 numbers are in use so perhaps n needs to go higher
I may have a bug, let me check that before you do anything with these results
Ferdinand Ihringer says:
@Phillip Gibbs
I used my generic program for finding large examples in generic extremal problems. Some of my maximums are better than yours, some are not. It's all very close. I will just upload my summary (I calculated a bunch of small values for $(k,n)$), but maybe it would make sense to unify the results somehow?
My program is still running and I still have to write a script to extract the results nicely. I only gave it 5 minutes for each set of parameters. For one set of parameters my generic program needed ca. 10 hours for a complete search, so results will be far from optimal.
There was a bug that affected some of the earlier numbers a little but these ones pass sanity checks. Feel free to delete the others results to avoid confusion. I have done k=2 and k=3 with varying n. The best possible for k=3 is 20 but that never came up.
(k,n) mean max
(2,2) 1.00 1
(2,10) 5.48 6
(3,6) 8.06 10
(3,7) 10.15 12
(3,10) 11.49 14
(4,11) 23.4 26
(4,100) 34.09 39
Many thanks, Philip. The outcomes seem monotone with n, which surprised me. Is there a reason to think that this is always the case? And is there some guess we can make on what happens when n tends to infinity?
The maximum will be monotone but I don't always find the maximum in these runs. The mean is less obvious. In the n tends to infinity limit it will select a k-set at each step so as to maximise the number of elements that have not been used before in previous k-sets.
I don't know what is the most useful next step I can do but I am thinking about compiling a database of best solutions as a function of (k,n) and I will try to present them in an ordered way that allows us to see any symmetry
I have posted all the best examples of k-sets I could find at http://pastebin.com/XNkZwxzV
One other thing I wanted to say was that I had an idea for a generalization of the problem, but then realized that it was uninteresting. Nevertheless, perhaps it is worth mentioning, just in case someone can think of a way of making it interesting after all.
The generalization was to consider an analogous question for $k$-dimensional subspaces of a vector space, where a sunflower is defined exactly as before, though it is probably nicer to describe it in terms of dimensions of subspaces. This is a generalization because for the $k$-sets question we can think of the elements of the ground set as a linearly independent set of vectors and take the subspaces generated by the $k$-sets.
The reason the problem is uninteresting is that if you take two-dimensional subspaces of $\mathbb{R}^3$ in general position, then any two of them intersect in a one-dimensional subspace, but any three of them intersect in a zero-dimensional subspace. So it is easy to produce continuum-many sunflower-free systems of subspaces.
It's tempting to use this fact about subspaces to build large sunflower-free set systems, but the obvious things I've tried produce only very small ones. For example, in a vector space over a finite field we could take low-dimensional subspaces, but as sets they are large and there are comparatively few of them, so we get at best a very weak bound with this construction, and my guess is that pretty well all similar constructions suffer from the same kind of problem.
This is an interesting direction. It is an interesting question what the upper bound is if we are asking about vector spaces over finite fields and consider them as set systems. (So an r-dimensional space is regarded as the set of its points. It is interesting what the largest size of a sunflower of this kind is. Can we do better than in the set case?)
There is another way to go, even over a fixed field, which is to consider families of subspaces which, regarded as vectors in the corresponding exterior product, are linearly independent. (This follows also from a certain combinatorial property for pairs of spaces.) There the question is a genuine strengthening of the original question. For Erdos-Ko-Rado theory this more general setting reduces to the regular one since shifting is still available. But this does not apply to sunflower questions.
Since I read up on this Polymath problem two days ago, I have spent some time looking for a finite vector space version of the problem. (As a trained finite geometer that is the natural thing for me to do, I suppose.) Finally, I found your comment, so maybe my comment fits here. I did not think about the same problem. I just thought a little bit about the following, where I replaced "of size x" with "of dimension x":
A sunflower of size $r$ is a family of $r$ subspaces over a finite field with $q$ elements such that every subspace that is contained in more than one of these subspaces is contained in all of them.
As in the set case you will have a function $f_q(k, r)$ such that every family of $k$-spaces with more than $f_q(k, r)$ elements contains a sunflower of size $r$. The proof of this is just the same.
$q$-analog of the Erdos-Rado conjecture: Prove that $f_q(k, r) \le c^k$, where $c$ is a constant depending on $q$ and $r$.
Is there any good reason to look at this? Maybe not. At least the case $k=2$, $r=3$ reminds one of the dual of a (partial) hyperoval (a family Y of 2-spaces in a 3-space such that no 1-space lies in more than two elements of Y). Then some things get much easier for large $q$ (with the set case seen as $q=1$). Sometimes you can do a tradeoff between the parameters (I know an example where one pair of parameters can be traded for another). Of course this will never help with the set case, but maybe it gives one new ideas or more confidence that the original Erdos-Rado conjecture is true.
As I have no useful ideas, I did not want to comment, but this looks like a good place to mention this canonical vector space analog.
Dear Ferdinand, I agree that the q-analog and other vector space analogs are potentially interesting. This applies also to Erdos-Ko-Rado theory of course.
Thanks. I don't know why you mention EKR theorems, but there the q-analog of the standard EKR theorem has been known for a long time (most of it due to Hsieh, 197?). In some sense the q-analog is easier. There's a two-line EKR proof for n >= k^3. The same two-line q-analog argument shows the q-analog of the EKR theorem for n >= 3k (and n >= 2k+1 with some fine tuning).
GFP says:
I have implemented a small variation of the process, where I also require that the family is intersecting. One could obtain a usual sunflower-free family by 'duplicating the family' (see the sketch after the results below). For each pair (k,n) the process was run 100 times. Below is a list of the values I obtained (so again, we get twice as much for the usual sunflower-free problem).
For the pair (k,n) = (3, 4)
Average length is: 4.0
Max found is: 4
Average length is: 8.61
Max found is: 10
For the pair (k,n) = (3, 10)
Average length is: 18.06
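Here is a minimal sketch in Python of the duplication step mentioned above (a hypothetical rendering, not the code actually used). Two disjoint copies of an intersecting sunflower-free family are together sunflower-free: sets from different copies are disjoint, so a sunflower of size 3 using both copies would need an empty head and three pairwise disjoint sets, two of which would lie in the same intersecting copy.

    def double(family, n):
        # family: intersecting sunflower-free k-sets drawn from range(n);
        # returns the family together with a disjoint shifted copy,
        # which is sunflower-free and twice the size
        shifted = [frozenset(e + n for e in s) for s in family]
        return list(family) + shifted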
It seems to easily beat the 2^{k} construction, and even the 10^{n/2 - c log(n)} construction (for the small values of k I could manage, at any rate), which I believe is the state of the art.
One particular context I was also thinking about was the "n=\infty case". The advantage of this setting is that the randomness is much easier to control; for instance we know that the process would be initially driven by intersections of size 1, then when we run out of those by intersections of size 2 and so on…
If you can beat it for small values, you can also beat it for big ones, by the product recursion. So your example gives a bound in the same ballpark; I think we need a little better to beat it with this simple product recursion, though other tricks might help further.
Yes, the aforementioned best known lower bound is obtained by making the observation that $g(mn)\geq g(m)g(n)^{m}$ and that $g(3)=10$ (where g is the extremal function for intersecting 3-sunflower free families). If we could find a $k$ for which $g(k)^{1/(k-1)}>10^{1/2}$ then using the same argument we'd obtain an improved lower bound.
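To get a feel for the numbers, here is a tiny computation of the per-element growth rate implied by a single value (the value 18 at k=4 below is made up purely for illustration, not a claimed result).

    baseline = 10 ** 0.5          # from g(3) = 10 via g(mn) >= g(m) g(n)^m

    def rate(g_k, k):
        # iterating the recursion gives g(k^t) >= g(k)^((k^t - 1)/(k - 1)),
        # so the per-element rate implied by a value g(k) is g(k)^(1/(k-1))
        return g_k ** (1.0 / (k - 1))

    print(rate(10, 3) == baseline)   # True: the current record
    print(rate(18, 4) > baseline)    # False: 18 at k = 4 would not be enough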
Do you know what happens if you insist that the $k$-sets are $k$-coloured? Is it still possible to beat the bound by quite a lot in this way?
Using random trials on the balanced case I found some values.
The homological balance conjecture would give a bound which is not in contradiction with these results.
With some more trials I have slightly better values.
For k=4 there is a balanced intersecting collection of 13 k-sets, so doubling gives 26 for the usual balanced problem. Here is what the 13 k-sets look like; elements in different columns are coloured so that they are distinct.
{ 0 0 0 0 }
The structure seen in the example of 13 balanced intersecting sets can be reused to show that a balanced solution for k can be turned into one six times as large for k+2. Ignoring the extra row of zeros, we can take the solution for k and make three copies, with two extra columns of identical numbers in each, staggered over three columns as in the example. This gives an intersecting solution, so it can be doubled up to give the overall factor of six with two extra columns.
For k=6 this construction gives a set of size 6×26=156, but using further random trials I now have a larger one.
The generalisation of this construction gives a corresponding inequality, provided a condition holds on the number of elements appearing in the union of the smaller collection. The extra condition is slightly better when we use connected sets. This is for r=3.
This may be enough to show that the growth rate and the conjecture for balanced sets are the same as for general sets.
Why not?
domotorp, because the elements from the unbalanced collection can each be given a different colour in the new balanced collection with larger sets. These are then combined with the second set, which is balanced, so as to fill in the rest of the colours. That's not very clear; I will try to find a way to write it out in more detail.
You can see how it works in the example of 13 k-sets with k=4 that I posted. This combines the unbalanced intersecting collection {2,3} {1,3} {1,2} with the balanced non-intersecting collection {0,1} {0,2} {1,0} {2,0} to give a balanced intersecting collection of 12 k-sets. In this case a 13th k-set can be added, so the construction is not always optimal.
I see, nice construction. But I don't think that this is enough to show that the growth rates are the same, because of the extra condition on the number of elements in the union.
Suppose the growth rates for the balanced and unbalanced problems both exist (the following argument also works if they are infinite).
For any $k_1$ there is a maximal unbalanced collection of $k_1$-sets which has some number of elements in its union. Using the construction $m$ times with this unbalanced collection, and the maximal balanced collection for some suitable $k_2$ to begin with, we get an inequality between the balanced and unbalanced maxima.
Taking the limit $m \to \infty$ with $k_1$ and $k_2$ fixed, and then,
since this is true for all $k_1$, taking the limit $k_1 \to \infty$, we get that the balanced growth rate is at least the unbalanced one; the reverse inequality is trivial, therefore the two are equal.
The good news is we only need to consider balanced sets. The bad news is that balanced sets are no simpler than unbalanced sets.
Right, thx!
Maybe we can run the computations on balanced (k-coloured) families as well. I met Avi today and he asked a very natural question: can we reduce the delta-system conjecture to a specific case of k and n? Avi also mentioned some applications of delta systems in TCS which I was not aware of, and some other cool developments.
What is the status of that statement? It would be quite surprising. Can we improve on it?
Let $A$ be a delta set of $k_1$-sets of size $a$ with $n_1$ elements in its union. Let $B$ be a balanced delta set of $k_2$-sets of size $b$. Then if $n_1 \le k_1 + k_2$ we can construct a balanced delta set of $(k_1 + k_2)$-sets of size $ab$ as follows.
Since $n_1 \le k_1 + k_2$ we can assign a unique colour from a palette of $k_1 + k_2$ colours to each element in the union of $A$. For each set in $A$ take the colours assigned to its elements and recolour all the sets in $B$ with the remaining colours in the palette, using a one-to-one mapping between the colours. Now form coloured sets from the union of the coloured version of the given set in $A$ with each recoloured set in $B$ (we assume that the elements in $A$ are distinct from the elements in $B$). Do this for each of the sets in $A$ to form $ab$ sets.
To illustrate this with an example, consider the (2,3) delta set {1,2}, {1,3}, {2,3} and the (4,3) balanced delta set {r4, g8}, {r4, g9}, {r5, g7}, {r6, g7}. Assign the 3 elements 1, 2, 3 to the colours r, g, b from a palette of four colours r, g, b, y. The 12 sets from the construction are
{r1, g2, b4, y8} {r1, g4, b2, y8} {r4, g1, b2, y8}
{r1, g2, b4, y9} {r1, g4, b2, y9} {r4, g1, b2, y9}
{r1, g2, b6, y7} {r1, g6, b2, y7} {r6, g1, b3, y7}
Proof that the constructed delta set has no sunflower of size 3:
Suppose we can find a sunflower of size 3. Consider three cases, one of which must apply:
(1) All the sets in the sunflower are constructed using distinct sets from $A$.
(2) All the sets in the sunflower are constructed using the same set from $A$.
(3) The sets in the sunflower contain some pair using the same set from $A$ and some other pair using different sets from $A$.
All three cases can be ruled out. Case (1) would imply a sunflower of size 3 in $A$, case (2) would imply a sunflower in $B$, and in case (3) the pairwise intersections of the three sets cannot all be the same. This completes the proof.
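Because the recolouring is fiddly, here is a short Python check of the construction on the example above (a hypothetical rendering: the colour assignment and recolouring maps below are one consistent choice among the several the construction allows, so the 12 sets produced differ in inessential ways from the listing above).

    from itertools import combinations

    def is_sunflower(x, y, z):
        # all pairwise intersections equal the common head
        return (x & y) == (y & z) == (x & z)

    palette = ['r', 'g', 'b', 'y']
    colour_of = {1: 'r', 2: 'g', 3: 'b'}   # one colour per element of A's union

    A = [{1, 2}, {1, 3}, {2, 3}]           # unbalanced intersecting delta set
    B = [{('r', 4), ('g', 8)}, {('r', 4), ('g', 9)},
         {('r', 5), ('g', 7)}, {('r', 6), ('g', 7)}]   # balanced, on colours r, g

    family = []
    for a in A:
        used = [colour_of[e] for e in sorted(a)]
        remaining = [c for c in palette if c not in used]
        recolour = dict(zip(['r', 'g'], remaining))    # one-to-one recolouring of B
        coloured_a = {(colour_of[e], e) for e in a}
        for b in B:
            family.append(frozenset(coloured_a | {(recolour[c], e) for (c, e) in b}))

    assert len(family) == 12
    assert not any(is_sunflower(x, y, z) for x, y, z in combinations(family, 3))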
From this construction we can conclude that, writing $f(k)$ for the maximum size of a delta set of $k$-sets and $f_b(k)$ for its balanced analogue,
$f_b(k_1 + k_2) \ge f(k_1)\,f_b(k_2)$ provided $n_1 \le k_1 + k_2$, where $n_1$ is the (smallest) total number of elements in the union of a maximal delta set of $k_1$-sets.
Suppose that $f(k)^{1/k}$ tends to a limit $c$ and $f_b(k)^{1/k}$ tends to a limit $c_b$, where $c$ and $c_b$ may depend on $r$. (The following argument also works if these can be infinity.)
For any $k_1$ let $n_1$ be the number of elements in the union of a maximal delta set of $k_1$-sets. Then using the inequality $m$ times we get $f_b(mk_1 + k_2) \ge f(k_1)^m f_b(k_2)$, taking $k_2$ large enough that the condition on $n_1$ holds at every step.
Taking the limit $m \to \infty$ with all else fixed gives $c_b \ge f(k_1)^{1/k_1}$.
Since this is true for all $k_1$ we can take the limit $k_1 \to \infty$, so $c_b \ge c$. Since we also know that $f_b(k) \le f(k)$ we have $c_b \le c$, therefore $c_b = c$.
Beautiful! The construction and the limit argument look perfectly OK. (But other participants, please do check it too; maybe I missed something.) Now if we could only always enlarge a balanced system for (k,r) to a general system larger by a suitable factor, we would have a counterexample to the conjecture. Unfortunately, it looks like we cannot enlarge the basic example at all.
Given this equality, the homological balanced conjecture will give a bound of the same exponential form, and the homological unbalanced conjecture would yield a weaker result.
The example can be extended with {r4, g4, b4, y7} and still be intersecting, and the construction has some flexibility in the recolouring. It is also possible to renumber the sets from $B$ differently for each set from $A$. So there is hope that enlargement could work.
As Philip said, there may be other ways to alternate between balanced and non-balanced examples so that we gain something, maybe even break the exponential barrier. This certainly deserves thought.
Another idea we can explore, which may be relevant in both the positive and negative directions, is a more drastic reduction of the kind we made when we originally moved from general families to balanced families. (This was suggested both by Tim and by me, and perhaps by others.)
Suppose we want to color with 5k colors rather than k colors, with the following additional conditions:
(1) every set is colored by k consecutive colors modulo 5k.
(2) Two sets that share k-1 elements must involve at least k+1 colors.
Perhaps by taking a random coloring and excluding violators of (1) and (2) we can start from an arbitrary family and end with an exponentially smaller (but only exponentially smaller) subfamily satisfying these conditions.
If there are k-1 fixed elements that are contained in every set, then I don't think you can have both (1) and (2) for more than two sets.
About breaking the exponential barrier, here is a paper that shows how to "break it" for many problems (though I don't see how it could be applied to sunflowers): http://www.sciencedirect.com/science/article/pii/S0097316597927801
But we cannot have k-1 fixed elements in three or more sets.
Not if you want to use that we don't have a sunflower, but I thought that you wanted to make a general construction, like we had earlier.
That's a good point. The earlier construction, and also the reduction if you only worry about (1), are general. In particular they apply to the questions about no sunflowers with small heads. If you want to achieve (2) you need to exclude sunflowers. There may be various other "exponential" reductions.
I've also wondered if the following, Kneser-type coloring question can be useful:
How many colors do we need to color all k-tuples of an n-element set avoiding monochromatic 3-sunflowers?
Let me try to work out the obvious bounds here, just to get a feel for the problem. Of course, we need at least $\binom{n}{k}/M$ colours, where $M$ is the size of the largest 3-sunflower-free family. Obviously one interesting problem is whether this bound is likely to be a good one. That is, if we define $A$ to be $\binom{n}{k}$ divided by the smallest number of colours needed to avoid a monochromatic sunflower (which is the average size of each colour class), we know that $A \le M$, but must it in fact be much less?
For very large $n$ I would guess no, given that there are some powerful results around that tell us that complete hypergraphs can be almost partitioned into copies of small fixed hypergraphs — or even exactly partitioned when certain divisibility conditions are satisfied. But the range of $n$ that interests us is smaller, I think, and here it is not at all clear that extremal sunflower-free examples would fit together nicely.
daeseoklee says:
Hello, I'm a high school student in Korea. I recently read a book on extremal combinatorics,
wanted to find some problems to think about, and fortunately got to this website.
I really like this environment of discussing a problem. It's so cool.
And I admire the intelligent people here such as Tao and Gowers.
I have some approaches and (probably) related problems. They might be meaningless, but
please leave some comments on them.
1. This approach is only meaningful in the case of 3 petals.
Let there be a family of k-sets, and regard each set as a 0-1 vector. A simple observation is that
three different vectors u, v, w form a sunflower iff
the component value 2 doesn't appear in u+v+w: a component equal to 1 represents a point
occupied by a single set, a component equal to 3 represents a point in the intersection of all 3 sets,
and a component equal to 0 represents the background.
And another possible representation of this is here.
Define $I(u,v,w) = u \cdot v + v \cdot w + w \cdot u - 3\,u \cdot v \cdot w$, where $u \cdot v \cdot w$ denotes the sum of coordinatewise triple products. Then
three different vectors in the family form a sunflower iff $I(u,v,w) = 0$.
From this, we can generalize this problem to the case of real vectors.
(Think about a finite set of vectors in $\mathbb{R}^n$,
such that each pair of vectors satisfies some distance condition (like ||u-v|| >= 1 in some p-norm?).
And what number of vectors guarantees that there are three different u, v, w such that I(u,v,w) < 1?)
But this might be unsuccessful since normally I(u,v,w) would be too large.
I'll continue this a few hours later
Dear daeseoklee, thanks for participating! What is the definition of $I(u,v,w)$?
I meant $u \cdot v + v \cdot w + w \cdot u - 3\,u \cdot v \cdot w$, where $u \cdot v$ is the ordinary inner product and $u \cdot v \cdot w$ is the sum of all coordinatewise products of u, v, w.
Ah, I see why this formatting error happened. Maybe the comment system omits inner product notation (angle brackets). I meant u•v+v•w+w•u-3u•v•w where u•v•w is the sum of coordinatewise products.
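In code the quantity looks like this (a small sketch; the example vectors are chosen for illustration).

    from math import prod

    def dot(*vecs):
        # sum of coordinatewise products over any number of vectors
        return sum(prod(xs) for xs in zip(*vecs))

    def I(u, v, w):
        # counts the coordinates covered by exactly two of the three sets
        return dot(u, v) + dot(v, w) + dot(w, u) - 3 * dot(u, v, w)

    # indicators of {0,1,4}, {0,2,4}, {0,3,4}: a sunflower with head {0,4}
    u = [1, 1, 0, 0, 1]
    v = [1, 0, 1, 0, 1]
    w = [1, 0, 0, 1, 1]
    print(I(u, v, w))   # 0, so the three sets form a sunflower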
While I was away for a few days I thought about how to construct delta systems. With a few observations I was able to reduce it to a problem of constructing polytopes. I don't know if this is already understood, but it is new to me so I will try to explain it. My goal is to work through the k=3 case by hand and then try to understand the k=4 case well enough to use a computational search. The k=3 case reduces to looking at ordinary polytopes in three dimensions where the faces are made with triangles, squares and pentagons and all vertices are trivalent. For k=4 you have to move up to 4-dimensional polytopes built from the 3-dimensional ones, and they may be embedded in spaces of different topologies, so this will be a lot more difficult. These polytopes are the duals of the simplicial complexes used in the homological approach, so there may be some cross-over that will help.
(sorry I was using the term delta system incorrectly. I meant families of k-sets with no sunflowers of size 3)
The first goal is to construct all families of k-sets with no sunflowers of size 3. It suffices to construct all complete families, where "complete" means that no further k-sets can be added without forming a sunflower with 3 petals. If incomplete families are needed they can be derived from complete ones by removing some k-sets.
The first useful observation is as follows: given a complete family $F$, select any element $x$ that appears in at least one k-set of $F$, and form the family $F_x$ of (k-1)-sets from the subsets of the k-sets of $F$ which contained $x$, with $x$ removed. $F_x$ is then a complete family. This is because if you could add a new (k-1)-set to $F_x$ without forming a sunflower, then you could add the element $x$ to that set and add the result to $F$ without forming a sunflower.
From this observation the task of constructing complete families of k-sets reduces to first constructing complete families of (k-1)-sets, and then working out how to combine them.
$F_x$ is what was defined as the link of x.
Thanks, I am sure this construction must have been done before. Let me know if you are aware of a reference.
For a reference, see the top of this post…
I see the definition of link. I was wondering if anyone had gone through this whole classification based on polytopes. I may not be able to access all the literature because I don't have journal subscriptions.
Unfortunately, I don't believe your "first useful observation." Take k=2 and r=4, and let F be the graph formed by three disjoint triangles. This is maximal but no link $F_x$ is.
Thank you for pointing this out. There are also counterexamples for the r=3, k=3 case, e.g. a solution based on two tetrahedra. However, it should not be difficult to find these possibilities as special cases.
There are three complete (2,3) families as follows
square: {1,2} {2,3} {3,4} {4,1}
pentagon: {1,2} {2,3} {3,4} {4,5} {5,1}
two triangles: {1,2} {2,3} {1,3} {4,5} {5,6} {6,4}
The only complete (1,3) family is {1} {2}, so (2,3) families must be formed by linking pairs into collections of polygonal structures. It remains to check that any bigger polygons or collections of polygons than these three would contain three mutually disjoint 2-sets, which would form a sunflower with an empty head.
Now we need to see how to combine these (2,3) families to build a complete (3,3) family.
Any k-set with 3 elements in the family can be associated with three complete (2,3) families (the links of its three elements), and these can be any combination of the three possibilities: the square, the pentagon or the two triangles. This can be pictured by placing the k-set on a vertex of a graph where three edges and three faces meet. The faces can be squares, pentagons or triangles and can be labelled with the corresponding elements accordingly (usually these elements are represented by numbers).
The process of making vertices in this way can be repeated to form a graph for the (3,3) family.
When triples of complete (2,3) families are brought together in this way, some of the elements in the families need to match for consistency at the vertex, however there are other elements from the smaller families that could be made to match but they don't have to. We have the choice. This has to be done in such a way that no sunflowers are created. Potentially this generates a large number of cases to check until all consistent families are found. Luckily there is an observation that simplifies this task.
The procedure is to build the graph structure vertex by vertex and face by face, choosing from squares, pentagons and triangles and always using trivalent vertices, but whenever two elements don't have to be the same, always make them different, provisionally. Continue this until the graph forms a complete polytope. Since no polygons with more than 5 edges are used and all vertices are trivalent, we know that this process will eventually end. There are only a finite number of trivalent polytopes made with triangles, squares and pentagons (note that the polytopes don't have to be formed with flat faces; it is only the topology that matters). Listing them all is the first step towards finding all complete (3,3) families. However, our family can be constructed from more than one such polytope.
Each square face and each pentagon face in the polytopes will be labelled with a unique element. The triangular faces will come in pairs labelled with the same element, because they form the two-triangle (2,3) family. The two triangles in each pair can be different faces in the same polytope, but they must not touch at an edge or vertex. They can also be two faces in two different polytopes, in which case the labelling element forms a connection between them.
The remaining question is: how do we form the cases where some of the elements that could be different are chosen to be the same? The only way this can be done consistently is when the collection of polytopes has a symmetry. Faces that map to each other under some subset of the symmetry transformations can be made the same consistently, provided the transformations never map a vertex to another vertex in the same face.
To conclude, all complete (3,3) families can be constructed by considering all collections of trivalent polytopes made with faces of up to 5 edges and considering reductions modulo some subset of their symmetries.
When forming families using this polytope construction we only accept the results if no sunflowers of size 3 are formed. The next useful observation is that the only such sunflowers that can form before any symmetry reduction are those made of mutually disjoint sets. This is because if any sunflower of size 3 with an element x in its head were included in the family, then it would also form a sunflower in the link $F_x$, but we have only used the three complete (2,3) families, which have no sunflowers of size 3.
To make the construction more concrete consider some special cases. If the only faces used are squares the only trivalent polytope that can be formed is a cube with the faces numbered from 1 to 6. The family this forms with one set for each of the eight vertices incorporating the elements on faces that meet at the vertex is
{1,2,3} {4,2,3} {1,5,3} {4,5,3} {1,2,6} {4,2,6} {1,5,6} {4,5,6}
This includes pairs of mutually disjoint sets but no triples, so it is a valid (3,3) family. Only one cube can be included in the family because two cubes would have four mutually disjoint sets.
For a second example consider the case where only pentagons are used as faces. The polytope formed is a dodecahedron labelled with 12 elements. There are 20 sets in the family corresponding to the vertices. In this case it is possible to find four mutually disjoint sets by choosing vertices that form a tetrahedron inside the dodecahedron. However the family can be reduced modulo a reflection which sends each vertex, face and edge to its antipodal point on the dodecahedron. This reduces the number of elements to 6 and the number of sets to 10. In the resulting family there are no mutually disjoint sets, so in this case two copies of the reduced dodecahedron can be included in a family. This gives the maximal (3,3) family of 20 sets.
What about families built from pairs of triangles? The trivalent polytopes formed from triangles are tetrahedra, but the faces must be paired across different tetrahedra, otherwise the pairs would touch. Every tetrahedron in the family must be connected to each other one by either one or two pairs to avoid disjoint sunflowers. There are valid ways to do this using 3, 4 or 5 tetrahedra. Since there are four vertices on each tetrahedron this gives families of 12, 16 and 20 sets. (I was not aware of this second maximal solution until I constructed it in this way.)
What are the possible symmetries that can be used to reduce the collections of polytopes? The polytopes themselves can have many symmetries and if the collection has more than one polytope of the same form there will be permutation symmetries of those. However, the permutation symmetries would just reduce the number of polytopes by making them equivalent.
The symmetries must not map any vertex to another vertex in the same polygon. Since rotation symmetries in 3D always have a fixed point any rotation symmetry of a polytope will be invalid. I conjecture that the only valid transformation is the reflection that maps each point to its antipodal point. This can easily be verified in individual cases.
To be able to complete the classification of (3,3) families a list of all trivalent polytopes with faces having at most 5 edges is needed. If a trivalent polytope has $p_3$ triangles, $p_4$ squares and $p_5$ pentagons, then Euler's formula gives $3p_3 + 2p_4 + p_5 = 12$. Fortunately wikipedia provides a shortcut to finding all cases that work, with a list of the dual polytopes made from triangles at https://en.wikipedia.org/wiki/Deltahedron If I have not missed anything there are eleven cases with the following values for $(p_3, p_4, p_5)$:
(0,0,12) (0,2,8) (0,3,6) (0,4,4) (0,5,2) (0,6,0) (1,3,3) (2,0,6) (2,2,2) (2,3,0) (4,0,0)
From here it is a straightforward exercise to construct all the solutions.
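As a quick sanity check, here is a small sketch (Euler's condition is necessary but not sufficient, which is why only eleven of the arithmetic solutions correspond to actual trivalent polytopes).

    # nonnegative solutions of 3*p3 + 2*p4 + p5 = 12
    candidates = [(p3, p4, 12 - 3 * p3 - 2 * p4)
                  for p3 in range(5)
                  for p4 in range((12 - 3 * p3) // 2 + 1)]

    # the eleven realizable cases listed above
    realizable = [(0, 0, 12), (0, 2, 8), (0, 3, 6), (0, 4, 4), (0, 5, 2),
                  (0, 6, 0), (1, 3, 3), (2, 0, 6), (2, 2, 2), (2, 3, 0), (4, 0, 0)]

    print(len(candidates))                           # 19 arithmetic solutions
    print(all(t in candidates for t in realizable))  # True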
As Dömötör pointed out this construction does not give all complete/maximal families, and I don't think I can fix that. However it is still a good way of constructing large families so it could help with the search for possible counterexamples.
To illustrate the point I have a new best family for r=3, k=4. It was found by combining the new family of 20 sets for r=3, k=3, but there is an easier way to describe the construction, which starts with the following 11 sets:
{ 0 1 2 3 4 } { 0 1 5 6 7 } { 0 2 7 8 9 }
{ 0 3 6 9 10 } { 0 4 5 8 10 } { 1 2 6 8 10 }
{ 1 3 5 8 9 } { 1 4 7 9 10 } { 2 3 5 7 10 }
{ 2 4 5 6 9 } { 3 4 6 7 8 }
These form the Paley biplane with the property that there is a one-to-one correspondence between pairs of elements and pairs of sets which contain them both. To get the r=3, k=4 family just take all subsets of size four from each set to form a family of 55 sets. According to the wiki the best known solution has 54 sets so this is a new record. It is not a huge advance but at least it shows we are in new territory.
Hold on, {0 2 3 4}, {0 1 5 6} and {0 7 8 9} form a sunflower, don't they? But I also have high hopes that this approach will give a better construction for r=3, k=4.
Yes you are right. I will keep looking.
Dömötör wrote: "…the homological notions (like cycle) are still quite alien to me. But if you sketch a proof, I might be able to follow it, or maybe even understand cycles better!"
Let me try to explain the notions a little better, and later sketch some proofs using them.
A (k-1)-dimensional chain is a linear combination of k-sets with real coefficients. Such a chain is a (b,c)-cycle if it vanishes whenever we apply c successive boundary operations chosen from b generic such boundary operations. A family of k-sets is a (b,c)-cycle if we can assign nonzero weights to its members so as to get a (k-1)-chain which is a (b,c)-cycle. Our conjecture is that a balanced (non-empty) family of k-sets which is a (2m,m)-cycle must contain a sunflower of size three with head of size smaller than m. If true this will give an exponential upper bound for balanced families for r=3 (and hence, by Philip's result, for general families).
This is fairly cryptic but I think it is useful to replace (or "approximate") the notion of a (b,c) cycle by a stronger combinatorial notion as follows:
When c < k, a family of k-sets is a combinatorial (b,c)-cycle if every link of a vertex is a combinatorial (b,c)-cycle. When c = k, a family of k-sets is a (b,c)-cycle if its cardinality is large enough (larger than a threshold depending on b).
Combinatorial (b,c)-cycles are good approximations for (b,c)-cycles (that we can call topological or algebraic (b,c)-cycles.) It is easier to understand them and sometimes to get results you need to juggle between the two notions.
Foundations of Science
Obliterating Thingness: An Introduction to the "What" and the "So What" of Quantum Physics
Kathryn Schaffer
Gabriela Barreto Lemos
First Online: 24 May 2019
This essay provides a short introduction to the ideas and potential implications of quantum physics for scholars in the arts, humanities, and social sciences. Quantum-inspired ideas pepper current discourse in all of these fields, in ways that range from playful metaphors to sweeping ontological claims. We explain several of the most important concepts at the core of quantum theory, carefully delineating the scope and bounds of currently established science, in order to aid the evaluation of such claims. In particular, we emphasize that the smallest units of matter and light, as described in quantum physics, are not things, meaning that they do not obey the logic we take for granted when discussing the behavior of macroscopic objects. We also highlight the substantial debate that exists within physics about the interpretation of the equations and empirical results at the core of quantum physics, noting that implicit (and contested) philosophical commitments necessarily accompany any discussion of quantum ideas that takes place in non-technical language.
Keywords: Quantum physics, entanglement, physics and society, physics and the humanities
This essay is a short introduction to some core concepts and philosophical problems associated with quantum physics. We are writing it to respond to, and to enhance, conversations about the meaning of quantum physics that are currently underway in contexts beyond the physics laboratory.
Far beyond the physics laboratory. We are two physicists who regularly work with artists and designers. We increasingly hear from colleagues and students in these creative fields that quantum physics is an important source of ideas for their work, even though they may have never taken formal physics courses. In our personal experience, the ideas of quantum physics seem to be undergoing vigorous "cultural processing" in this historical moment, largely beyond the gaze of professional physicists.
"Cultural processing" is our own term. It is meant to loosely encompass anything that people do with the theories, empirical results, narratives, or methodologies of a scientific field that takes place outside the central institutions and practices of scientific research. It is an umbrella term for the many heterogeneous ways that some ideas from the world of science end up having meaning outside their original contexts.
Many scientists simply object to the idea that scientific ideas could have meaning outside their original contexts. We do not. For one thing, any time that scientists themselves attempt to translate scientific ideas for a public audience, they are engaging in a form of cultural processing. The drive to share scientific ideas, and to re-express them in non-technical language, speaks to the potential for science to have meaning beyond a research paper or a technological application. It would be ridiculous to think that big ideas like relativity or quantum entanglement would have no importance beyond the lab; their non-technical relevance (including the potential for shaping worldview) is a part of why many scientists pursue, and share, their research in the first place.
We also believe that questions of the meaning of scientific claims already are (and should be) open to the input of non-scientists. Enabling interdisciplinary exploration into the philosophical, metaphorical, and generative potential of quantum physics concepts is one of our aims with this essay.
In the art and design world, we see a particular demand for greater understanding of quantum physics spurred by the influence of a single interdisciplinary feminist scholar, Karen Barad (2007).1 Barad (who herself was trained as a physicist) has opened up entirely new communities of interest in quantum phenomena. You do not need to be familiar with Barad's work to understand the rest of this essay, but we will refer to her as a central example of someone who takes the ideas of quantum physics to be deeply meaningful outside of the laboratory setting. Barad believes that the facts of quantum physics are so philosophically important that they wholly change how we should think about people, relationships, subjectivity, objectivity, nature, culture, and scholarship itself. Citing Barad, scholars in the arts, humanities, and many interdisciplinary fields now write about the "observer effect" and "entanglement"—technical physics concepts—in work that has a distinctly social or political (that is, not primarily physics-based) emphasis.
In the social sciences, quantum concepts have also gotten a recent boost due to the work of International Relations scholar Wendt (2015). Wendt thinks the social sciences have been led astray by implicit assumptions of a mechanistic and deterministic ("classical") universe. In contrast to Barad, however, he also engages in some provocative speculation beyond currently established science. To Wendt, the unresolved problem of the nature and origin of consciousness is a critical barrier towards progress in any field attempting to study humans (thinking beings with subjective, conscious experiences) scientifically. Wendt believes quantum physics will ultimately be important for explaining consciousness. Thus, Wendt seeks more from quantum physics than the inspiration for new ideas. He believes it may be critical to the social sciences because it may be the underlying science of how humans think and behave.
That could be true. Or, it could be false. Or it could be a piece of a more complex truth. At the moment, we simply do not know, since the scientific understanding of the mind, brain, and conscious experience is still far from complete. One reason for noting Wendt's work here is to emphasize the importance of empirical tests to evaluate any claims that take steps beyond what we have already established scientifically, including claims that quantum physics has something to do with consciousness. Wendt acknowledges that his argument includes a gamble, and that future science may falsify some of his claims.
Unfortunately, the cultural processing of quantum physics more broadly also involves quantum pseudoscience proponents, who promote (and profit from) unsubstantiated and unscientific claims that quantum physics has something to do with human thought, spirituality, or health (Freire et al. 2011). These forms of pseudoscience, which we encounter surprisingly often in conversations with students and colleagues, are problematic because they are at odds with the established knowledge and methods of science.
To be clear, speculating beyond the limits of current science is perfectly fair game for anyone, and such speculations need not be responsible to the knowledge and methods of science in every context (for example, in science fiction). Yet we think it is important to be able to tell the difference, and to respect the things that set scientific knowledge apart from other forms of thought or belief. Any claims of the sort "A physically causes B" are explicitly the kind of claims that should be substantiated by rigorous scientific research, which includes experimental verification. Such substantiation, generally speaking, is absent for pseudoscientific claims.
The specific quantum healing claim that the act of conscious thought can directly cause changes in the external world is problematic at the outset, because science has no complete model to describe what conscious thought is, much less to model ways it might exert a causal influence. If the proposed mechanism for such influence is claimed to be "direct," on the basis of some quantum physics effect (not through complex and multifaceted psychological and social mechanisms), that claim is not based on any empirically substantiated science; no such mechanism is proposed or described within the field of quantum physics as it currently stands. Even if we take the claim as speculative, any possible mechanisms for physical causation need to be evaluated for consistency with everything else we know about how the world works. Precisely because we have such a sophisticated understanding of the forces involved in physical interactions (enabling a host of technologies from brain scanning to remote sensing), this is a high bar to clear. Any causal mechanism that is supposedly based on physics needs to be explainable along with and in relation to all the other physical causal mechanisms we already understand. In other words, any proposed physical mechanism for quantum healing needs to be explained in the same framework that explains MRI scans, thermal imaging, and all the other mechanisms we already know for physical communication with and about the body.
This is all a long way of saying: if it sounds too good to be true, it probably is. Quantum physics is not a trick or a way of evading the ordinary rules of nature. Quantum physics has some amazing implications, but it is very much grounded in the physical and the possible, describing processes that are going on in atoms, in computer chips, in lasers, and in nuclear bombs. Powerful stuff, but all well-understood, empirically founded, technologically activated physics.
In the next several sections, we will say more to define the scope of quantum physics, discuss some core representational and philosophical issues, and describe some of its key empirically-founded insights. Throughout, we will directly address possible philosophical conclusions one could draw from quantum physics, as well as try to clearly draw the line that divides science from pseudoscience.
2 The Scope and Form of Quantum Physics
To a physicist, quantum physics, quantum mechanics, and quantum theory all refer to the same thing: our physical theory for phenomena on very small size scales (comparable to, or smaller than, the individual molecules and atoms that make up materials around us).2 A theory in physics is much more than a set of ideas. It expresses quantitative relationships between things we can measure, which means it involves equations. There are some comparatively clear criteria for determining whether a theory in physics is a successful one or not. To be successful, it must be able to account for experimental results that have already been obtained, and, more importantly, be able to predict the outcomes of future experiments. Successful theories become integrated into the working practice of physics. A central part of that research is to explore the consequences and applications of the theory, as well as to continually develop more rigorous tests that probe its scope and limitations. Any robust and repeatable observation that is in conflict with the theory may require a revision to or replacement of the theory. If this does not occur, and evidence in support of the theory grows, it becomes even more deeply ingrained in how we do and think about physics.
Quantum theory is one of the most successful physical theories ever developed. Since its foundation in the early twentieth century, this theory has been tested through decades of rigorous experimentation. Its track record of accurate prediction is astonishing, allowing countless technologies to be designed on the basis of those predictions. Every digital camera and smartphone in the world is a testament to the success of quantum theory, as is every nuclear power plant. Quantum theory is so fundamental to our understanding of nature that it underlies entire fields of scientific research (e.g. chemistry). Quantum physics is thus not at all speculative. It is not a form of philosophy, and it is not something that is principally expressed, or employed, through verbal language. At its center is a practical toolkit of equations that technical experts use to model, understand, and design a wide range of structures and phenomena that involve light, electrons, and atoms.
One of the most remarkable lessons from twentieth century experimental physics is that light and electrons and atoms require a very different kind of description than macroscopic objects and events. There are two important points here. First, nature is not the same on every scale. On small scales, bits of matter move and interact in completely different ways than bodies in a room or planets around a star. Second, we discover that the equations we need to describe and predict those motions and interactions are of a completely different character, too.
If you are not used to thinking about how equations relate to things in the world, it might be hard to imagine what we mean by equations having a "different character." It might also be hard to believe that talking about equations is important in a supposedly non-technical introduction. However the core content of quantum physics is expressed through equations, and one of the key points we want to make is that there is ambiguity any time you try to translate those equations into words. This opens up philosophically interesting (and possibly problematic) territory, so it is worth highlighting.
Whether we are talking about macroscopic or microscopic phenomena (large or small scales), physics deals with things like motion, interactions (like collisions), forces, causality, and changes in physical systems over time. To take an especially simple example, imagine that some asteroids in deep space collide and a chunk of rock goes hurtling away into space. The role of a physical theory is to do something like provide an equation to describe the motion of that rock. The equation will contain symbols that stand for the measurable properties of the physical system. The mathematical relationships between symbols in the equation serve as a model for the physical relationships between the properties themselves. The model (equation) can be manipulated to learn things about the physical system, like where the rock will be at a future time.
If we instead consider a single electron flying through empty space, we are in the realm of quantum physics and we need to use a different equation. Let's compare the equations we would use in the macroscopic and microscopic cases, to talk about some of their differences (don't worry—you do not need to do any mathematics to follow the discussion).
An equation we could use to describe the motion of a rock hurtling through empty space might look like this:
$$\begin{aligned} x(t)=x_0+vt. \end{aligned}$$
Each symbol in this equation has a meaning that we can express in plain language.3 The symbol x(t) represents the position of the rock, along its direction of motion, as a function of time. The symbol \(x_0\) represents the position of the initial asteroid collision that sent off the rock at some speed. The symbol v represents the speed or velocity of the rock as it hurtles through space. And t represents the time that has elapsed since the rock started moving.
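To make this concrete with numbers of our own choosing (a hypothetical illustration, not data from any real asteroid): if the collision happened at \(x_0 = 0\) kilometers and the rock travels at \(v = 5\) kilometers per second, then after \(t = 100\) seconds the equation places the rock at
$$\begin{aligned} x(100) = 0 + (5)(100) = 500 \end{aligned}$$
kilometers from the collision point. Every symbol names a definite property of a definite thing, and the arithmetic tells us where that thing is.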
Now, in contrast, here is an equation we could use to describe an electron freely traveling through space:
$$\begin{aligned} \varPsi (x,t)=\frac{1}{\sqrt{2\pi \hbar }}\int ^\infty _{-\infty }\phi (p)e^{ipx/\hbar }e^{-iEt/\hbar }dp. \end{aligned}$$
You will immediately notice that it looks a lot more complicated. While the first equation used only algebraic operations, this one depends on calculus (note the long curly integration symbol) and has imaginary numbers in it (the i), which is why quantum physics is not usually taught early in an individual's education. The most important point we want to make about this equation is that, in contrast to what we did with the previous equation, we cannot simply tell you what each symbol in the equation means. Arguably, nobody can. Yes, all of the terms in the quantum equations are well defined in the sense that physicists know how to use them operationally, and how to relate them to experimental results. Yet we have problems when we try to say anything in words about why the equations work the way they do, or what underlying structure of reality yields the behavior they predict.
In classical equations governing macroscopic physics, we treat measurable physical properties as if they have definite physical reality and definite relationships to one another. Those definite properties have a direct one-to-one correspondence with the terms in the equation. The term x(t) means the position of the rock at a time t, which is understood as a well-defined property of a thing that we could measure in any way we pleased. The equation then expresses a relationship between that property and other properties (the velocity and starting location) that are similarly understood to be well defined and independent of the method and sequential order of a set of measurements. All the ways we ordinarily talk about properties and relationships in everyday language apply.
In the quantum case, nothing in the equation stands for anything analogous to x(t). Specifically, the term \(\varPsi (x,t)\)—the wavefunction for the electron, which is what the equation describes—is not necessarily a property of anything in the physical world. It functions as a tool. \(\varPsi (x,t)\) encapsulates statistical predictions for the probability of finding the electron in a certain place at a certain time. In quantum equations, we explicitly lose any references to properties such as position as if they are clear, definite, facts about the world. Moreover, nothing in the mathematics tells us what underlying physical mechanism leads to the need for a probabilistic description in the first place. That is, it does not tell us what the electron itself actually is, what its properties are, nor how it ends up behaving in this odd way.
Measurement also matters in the quantum physics mathematics in a way that it does not matter for classical physics computations. When we manipulate the equation for the rock, we do not need to take into account whether we are thinking about measuring the velocity first or the position first. When we manipulate the quantum physics equation, we have to explicitly account for the types of measurement we might make, and their order. The order of measuring the properties of the electron matters. In our mathematical model, each type of measurement actually changes the wave function itself.
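For readers comfortable with a bit more notation (an aside that goes slightly beyond what this essay strictly needs): in the quantum formalism, measurable quantities such as position and momentum are represented by mathematical operators, written \(\hat{x}\) and \(\hat{p}\), and these operators famously do not commute:
$$\begin{aligned} \hat{x}\hat{p}-\hat{p}\hat{x}=i\hbar . \end{aligned}$$
That this difference is not zero is the precise mathematical statement that the order of operations, and hence the order of measurements, cannot be ignored.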
Thus the way quantum equations function is simply different. They do not have terms that relate to simple, nameable things, nor to permanent and independently knowable properties of those things. Quantum theory gives us a probabilistic description of many possibilities. Nobody knows for sure why these recipes work, nor how to talk about the relationship between the mathematical operations and the underlying physical nature of the electron itself.
The point is that it does work. Both equations given in this section work, just in different contexts (macro versus micro). Both equations are idealizations, and real scenarios often require more complex versions (to take into account forces due to gravity or due to other particles, for example). But, when the conditions are close to the ideal, the equations function predictively and descriptively, in their different ways. Macroscopic physical reality can be described with equations that have nameable things and well defined properties. Microscopic reality requires probabilistic equations with a less direct relationship between the symbols and anything we can simply name or define.
3 The Boundary Between Theory and Interpretation
We seem to be stuck accepting that quantum mechanics equations are just different. Most importantly, they make no unambiguous references to the structure or form of physical reality prior to specific measurements. Quantum physics does not tell us what the electron is, or what the wave function means. To go the extra step of assigning words to the things that are represented in this scheme, we have to pick a particular interpretation.
The interpretation is a set of philosophical commitments associating the terms in quantum mechanics equations, and the phenomena observed in laboratories, with specific meanings. This step is necessary if we want to say things like "in quantum physics an electron is..." or "when the electron went through the apparatus, what happened was..." The point is that such sentences will come out differently if we make different interpretational choices.
Thus, and this is the punchline of this section (and a key punchline of the whole essay) there is no single quantum ontology.4 A quantum ontology would be a scientifically supported way of answering questions like these: What is actually going on with the electron flying through space? Is the electron itself actually "spread out," physically embodying many possibilities at the same time? Does \(\varPsi\) correspond to a real physical thing, or does it capture something only about what is knowable about a situation? Could there be multiple versions of reality—multiple universes, even—in which the electron is in all of the different places that are expressed as possibilities in \(\varPsi\)? Is the randomness we observe—the need for a probabilistic description—something fundamental to the universe, or an expression of limited knowledge?
All of these are questions about the structure of physical reality on the quantum scale, and none of these questions can be answered unambiguously by the physical theory itself. While quantum theory is wildly successful and well proven as a tool, it leaves open major questions about how the universe actually works. Although we will allude to a few specific interpretations in this essay, there are nearly as many interpretations of quantum mechanics as physicists working in the field. A group of quantum physicists might include individuals who subscribe to a variety of different interpretations (Schlosshauer 2011), such as, for example, the Copenhagen interpretation (Faye and Folse 1994), QBism (Fuchs 2017), the Information-Theoretical interpretation (Brukner and Zeilinger 2009; Bub 2005), the Relational interpretation (Bitbol 2007; Laudisa and Rovelli 2013), the Many Worlds and Everettian interpretations (Barrett et al. 2010), Bohmian Mechanics (also called de Broglie-Bohm or Pilot Wave interpretation) (Durr and Teufel 2017; Goldstein 2009), Consistent Histories (Griffiths 2017), the Time Symmetric interpretations (Leifer and Pusey 2017; Wharton 2007; Aharonov 2008), or the Objective Collapse Models (Ghirardi 2016). Or, none of the above.
Most non-technical writing about quantum physics does not emphasize this point. Authors typically pick an interpretation and explain quantum physics from within that framework. It is hard enough explaining the weirdness of quantum physics within a single interpretation, much less trying to explain that everything could be completely different if we picked another. But if you are interested in asking about the meaning of quantum physics "outside the lab," we think that it is important to acknowledge that there is no consensus on the meaning of quantum physics "inside the lab."
This point is relevant, for example, if you are reading Karen Barad. She takes quantum ontology as the starting point for rethinking all ontology (as well as epistemology and ethics, in fact). To do so, she must commit to an interpretation in order to have a quantum ontology to start with. In her case, she picks an interpretation that is widely favored among physicists (for historical and cultural reasons, not because there is any evidence supporting it). In this interpretation,5 something like an electron is treated as a fundamentally indeterminate entity prior to measurement. This means that it does not have well-defined properties (like location, or velocity) until it is measured. Measurement (which need not be measurement by conscious humans, but could be some form of interaction with an environment as well) creates definiteness. Barad takes this as a fundamental fact of how the universe operates: definite properties, and definite things, only emerge through interactions. The notion that anything, on any scale, has a persistent and well-defined identity, is thus called into question.
Our point is that you might end up with quite different philosophical conclusions if you started with a different interpretation of quantum physics. In some, entities like electrons (and everything else) have perfectly well defined properties. To account for quantum phenomena, we know that it is impossible to have complete access to information about those properties (otherwise, different forms of equations, more like the classical case, would work). But the existence of definite things and definite properties is not ruled out by any established quantum physics.
A final note for this section on interpretation is that the lack of a single clear interpretation does not mean that the nature and structure of the universe is a philosophical free-for-all. There are many speculative or imagined ideas about quantum physics that are simply inconsistent with empirical facts or the scientific method (like many quantum healing claims, as we mentioned in the introduction). What an interpretation of quantum physics deals with is the meaning we assign to (a) terms that show up in equations or (b) phenomena that are observed in well-controlled, repeatable physics experiments, like the kind that are described in peer-reviewed research publications. If an author or speaker claims to discuss the physical, causal implications of quantum physics and there are no equations or rigorously-performed quantum physics experiments involved (at least in the background), it is not actually about quantum physics, in any interpretation. Period.
4 Core Ideas of Quantum Physics
We have argued that there is no single framework for discussing quantum phenomena through language. Short of listing empirical results and providing equations and recipes to predict them, there can be no "interpretation-free" description of the microscopic world.6
Nevertheless, an equally important point is that no matter what physical reality corresponds to the equations of quantum physics, it is a weird one. Weird, meaning inconsistent with what you would expect based on macroscopic experience.7
What we want to do in this section is call out some of the core facts of quantum physics that, regardless of how we interpret them, are in conflict with the intuition and experience we have on the macroscopic scale. We will use a mix of analogies, hypothetical examples, and a little bit of our own invented language, in a deliberate attempt to avoid some of the most common (and tightly interpretation-bound) pedagogical constructs, letting us hopefully emphasize the interpretation issues more clearly.
4.1 Quantization
Quantum physics gets its name from one core fact: at the smallest scale, nature is "digital" not "analog." Think of the difference between a digital and an analog clock. In the digital clock the smallest "chunk" is a second, whereas an analog clock runs continuously. The second-hand can be in between two one-second tick-marks on the clock dial. Physical entities like matter and light come in smallest chunks, like the bits and bytes of digital information. So do properties of those physical things, like their motion energy, or their electric charge. Quantum (plural quanta) is the name given to an individual chunk.
This fact, by itself, means that the rules of the game are different on a microscopic scale. Basic physical quantities like energy and momentum can only be exchanged in certain specific quantum units. This imposes constraints on the interactions that are possible among quanta of matter or light. In physical interactions, quanta can only exchange energy or momentum in whole quantum "chunks," and never in smaller amounts. In the macroscopic world, it is as though a tea kettle heats up by gradually warming up from zero to the boiling temperature, spending at least a tiny moment at every temperature in between. Microscopically, there are specific steps to any such process, and there simply is no "in between."
4.2 Non-thingness
Electrons and other particles that make up matter are themselves quanta. Photons (individual "chunks" of light) are quanta. So what are quanta? Well, that's where we hit the interpretation problems described in the previous section. There is no single way to talk about what they are, so let's focus instead on what they are not. The single most important idea to grasp about quanta is that they are not things. This is at the heart of the radical weirdness of quantum physics.
Let's define what we mean by things so that this statement gains some weight. By things we mean objects or materials that operate by the familiar rules and logic of the macroscopic world. Examples include coffee, cats, cars, carpets. Things take up space. If they move, they do so in a continuous way along a single trajectory in space. While a thing may have a large physical extent, we never talk about it being in two completely different places at the same time. Things also cannot jump instantly from one place to another. They have physical properties, like their size, location, or speed of motion. Those properties may change over time, but at any single moment, the properties have definite values that can be used to describe the thing in question. Things continue to exist when we're not looking at them. If they are created or destroyed, compounded or broken apart, there is a single narrative we can use to describe what happened.
None of the statements we made about things, above, can be applied in a straightforward way to describe how quanta work. To get a taste for this, there is an experiment you can try at home. Take five boxes and lay them out in a pentagon shape. Now take some red balls and some blue balls in your hand and place exactly one ball (of either color) in each box. Call a friend into the room and ask them to open any two adjacent boxes. Is it possible to arrange the balls (one in each box) such that your friend will always find one red ball and one blue ball, for any two adjacent boxes they decide to open? If the balls were not things but quanta, an analogous scenario could in fact be arranged, using different sequences of measurements (Liang et al. 2011).
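If you try this, you will find that it cannot be done. A short brute-force check makes the classical impossibility explicit; the sketch below is our own illustration in Python, not a computation taken from Liang et al. (2011).

```python
from itertools import product

# Brute-force check of the pentagon-box puzzle described above.
# Classically, each of the five boxes holds one definite ball: red "R" or blue "B".
# The friend may open any of the five adjacent pairs around the pentagon.
adjacent = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

def one_red_one_blue_everywhere(balls):
    """True if every adjacent pair of boxes holds one red and one blue ball."""
    return all(balls[i] != balls[j] for i, j in adjacent)

winners = [b for b in product("RB", repeat=5) if one_red_one_blue_everywhere(b)]
print(winners)  # [] -- none of the 32 possible arrangements satisfies the friend
```

The failure is structural: with an odd number of boxes around the ring, alternating colors must clash somewhere when the loop closes, so long as each box holds one definite, pre-existing color. Quanta, which need not hold definite pre-existing values, are not bound by this counting argument.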
Of course, it isn't simple to say "instead of following macroscopic behavior, quanta work like this..." because while the empirical facts are well established, the words we would use to describe them are tied to specific interpretations. Given a single scenario involving an electron in a laboratory, one physicist might be comfortable describing its behavior by saying "the position of an electron is intrinsically undefined, all we know is that it behaves as if it were in many places simultaneously." Describing the same physical scenario, another might say "an electron is a spread-out entity that does not have a single location." Or, "the electron always has a single, definite location, but knowledge of that location is deeply impossible." Or even, "many parallel universes exist, and in each of those, a copy of the same electron exists at a different place." These are just a few of the many radically different (and quite radical) statements linked to different interpretations of the electron's non-thingness. The equations describing all of these statements are the same in each case. The observed behavior of the electron is the same in each case. The electron's departure from everyday physics is the same in each case. But the words, and the worldviews that accompany them, may be quite different.
Since we view non-thingness as a central feature of quantum physics, we would like to help you to build some intuition for it through analogy. Humans work all the time with abstract concepts that have some non-thing-like behaviors. For example, money.8
To explore money as an analogy for conceptualizing quanta, imagine that you have some dollar bills in cash and you deposit them into your bank account using an ATM in Chicago. You put real physical money into the machine at a specific location. But you know that as soon as the machine counts the bills and credits your bank account, any meaningful relationship to tangible dollar bills is lost. When you held the dollar bills, the money had a well-defined place: it was in your hand. Once you deposit it in the bank, where exactly is it?
Sure, the ATM creates a computer record, and that computer record is located somewhere (probably duplicated in many places). Yet it doesn't seem right to say "the money becomes bits stored in a computer." If the whole transaction were recorded on paper instead of bits on a computer, it would still be the same money. Money-in-the-bank is an abstract concept that does not necessarily depend on the form of any particular record we use to keep track of it.
This abstract concept of money in the bank, or of a dollar you own, behaves much like quanta do. Dollars that you own do not always have a well-defined trajectory in space, and we cannot always sensibly ask where they are at any given moment. Suppose you deposit some dollar bills into the ATM in Chicago, and later fly to L.A. You can withdraw your money from an ATM there. Would you say the money was somehow in L.A. before you went there? How did it "know" you were going to L.A. and not New York? If you had chosen to go to New York, you would have been able to withdraw it there. In a sense, then, your money is equally present anywhere that is connected to the same bank network, and where you find it at any given moment depends on where you initiate a bank transaction. This is a lot like the way that a quantum lacks a well-defined location in an apparatus, until it is measured.
Along these same lines, the money does not need to pass through points in between two locations where you enact transactions with it. We would not say that between your transaction in Chicago and your transaction in L.A. that the money must have been in a city like Denver, somewhere in the middle. Of course, if you go to Denver, you can make your money be there by initiating a bank transaction there instead. But would it have been there without you? Would it have been in any of the cities along any route from Chicago to L.A.?
The lack of definite trajectory in this example is similar to the behavior of electrons and photons and other quanta. It is a weird comparison, because money is an invented abstraction, and electrons and photons and other quanta are constituents of the touchable, viewable physical world. Yet, the intuition you have for the way money works is a useful start for grasping the non-thingness of quanta.
One useful feature of the analogy is the way that your transactions play an active role in determining where your money is. In the physical world, if someone or something interacts with a quantum, it changes the quantum's behavior. This is known as the "observer effect," although it does not necessarily require a conscious observer. Consider a quantum like an electron that is sent through an apparatus in which it can travel multiple paths. We discover that it does different things depending on where (on which path) we place our detectors. That is, the act of detecting the quantum actively changes what it does.
One idea that the money analogy does not quite capture is that the mere existence of multiple possible physical trajectories can affect the outcome for individual electrons passing through an apparatus. This is bizarre, and it is not at all true for money in the bank. The mere presence of a path through Denver as one route from Chicago to L.A. does not change the behavior of your money. In a physics experiment, different outcomes will happen if more paths are present, even if every measurement only ever shows the quantum on one single path. This is exactly the kind of experimental result that leads to the interpretative disagreements we described before: is there a guiding force that makes the electron act as if it were in many places at once? Is the concept of location just something we can't use with electrons, when we are not actively observing them? Are there many copies of the same electron simultaneously taking all possible paths in many universes?
In the end, even if some analogies can help provide some intuition about what we mean by non-thingness, we are likely to hit dead ends with every analogy that uses words or familiar everyday concepts. The familiar and the everyday are rooted in the macroscopic world, and the microscopic world simply plays by different rules.
4.3 Randomness
All quantum phenomena display randomness.
We encounter randomness every day on a macroscopic scale, but the randomness in quantum physics is of a different character. For example, consider flipping a coin and obtaining a random result, heads or tails. The way this differs from quantum randomness is that in macroscopic random events there are knowable (at least in principle) reasons why a particular outcome occurs. You could make a movie of the coin flip, analyze the air currents, and reconstruct how the exact finger motion and trajectory of the coin through the air resulted in it landing heads-up. In other words, we can construct a single coherent narrative of the coin, from the moment it was thrown to the outcome of the experiment. It may be challenging in practice to predict or fully analyze the outcome of a coin flip, but it isn't impossible in principle.
On the quantum scale, predicting or fully analyzing the outcomes of random events is impossible, even in principle. (Well, to be fair, there are some disagreements about how far to go with the "even in principle" statement, which we'll explain in a moment). One issue is that there is no way to continuously measure ("take a movie of") a random quantum process without physically interacting with it and affecting the outcome of the process. There also is no single coherent narrative describing a quantum process leading to the prediction with certainty of an experimental outcome. The logic that quantum random events follow in physics experiments is inconsistent with the idea that the entities have well-defined and knowable reasons for any given outcome.
A quantum system analogous to a coin flip might be an experiment in which we send quanta through an apparatus and then measure a certain property that has a 50% probability of having one value (call it "A") and a 50% probability of some other value (call it "B"). For example, it could have 50% probability of facing up or facing down. Between the beginning and the end of the experiment, was it facing up or down? Was there some cause or reason responsible for an individual quantum ending up facing up or facing down? Is that reason something that we could know? We cannot answer these questions in the same way that we can for a coin flip, but to say more in words about what is going on, we have to take on a particular interpretation.9
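In the standard mathematical recipe (which, unlike the words, all parties agree on), such a 50/50 property is written as a sum over the two outcomes, and the probabilities come from squaring the numbers in front of each outcome:

$$|\psi\rangle = \tfrac{1}{\sqrt{2}}\,|\text{up}\rangle + \tfrac{1}{\sqrt{2}}\,|\text{down}\rangle, \qquad P(\text{up}) = P(\text{down}) = \left(\tfrac{1}{\sqrt{2}}\right)^{2} = \tfrac{1}{2}.$$

Every interpretation accepts this recipe and the statistics it predicts; what they disagree about is what, if anything, the symbol |ψ⟩ says about the quantum between measurements.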
For example, one interpretation of quantum randomness says that the quantum does not have actual properties until measured. This is taking non-thingness to the extreme, to say that definite properties only exist in certain moments, like measurements, and not in the moments in between. In this interpretation, randomness is truly fundamental, and no story or set of reasons can explain why quanta manifest as they do in any individual case.
Another interpretation is to say that the quantum does have properties between measurements but to know them would require knowing everything about the entire universe. In this interpretation, there is a story that explains why the quantum ends up manifesting in a particular way, but that story potentially involves what is happening billions of light-years away. Does that make it unknowable in principle? We could debate what we mean by "in principle" and land on different sides of the argument, but it certainly involves a different scale of unknowability than the practical unknowability of the outcome of a coin flip.
And yet a third direction of interpretation says that the quantum does have properties between measurements but to fully know and characterize them we would need to have access to many worlds in which every possible outcome is equally real. (And, of course, there are yet other interpretations that say other things).
Again, we all agree that quantum (microscopic) randomness isn't the same as macroscopic randomness, because that's what experiments show us. But when we shift to trying to explain what that means, every rigorously supported option has dramatic consequences in terms of how we think about physical reality and knowability.
With all that said, this is another place we need to caution against over-reading the implications. The fact that randomness is a seemingly incontrovertible aspect of fundamental reality does not mean we live in an "anything goes" universe, or that highly precise predictions are impossible. Quantum randomness is built into the equations of quantum theory. While those equations can only make statistical predictions, the statistical predictions are of very high quality. We have to know where electrons will go, to high accuracy, when we design technologies like computer memory. We have a great deal of knowledge about what quanta will do in most situations that quantum physics addresses; it is just knowledge that pertains statistically to the behavior of many quanta as an ensemble rather than exactly predicting the behavior of each single quantum.
4.4 Entanglement
The final quantum oddity that we want to highlight is entanglement. Entanglement is a term for a way that quanta can have fixed and definite relationships to one another while still individually showcasing the same deep quantum randomness. In a sense, certain relationships themselves become more definable than the things doing the relating.
To set up an illustration of this concept, first imagine you have a pile of identical coins. You take half and you give half to a friend who then leaves town. You agree that you are both going to do a little coin-flipping experiment and record your results. You toss each one of your coins one at a time and record whether you get heads or tails, writing down all the results in a sequence. Your friend does the same.
You later compare your results and find that even though both of you saw apparently random sequences of results, there was a perfect match: every time you obtained heads, they did too, and every time you obtained tails, they did too. Could this ever happen in the ordinary, macroscopic world? Well, yes. We could imagine that the coins internally contained some complex system of microchips, clocks, and weights, pre-programmed to execute identical sequences even at a distance. No matter how odd the correlation, we can always concoct the perfect conspiracy theory to explain what we see, even though of course ordinary coins would not be expected to show this kind of random-but-connected behavior.
Empirically, we find that quanta are capable of acting like these fictional coins, showing long-distance correlations despite behavior that is random. This is the phenomenon known as entanglement. The difference between entanglement and the coin example, though, is that quanta are not things. With quanta, we lose the ability to invoke "conspiracy theories," because we cannot construct any single clear story about what they were doing prior to our measurement. Experiments called Bell tests confirm that quantum correlations are capable of persisting even when all possible conspiracy theories are ruled out by the absence of well-defined quantum properties prior to the specific measurements we choose to make. (For a non-technical explanation of the idea behind these experiments see Kwiat and Hardy (2000)).
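For readers who want to see the arithmetic behind a Bell test, here is a minimal sketch, using the standard textbook prediction rather than anything specific to the Kwiat and Hardy presentation. For a maximally entangled pair, quantum theory predicts a correlation E(a, b) = −cos(a − b) between measurements made with settings a and b, while any scheme of pre-programmed coins must keep the CHSH combination of four such correlations at or below 2.

```python
import math

# CHSH combination of correlations (illustrative sketch).
# Pre-programmed "coins" (local hidden variables) must satisfy |S| <= 2.
def E(a, b):
    """Textbook quantum correlation for a maximally entangled pair."""
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2            # first observer's two settings
b1, b2 = math.pi / 4, -math.pi / 4   # second observer's two settings

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))  # 2.828... = 2 * sqrt(2), beating the classical bound of 2
```

Measured values above 2, observed in real experiments, are what rule out the coin-style conspiracy stories.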
Entanglement does not happen for coins, but it does happen for quanta. It is not magic, in the sense that it is a feature fully described in the mathematics used by physicists. But it is certainly dramatically different from the way that macroscopic reality works. What we see is that relationships among quanta can be preserved by nature despite the individual quanta behaving randomly. Moreover, these relationships are maintained even when the quanta are separated enough that no physical signal (that is, one traveling at the speed of light or slower) could possibly reach from one to another in time to explain how they "know" about each other. Importantly, entanglement cannot be used to instantaneously communicate information from one place to another, because that would require a causal connection between the two quanta. This is an essential point overlooked by most people. Entanglement is a correlation and not a causal link of the kind that is necessary if you want to send a signal from one place to another.
Because of the physical impossibility of causal signaling between two entangled quanta, quantum entanglement is more than long-distance correlation. In the macroscopic world, there are plenty of examples of long-distance correlation between random events. For example, there may be correlations between the random fluctuations in stock market prices in the U.S. and Japan. The difference is that stock markets around the world are physically connected by causal links that communicate information back and forth, unlike entangled quantum systems. Imagine severing all phone and internet connections (and any other physical connection—including global weather patterns and actual exchanges of goods and services) between the U.S. and Japan, as if the two stock market systems were operating on completely different planets. In that case, we would expect the random fluctuations in each market to be uncorrelated, because they would have nothing to do with one another. This thought experiment demonstrates that stock market correlations are a purely "classical" form of correlation, because they depend on the possibility of causal communication between the two systems. If you removed all possible causal connections, the correlation would disappear and the events would be independently random.
Quantum entanglement is also more than the ability to have instantaneous knowledge about something at a distance. Suppose you have one silver coin and one gold coin. Without checking which is which, you slip one into your pocket and one into your friend's pocket. Later, you can look in your own pocket and instantly know which coin your friend will find in their pocket, no matter how far away they are. There will always be a perfect correlation between what you find and what they find, and you will know something about their experience despite having no communication with them when they pull the coin from their pocket. While this thought experiment shares some features with quantum entanglement, it too is a form of classical correlation. The distinction here is that there is an unambiguous fact of the matter of whether you put the gold coin in your pocket or the silver coin. A third person could come check your pocket and they would know definitively which coin you had. Their observation also would not change anything about the situation. Quanta, on the other hand, behave as if there is no fact of the matter prior to measurement. Entangled systems may have certain well-defined overall properties for the system as a whole, but we cannot treat the properties of individual quanta as definite or well-defined. Moreover, a third person checking a quantum system will break the entanglement and destroy the possibility of further correlation.
Quantum entanglement is odd, and different from macroscopic, classical forms of correlation. It is extensively studied in laboratory experiments, but one important point to make is that the laboratory experiments that showcase long-distance entanglement generally require highly controlled environments. Entanglement does happen constantly in nature, but maintaining entangled relationships for long times or over large distances requires that the entangled quanta do not interact with anything else.
For this reason, quantum entanglement does not offer a particularly plausible mechanism for long-distance interactions among complex real systems like, say, human brains (or anything within human brains). Any electron or photon in your brain is constantly interacting with the rest of the matter in your brain, and thus cannot maintain an entangled state with other quanta in the outside world. Any time we are discussing complex structures of quanta (like complex chemical structures, or biological structures), long-distance entanglement effects are suppressed to the point of being irrelevant, simply because of constant interactions between quanta and their neighbors.
5 Quantum to Macroscopic
The previous section established that the microscopic universe behaves in fundamentally different ways than the macroscopic universe. The macroscopic, everyday physical realm is the realm of things with definite properties and definite trajectories through space. Randomness occurs, but outcomes are still linked to a sequence of specific causes. Physical relationships between physical objects are deeply tied to the physical objects themselves (we can't talk about the force exerted by the lamp on the table without there being a lamp and a table). In abstract thought, we have concepts like money or love that may violate these precepts, but physical things do not.
On the microscopic scale, quanta play by different rules. Even though they are physically real entities, they defy description as things. They show a different logic underlying their random behavior, and they demonstrate entanglement.
How does one set of rules and behaviors transition into the other? That is, how do the behaviors of quantum non-things build up to create the behaviors of macroscopic things?
Nobody knows.
One possibility that physicists have considered is that it is a matter of the sheer size of the system. This would mean that there is a physical mechanism that acts on systems above a certain mass, inhibiting quantum weirdness and making them act like macroscopic things. Quite a few experiments are being carried out today that look for a potential size scale beyond which quantum features are suppressed due to gravity. No clear quantum-to-macroscopic boundary has yet been found, and many physicists believe there is no such boundary.
We do know that the more a quantum system interacts with its environment, the more thing-like it tends to become, because little particles in the air, or electromagnetic fluctuations in the environment, for example, can cause quanta to lose their non-thingness extremely quickly. That is why most quantum phenomena only manifest in very strict laboratory conditions. The quanta must be isolated from the air in vacuum chambers and all interactions with the quanta must be delicately controlled.
In any case, we can never directly access the quantum world. We can only know its effects on macroscopic measurement apparatuses. In a sense, we only access a translation of the micro-scale into the framework of macro things familiar to human experience. The quantum-to-macroscopic boundary is therefore a kind of "language boundary." Just as with translations from one human language to another, there will always be some information that is lost in the process. Since we cannot shrink ourselves down to the quantum scale, we may face fundamental limitations in understanding what the universe is like on the other side of the micro/macro divide.
Not only do we not know exactly how the quantum world actually works, but we don't know how the multiplicity of random quantum possibilities ends up translating to a single measured outcome, which is what we actually see. How exactly does the measurement process affect the behavior of a quantum? Why and how do the interactions with the environment or with a measurement apparatus make a quantum go from a non-thing to a thing? Does this happen instantaneously? Is this "transition" from non-thing to thing more like an illusion, or does it correspond to a distinct change in the rules of physics?10
As you can see, there are many open questions about how the rules change from micro to macro. That something changes is, however, a simple empirical fact. And it is an important point to remember when talking about how quantum physics might relate to the human realm. Even the tiniest dust grain you can imagine has enough quanta within it, and is in such a constant state of interaction with its environment, that it loses quantum behavior. That tiniest bit of dust is a thing with a definite place and definite physical properties. Even if atoms within the grain of dust may, at individual moments, experience entanglement phenomena with each other or with their environment, the dust grain as a whole is not meaningfully entangled with anything else. If it drifts randomly in the wind, that random behavior is of the macroscopic variety, amenable to a narrative description in terms of cause and effect. The equations a physicist would use to describe the grain of dust are simple, classical equations.
This is an important point to make because most of the things we care about in our everyday lives as humans are much larger than a grain of dust, and thus are even farther away from the quantum scale. People, notably. As far as physics is concerned, people are distinctly macroscopic entities, displaying none of the behaviors that quanta display, even if the quanta within our own bodies are busily doing their own thing in their full strangeness. The details end up being irrelevant on our scale: even if there is technically some entanglement that occurs between the outermost electrons of atoms in the layer of dead skin when my hand touches yours, this has no measurable or perceivable consequences of any kind for either of us. It is a curiosity of the natural world that it occurs, but likely that's all there is to say about it.
6 So What?
Alright, quantum physics is strange (or at least seems strange, to organisms adapted to function in the macroscopic world). But so what? What, if anything, does this have to do with everyday life, scholarship in other fields, or the problems and questions that we face as humans? There are a few ways you might answer this.
First, you could say "nothing." You can get by, and most people do, without ever explicitly paying attention to quantum physics at any time in your life.
Second, you could say "well, it is of practical importance to technology," because it is. Whether you care or not, you use devices all the time that employ quantum physics. Quantum theory is directly tied to applications with obvious economic value. Many of those technologies, like nuclear weapons, also raise obvious ethical questions. As an important current example, researchers are making significant advances in the development of computers that use quantum entanglement for novel types of computations and improvements in computing speed. Any serious consideration of the potential economic, social, or ethical impact of this new technology will require understanding how quantum computation is distinct from classical computation, which means understanding some of the basics of quantum physics itself.
Third, like scholar Karen Barad, you could say quantum physics changes "everything," because it tells us that the universe does not respect the basic preconceptions about reality that we develop as inhabitants of the macroscopic realm. Thus, perhaps our entire philosophical worldview, and even our vocabulary (which is normally quite bound in a thing-based ontology) should completely shift. If, on a fundamental level, relationships are more definable than the things doing the relating, should that challenge how we view the concept of a relationship on any scale? If, on a fundamental level, the properties of entities are indeterminate until interactions occur, should we give up any formal distinctions between subject and object in every context? These are the kinds of philosophical leaps you might take if you commit to a certain interpretation of quantum physics and take quantum ontology as the final word.
A fourth way that quantum physics might be valuable beyond the lab is as a source of metaphors and analogies. Quantum physics metaphors are rich. Opening up to thinking about non-thingness, indeterminate identity, blurry subject-object boundaries, and the dissolution of narrative may all be constructive things to do in our contemporary social and political moment, even if that context has little to do with the actual physics. It might give us some new inspiration, and new points of view, for thinking differently. We want to underscore, though, that the association of these concepts with phenomena in physics does not confer any "scientific" authority to arguments that invoke quantum concepts metaphorically. There is rarely a direct connection between the metaphorical context and the quantum physics context. Moreover, we would argue that quantum concepts, expressed verbally, lack a certain kind of scientific authority in the first place, since we have no consensus on their interpretation.
There are also scholars who work with direct mathematical analogies. This avoids some of the interpretation problems inherent in the verbal expression of quantum concepts. Yet, as with verbal metaphors, mathematical analogies based on quantum physics need not depend on (nor imply) any direct relationship to quantum physics itself. Quantum theory invokes a particular type of mathematical model (involving something called Hilbert spaces) to describe quanta. Similar mathematical models may be useful in other domains, such as cognition or finance.11 But, consider that we can use a linear equation to describe the trajectory of an asteroid through space, or to describe the growth of savings in a child's allowance jar. The fact that the same equation works in both cases does not require or imply any connection between asteroids and allowances. Likewise, even though the name "quantum" is often used to label Hilbert space models, there need not be any connection between the non-physics and physics applications of such models.
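To make the asteroid/allowance point concrete, the shared mathematical object is just the linear form

$$y = m\,x + b,$$

where for the asteroid y is position along its path, x is time, and m is speed, while for the allowance jar y is the balance, x is the number of weeks, and m is the weekly deposit. The equation is identical in the two cases, yet nothing about the shared form connects asteroids to allowances. The same caution applies when Hilbert-space models appear in cognition or finance.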
In summary, there are many different ways that quantum physics might matter beyond the lab, perhaps as many as there are individual people who take the time to think about the question. As your authors, we are interested in the ethics of quantum technology and curious about the potential of quantum metaphors and analogies. We do not go as far as someone like Karen Barad, but we do take the stance that quantum physics presents productive and important challenges to a person's worldview. To us, the most important lessons are in the way that quantum physics requires unlearning ideas about how the world itself works, and about how scientific knowledge functions, that are grounded in macroscopic experience. Because there is no single clear ontology implied by quantum physics, nature does not give us any solid or satisfying replacements for the naive ideas we are forced to give up.
This is humbling, in a way that offers a counterbalance to some of the posturing we often observe from scientists. Too often scientists and science communicators adopt the role of an authority full of answers, leading them, in the quantum physics case, to sweep problems of interpretation under the rug. Individuals invoking quantum metaphors often seem eager to borrow this authoritative posture. Yet, in our view the unsettled interpretation of quantum physics—our persistent stuckness, our lack of authoritative answers—is one of the most important things about it. Even though quantum physics has unquestionably expanded our knowledge of the world, it also forces us to consider that some knowledge may be impossible. Here we stand a full century after the development of quantum physics, and yet we are arguably no closer to resolving basic questions about what is in this universe we inhabit.
As a final note, we recently described this essay and the notion of "obliterating thingness" to School of the Art Institute of Chicago grad student Joshi Radin. In response, she immediately quipped "obliterating thingness sounds like an experience you can get on drugs. Why do you need quantum physics?"
It was a good point, so we just want to reiterate: non-thingness as a central feature of quantum physics is not about "everything affects everything else" or a breakdown of physical barriers in the world. It is not something that, for us, translates to the head-space of feeling oneness with the universe or peace or comfort. That's quantum "woo" talking, not quantum physics. The way that quantum physics obliterates thingness is in the way that it undermines our ability to use language, and the thought structures associated with it (like narrative), to label and describe what we observe in nature when we test its behavior on small scales. It has more to do with a breakdown of our ability to represent reality in ways that feel like they make intuitive sense, leaving us with equations and recipes but no clear understanding of what they actually mean.
Again, to us, the feeling that comes along with contemplating quantum physics is nothing like a sense of peace or wholeness or connectedness. It is the feeling of deep humility, often tinged with frustration. It does not matter how many years you spend as an expert in quantum physics, how much confidence you have in the project of science, or how hard you try to make quantum physics make sense. You will often still find yourself wanting to scream "what the fuck, universe?" while staring at even the most basic experimental results. Quantum reality deeply undermines the sense-making processes we are used to being able to perform as humans. A hundred years of effort has shown no way out of the fog. It is possible that the fog is permanent. And that is deeply, deeply humbling, to us, even as we still experience wonder in the power of scientific inquiry. That deep humility is something we do hope to share more broadly. It is a form of cultural processing that we personally value and hope to add to the rich interdisciplinary conversations underway about what all of this means.
Our intent is for this essay to complement other available readings out there that introduce quantum physics in non-technical language. Thus, we have deliberately avoided going through many of the pedagogical examples (the well-known "double slit experiment" for example) that often anchor those introductions. We have also deliberately avoided some of the common vocabulary—like wave-particle duality—because we want to highlight that such vocabulary choices are linked to specific quantum interpretations. Keep these points in mind to make the best use of this essay in conjunction with some of the readings below, or others that you find on your own. Also look through the footnotes for some additional references.
Raymer, Michael. "Quantum Physics: What Everyone Needs to Know", Oxford University Press (2017).
All the basic elements of quantum physics, including some potential applications, explained to non-scientists in a precise, yet simple and pedagogical text. This is a great first encounter with quantum phenomenology.
Albert, David Z. "Quantum mechanics and experience", Harvard University Press (1992).
A slightly technical exposition of the many interpretations of quantum mechanics and their limitations. This book is not meant as a first encounter with quantum physics. It is a good book for those who already have a basic understanding of quantum phenomena and want to dig into their different philosophical interpretations.
Whitaker, A. "Einstein, Bohr and the Quantum Dilemma", Cambridge University Press (1996).
A very detailed account of the development of quantum theory, focusing on its history and its philosophy.
Barad, K. "Meeting the universe halfway: Quantum Physics and the Entanglement of Matter and Meaning" , Duke University Press Books (2007).
An intriguing non-technical book in which quantum phyics is connected to science studies, feminist, poststructuralist, and other critical social theories.
Barad is faculty at the University of California, Santa Cruz. Her book (Barad 2007) is widely read in the humanities, in science studies, and in the arts, although it is interesting to note that she is almost entirely unknown among physicists.
In some contexts, scholars may differentiate these terms. In the physics context, however, the terms are interchangeable: any could be used as the title for an introductory course or textbook covering the same material.
Note that, in this equation as well as any other used in physics, it doesn't matter what the symbols actually are as long as we know what they represent, or how to use them.
"Ontology" is a word from the field of philosophy that refers to a theory of "what is" in the universe. It is often contrasted with "epistemology," describing a theory of how we know about things in the universe.
Which is known as the "Copenhagen interpretation," referencing the place where it was mostly developed by Niels Bohr.
Arguably, even the choice of the way that the equations are constructed is linked to interpretation, although ultimately all successful formulations of quantum physics need to be mathematically equivalent wherever they link to descriptions or predictions of well-established quantum phenomena.
This point is made nicely in Lewis (2016), which argues that, while there is no single quantum ontology, all possible interpretations of quantum physics are philosophically significant. See also Freire et al. (2011) for a discussion of the philosophical implications of the different interpretations.
We will use the money analogy casually here, but it is potentially quite nuanced. As we were writing this essay, we had some productive exchanges with David Orrell, author of Orrell (2018), who argues that money shares many properties with the quantum systems studied by physicists, and perhaps should be modeled with a similar type of mathematics.
The interpretation options described in this section are loosely related to the interpretation options presented previously, but different interpretations can take a mix of stances on different aspects of quantum behavior, like randomness. There is such a large set of possible interpretations of quantum physics that we have decided not to attempt to enumerate or name any particular subset, but just to attempt to illustrate some of the differences as they apply to an individual concept like randomness.
In the many worlds interpretation of quantum mechanics everything behaves with quantum weirdness, even us. When we observe a quantum (via some measurement apparatus) we actually become entangled with the measurement apparatus and with the quantum. In the different branches of the universe different versions of us will see different measurement results. This is an example of an interpretation in which our experience of a different set of rules for the macroscopic realm is more like an illusion.
For an introduction to the field of quantum cognition, see the book Quantum models of cognition and decision, by Jerome R. Busemeyer and Peter D. Bruza (Cambridge University Press, 2012). For an example of analogies in the financial realm, see Schaden, Martin. "Quantum finance." Physica A: Statistical Mechanics and its Applications 316.1-4 (2002): 511–538.
This essay grew from a series of conversations that were made possible by the Scientist in Residence program at the School of the Art Institute of Chicago (SAIC), which brought Gabriela Barreto Lemos to the SAIC campus for the Fall semester of 2016. We also acknowledge the support of the SAIC Undergraduate Dean's office and the SAIC Department of Liberal Arts as the primary sponsors of a Spring 2018 symposium entitled "Quantum Unlearning," for which an earlier version of this essay was prepared to provide background reading. Artist (and symposium co-organizer) Kyle Bellucci Johanson has shaped the direction of this essay significantly and provided invaluable feedback. We have benefited from conversations with individuals spanning many disciplines, among which we wish to specifically thank Jacques Pienaar, Erik Nichols, Joseph Kramer, and Robb Drinkwater for helpful comments on prior drafts. We thank the students in Kathryn's Spring 2018 "Matter, Deconstructed" course at SAIC for providing useful feedback on a couple of confusing points. Finally, we thank the organizers and participants involved in the Spring 2018 Symposium "Quantum Theory and the International" at the Mershon Center for International Studies at Ohio State University, which one of us (Schaffer) attended, and which ultimately spurred the revision and publication of this work.
Aharonov, Y., & Vaidman, L. (2008). The two-state vector formalism of quantum mechanics: An updated review. In J. G. Muga, R. S. Mayato, & Í. Egusquiza (Eds.), Time in quantum mechanics, Lecture Notes in Physics (Vol. 734, pp. 399–447). Springer: Berlin. https://doi.org/10.1007/978-3-540-73473-4_13.
Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Durham: Duke University Press.CrossRefGoogle Scholar
Barrett, J., Kent, A., Saunders, S., & Wallace, D. (2010). Many worlds? Everett, quantum theory, and reality. Oxford: Oxford University Press.Google Scholar
Bitbol, M. (2007). Physical relations or functional relations? A non-metaphysical construal of Rovelli's relational quantum mechanics. http://philsci-archive.pitt.edu/id/eprint/3506. Accessed 22 May 2019.
Brukner, Č., & Zeilinger, A. (2009). Information invariance and quantum probabilities. Foundations of Physics, 39, 677–689.CrossRefGoogle Scholar
Bub, J. (2005). Quantum mechanics is about quantum information. Foundations of Physics, 35, 541–560.CrossRefGoogle Scholar
Durr, D., & Teufel, S. (2017). Bohmian mechanics: The physics and mathematics of quantum theory. Berlin: Springer.Google Scholar
Faye, J., & Folse, H. J. (1994). Niels Bohr and contemporary philosophy. Dordrecht: Springer.CrossRefGoogle Scholar
Freire, O, Jr., Pessoa, O, Jr., & Bromberg, J. L. (2011). Teoria quântica: estudos históricos e implicações culturais. Campina Grande/Sao Paulo: EDUEPB/Livraria da Física.CrossRefGoogle Scholar
Fuchs, C. A. (2017). Notwithstanding Bohr, the reasons for QBism. https://arXiv.org/abs/1705.03483v1.
Ghirardi, G. (2016). Collapse theories. In E. N. Zalta (Ed.), The stanford encyclopedia of philosophy (Fall 2018 Edition). https://plato.stanford.edu/archives/fall2018/entries/qm-collapse/. Accessed 22 May 2019.
Goldstein, S. (2009). Bohmian mechanics. In The stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/sum2017/entries/qm-bohm/.
Griffiths, R. B. (2017). The consistent histories approach to quantum mechanics. In E. N. Zalta (Ed.), The stanford encyclopedia of philosophy (Spring 2017 Edition). https://plato.stanford.edu/archives/spr2017/entries/qm-consistent-histories/. Accessed 22 May 2019.
Kwiat, P., & Hardy, L. (2000). The mystery of quantum cakes. American Journal of Physics, 68, 33.CrossRefGoogle Scholar
Laudisa, F., & Rovelli, C. (2013). Relational quantum mechanics. In E. N. Zalta (Ed.), The stanford encyclopedia of philosophy (Summer 2013 Edition). https://plato.stanford.edu/archives/sum2013/entries/qm-relational/. Accessed 22 May 2019.
Leifer, M., & Pusey, M. (2017). Is a time symmetric interpretation of quantum theory possible without retrocausality? Proceedings of Royal Society A, 473, 20160607.CrossRefGoogle Scholar
Lewis, P. (2016). Quantum ontology: A guide to the metaphysics of quantum mechanics. Oxford: Oxford University Press.CrossRefGoogle Scholar
Liang, Y.-C., Spekkens, R. W., & Wiseman, H. M. (2011). Specker's parable of the overprotective seer: A road to contextuality, nonlocality and complementarity. Physics Reports, 506(1–2), 1–39.CrossRefGoogle Scholar
Orrell, D. (2018). Quantum economics: The new science of money. London: Icon Books.Google Scholar
Schlosshauer, M. (2011). Elegance and enigma: The quantum interviews. Berlin: Springer.CrossRefGoogle Scholar
Wendt, A. (2015). Quantum mind and social science: Unifying physical and social ontology. Cambridge: Cambridge University Press.CrossRefGoogle Scholar
Wharton, K. B. (2007). Time-symmetric quantum mechanics. Foundations of Physics, 37(1), 159–168.CrossRefGoogle Scholar
© The Author(s) 2019
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Affiliations: 1. School of the Art Institute of Chicago, Chicago, USA; 2. International Institute of Physics, Federal University of Rio Grande do Norte, Natal, Brazil; 3. University of Massachusetts Boston, Boston, USA.
Schaffer, K., & Barreto Lemos, G. Foundations of Science (2019). https://doi.org/10.1007/s10699-019-09608-5. First online: 24 May 2019. Publisher: Springer Netherlands.
Windshield wipers on connected vehicles produce high-accuracy rainfall maps
Matthew Bartos, Hyongju Park, Tian Zhou, Branko Kerkez & Ramanarayan Vasudevan
Scientific Reports, volume 9, Article number: 170 (2019)
Connected vehicles are poised to transform the field of environmental sensing by enabling acquisition of scientific data at unprecedented scales. Drawing on a real-world dataset collected from almost 70 connected vehicles, this study generates improved rainfall estimates by combining weather radar with windshield wiper observations. Existing methods for measuring precipitation are subject to spatial and temporal uncertainties that compromise high-precision applications like flash flood forecasting. Windshield wiper measurements from connected vehicles correct these uncertainties by providing precise information about the timing and location of rainfall. Using co-located vehicle dashboard camera footage, we find that wiper measurements are a stronger predictor of binary rainfall state than traditional stationary gages or radar-based measurements. We introduce a Bayesian filtering framework that generates improved rainfall estimates by updating radar rainfall fields with windshield wiper observations. We find that the resulting rainfall field estimate captures rainfall events that would otherwise be missed by conventional measurements. We discuss how these enhanced rainfall maps can be used to improve flood warnings and facilitate real-time operation of stormwater infrastructure.
Accurate rainfall measurements are essential for the effective management of water resources [1]. Historical rainfall records are used extensively in the design of water infrastructure [2], while at finer scales, real-time rainfall measurements are an integral component of flood forecasting systems [3]. Despite the central role that precipitation measurements play in the design and operation of water infrastructure, current methods for measuring precipitation often do not provide the spatial resolution or measurement certainty required for real-time applications [3]. As the demand for real-time precipitation data increases, new sensing modalities are needed to address deficiencies found in conventional data sources.
The need for high-resolution precipitation estimates is perhaps best illustrated by the problem of urban flash flooding. Flooding is the number one cause of natural disaster fatalities worldwide, with flash floods accounting for a majority of flooding deaths in developed countries [4]. Despite the risks posed by flash flooding, there is "no existing model [that is] capable of making reliable flash flood forecasts in urban watersheds" [3]. Flash flood forecasting is to a large extent hindered by a lack of high-resolution precipitation data, with spatial resolutions of <500 m and temporal resolutions of 1–15 minutes required for urban areas [5,6].
Contemporary rain measurement technologies—such as stationary rain gages and weather radar—struggle to achieve the level of precision necessary for flash flood forecasting. While rain gages have long served as a trusted source of surface-level precipitation measurements [7], they often fail to capture the spatial variability of rain events, especially during convective storms [8,9,10]. This inability to resolve spatial patterns in rainfall is made worse by the fact that the number of rain gages worldwide is rapidly declining [1]. Weather radar is a useful tool for capturing the spatial distribution of rainfall. However, radar-rainfall estimates are subject to large spatial and temporal uncertainties [11,12,13,14]. Additionally, weather radar tends to show systematically large biases for major flood events, and may perform poorly for small watersheds [6], making urban flood forecasting problematic.
The rise of connected and autonomous vehicles offers an unprecedented opportunity to enhance the density of environmental measurements [15,16]. While dedicated sensor networks are expensive to deploy and maintain, fleets of connected vehicles can capture real-time data at fine spatial and temporal scales through the use of incidental onboard sensors. With regard to rainfall measurement, windshield wiper activity offers a novel means to detect the location and timing of rainfall with enhanced precision. When used in conjunction with modern signal processing techniques, wiper-based sensing offers several attractive properties: (i) vehicles achieve vastly improved coverage of urban areas, where flood monitoring is important; (ii) windshield wiper intensity is easy to measure and requires little overhead for processing (as opposed to video or audio data); and (iii) vehicle-based sensing can be readily scaled as vehicle-to-infrastructure communication becomes more widespread. Moreover, many new vehicles come equipped with optical rain sensors that enable direct measurement of rainfall intensities. When paired with data assimilation techniques, these sensors may enable even higher-accuracy estimation of rainfall fields compared to wipers alone.
While a small number of studies have investigated vehicle-based precipitation measurements, the results of these studies are strictly based on simulated wiper data instead of real measurements. As such, the premise that windshield wiper data can be used to improve rainfall estimates has never been verified using a large real-world dataset. Hill (2015) combines simulated binary (wet/dry) rainfall sensors with weather radar observations to generate improved areal rainfall estimates, which are then validated against rainfall fields produced by interpolation of tipping-bucket rain gages [15]. Similarly, Haberlandt (2010) combines simulated vehicle wiper measurements with rain gage observations to improve rainfall field estimates, and then validates the resulting product against weather radar [16]. Although these studies highlight the potential for vehicle-based measurements to improve the spatial and temporal resolution of rainfall estimates, their findings have not yet been validated using data from real-world connected vehicles.
To address these challenges, this study leverages windshield wiper measurements collected from nearly 70 vehicles to produce corrected rainfall maps (see Fig. 1 for a description of the study area and data sources). In the first part of this paper, we demonstrate that windshield wiper measurements offer a reliable indicator of rainfall by comparing wiper measurements against dashboard camera footage that indicates the ground truth binary rainfall state (raining/not raining). In the second part of this paper, we develop a Bayesian data fusion procedure that combines weather radar with vehicle-based wiper measurements to produce an updated probabilistic rainfall field map. We validate this novel data product by showing that it is more effective than the original radar data at predicting the binary rainfall state. Finally, we discuss how these enhanced rainfall maps can be used to improve flood warnings and facilitate real-time operation of stormwater infrastructure.
Overview of the study area on June 12, 2014. Blue circles represent rain gages. Vehicle paths are shown as green lines, while roads are shown in gray. A radar overlay shows the average precipitation intensity as estimated by radar. The map is produced with a custom script using the Python programming language (Python 3.6: https://www.python.org). See data access links for code used to generate this map.
Windshield wipers improve binary rainfall detection
Windshield wiper measurements enhance rainfall estimation by enabling greater certainty about the timing and location of rainfall. While wiper intensity on its own is generally a poor predictor of rainfall intensity (see Figure S1 in the Supplementary Information), we find that wiper status (on/off) is a stronger predictor of binary rainfall state than either radar or gage-based measurements. This result suggests that vehicle-based measurements can be used to validate and correct rainfall fields derived from conventional data sources.
Wiper measurements provide a more accurate indicator of binary rainfall state than either radar or gage measurements. We determine the binary classification performance for each technology (gages, radar and wipers) by comparing the measured rainfall state with co-located dashboard video footage. Dashboard video is taken to represent the ground truth, given that the presence or absence of rainfall can readily be determined by visually inspecting the windshields for raindrops. Figure 2 shows an example of co-located radar, gage, wiper and camera measurements for a single vehicle trip. The top two frames show dashboard camera footage collected over the course of the vehicle trip. Rainfall is visible during the first half of the trip (top left) while no rain can be seen during the second half of the trip (top right). The map (bottom left) shows the path of the vehicle along with (i) the reported wiper intensity, (ii) the average radar rainfall intensity during the trip, and (iii) the two nearest rain gages. Two time series (right) compare radar and gage measurements of rainfall intensity near the vehicle's location (center right) with reported wiper intensity (bottom right). The binary classification performance for each data source is assessed by manually labeling the ground truth rainfall state based on the dashboard camera footage, and then comparing these labels with the binary rainfall state predicted by co-located wiper, radar and rain gage data sources.
Analysis of a single vehicle trip occurring from 21:46–22:26 on August 11, 2014. The top two panels show video footage during the rainy (left) and dry (right) segments of the trip. The bottom left panel shows a map of the vehicle's trip, with the wiper intensity indicated by color. A radar overlay shows the average rainfall intensity over the 40-minute time period. Blue circles represent the gages nearest to the vehicle path. The two bottom right panels show the precipitation intensity as estimated by radar and gage measurements (center), and the 1-minute average wiper intensity (bottom). Photographs are reproduced with permission from the University of Michigan Transportation Research Institute. The map is produced with a custom script using the Python programming language (Python 3.6: https://www.python.org). See data access links for code used to generate this map.
Comparing radar, gage, and wiper measurements with co-located vehicle footage across three storm events, we find that wiper status is the best estimator of binary rainfall state, with a true positive rate (TPR) of 93.1%, and a true negative rate (TNR) of 98.2%. By comparison, weather radar achieves a smaller TPR of 89.5%, while stationary gages show a much smaller TPR of 44.5% (see Table 1). These results can partly be explained by the superior spatial and temporal resolution of the wiper measurements. Wipers detect intermittent changes in rainfall at a temporal resolution on the order of seconds, while radar and gage measurements can only detect the average rate over a 5-minute period. When ground truth camera observations are collected at a 3-second temporal resolution, the benefit of wiper measurements over radar measurements becomes even more pronounced, with a TPR advantage of 5.2%, a TNR advantage of 7.7%, and an overall wiper TPR of 97.0% (see the supplementary note on factors affecting binary detection performance). The results of this analysis suggest that conventional rainfall measurement technologies can be enhanced through the inclusion of vehicle-based measurements.
Table 1 Classification performance of each rainfall measurement technology.
Assimilation of wiper data yields corrected rainfall maps
Based on the observation that wiper measurements are a strong binary predictor of rainfall, we develop a Bayesian filtering framework that combines radar rainfall estimates with wiper observations to generate corrected rainfall maps. Radar is first used to estimate a prior distribution of rainfall intensities. This prior is then updated with wiper observations to produce a corrected rainfall intensity field that better captures the binary rainfall state. The results of this filtering procedure are demonstrated in Fig. 3, which shows the original rainfall intensity field (top) along with the corrected rainfall intensity field (bottom). Vehicle paths are shown (bottom) to highlight the effect of wiper measurements on the posterior rainfall intensity distribution. In cases where both radar and wipers agree on the binary rainfall state, the rainfall intensity field remains unchanged. For example, when the wiper and radar intensities are both nonzero (as seen in the bottom-left panel, leftmost vehicle), the posterior rainfall intensity is simply equal to the prior rainfall intensity. In other cases, vehicles detect no rainfall in regions where radar had previously estimated rainfall (bottom-left panel, rightmost vehicle). In these cases, the Bayesian filter reduces the intensity of the rainfall field within the proximity of the vehicle. Conversely, in the case where vehicles detect rainfall in regions where little to no rainfall was observed in the original dataset (right panel), the Bayesian filter amplifies the rainfall intensity field within the vicinity of the vehicle, resulting in a new rainfall intensity distribution that better represents the binary rainfall state. The predicted rainfall intensity depends on both the wiper measurement and the intensity of the radar rainfall prior within the neighborhood of the vehicle. Thus, vehicles located near a prior rainfall front (bottom-right panel, center of frame) will have a larger effect on the posterior rainfall intensity than vehicles located far away from a prior rainfall front (bottom-right panel, top of frame). For a more detailed view of the evolution of the rainfall field under both the original and corrected data sets, refer to Video S1.
Original and updated rainfall maps. Top (left and right): Original prior weather radar rainfall intensity map. Radial radar scans have been resampled to a 1 km grid to ensure computational tractability. Bottom (left and right): updated posterior rainfall intensity map, combining radar data with wiper measurements using the Bayesian filter. In the bottom left panel, a "hole" in the rainfall field occurs when a vehicle detects no rain in a location where radar alone estimated rain. In the bottom right panel, vehicles detect rainfall where radar previously did not detect rainfall. The map is produced with a custom script using the Python programming language (Python 3.6: https://www.python.org). See data access links for code used to generate this map.
The wiper-corrected rainfall field predicts the binary rainfall state with greater accuracy than the radar-only data product. To validate the wiper-corrected rainfall field, we use an iterated "leave-one-out" approach, in which an updated rainfall field is generated while excluding a vehicle, and the resulting data product is compared against the measured rainfall state of the omitted vehicle. Repeating this process for each vehicle yields the receiver operator characteristics shown in Fig. 4. These curves map the relationship between the TPR and TNR for both the original rainfall field (radar only) and the corrected rainfall field (radar and wiper). Curves located closer to the upper-left corner (i.e. those with a larger area under the curve) exhibit the best performance, given that they have a large true positive rate and a small false positive rate. Based on these curves, it can be seen that the corrected data product performs consistently better than the original radar product at predicting the presence or absence of rain, with a TPR and TNR close to unity. The overall performance of the updated rainfall product—as measured by the area under the curve (AUC)—is roughly 0.957, compared to only 0.878 for the original radar data. These results confirm that inclusion of vehicle-based measurements enables improved prediction of the underlying rainfall field.
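The ROC comparison can be reproduced along the following lines; the score arrays here are synthetic placeholders standing in for the leave-one-out outputs, so only the mechanics (not the numbers) reflect the study.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 500)   # rainfall state observed by the omitted vehicle
# Synthetic 'scores': the corrected product separates the classes more cleanly.
radar_score = truth * rng.uniform(0.0, 1.0, 500) + rng.uniform(0.0, 0.6, 500)
corrected_score = truth * rng.uniform(0.3, 1.0, 500) + rng.uniform(0.0, 0.3, 500)

for name, score in [("radar only", radar_score), ("radar + wiper", corrected_score)]:
    fpr, tpr, _ = roc_curve(truth, score)
    print(name, "AUC =", round(auc(fpr, tpr), 3))
```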
Binary classification performance of the updated rainfall product. Receiver operator characteristic (ROC) curves indicate the rainfall state prediction accuracy for the original radar estimate and the updated (wiper-corrected) data product. The area under the curve (AUC) measures overall classification performance.
The enhanced rainfall maps developed in this study have the potential to assist in the real-time operation of transportation and water infrastructure. In particular, high accuracy rainfall field estimates will enable improved prediction of flash floods in urban centers, and will help to inform real-time control strategies for stormwater systems. As mentioned previously, flash flood forecasting is contingent on high-resolution areal rainfall estimates, with accurate measurements on the order of 500 m or finer required for forecasting in urban areas. By enabling real-time validation and filtering of radar rainfall estimates, vehicle-based sensors may help fill measurement gaps and improve the prediction of flood events near roadways. Monitoring of roadways is especially important given that in the US, roughly 74% of flood fatalities are vehicle related4. As connected and autonomous vehicles become more widely adopted, the spatial coverage and measurement certainty of this new sensing modality will be even further enhanced.
In addition to assisting with flash flood response, high-precision rainfall data products may one day inform the operation of new "smart" water infrastructure. Recent work has highlighted the potential of "smart" water systems to mitigate water hazards through real-time control of distributed gates, valves and pumps17,18,19,20,21. When informed by accurate and timely data, these systems can significantly reduce operating costs, prevent combined sewer overflows, and halt the degradation of aquatic ecosystems by adaptively reconfiguring water infrastructure in real time17,18. However, recent findings suggest that optimal control strategies for "smart" water systems are highly sensitive to the location, timing and intensity of rainfall inputs22. In this regard, the wiper-corrected rainfall product presented in this study may help to enable more fine-grained control of water infrastructure by reducing uncertainty in conventional rainfall field estimates.
While this work evaluates the updated rainfall product in terms of its ability to predict the binary rainfall state, future work should use vehicle-based sensors to further validate and improve the predicted rainfall intensity. Currently, visual inspection of the ground truth data source (camera footage) only allows for verification of the binary rainfall state and not the predicted rainfall intensity. Other potential sources of ground truth rainfall intensity, such as stationary rain gages, are also problematic. While rain gages provide an independent source of rainfall intensity data, they are only able to produce estimates of rainfall accumulation at point locations every 5 minutes, and are often far removed from the nearest vehicle path. Moreover, as shown in Table 1, gages are by far the poorest predictor of the binary rainfall state among all data sources considered. These issues raise questions as to the appropriateness of rain gages as a source of ground truth rainfall intensity data. With these issues in mind, a natural extension of the work presented in this paper could use other vehicle-based sensors to better estimate the rainfall intensity at each vehicle's location. Drawing on dashboard camera footage, object detection techniques could be used to isolate and count raindrops on the windshield of each vehicle. The volume of rainfall deposited over each wiper interval may then be estimated, thereby yielding an estimate of rainfall intensity at the vehicle's location. Similarly, many newer vehicles feature optical rain sensors that are capable of measuring precipitation rate directly. When combined with the Bayesian sensor fusion framework presented in this study, these sensors could enhance the accuracy of the estimated rainfall intensity field. While outside the scope of this work, these techniques represent promising directions for future research and should be considered in subsequent studies.
This study generates enhanced probabilistic rainfall maps by combining conventional radar-based precipitation fields with ubiquitous windshield wiper measurements from almost 70 unique vehicles. We find that while windshield wiper intensity is a poor predictor of rainfall intensity, wiper activity is a stronger predictor of binary rainfall state than conventional radar and gage-based data sources. With this result in mind, we develop a novel Bayesian filtering framework that combines a radar-based rainfall prior with binary windshield wiper observations to produce an updated rainfall map. We find that the Bayesian filtering process is effective at detecting changes in the rainfall field that conventional measurement technologies may otherwise miss. We validate the updated rainfall data product by assessing its ability to reproduce the binary rainfall state anticipated by an omitted vehicle. Based on this analysis, we find that the corrected rainfall field is better at predicting the binary rainfall state than the original radar product. As connected vehicles become more widespread, the ubiquitous sensing approach proposed by this study may one day help to inform real-time warning and control systems for water infrastructure by providing fine-grained estimates of the rainfall field.
Evaluating vehicle-based measurements
In the first part of this study, we assess the degree to which windshield wiper activity serves as a proxy for both rainfall intensity and binary rainfall state. First, wiper measurements are compared against conventional rainfall measurement technologies to determine if there is a direct relationship between wiper intensity and rain intensity. Next, we assess the degree to which each data source reflects the ground truth rainfall state by comparing measurements from all three sources (gages, radar and wipers) with vehicle-based video footage. Video footage provides instantaneous visual confirmation of the rainfall state (raining or not raining), and is thus taken to represent the ground truth. We characterize the binary classification performance of each technology in terms of its true positive and true negative rates.
To ensure that our analysis is computationally tractable, we isolate the study to a subset of three storms in 2014. We assess the validity of our procedure for storms of different magnitudes by selecting a large storm (2014-08-11), a medium-sized storm (2014-06-28) and a small storm (2014-06-12). Storms are selected during the summertime months to avoid conflating rainfall measurements with snow measurements. The year 2014 is chosen because it is the year for which the greatest number of vehicles are available. Unless otherwise specified, data are co-located using a nearest neighbor search. For comparison of wiper and gage readings, we select only those gages within a 2 km range of any given vehicle.
We consider four data sources: (i) stationary rain gages, (ii) weather surveillance radar, (iii) vehicle windshield wiper data, and (iv) vehicle dashboard camera footage. We provide a brief description of each data source here:
Gage data are obtained from personal weather stations maintained by the Weather Underground23. Within the city of Ann Arbor (Michigan), Weather Underground hosts 21 personal weather stations, each of which yields rainfall estimates at a time interval of approximately 5 minutes. Locations of gages are indicated by blue circles in Fig. 1. Although verified gage data from the National Weather Service (NWS) and the National Oceanic and Atmospheric Administration (NOAA) are available, Weather Underground gages are selected because (i) NOAA and NWS each maintain only a single gage in the city of Ann Arbor, meaning that intra-urban spatial variations in precipitation intensity cannot be captured, and (ii) the temporal resolution of NOAA and NWS gages is relatively coarse for real-time applications (with NOAA offering a maximum temporal resolution of 15 minutes and NWS offering a maximum temporal resolution of 1 hour).
Weather radar observations are obtained from NOAA's NEXRAD Level 3 Radar product archive24. We use the "Instantaneous Precipitation Rate" data product (listed as variable code 176 in the NEXRAD Level 3 archive25). Radar precipitation estimates are obtained at a temporal resolution of 5 minutes, and a spatial resolution of 0.25 km by 0.5 degree (azimuth). Radar station KDTX in Detroit is used because it is the closest radar station to the City of Ann Arbor. Radial radar scans are interpolated to Cartesian coordinates using a nearest neighbor approach.
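A nearest-neighbor resampling of radial bins onto a Cartesian grid can be sketched as below; it assumes the (x, y) centers of the radar bins have already been computed from range and azimuth, and all names are illustrative rather than taken from the study's code.

```python
import numpy as np
from scipy.spatial import cKDTree

def resample_to_grid(bin_xy, bin_rate, grid_x, grid_y):
    """Assign each Cartesian cell the rate of its nearest radar bin.

    bin_xy: (n, 2) array of radar bin centers; bin_rate: (n,) intensities.
    """
    tree = cKDTree(bin_xy)
    gx, gy = np.meshgrid(grid_x, grid_y)
    cells = np.column_stack([gx.ravel(), gy.ravel()])
    _, idx = tree.query(cells)               # nearest radar bin per grid cell
    return np.asarray(bin_rate)[idx].reshape(gx.shape)
```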
Vehicle-based wiper intensities are obtained from the University of Michigan Transportation Research Institute (UMTRI) Safety Pilot Model Deployment database26. For each vehicle, this dataset includes time series of latitude, longitude, and windshield wiper intensity at a temporal resolution of 2 milliseconds. Windshield wiper intensity is given on an ordinal scale from 0 to 3, with 0 indicating that the wiper is turned off, 1 representing the lowest wiper intensity, and 3 representing the highest wiper intensity. A wiper reading of 4 indicates that the vehicle's "mister" is activated, distinguishing between wiper use for rain removal and wiper use for windshield cleaning. For this study, wiper usage for cleaning (i.e. wiper mode 4) was filtered out before the analysis. Note that wiper intensity codes are based on electrical signals generated by the wiper itself, meaning that no manual wiper mode classification is needed. For the year 2014, 69 unique vehicles are available in the UMTRI dataset. However, typically fewer than ten vehicles are active at any given time during the observation period. Vehicles with no sensor output or invalid readings were removed from the dataset prior to the analysis (see the supplementary note for more details). Other sources of human error (such as accidentally turning wipers on) are captured by the true positive and true negative rates included in Tables 1 and S1.
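The pre-processing described above amounts to a simple filter; a minimal pandas sketch (with hypothetical column names) might look like this:

```python
import pandas as pd

wipers = pd.DataFrame({
    "device_id":  [1, 1, 2, 2],
    "wiper_mode": [0, 4, 2, 3],   # 0 = off, 1-3 = intensity, 4 = mister
})
wipers = wipers.dropna(subset=["wiper_mode"])   # drop invalid readings
wipers = wipers[wipers["wiper_mode"] != 4]      # drop windshield-cleaning events
```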
Camera observations are also obtained from the UMTRI vehicle database26. Located on the inside of each vehicle, cameras provide streaming video footage of the windshield, side-facing windows, rear-facing windows, and the driver. For the purposes of validation, we use the front-facing windshield camera. Camera frames are manually inspected for raindrops striking the windshield. Time intervals where rain is observed are classified as "raining"; similarly, time intervals where no new droplets are observed are classified as "not raining". Manual inspection and labeling of the video data was performed independently by two reviewers to ensure robustness.
A Bayesian filtering framework
In the second part of this study, we develop a Bayesian filtering framework that combines binary wiper observations with radar-based rainfall intensity measurements to generate corrected rainfall maps. In simple terms, the Bayesian filter generates an updated rainfall field, in which binary (on/off) wiper measurements adaptively correct the underlying radar rainfall field. Windshield wiper status is taken to represent a measurement of the ground truth binary rainfall state, given that it is a better predictor of the binary rainfall state than radar- or gage-based measurements. Under this framework, four distinct cases are possible. If both the wiper and radar measure precipitation, the radar reading is taken to be correct, and the original rainfall field remains the same. Similarly, if neither the wiper nor the radar measure precipitation, the radar rainfall field remains zero. However, if the radar measures precipitation at a target location and the wiper does not, then the filter will update the rainfall field such that rainfall intensity is reduced within the proximity of the vehicle (with a decay pattern corresponding to the Gaussian kernel and an intensity of zero at the location of the wiper reading). Similarly, if the wiper measures precipitation, but the radar measures no precipitation, the rainfall intensity will be increased within the proximity of the vehicle (by combining the local distribution of the radar rainfall prior with a point estimate of rainfall intensity based on the wiper intensity). In our implementation, provided that no other information is available, this point estimate is generated using the empirical rainfall intensity distribution associated with the given wiper intensity. The empirical rainfall intensity distributions associated with each wiper intensity are shown in Figure S1 in the Supplementary Information.
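The four cases can be summarized in a few lines of code. The sketch below is a qualitative caricature of the update at a single grid cell, assuming a precomputed Gaussian kernel weight and a wiper-based point estimate; the full probabilistic treatment follows in the next paragraphs.

```python
def correct_cell(radar_rate, wiper_on, kernel_weight, wiper_rate_estimate):
    """Blend a radar prior with a nearby wiper observation at one grid cell.

    kernel_weight: Gaussian decay in [0, 1], equal to 1 at the vehicle's location.
    wiper_rate_estimate: point estimate drawn from the empirical intensity
    distribution for the observed wiper speed (an assumption of this sketch).
    """
    if wiper_on and radar_rate > 0:        # both agree it is raining: keep prior
        return radar_rate
    if not wiper_on and radar_rate == 0:   # both agree it is dry: stay zero
        return 0.0
    if not wiper_on:                       # radar wet, wiper dry: carve a hole
        return (1.0 - kernel_weight) * radar_rate
    # Wiper wet, radar dry: raise intensity near the vehicle.
    return kernel_weight * wiper_rate_estimate + (1.0 - kernel_weight) * radar_rate
```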
Note that while wiper intensity by itself does not exhibit a strong correlation with rainfall intensity, the Bayesian filter uses both wiper and radar measurements to generate the posterior rainfall intensity estimate. In other words, the posterior rainfall intensity at the vehicle's location is a probabilistic estimate that depends on both the wiper-based estimate and the local prior intensity within the neighborhood of the vehicle. Thus, a nonzero wiper measurement located far away from a radar rainfall front will result in a smaller posterior intensity than one located near a radar rainfall front (as discussed in the results section and shown in Fig. 3). The relative contribution of the wiper measurement and radar prior are controlled using a weighting parameter representing the user's trust in each data source. This probabilistic assimilation of data sources helps to reduce the uncertainty associated with using the wiper intensity to estimate rainfall intensity. It should be noted that other methods for obtaining a point estimate of rainfall intensity are possible—such as choosing the closest nonzero intensity in the radar rainfall prior. For newer vehicles equipped with rain sensors, the rainfall intensity can also be measured directly using the sensor output. As mentioned in the discussion section, however, it is currently difficult to evaluate the relative accuracy of these approaches, given the lack of reliable ground truth rainfall intensity data at the appropriate spatial and temporal scales.
A more formal description of the filtering framework is given here in terms of a noisy sensor model (for additional details, see Park et al. (2018)27). Consider a noisy sensor model in which each sensor produces a binary measurement given a target state. The target state is represented as a random tuple z = (q, I) where q is a location state (e.g. the latitude and longitude at the target), and I is an information state (e.g. the precipitation intensity at the target) with all the random quantities indicated by bold italics. We denote by Mt the event that sensors correctly measure the intensity, and by \({\bar{M}}_{t}\) the event that sensors fail to measure the intensity correctly. The joint measurement likelihood at any time t is given by:
$$p({M}_{t}|{\boldsymbol{z}},{x}_{t})$$
where xt represents the locations of the sensors at time t. Equation 1 yields the probability distribution of precipitation intensity measurement at q by sensors at xt. The expected value of Equation 1 with respect to I is equivalent to the rainfall intensity experienced at the location q. Because the effective range of the wipers is limited, we account for the probability of detection as a function of the distance between the sensor and the target. We denote by Dt the event that sensors detect the target, and by \({\bar{D}}_{t}\) the event that sensors fail to detect the target at time t. The probability of detecting a target located at q by sensors located at xt, p(Dt|q, xt), is taken to decay with increasing distance to the sensor. Using the law of total probability, the conditional probability of a correct measurement is then given by:
$$p({M}_{t}|{\boldsymbol{z}},{x}_{t})=p({M}_{t}|{\boldsymbol{z}},{D}_{t},{x}_{t})p({D}_{t}|q,{x}_{t})+p({M}_{t}|{\boldsymbol{z}},{\bar{D}}_{t},{x}_{t})p({\bar{D}}_{t}|q,{x}_{t})$$
where Dt is conditionally independent of I when conditioned on q. For example, consider xt = (0, 0), and q = (q1, q2). If the decay function is taken to be a 2D Gaussian centered at xt with covariance matrix σ²I₂, where I₂ is the 2 × 2 identity matrix (distinct from the information state I), then:
$$p({D}_{t}|q,{x}_{t})={\tilde{\eta }}_{t}\frac{1}{2\pi {\sigma }^{2}}\exp \,(-\frac{{q}_{1}^{2}+{q}_{2}^{2}}{2{\sigma }^{2}})$$
where \({\tilde{\eta }}_{t}\) is a normalization constant. If the target is not detected (i.e., \({\bar{D}}_{t}\)), then the measurement is assumed to be unreliable, and the likelihood, \(p({M}_{t}|{\boldsymbol{z}},{\bar{D}}_{t},{x}_{t})\), is modeled using a prior distribution. If there is no prior information available, the function is modeled using a uniform distribution. Now let bt(z) represent the posterior probability of the precipitation intensity given a target location q at time t. Using Bayes' Theorem, bt(z) can be formulated as:
$${b}_{t}\,({\boldsymbol{z}})={\eta }_{t}\,p({M}_{t}|{\boldsymbol{z}},{x}_{t}){b}_{t-1}({\boldsymbol{z}}),\,t=1,2,\ldots $$
where ηt is a normalization constant and b0 is uniform if no information is available at t = 0. This filtering equation forms the basis of the rainfall field updating algorithm. To reduce computational complexity, the filtering operation is implemented using a Sequential Importance Resampling (SIR) Particle Filter28.
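For concreteness, one SIR update step might be sketched as follows; the likelihood function stands in for p(Mt|z, xt) and the particle values for candidate intensities, so this illustrates the resampling mechanics rather than the study's implementation.

```python
import numpy as np

def sir_step(particles, weights, likelihood, rng):
    """One Bayes update: reweight particles by the measurement likelihood,
    then resample to avoid weight degeneracy."""
    weights = weights * likelihood(particles)
    weights = weights / weights.sum()          # the eta_t normalization
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)     # importance resampling
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(1)
particles = rng.uniform(0, 10, 1000)   # candidate rainfall intensities (mm/hr)
weights = np.full(1000, 1.0 / 1000)
# Example: a wiper-on observation that favors nonzero intensities.
particles, weights = sir_step(
    particles, weights, lambda I: np.where(I > 0.5, 1.0, 0.1), rng)
```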
The results of the Bayesian sensor fusion procedure are evaluated by determining the proportion of instances where the combined data product is able to predict the binary rainfall state. We characterize the true and false positive rates for the largest storm event (2014-08-11) using an iterated "leave-one-out" cross-validation approach. First, a single vehicle is removed from the set of vehicles. The Bayesian update procedure is then executed using all vehicles except the excluded vehicle, and an updated rainfall map is generated. Next, the rainfall states predicted by the corrected rainfall field (radar and wiper) and the original rainfall field (radar only) are compared against the rainfall states predicted by the omitted vehicle. The performance of each data product is evaluated based on its ability to reproduce the binary rainfall state observed by the omitted vehicle. Performing this process iteratively yields the true and false positive rates for both the original (radar only) and updated (radar and wiper) rainfall fields. This procedure is repeated for each vehicle in the set of vehicles to generate Receiver-Operator Characteristic (ROC) curves, which characterize the true and false positive rates across an ensemble of simulations.
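A skeleton of this cross-validation loop is given below; the update and scoring routines are placeholders for the Bayesian update and the TPR/FPR comparison described above.

```python
def leave_one_out(vehicles, radar_field, update, compare):
    """Hold out each vehicle in turn, rebuild the corrected field without it,
    and score the result against the held-out vehicle's wiper states."""
    results = []
    for held_out in vehicles:
        rest = [v for v in vehicles if v is not held_out]
        corrected = update(radar_field, rest)          # Bayesian update w/o held_out
        results.append(compare(corrected, held_out))   # e.g. (TPR, FPR) pair
    return results
```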
Data Access Links
Code and data for this study are available at: github.com/kLabUM/vehicles-as-sensors.
Overeem, A., Leijnse, H. & Uijlenhoet, R. Country-wide rainfall maps from cellular communication networks. Proceedings of the National Academy of Sciences 110, 2741–2745, https://doi.org/10.1073/pnas.1217961110 (2013).
Cheng, L. & AghaKouchak, A. Nonstationary precipitation intensity-duration-frequency curves for infrastructure design in a changing climate. Scientific Reports 4, https://doi.org/10.1038/srep07093 (2014).
Hapuarachchi, H. A. P., Wang, Q. J. & Pagano, T. C. A review of advances in flash flood forecasting. Hydrol. Process. 25, 2771–2784, https://doi.org/10.1002/hyp.8040 (2011).
Doocy, S., Daniels, A., Murray, S. & Kirsch, T. D. The human impact of floods: a historical review of events 1980–2009 and systematic literature review. PLoS Currents, https://doi.org/10.1371/currents.dis.f4deb457904936b07c09daa98ee8171a (2013).
Berne, A., Delrieu, G., Creutin, J.-D. & Obled, C. Temporal and spatial resolution of rainfall measurements required for urban hydrology. Journal of Hydrology 299, 166–179, https://doi.org/10.1016/j.jhydrol.2004.08.002 (2004).
Smith, J. A., Baeck, M. L., Meierdiercks, K. L., Miller, A. J. & Krajewski, W. F. Radar rainfall estimation for flash flood forecasting in small urban watersheds. Advances in Water Resources 30, 2087–2097, https://doi.org/10.1016/j.advwatres.2006.09.007 (2007).
Grimes, D., Pardo-Igúzquiza, E. & Bonifacio, R. Optimal areal rainfall estimation using raingauges and satellite data. Journal of Hydrology 222, 93–108, https://doi.org/10.1016/s0022-1694(99)00092-x (1999).
Xiaoyang, L., Jietai, M., Yuanjing, Z. & Jiren, L. Runoff simulation using radar and rain gauge data. Adv. Atmos. Sci. 20, 213–218, https://doi.org/10.1007/s00376-003-0006-7 (2003).
Yilmaz, K. K. et al. Intercomparison of rain gauge, radar, and satellite-based precipitation estimates with emphasis on hydrologic forecasting. J. Hydrometeor 6, 497–517, https://doi.org/10.1175/jhm431.1 (2005).
Sun, X., Mein, R., Keenan, T. & Elliott, J. Flood estimation using radar and raingauge data. Journal of Hydrology 239, 4–18, https://doi.org/10.1016/s0022-1694(00)00350-4 (2000).
Winchell, M., Gupta, H. V. & Sorooshian, S. On the simulation of infiltration- and saturation-excess runoff using radar-based rainfall estimates: Effects of algorithm uncertainty and pixel aggregation. Water Resources Research 34, 2655–2670, https://doi.org/10.1029/98wr02009 (1998).
Morin, E., Krajewski, W. F., Goodrich, D. C., Gao, X. & Sorooshian, S. Estimating rainfall intensities from weather radar data: the scale-dependency problem. Journal of Hydrometeorology 4, 782–797, https://doi.org/10.1175/1525-7541(2003)004<0782:ERIFWR>2.0.CO;2 (2003).
Smith, J. A., Seo, D. J., Baeck, M. L. & Hudlow, M. D. An intercomparison study of NEXRAD precipitation estimates. Water Resources Research 32, 2035–2045, https://doi.org/10.1029/96wr00270 (1996).
Islam, T., Rico-Ramirez, M. A., Han, D. & Srivastava, P. K. Artificial intelligence techniques for clutter identification with polarimetric radar signatures. Atmospheric Research 109–110, 95–113, https://doi.org/10.1016/j.atmosres.2012.02.007 (2012).
Hill, D. J. Assimilation of weather radar and binary ubiquitous sensor measurements for quantitative precipitation estimation. Journal of Hydroinformatics 17, 598, https://doi.org/10.2166/hydro.2015.072 (2015).
Haberlandt, U. & Sester, M. Areal rainfall estimation using moving cars as rain gauges – a modelling study. Hydrol. Earth Syst. Sci. 14, 1139–1151, https://doi.org/10.5194/hess-14-1139-2010 (2010).
Bartos, M., Wong, B. & Kerkez, B. Open storm: a complete framework for sensing and control of urban watersheds. Environmental Science: Water Research & Technology, https://doi.org/10.1039/c7ew00374a (2017).
Kerkez, B. et al. Smarter stormwater systems. Environmental Science & Technology 50, 7267–7273, https://doi.org/10.1021/acs.est.5b05870 (2016).
Wong, B. P. & Kerkez, B. Adaptive measurements of urban runoff quality. Water Resources Research 52, 8986–9000, https://doi.org/10.1002/2015WR018013 (2016).
Wong, B. P. & Kerkez, B. Real-time environmental sensor data: An application to water quality using web services. Environmental Modelling & Software 84, 505–517, https://doi.org/10.1016/j.envsoft.2016.07.020 (2016).
Mullapudi, A., Wong, B. P. & Kerkez, B. Emerging investigators series: building a theory for smart stormwater systems. Environ. Sci.: Water Res. Technol. 3, 66–77, https://doi.org/10.1039/c6ew00211k (2017).
Wong, B. Real-time measurement and control of urban stormwater systems. Ph.D. thesis, University of Michigan (2017).
Weather Underground. Weather Underground personal weather stations (City of Ann Arbor) (2014).
NOAA National Weather Service (NWS) Radar Operations Center. NOAA next generation radar (NEXRAD) level 3 products (instantaneous precipitation rate), https://doi.org/10.7289/V5W9574V (1992).
NOAA National Weather Service (NWS) Radar Operations Center. NEXRAD/TDWR Level-III products (2014).
University of Michigan Transportation Research Institute. Safety pilot model deployment/Ann Arbor connected vehicle test environment data (2014).
Park, H., Liu, J., Johnson-Roberson, M. & Vasudevan, R. Robust environmental mapping by mobile sensor networks. IEEE International Conference on Robotics and Automation, 2395–2402 (2018).
Berzuini, C., Best, N. G., Gilks, W. R. & Larizza, C. Dynamic conditional independence models and markov chain monte carlo methods. Journal of the American Statistical Association 92, 1403–1412 (1997).
Funding for this project was provided by MCubed (grant 985), the Ford Motor Company–University of Michigan Alliance (grant N022977), and the University of Michigan. Vehicle metadata and camera footage are provided courtesy of the University of Michigan Transportation Research Institute (UMTRI). We would like to thank UMTRI Director Jim Sayer and UMTRI Lead Engineer Scott Bogard for helping to obtain the vehicle sensor data used in this study.
Matthew Bartos and Hyongju Park contributed equally.
Department of Civil and Environmental Engineering, University of Michigan, Ann Arbor, MI, 48109, United States
Matthew Bartos & Branko Kerkez
Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, 48109, United States
Hyongju Park, Tian Zhou & Ramanarayan Vasudevan
M.B. wrote the paper, performed the analysis, and helped with the implementation of the filtering algorithm. H.P. developed, implemented, and validated the filtering algorithm. T.Z. analyzed the dashboard camera data and assisted with analysis of the windshield wiper data. B.K. and R.V. originated the concept of the study, guided the development of the methods, and assisted in writing the paper. Additional inspection and labeling of vehicle dashboard footage was performed by Aditya Prakash Singh. All authors reviewed the manuscript.
Correspondence to Branko Kerkez.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Video S1
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Bartos, M., Park, H., Zhou, T. et al. Windshield wipers on connected vehicles produce high-accuracy rainfall maps. Sci Rep 9, 170 (2019). https://doi.org/10.1038/s41598-018-36282-7
Amsalu Degu1,
Peter Njogu2,
Irene Weru3 &
Peter Karimi1
Gynecologic Oncology Research and Practice volume 4, Article number: 15 (2017)
Although cervical cancer is preventable, it is still the second leading cause of cancer deaths among women in the world. Further, it is estimated that around 5–10% of hospital admissions are due to drug-related problems (DRPs), of which 50% are avoidable. In cancer therapy, there is an immense potential for DRPs due to the high toxicity of most chemotherapeutic regimens. Hence, this study sought to assess DRPs among patients with cervical cancer at Kenyatta National Hospital (KNH).
A cross-sectional study was conducted at the oncology units of KNH. A total of 81 study participants were recruited through simple random sampling. Data were collected from medical records and by interviewing patients. The appropriateness of medical therapy was evaluated by comparison with the National Comprehensive Cancer Network and European Society for Medical Oncology practice guidelines for cervical cancer treatment. The degree of adherence was determined using the eight-item Morisky medication adherence scale. The likelihood of drug interaction was assessed using the Medscape, Micromedex and Epocrates drug interaction checkers. The data were entered in Microsoft Excel and analysed using the statistical software STATA version 13.0. Descriptive statistics such as mean, percent and frequency were used to summarise patients' characteristics. Univariable and multivariable binary logistic regression were used to investigate the potential predictors of DRPs.
A total of 215 DRPs were identified from 76 patients, translating to a prevalence of 93.8% and a mean of 2.65 ± 1.22 DRPs per patient. The predominant proportion of DRPs (48.2%) was identified in patients who had been treated with chemoradiation regimens. Adverse drug reactions 56(69.1%) and drug interactions 38(46.9%) were the most prevalent DRPs. The majority (67.9%) of the study population were adherent to their treatment regimens. Forgetfulness 18(69.2%), expensive medications 4(15.4%) and side effects of medications 4(15.4%) were the main reasons for medication non-adherence. Patients with advanced stage cervical cancer were 15.4 times (AOR = 15.4, 95% CI = 1.3–185.87, p = 0.031) more likely to have DRPs as compared to patients with early stage disease.
Adverse drug reactions, drug interactions, and need of additional drug therapy were the most common DRPs identified among cervical cancer patients. Advanced stage cervical cancer was the only predictor of DRPs.
In the past few decades, medicines have had a substantial positive effect on health by reducing mortality and disease burden. Interestingly, there is ample evidence that potential problems exist, since the right medicine does not always reach the right patient and around 50% of all patients fail to take their medication correctly [1]. Moreover, irrational use of drugs is a major global problem, and the World Health Organization (WHO) estimates that above 50% of all drugs are prescribed and dispensed inappropriately, with consequent wastage of scarce resources and widespread health hazards [2].
A drug-related problem (DRP) is defined as an event involving drug therapy that has a potential to interfere with the desired health outcomes [3]. Alternatively, a drug therapy problem is any detrimental event experienced by a patient which impedes attainment of the desired goals of treatment. In the absence of appropriate intervention, medication problems have considerable negative impact on the health of the patients [4].
Drug-related problems are categorised into different classes, namely need for additional drug therapy, medication use without indication, improper drug selection, overdosage, sub-therapeutic dosage, adverse drug reactions (ADRs), drug interactions, inappropriate laboratory monitoring and non-adherence [4].
In cancer therapy, there is a tremendous potential for DRPs due to the high toxicity and the complexity of most chemotherapeutic regimens [5]. Cancer patients have a high incidence of coexisting chronic diseases, and the treatment of cancer carries an inherent risk of DRPs [6]. Moreover, problems arising due to drugs are more common in cancer patients, and frequently present a major hurdle to healthcare providers [7].
Drug-related problems due to cancer chemotherapy can have severe consequences arising from the high toxicity and narrow therapeutic range of anticancer drugs [5]. Anticancer agents are differentiated from other classes of drugs by the frequency and severity of side effects at therapeutic doses [8]. Chemoradiation with cisplatin is associated with increased acute haematological and gastrointestinal toxicity in cervical cancer patients [9]. Since cancer patients receive multiple drug therapy, they are at a higher risk of developing DRPs. Accordingly, there is a substantial clinical need to address this problem by identifying cancer therapy-induced problems. Moreover, the prevalence of DRPs in patients with cervical cancer is not known in Kenya, although the chemotherapeutic agents are expected to produce serious adverse outcomes in the patients. Thus, it was imperative that an assessment be carried out to identify DRPs in cervical cancer patients in order to overcome these hurdles.
An extensive study of DRPs would provide valuable insight for healthcare providers to lessen the incidence of DRPs [10]. However, there is a paucity of data on comprehensive DRPs among cervical cancer patients. Therefore, this study investigated the prevalence, types and predictors of DRPs in cervical cancer patients admitted at the oncology units of Kenyatta National Hospital.
Study design and setting
A cross-sectional study was conducted from April to June 2017 at the oncology units of Kenyatta National Hospital (KNH), the biggest tertiary hospital in Kenya. The single population proportion formula was used to calculate the sample size [11].
$$ n=\frac{Z_{\alpha /2}^{2}\,P\left(1-P\right)}{d^{2}} $$
where n is the minimum sample size required for a large population (≥10,000);
Zα/2 is the critical value for a 95% confidence interval (1.96 from the Z-table);
P is the proportion of drug-related problems in cervical cancer patients. Since there were no previous studies in Kenya, P was assumed to be 50% (0.5);
d is the margin of error (5%).
$$ \mathrm{Hence,\ estimated\ minimum\ sample\ size}\ (n)=\frac{(1.96)^{2}\times 0.5\,(1-0.5)}{(0.05)^{2}}=384 $$
However, since the study population was less than 10,000, we estimated the sample size using the following reduction formula.
Corrected sample size \( =\frac{n\times N}{n+N} \), where N is the source population and n is the estimated sample size for a population of N ≥ 10,000. According to the KNH Health Information Department report, an average of 90 cervical cancer patients were on treatment in both the inpatient and outpatient oncology units of KNH in the preceding three-month period (September–November 2016). The study was carried out over a three-month period, and hence the approximate size of the source population was 90 cervical cancer patients. Then, the corrected sample size \( =\frac{384\times 90}{384+90}=73 \). Therefore, the corrected sample size with a 10% contingency for incomplete medical records and non-response gave a final sample size of 81 cervical cancer patients.
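The arithmetic above can be verified with a short script that simply restates the constants from the text:

```python
import math

z, p, d = 1.96, 0.5, 0.05
n_inf = z**2 * p * (1 - p) / d**2        # large-population sample size (384.16)
N = 90                                    # source population
n_fpc = n_inf * N / (n_inf + N)           # finite population correction
final = math.ceil(n_fpc * 1.10)           # add 10% contingency
print(round(n_inf), math.ceil(n_fpc), final)   # 384 73 81
```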
Patients aged 18 years and above with a documented diagnosis of cervical cancer and a documented treatment regimen were targeted. However, only those who signed the informed consent form were included in the study.
Data collection techniques
Two qualified nurses from the oncology units of KNH were trained to assist in data collection. Relevant information about each patient, such as socio-demographic characteristics, histological type of cervical cancer, stage of cancer, types of co-morbidity, treatment regimen, ADRs, and the rate of and reasons for non-adherence, was recorded by reviewing medical records and interviewing the patients. A pilot study was done on 10% of the sample size to ensure the validity of the data collection instruments. After pre-testing, all necessary adjustments were made to the data collection instruments before their use in the main study. The adequacy of medical therapy was evaluated using the National Guidelines for Cancer Management in Kenya [12], the National Comprehensive Cancer Network (NCCN) practice guideline for cervical cancer treatment [13], the European Society for Medical Oncology (ESMO) practice guideline for cervical cancer [14] and the WHO cancer pain management protocols [15]. The probability of drug interaction was assessed using the Medscape, Micromedex, WebMD and Epocrates drug interaction checkers. The degree of adherence was determined using the Eight-Item Morisky Medication Adherence Scale [16]. The Modification of Diet in Renal Disease (MDRD) Study equation [17], the Du Bois method [18] and the Calvert formula [19] were used to determine the estimated glomerular filtration rate (eGFR), body surface area and carboplatin dose, respectively. DRPs were categorised as need for additional drug therapy, medication use without indication, improper drug selection, overdosage, sub-therapeutic dosage, adverse drug reactions, drug interactions, inappropriate laboratory monitoring and patient non-adherence according to the Cipolle et al. classification system [4].
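For reference, the three dosing-related formulas cited above can be sketched as follows, using their commonly published coefficients (the re-expressed MDRD equation with the 175 constant, the Du Bois formula, and the Calvert formula). This is an illustrative sketch for checking arithmetic, not clinical software; readers should verify against references [17,18,19].

```python
def egfr_mdrd(scr_mg_dl, age, female=True, black=False):
    """4-variable MDRD study equation, mL/min/1.73 m^2."""
    egfr = 175 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def bsa_du_bois(weight_kg, height_cm):
    """Du Bois body surface area, m^2."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def carboplatin_calvert(target_auc, gfr_ml_min):
    """Calvert formula: dose (mg) = target AUC x (GFR + 25)."""
    return target_auc * (gfr_ml_min + 25)

print(round(egfr_mdrd(0.9, 53), 1),      # 65.5
      round(bsa_du_bois(60, 160), 2),    # 1.62
      carboplatin_calvert(5, 60))        # 425
```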
The data were entered into a Microsoft Excel worksheet and analysed using the statistical software STATA version 13.0. Descriptive statistics such as percent and frequency were used to summarise categorical variables of patients' characteristics. Mean and standard deviation were used to summarise continuous variables. Univariable and multivariable binary logistic regression analyses were employed to investigate the potential predictors of DRPs. A p-value of ≤0.05 was considered statistically significant.
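The odds-ratio estimation can be illustrated with a minimal sketch; the data below are synthetic placeholders (the study used STATA 13.0, whereas this sketch uses Python's statsmodels), so only the mechanics are shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
advanced_stage = rng.integers(0, 2, 81)   # hypothetical binary covariates
polypharmacy = rng.integers(0, 2, 81)
logit_p = -0.5 + 1.2 * advanced_stage + 0.4 * polypharmacy
drp = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # simulated DRP outcome

X = sm.add_constant(np.column_stack([advanced_stage, polypharmacy]))
fit = sm.Logit(drp, X).fit(disp=0)
print(np.exp(fit.params))      # adjusted odds ratios (const, stage, polypharmacy)
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```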
Sociodemographic characteristics of study participants
The study was conducted among 81 cervical cancer patients. The mean age of the study population was 53.3 ± 11.6 years, and the predominant portion of the study subjects 47(58.0%) were aged 50 years and above. Among the 81 study participants, 61(75.3%) were married and 44(54.3%) had a primary level of education, while only 2(2.5%) had attained tertiary level of education. Twenty-four participants (29.6%) were housewives. The monthly income level of the majority of the population 59(72.8%) was less than USD 100, and most of the patients 40(49.4%) were on treatment with 5–9 drugs (Table 1).
Table 1 Sociodemographic characteristics of the study participants
Clinical characteristics of the study participants
As illustrated in Fig. 1, three histological types of cervical cancer were identified among the study subjects. Squamous cell carcinoma (91.4%) was the most common type, followed by adenocarcinoma (7.4%) while invasive anaplastic carcinoma (1.2%) was the least common histological type.
Histological types of cervical cancer among the study participants
The study showed that 44.4% and 35.8% of the study population had stage II and III cervical cancer, respectively, with stages IIB (33.3%) and IIIB (28.4%) being the most prevalent. However, stages I and IV had low prevalence rates (Fig. 2).
Stages of cervical cancer identified among study participants
Among the study population, 39.5% of patients did not have co-existing co-morbidities. Nonetheless, 35.8%, 17.3%, 3.7% and 3.7% of patients had been diagnosed with one, two, three, and four or more co-morbidities, respectively (Fig. 3). Anaemia 21(25.9%), retroviral disease 15(18.3%) and hypertension 13(16.1%) were the most common types of co-morbidities. Conversely, pulmonary embolism, sepsis, acute kidney injury, goitre and gastric ulcer were the least frequent co-morbidities among the study participants (Table 2). When age was taken into consideration, most of the study participants (29.6%) who had co-existing co-morbidities were aged 51 years and above (Fig. 4).
Percentage of co-morbidities among patients with cervical cancer
Table 2 Types of co-morbidities among patients with cervical cancer
Percentage of co-morbidities across different age groups of the study participants
Types of regimen used in the management of cervical cancer
Chemoradiation 41(50.6%) comprising of weekly cisplatin and daily radiotherapy was the most widely used treatment regimen in the management of cervical cancer in our setting. Further, hysterectomy and brachytherapy had been used in the management of 15(18.5%) and 11(13.6%) of the patients, respectively. Cisplatin and paclitaxel 9(11.1%) were the most commonly used combination anticancer agents in the treatment of cervical cancer (Table 3).
Table 3 Types of regimen used in the management of cervical cancer
Granisetron and dexamethasone combination 32(39.5%) was the most commonly used prophylactic antiemetic regimen, followed by a combination of ondansetron and dexamethasone 18(22.2%). Conversely, metoclopramide and ondansetron monotherapy were less frequently used in the management of chemotherapy-induced emesis among the study subjects (Table 4).
Table 4 Types of prophylactic antiemetic regimens used in cervical cancer
The findings of the study showed that paracetamol, morphine, tramadol and codeine were the most commonly used analgesics among the study participants. Nonetheless, a significant proportion (37.4%) of cervical cancer patients did not receive any form of pain medication (Table 5).
Table 5 Analgesics regimens used in cervical cancer at Kenyatta National Hospital
Prevalence of drug-related problems
A total of 215 DRPs were identified from 76 cervical cancer patients, translating to a prevalence of 93.8% and a mean of 2.65 ± 1.22 DRPs per patient. Adverse drug reactions, drug interactions and the need for additional drug therapy were the most prevalent DRPs, which accounted for 56(69.1%), 38(46.9%) and 32(39.5%) cases, respectively.
In addition, 26(32.1%) patients were non-adherent to their medications, and 16(19.8%) patients received a sub-therapeutic dose of their treatment regimens. Nevertheless, overdosage, improper drug selection, medication use without indication and inappropriate laboratory monitoring accounted for a relatively low proportion of drug therapy problems (Table 6).
Table 6 Categories of drug related problems
As illustrated in Fig. 5, most (54.3%) DRPs were found in the 51 years and above age group, while the 40–50 years age group accounted for 28.4%. The smallest proportion of drug-related problems occurred in the 29–39 years age group.
Percentage of drug-related problems based on age group of cervical cancer patients
As shown in Fig. 6, the predominant proportion of DRPs (48.2%) was identified in patients treated with chemoradiation regimens, while 16.1% and 13.6% of drug therapy problems were identified in patients who had been managed with radical hysterectomy and brachytherapy, respectively. An equivalent proportion (11.1%) of drug therapy problems was detected in patients treated with radiotherapy and with the combination of cisplatin and paclitaxel. In contrast, the smallest proportion of drug therapy problems was identified in patients treated with the carboplatin and paclitaxel and the cisplatin and vinorelbine combination regimens.
Percentage of drug-related problems across different treatment regimens
According to the eight-item Morisky medication adherence scale, 67.9% of cervical cancer patients were highly adherent, 18.5% of patients had an average level of medication adherence, while 13.6% of patients were poorly adherent to their treatment regimens (Fig. 7).
Rate of adherence to medications among cervical cancer patients
Forgetfulness 18(69.2%), expensive medications 4(15.4%) and side effects of medications 4(15.4%) were the main reasons for non-adherence to medications among the participants. Long duration of therapy and complicated regimens contributed equally to medication non-adherence, while lack of trust in the efficacy of medications was the least common reason for non-adherence in cervical cancer patients (Table 7).
Table 7 Reasons for medications non-adherence among cervical cancer patients (n = 26)
As indicated in Table 8, 45 drug-drug interactions were identified among the study participants. Ondansetron and dexamethasone were the most common interacting drugs, accounting for 12(26.7%) of the total drug interactions. The other frequently encountered drug interactions were dexamethasone and paclitaxel 4(8.9%), and codeine and morphine 2(4.4%). Each of the other pairs of interacting drugs encountered in this study accounted for approximately 2.2% of the total drug interactions.
Table 8 Interacting drugs identified among cervical cancer patients (n = 45)
In terms of severity, 68.9% of the drug interactions were significant, requiring modification of therapy or close monitoring of the outcome of the interaction. Furthermore, 26.7% of the drug interactions were considered minor. However, 4.4% were serious, necessitating the use of alternative medications in the treatment regimen (Fig. 8).
Severity of drug interactions among women with cervical cancer (n = 45)
Of the 166 ADRs identified in this study, the most common were vomiting, nausea, and leucopenia, which accounted for 40(49.4%), 24(29.6%), and 18(22.2%) ADRs, respectively. On the other hand, constipation and thrombocytopenia were the least prevalent ADRs (Table 9).
Table 9 Types of adverse drug reactions in cervical cancer patients (n = 81)
Predictors of drug related problems
In the univariable and multivariable binary logistic regression analysis, patients whose cervical cancer was at an advanced stage were 15.4 times (AOR = 15.4, 95% CI = 1.3–185.87, p = 0.031) more likely to have DRPs compared to patients with early stage cervical cancer. Hence, stage of cervical cancer was the only predictor of DRPs in cervical cancer patients (Table 10).
Table 10 Univariable and multivariable binary logistic regression analysis of predictors of drug related problems
Patients who had been treated with more than five drugs were 2.9 times (COR = 2.9, 95% CI = 1.10–7.78, p = 0.032) more likely to have ADRs as compared to patients treated with less than five medications. In addition, patients with advanced stage disease were 5.9 times (AOR = 5.9, 95% CI = 1.43–24.61, p = 0.017) more likely to have ADRs as compared to patients with early stage cervical cancer. Nonetheless, patients between 40 and 50 years old were 0.1 times (AOR = 0.1, 95% CI = 0.02–0.6, p = 0.013) less likely to have ADRs compared to patients less than 40 years of age (Table 11).
Table 11 Univariable and multivariable binary logistic regression analysis of predictors of adverse drug reactions
The study revealed that patients with cervical cancer and retroviral disease were 8.8 times (AOR = 8.8, 95% CI = 1.22–68.23, p = 0.037) more likely to have drug interactions as compared to cervical cancer patients without concurrent retroviral disease. The other patient factors did not have statistically significant association with drug interactions (Table 12).
Table 12 Univariable and multivariable binary logistic regression analysis of predictors of drug interactions
It was noted that patients treated with more than five drugs were 3.6 times (AOR = 3.6, 95% CI = 1.24–11.23, p = 0.026) more likely to have dosing problems as compared to patients treated with less than five medications. Besides, patients who had been managed with cisplatin and paclitaxel regimen were 9.8 times (AOR = 9.8, 95% CI = 1.25–77.81, p = 0.030) more likely to have dosing problems than patients who were not using this regimen (Table 13).
Table 13 Univariable and multivariable binary logistic regression analysis of predictors of dosing problems
The present study revealed that the mean age of the study participants was 53.3 ± 11.6 years, and the predominant portion of the study subjects 47(58.0%) were 51 years and above. This study is fairly comparable with similar studies conducted in India and Tanzania [20, 21]. The late incidence of cervical cancer at older ages may be due to the insidious transformation of the cervical epithelium into cancerous cells by the combined effects of high-risk strains of human papillomavirus (HPV) and other risk factors [22].
Most of the study population had stages IIB (33.3%) and IIIB (28.4%) cervical cancer, while stages IA and IB2 were the least prevalent. Likewise, a similar study in India showed that stage IIIB (38%) and stage IIB (35%) were the most common clinical stages found in cervical cancer patients [21]. The high prevalence of locally advanced cervical cancer in our setting may be due to inadequate understanding of the early symptoms of cervical cancer and poor uptake of early screening. Moreover, since the majority of the patients had at most a primary level of education, they might have had an inadequate understanding of the importance of early Pap smear screening, leading to the predominance of advanced stage cervical cancer at the time of diagnosis. Most of the patients in stage I were managed using surgical intervention. According to our eligibility criteria, patients had to be on drug therapy or chemotherapy to be included in the study, since we were assessing drug-related problems. Hence, the majority of stage I patients were not eligible for inclusion in the study. Moreover, advanced radiological imaging techniques such as PET scanning were not available in our facility to detect early-stage precancerous lesions of the cervix. This is why stage I cervical cancer patients were the least prevalent in our setting.
The mortality rate after stage IIIB was very high in our setting due to the progression of the disease. Besides, the rate of transfer to more advanced treatment facilities at advanced stages of the disease was very high. These are the main reasons why stage IV cervical cancer patients were scarce in our setting.
Most of the study participants (39.5%) did not have co-existing co-morbidities. Nonetheless, 35.8%, 17.3%, 3.7% and 3.7% of patients were diagnosed with one, two, three, and four or more co-morbidities, respectively. In contrast, a similar study in Zimbabwe indicated that the majority of the study participants (79.4%) had concurrent co-morbidities [23]. In the present study, the most common co-morbidity was anaemia (25.9%), probably arising from tumour-induced bleeding and iron deficiency secondary to malignancy [24]. This finding is in agreement with an Iranian study in which anaemia was the most common (59.0%) complication among cervical cancer patients [25]. Contrastingly, a study done in Nigeria identified hypertension (29.8%) and diabetes mellitus (27.4%) as the most common co-morbidities in cervical cancer patients [26].
Retroviral disease (18.3%) was the second leading type of co-morbidity in cervical cancer patients. Correspondingly, a cross-sectional study in Zimbabwe showed that 25.6% of the study participants had a retroviral disease [27]. In addition, some studies have shown that a strong association exists between human immunodeficiency virus (HIV) infection and cervical cancer with a high prevalence of high-risk HPV DNA in women with HIV infection [28, 29]. This could probably be due to a weakened immune system secondary to retrovirus infection which puts them at higher risk of HPV infections. Moreover, the retrovirus may augment the oncogenic activities of HPV which predispose the patients to develop cervical cancer [30]. Although thromboembolic disorders are among the top ranked co-morbidities in cervical cancer patients [31], they had relatively low occurrence among the study participants.
Chemoradiation was the most widely used treatment regimen in the management of cervical cancer at KNH accounting for 50.6% of treatment modalities which is higher than in a similar study conducted in Ethiopia (37.6%) [32]. In the present study, cisplatin and paclitaxel (11.1%) were the most commonly used combination anticancer agents in the treatment of cervical cancer. Contrastingly, cisplatin and 5-fluorouracil combination regimen was widely used in a Nigerian study [26].
The study showed that the granisetron and dexamethasone combination was the most commonly used prophylactic antiemetic in our setting, with a usage frequency of 39.5%, followed by the combination of ondansetron and dexamethasone (22.2%). Serotonin receptor type 3 (5-HT3) antagonists such as ondansetron and granisetron are the gold standard treatment for chemotherapy-induced nausea and vomiting due to superior efficacy and better tolerability of side effects as compared to conventional antiemetics [33]. The 5-HT3 receptor antagonists are also preferred over dopamine receptor antagonists since they are devoid of extrapyramidal side effects [34]. Previous studies reported that the efficacy of 5-HT3-receptor antagonists was augmented by the addition of dexamethasone [35]. Although equivalent doses of different 5-HT3-receptor antagonists have comparable efficacy [34], combined use of ondansetron and dexamethasone was less common in our setting due to their drug-drug interaction. This finding corroborated the frequent use of the granisetron and dexamethasone combination in our setting, which is in line with the standard protocol [35].
The 93.8% prevalence of DRPs in our setting is considerably higher than in a similar Norwegian study (73%) [36]. However, it is comparable with a similar study done in Nigeria, which showed that the prevalence of DRPs in cervical cancer patients was 89.2% [26]. Besides, a mean of 2.65 ± 1.22 drug therapy problems was identified in the study population, which is relatively higher than the 2.1 DRPs detected per patient in a study done in Norway [36]. The higher prevalence of DRPs in our setting may be due to inadequate understanding of the disease and medications among the patients and the absence of local standard treatment protocols for cervical cancer.
There was a high preponderance of DRPs in the 51 years and above age group, which accounted for 54.3% of the cases. This could probably be due to the high prevalence of co-morbidities in patients aged 51 years and above (29.6%) and the ageing of the metabolising organs, both of which predispose patients to DRPs.
Adverse drug reactions (69.1%) and drug interactions (46.9%) were the most prevalent DRPs, a finding that is in agreement with a similar study done in Nigeria [26] but higher than that reported in a Singaporean study [37]. The high incidence of ADRs may be attributed to the complexity and immunosuppressive effects of cancer treatment regimens.
Nausea and vomiting were among the top-ranking ADRs. These findings are in line with studies done in India in which nausea and vomiting were prevalent among cancer patients treated with anticancer agents [38, 39]. This could probably be linked to the emetogenic potential of cisplatin and paclitaxel and the cytotoxic effects of anticancer agents on the highly proliferating cells of the gastrointestinal tract. Additionally, the higher incidence of nausea and vomiting could be due to poor management of delayed nausea and vomiting secondary to the anticancer agents.
Although morphine, tramadol and codeine were the most commonly used pain medications, only 3.7% of the population had constipation as an ADR. In our facility, these pain medications were usually given along with stool softeners, and this clinical practice is probably the main reason why constipation due to these opioid-based pain medications was not a major issue in our setting. Pain control was in line with the WHO guideline for pain control in cancer patients, and hence we did not notice any discrepancies except drug interactions due to the combined use of two opioid analgesics (i.e. codeine and morphine) among 4.4% of the study participants. Pain medications are considered essential drugs for palliative care of cancer patients in Kenya [40]. Hence, these essential drugs were universally accessible in almost all public healthcare facilities offering cancer treatment. However, controlled pain medications such as opioid analgesics were dispensed to cancer patients under supervised prescription by palliative care specialists. In addition, being controlled drugs, these medicines may not be available at lower-level healthcare facilities.
When age was taken into consideration, elderly patients (age ≥ 51 years) encountered most (40.7%) of the ADRs. This finding is similar to that reported by Poddar et al. [41], where the incidence of ADRs among geriatric patients was significantly higher than in other age groups. This may be due to diminished metabolising capacity and excretory function in elderly patients, leading to accumulation of drugs in the body and thus increasing the risk of ADRs [42].
Chemoradiation was the most commonly used treatment modality and was also associated with the majority of the ADRs in our setting, which is comparable with other studies [43, 44]. Furthermore, the present study revealed that 33.3% and 28.4% of patients had stage IIB and IIIB cervical cancer, respectively, which are categorised as locally advanced cervical cancer. It has been shown that chemoradiation is the standard treatment of choice in the management of locally advanced cervical cancer due to the overall tolerability of side effects and enhancement of survival [45, 46]. This could probably be the reason why this regimen was widely used in our setting and was therefore associated with the majority of the ADRs.
Due to the complexity of chemotherapeutic regimens, cancer patients are susceptible to potential drug interactions. Not surprisingly, this study unveiled that 46.9% of cervical cancer patients had potential drug interactions in their treatment regimens. A similar Dutch study reported a 46% prevalence of potential drug interactions among cancer patients [47]. This high prevalence of drug interactions may potentiate the adverse effects of anticancer agents or lessen the therapeutic outcomes of the treatment regimen. With regard to severity, 68.9% of the detected drug interactions were significant, which is slightly higher than a study done in Tehran that reported a prevalence of 59.7% [48]. However, only 4.4% of the drug interactions were identified as serious, necessitating the use of alternative drug regimens.
Ondansetron and dexamethasone were the most common interacting drugs, accounting for 26.7% of the total drug interactions. Previous studies reported that premedication with dexamethasone diminished the efficacy of paclitaxel in breast cancer and ovarian carcinoma [49, 50]. According to the findings of the present study, dexamethasone and paclitaxel accounted for 8.9% of the drug interactions. Thus, it is plausible to assume that the prophylactic use of dexamethasone as an antiemetic in paclitaxel-based regimens might reduce the antitumour activity of paclitaxel in cervical cancer patients.
A cross-sectional descriptive study conducted in Ethiopia revealed that 69.7% of cervical cancer patients were adherent to their treatment regimens while 30.3% were non-adherent [32]. Similarly, 67.9% of cervical cancer patients were adherent to their treatment regimens in our setting. However, the rate of medication adherence among cervical cancer patients in India (61.1%) was slightly lower than in our setting [51]. This could probably be due to the availability of better facilities for strengthening patients' awareness of medication adherence at the oncology units of KNH.
Among the 26 non-adherent cervical cancer patients, forgetfulness (69.2%), expensive medications (15.4%) and side effects of medications (15.4%) were the main reasons for non-adherence, while long duration of therapy and complicated regimens contributed equally (7.7%) to medication non-adherence. On the other hand, lack of trust in the efficacy of medicines was the least common reason for non-adherence in cervical cancer patients at KNH. Comparatively, a study from Ethiopia revealed that long duration of therapy, side effects of the medication and expensive medication were among the top-ranking reasons for medication non-adherence in cervical cancer patients [32].
The present study revealed that patients with advanced stage cervical cancer were 15.4 times (AOR = 15.4, 95% CI = 1.3–185.87, p = 0.031) more likely to have DRPs as compared to patients with early stage cervical cancer. In addition, patients with advanced stage cervical cancer were 5.9 times (AOR = 5.9, 95% CI = 1.4–24.6, p = 0.017) more likely to experience ADRs as compared to patients with early stage disease.
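The adjusted odds ratios and confidence intervals above come from a fitted logistic regression model. As a purely illustrative aid (not the study's actual model), the following Python sketch shows how an AOR and its 95% CI relate to a regression coefficient and its standard error; the coefficient and standard error used here are back-calculated from the published AOR of 15.4 and CI of 1.3–185.87 and are assumptions, not the study's fitted estimates.

```python
import math

# Hypothetical values back-calculated from the reported AOR = 15.4 and
# 95% CI = 1.3-185.87; these are NOT the study's actual fitted estimates.
beta = math.log(15.4)                                  # log odds ratio
se = (math.log(185.87) - math.log(1.3)) / (2 * 1.96)   # SE implied by the CI width

aor = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)

print(f"AOR = {aor:.1f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
# -> AOR = 15.4, 95% CI = (1.3, 184.1); the small gap from the published
#    185.87 reflects rounding of the reported AOR and CI bounds.
```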
Koh et al. [52] reported that the use of multiple drugs was a significant predictor of the incidence of DRPs. Hence, the higher likelihood of DRPs in advanced stage cervical cancer may be due to the multiple medications required by these complex conditions, which predispose the patients to DRPs. Likewise, stage of cervical cancer was the only significant predictor of DRPs in cervical cancer patients. Previous studies in Sweden [53], Malaysia [54], Nigeria [26] and Ethiopia [55] reported that polypharmacy and the presence of co-morbidities were positively associated with DRPs. Conversely, our study revealed that the number of medications and the presence of co-morbidities were not statistically significant predictors of drug-related problems.
Patients who had been treated with more than five drugs were more likely to have ADRs and dosing problems, and less likely to have inappropriate laboratory monitoring, as compared to patients treated with fewer than five medications. Similarly, a previous study in Pakistan showed that polypharmacy was positively associated with ADRs [56]. Moreover, a similar study in Singapore showed that chronic use of five or more drugs was associated with the presence of DRPs [37]. The higher likelihood of having ADRs may plausibly be due to the enhanced pharmacological effects of the drugs secondary to undesired drug interactions at the level of metabolism and excretion.
In the univariable logistic regression analysis, patients who had been managed with the cisplatin and paclitaxel regimen were 9.8 times more likely to have dosing problems. Additionally, cervical cancer patients with retroviral disease were 8.8 times (AOR = 8.8, 95% CI = 1.2–68, p = 0.037) more likely to have drug interactions as compared to patients without concurrent retroviral disease. Conversely, the other sociodemographic factors did not have a statistically significant association with drug interactions. The higher likelihood of drug interactions may plausibly be due to the complexity of the drug regimens used in the management of both conditions. Previous studies showed an increased risk of nephrotoxicity with the combination of tenofovir and platinum analogues such as cisplatin, particularly in patients with renal insufficiency. Moreover, there are mounting reports of haematological toxicity with the combination of taxane-class anticancer agents such as paclitaxel and zidovudine [57]. Since the majority of cervical cancer patients with retroviral disease were treated with tenofovir and cisplatin-based regimens in our setting, they were at a higher risk of nephrotoxicity due to drug–drug interaction between the anticancer and antiretroviral agents. Hence, retroviral disease as a co-morbidity in cervical cancer patients might be an important predictor of drug interactions.
Adverse drug reactions, drug interactions, and the need for additional drug therapy were the most common DRPs identified among cervical cancer patients. Nausea and vomiting were the most prevalent ADRs among the study participants. In the multivariable binary logistic regression analysis, advanced stage of cervical cancer and treatment with more than five drugs were significant predictors of ADRs. Likewise, coexisting retroviral disease and treatment with more than five medications were predictors of drug interactions and dosing problems, respectively.
ADRs: Adverse Drug Reactions
AOR: Adjusted Odds Ratio
COR: Crude Odds Ratio
DRPs: Drug Related Problems
ESMO: European Society for Medical Oncology
GFR: Glomerular Filtration Rate
NCCN: National Comprehensive Cancer Network
USD: United States Dollar
World Health Organization. The pursuit of responsible use of medicines: sharing and learning from country experiences [Internet]. Amsterdam; 2012. Available from: http://apps.who.int/iris/bitstream/10665/75828/1/WHO_EMP_MAR_2012.3_eng.pdf?ua=1
World Health Organization. Essential medicines and health products [Internet]. 2015 [cited 2016 Oct 28]. Available from: http://www.who.int/medicines/areas/rational_use/en/
Ruths S, Viktil KK, Blix HS. Classification of drug-related problems. Tidsskr Nor Laegeforen. 2007;127(23):3073–6.
Cipolle R, Strand L, Morley P. Pharmaceutical care practice: the patient-centered approach to medication management services. 3rd ed. USA: McGraw-hill Education; 2012.
Jaehde U, Liekweg A, Simons S, Westfeld M. Minimising treatment-associated risks in systemic cancer therapy. Pharm World Sci. 2008;30(2):161–8.
Cehajic I, Bergan S, Bjordal K. Pharmacist assessment of drug-related problems on an oncology ward. Eur J Hosp Pharm. 2015;22(4):194–7.
Iftikhar A, Jehanzeb K, Ullah A. Clinical pharmacy services in medical oncology unit, Peshawar, Pakistan. Pharmacologyonline. 2015;1:10–2.
Ambili R. Toxicities of anticancer drugs and its management. Int J Basic Clin Pharmacol. 2012;1(1):2–12.
Ikushima H, Osaki K, Furutani S, Yamashita K, Kawanaka T, Kishida Y, et al. Chemoradiation therapy for cervical cancer: toxicity of concurrent weekly cisplatin. Radiat Med. 2006;24(2):115–21.
Koh Y, Kutty FB, Li SC. Drug-related problems in hospitalized patients on polypharmacy: the influence of age and gender. Ther Clin Risk Manag. 2005;1(1):39–48.
Kasiulevicius V, Sapoka V, Filipaviciute R. Sample size calculation in epidemiological studies. Gerontologija. 2006;7(4):225–31.
Ministry of Health. National Guidelines for Cancer Management Kenya [Internet]. 2013 [cited 2017 Jun 15]. Available from: http://kehpca.org/wp-content/uploads/National-Cancer-Treatment-Guidelines2.pdf
National Comprehensive Cancer Network. NCCN Clinical Practice Guidelines in Oncology: Cervical Cancer [Internet]. 2016 [cited 2017 Jun 15]. Available from: https://www.nccn.org/professionals/physician_gls/pdf/cervical.pdf
Marth C, Landoni F, Mahner S, McCormack M, Gonzalez-Martin A, Colombo N. Cervical cancer: ESMO Clinical Practice Guidelines for diagnosis, treatment and follow-up. Ann Oncol. 2017;28(Supplement 4):iv72–83.
World Health Organization. Cancer pain relief: with a guide to opioid availability [Internet]. 1996 [cited 2016 Nov 30]. Available from: http://apps.who.int/iris/bitstream/10665/37896/1/9241544821.pdf
Oliveira-Filho AD, Barreto-Filho JA, Neves SJ, Lyra Junior DP. Association between the 8-item Morisky medication adherence scale (MMAS-8) and blood pressure control. Arq Bras Cardiol. 2012;99(1):649–58.
Levey AS, Coresh J, Greene T, Marsh J, Stevens LA, Kusek JW, et al. Expressing the modification of diet in renal disease study equation for estimating glomerular filtration rate with standardized serum creatinine values. Clin Chem. 2007;53(4):766–72.
Du Bois D, Du Bois EFA. Formula to estimate the approximate surface area if height and weight be known. 1916. Nutrition. 1989;5(5):303–11.
van Warmerdam LJ, Rodenhuis S, ten Bokkel Huinink WW, Maes RA, Beijnen JH. The use of the Calvert formula to determine the optimal carboplatin dosage. J Cancer Res Clin Oncol. 1995;121(8):478–86.
Majinge PM. Treatment outcome of cervical cancer patients at Ocean Road Cancer Institute, Dar es Salaam [Internet]. Muhimbili University of Health and Allied Sciences; 2011. Available from: http://ihi.eprints.org/966/
Chauhan R, Trivedi V, Rani R, Singh U. A hospital based study of clinical profile of cervical cancer patients of Bihar, an eastern state of India. Womens Health Gynecol. 2016;2(2):1–4.
Burd EM. Human papillomavirus and cervical cancer. Clin Microbiol Rev. 2003;16(1):1–17.
Kagura Y. A study to determine the relationship between prevalence of late stage diagnosis of cervical cancer and number of comorbid illnesses in women aged 65 years and above in Zimbabwe [internet]. University of Zimbabwe; 2015. Available from: http://ir.uz.ac.zw/jspui/bitstream/10646/2901/1/Kagura_A-Study_to_Determine_The_Relationship_Between_Prevalence_Of_Late_Stage_Diagnosis_Of_Cervical_Cancer_.pdf
Candelaria M, Cetina L, Duenas-Gonzalez A. Anemia in cervical cancer patients: implications for iron supplementation therapy. Med Oncol. 2005;22(2):161–8.
Shahbazian H, Marrefi MS, Arvandi S, Shahbazian N. Investigating the prevalence of anemia and its relation with disease stage and patients ' age with cervical cancer referred to Department of Radiotherapy and Oncology of Ahvaz Golestan hospital during 2004-2008. Int J Pharm Res Allied Sci. 2016;5(2):190–3.
Mustapha S. Drug related problems in cervical cancer patients on chemotherapy in Ahmadu Bello University teaching hospital, Nigeria [internet]. Near East University; 2016. Available from: http://docs.neu.edu.tr/library/6405400533.pdf
Chirenje ZM, Loeb L, Mwale M, Nyamapfeni P, Kamba M, Padian N. Association of cervical SIL and HIV-1 infection among Zimbabwean women in an HIV/STI prevention study. Int J STD AIDS. 2002;13(11):765–8.
Holmes RS, Hawes SE, Toure P, Dem A, Feng Q, Weiss NS, et al. HIV infection as a risk factor for cervical cancer and cervical intraepithelial neoplasia in Senegal. Cancer Epidemiol Biomark Prev. 2009;18(9):2442–6.
Adjorlolo-Johnson G, Unger ER, Boni-Ouattara E, Touré-Coulibaly K, Maurice C, Vernon SD, et al. Assessing the relationship between HIV infection and cervical cancer in Côte d'Ivoire: a case-control study. BMC Infect Dis. 2010;10:242.
Mandelblatt JS, Kanetsky P, Eggert L, Gold K. Is HIV infection a cofactor for cervical squamous cell neoplasia? Cancer Epidemiol Biomark Prev. 1999;8(1):97–106.
Barbera L, Thomas G. Venous thromboembolism in cervical cancer. Lancet Oncol. 2008;9(1):54–60.
Gebre Y, Zemene A, Fantahun A, Aga F. Assessment of treatment compliance and associated factors among cervical cancer patients in Tikur Anbessa specialized hospital, oncology unit, Ethiopia 2012. Int J Cancer Stud Res. 2015;4:67–74.
Hesketh PJ. Comparative review of 5-HT3 receptor antagonists in the treatment of acute chemotherapy-induced nausea and vomiting. Cancer Investig. 2000;18:163–73.
Goodin S, Cunningham R. 5-HT(3)-receptor antagonists for the treatment of nausea and vomiting: a reappraisal of their side-effect profile. Oncologist. 2002;7(5):424–36.
National Comprehensive Cancer Network. NCCN Clinical Practice Guidelines in Oncology: Antiemesis version 1 [Internet]. 2015 [cited 2017 Jul 23]. Available from: http://www.prolekare.cz/dokumenty/Antiemetikum_guidelines.pdf
Cehajic I, Bergan S, Bjordal K. Pharmacist assessment of drug-related problems on an oncology ward. Eur J Hosp Pharm. 2015;22(4):194–7.
Yeoh TT, Tay XY, Si P, Chew L. Drug-related problems in elderly patients with cancer receiving outpatient chemotherapy. J Geriatr Oncol. 2015;6(4):280–7.
Wahlang JB, Laishram PD, Brahma DK, Sarkar C, Lahon J, Nongkynrih BS. Adverse drug reactions due to cancer chemotherapy in a tertiary care teaching hospital. Ther Adv Drug Saf. 2017;8(2):61–6.
Sharma A, Kumari KM, Manohar HD, Bairy KL, Thomas J. Pattern of adverse drug reactions due to cancer chemotherapy in a tertiary care hospital in South India. Perspect Clin Res. 2015;6(2):109–15.
Ministry of Health. Kenya Essential Medicines List [Internet]. 2016 [cited 2017 Sep 21]. Available from: http://apps.who.int/medicinedocs/documents/s23035en/s23035en.pdf
Poddar S, Sultana R, Sultana R, Akbor MM, Azad MAK, Hasnat A. Pattern of adverse drug reactions due to cancer chemotherapy in a tertiary care teaching hospital in Bangladesh. Dhaka Univ J Pharm Sci. 2009;8(1):11–6.
Klotz U. Pharmacokinetics and drug metabolism in the elderly. Drug Metab Rev. 2009;41(2):67–76.
Duenas-Gonzalez A, Cetina L, Coronel J, Gonzalez-Fierro A. The safety of drug treatments for cervical cancer. Expert Opin Drug Saf. 2016;15(2):169–80.
Surendiran A, Balamurugan N, Gunaseelan K, Akhtar S, Reddy KS, Adithan C. Adverse drug reaction profile of cisplatin-based chemotherapy regimen in a tertiary care hospital in India: an evaluative study. Indian J Pharmacol. 2010;42(1):40–3.
Todo Y, Watari H. Concurrent chemoradiotherapy for cervical cancer: background including evidence-based data, pitfalls of the data, limitation of treatment in certain groups. Chin J Cancer Res. 2016;28(2):221–7.
Lukka H, Hirte H, Fyles A, Thomas G, Elit L, Johnston M, et al. Concurrent cisplatin-based chemotherapy plus radiotherapy for cervical cancer--a meta-analysis. Clin Oncol. 2002;14(3):203–12.
van Leeuwen RW, Brundel DH, Neef C, van Gelder T, Mathijssen RH, Burger DM, et al. Prevalence of potential drug-drug interactions in cancer patients treated with oral anticancer drugs. Br J Cancer. 2013;108(5):1071–8.
Tavakoli-Ardakani M, Kazemian K, Salamzadeh J, Mehdizadeh M. Potential of drug interactions among hospitalized cancer patients in a developing country. Iran J Pharm Res. 2013;12:175–82.
Sui M, Chen F, Chen Z, Fan W. Glucocorticoids interfere with therapeutic efficacy of paclitaxel against human breast and ovarian xenograft tumors. Int J Cancer. 2006;119:712–7.
Hou WJ, Guan JH, Dong Q, Han YH, Zhang R. Dexamethasone inhibits the effect of paclitaxel on human ovarian carcinoma xenografts in nude mice. Eur Rev Med Pharmacol Sci. 2013;17(21):2902–8.
Dutta S, Biswas N, Mukherjee G. Evaluation of socio-demographic factors for non-compliance to treatment in locally advanced cases of cancer cervix in a rural medical college hospital in India. Indian J Palliat Care. 2013;19(3):158–65.
Peterson C, Gustafsson M. Characterisation of drug-related problems and associated factors at a clinical pharmacist service-naive hospital in northern Sweden. Drugs Real World Outcomes. 2017;4(2):97–107.
Zaman Huri H, Hui Xin C, Sulaiman CZ. Drug-related problems in patients with benign prostatic hyperplasia: a cross sectional retrospective study. PLoS One. 2014;9(1):e86215.
Sisay EA, Engidawork E, Yesuf TA, Ketema EB. Drug related problems in chemotherapy of cancer patients. J Cancer Sci Ther. 2015;7(2):55–9.
Ahmed B, Nanji K, Mujeeb R, Patel MJ. Effects of polypharmacy on adverse drug reactions among geriatric outpatients at a tertiary care hospital in Karachi: a prospective cohort study. PLoS One. 2014;9(11):e112133.
Makinson A, Pujol JL, Le Moing V, Peyriere H, Reynes J. Interactions between cytotoxic chemotherapy and antiretroviral treatment in human immunodeficiency virus-infected patients with lung cancer. J Thorac Oncol. 2010;5(4):562–71.
The authors would like to acknowledge AFIMEGQ Programme for financial support towards this project.
The study was conducted under the financial support of Africa for Innovation, Mobility, Exchange, Globalization and Quality (AFIMEGQ) Programme.
Authors' contributions
AD conducted the actual study and the statistical analysis. AD, PN, IW and PK were involved in developing the idea, designing of the study and the write up of the manuscript. All authors approved the submitted version of the manuscript.
Department of Pharmaceutics and Pharmacy Practice, University of Nairobi, College of Health Sciences, School of Pharmacy, P.O. Box 19676-00202, Nairobi, Kenya
Amsalu Degu & Peter Karimi
Department of Pharmaceutical Chemistry, University of Nairobi, College of Health Sciences, School of Pharmacy, Nairobi, 19676-00202, Kenya
Peter Njogu
Kenyatta National Hospital, Division of Pharmacy, Nairobi, 20723-00202, Kenya
Irene Weru
Correspondence to Amsalu Degu.
Ethical approval of the study protocols was obtained from the Kenyatta National Hospital/University of Nairobi Ethics and Research Committee (Protocol number: P963/12/2016). Before data collection, informed written consent was obtained from the study participants. Each patient was notified about the objective of the study, procedures for selection and assurance of confidentiality. To ensure confidentiality of the patients' information, the name and address of the patients were not recorded during data collection.
The authors declare that they have no competing interests.
Degu, A., Njogu, P., Weru, I. et al. Assessment of drug therapy problems among patients with cervical cancer at Kenyatta National Hospital, Kenya. Gynecol Oncol Res Pract 4, 15 (2017). https://doi.org/10.1186/s40661-017-0054-9
Some isomorphic properties of m-polar fuzzy graphs with applications
Ganesh Ghorai & Madhumangal Pal
The theory of graphs is a very useful tool for solving combinatorial problems in different areas of computer science and computational intelligence systems. In this paper, we present a framework to handle m-polar fuzzy information by combining the theory of m-polar fuzzy sets with graphs. We introduce the notion of weak self complement m-polar fuzzy graphs and establish a necessary condition for an m-polar fuzzy graph to be weak self complement. Some properties of self complement and weak self complement m-polar fuzzy graphs are discussed. The order, size, busy vertices and free vertices of an m-polar fuzzy graph are also defined, and it is proved that isomorphic m-polar fuzzy graphs have the same order, size and degree. Also, we present some results on busy vertices in isomorphic and weak isomorphic m-polar fuzzy graphs. Finally, a relative study of complement and operations on m-polar fuzzy graphs is made. Applications of m-polar fuzzy graphs are given at the end.
After the introduction of fuzzy sets by Zadeh (1965), fuzzy set theory has been taken up in many research fields. Since then, the theory of fuzzy sets has become a vigorous area of research in different disciplines including medical and life sciences, management sciences, social sciences, engineering, statistics, graph theory, artificial intelligence, signal processing, multi-agent systems, decision making and automata theory. In a fuzzy set, each element is associated with a membership value selected from the interval [0, 1]. Zhang (1994, 1998) introduced the concept of bipolar fuzzy sets. Instead of a single membership value as in fuzzy sets, an m-polar fuzzy set can represent the uncertainty of a set more precisely. Chen et al. (2014) introduced the notion of m-polar fuzzy sets as a generalization of fuzzy set theory; the membership value in an m-polar fuzzy set is more expressive in capturing the uncertainty of data. Besides being a well-developed branch of mathematics, graph theory is an important tool for mathematical modelling. Realizing this importance, Rosenfeld (1975) introduced the concept of fuzzy graphs, and Mordeson and Nair (2000) discussed the properties of fuzzy graphs and hypergraphs. After that, the operations of union, join, Cartesian product and composition on two fuzzy graphs were defined by Mordeson and Peng (1994). Sunitha and Vijayakumar (2002) further studied other properties of fuzzy graphs. The concepts of weak isomorphism, co-weak isomorphism and isomorphism between fuzzy graphs were introduced by Bhutani (1989). Later, many researchers worked on fuzzy graphs, as in Bhutani et al. (2004); Al-Hawary (2011); Koczy (1992); Lee-kwang and Lee (1995); Nagoorgani and Radha (2008), Samanta and Pal (2011a, b, 2013, 2014, 2015). Akram (2011, 2013) introduced and defined different operations on bipolar fuzzy graphs. Again, Rashmanlou et al. (2015a, 2015b, 2016) studied bipolar fuzzy graphs with categorical properties, products of bipolar fuzzy graphs and their degrees, etc. Using these concepts, research on bipolar fuzzy graphs continues to date, such as Ghorai and Pal (2015b), Samanta and Pal (2012a, b, 2014), Yang et al. (2013). Chen et al. (2014) first introduced the concept of m-polar fuzzy graphs. Then Ghorai and Pal (2016a) presented properties of generalized m-polar fuzzy graphs, defined many operations and the density of m-polar fuzzy graphs (2015a), introduced the concept of m-polar fuzzy planar graphs (2016b) and defined faces and duals of m-polar fuzzy planar graphs (2016c). Akram and Younas (2015), Akram et al. (2016) introduced irregular m-polar fuzzy graphs and metrics in m-polar fuzzy graphs. In this paper, weak self complement m-polar fuzzy graphs are defined and a necessary condition for an m-polar fuzzy graph to be weak self complement is established. Some properties of self complement and weak self complement m-polar fuzzy graphs are discussed. The order, size, busy vertices and free vertices of an m-polar fuzzy graph are also defined, and it is proved that isomorphic m-polar fuzzy graphs have the same order, size and degree. Also, we prove some results on busy vertices in isomorphic and weak isomorphic m-polar fuzzy graphs. Finally, a relative study of complement and operations on m-polar fuzzy graphs is made.
An m-polar fuzzy set on a non-void set X is a mapping \(\mu :X\rightarrow [0,1]^m\). The idea behind this is that "multipolar information" exists because data for real-world problems sometimes come from multiple agents. m-polar fuzzy sets allow a richer graphical representation of vague data, which facilitates significantly better analysis of data relationships, incompleteness, and similarity measures.
First of all, we give the definitions of m-polar fuzzy sets, m-polar fuzzy graphs and other related notions, following the references (Al-Harary 1972; Lee 2000).
Throughout the paper, \([0,1]^m\) (m-power of [0, 1]) is considered to be a poset with point-wise order \(\le \), where m is a natural number. \(\le \) is defined by \(x\le y \Leftrightarrow \) for each \(i=1,2,\ldots ,m\), \(p_i(x)\le p_i(y)\) where \(x, y\in [0,1]^m\) and \(p_i:[0,1]^m \rightarrow [0,1]\) is the ith projection mapping.
As a generalization of bipolar fuzzy sets, Chen et al. (2014) defined the m-polar fuzzy sets in 2014.
(Chen et al. 2014) Let X be a non-void set. An m-polar fuzzy set on X is defined as a mapping \(\mu :X\rightarrow [0,1]^m\).
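To make this definition concrete, here is a minimal Python sketch (ours, not from the cited papers) that stores a 3-polar fuzzy set as a mapping into \([0,1]^3\) and implements the projection mappings \(p_i\) and the point-wise order \(\le \) described above.

```python
# A 3-polar fuzzy set mu: X -> [0,1]^3, stored as a dict of 3-tuples.
mu = {"a": (0.2, 0.5, 0.9), "b": (0.7, 0.1, 0.4)}

def p(i, value):
    """The i-th projection mapping p_i: [0,1]^m -> [0,1] (1-indexed as in the text)."""
    return value[i - 1]

def leq(x, y):
    """Point-wise order on [0,1]^m: x <= y iff p_i(x) <= p_i(y) for every i."""
    return all(p(i, x) <= p(i, y) for i in range(1, len(x) + 1))

print(p(2, mu["a"]))                   # 0.5
print(leq((0.1, 0.1, 0.1), mu["b"]))   # True
```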
The m-polar fuzzy relation is defined below.
(Ghorai and Pal 2016a) Let A be an m-polar fuzzy set on a set X. An m-polar fuzzy relation on A is an m-polar fuzzy set B of \(X\times X\) such that \(p_i\circ B(x,y)\le min\{p_i\circ A(x),p_i\circ A(y)\}\) for all \(x,y\in X\), \(i=1,2,\ldots ,m\). B is called symmetric if \(B(x,y)=B(y,x)\) for all \(x,y\in X\).
We define an equivalence relation \(\sim \) on \(V\times V-\{(x,x): x\in V\}\) as follows:
We say \((x_1,y_1)\sim (x_2,y_2)\) if and only if either \((x_1,y_1)=(x_2,y_2)\) or \(x_1=y_2\) and \(y_1=x_2\). Then we obtain a quotient set, denoted by \(\widetilde{V^2}\). The equivalence class containing the element (x, y) will be denoted as xy or yx.
We assume that \(G^*=(V,E)\) is a crisp graph and \(G=(V,A,B)\) is an m-polar fuzzy graph of \(G^*\) throughout this paper.
Chen et al. (2014) first introduced m-polar fuzzy graphs. We have modified their definition and introduce generalized m-polar fuzzy graphs as follows.
(Chen et al. 2014; Ghorai and Pal 2016a) An m-polar fuzzy graph (or generalized m-polar fuzzy graph) of \(G^*=(V,E)\) is a pair \(G=(V,A,B)\) where \(A: V\rightarrow [0,1]^m\) is an m-polar fuzzy set in V and \(B: \widetilde{V^2}\rightarrow [0,1]^m\) is an m-polar fuzzy set in \(\widetilde{V^2}\) such that \(p_i\circ B(xy)\le min\{p_i\circ A(x),p_i\circ A(y)\}\) for all \(xy\in \widetilde{V^2}\), \(i=1,2,\ldots ,m\) and \(B(xy)={\mathbf{0}} \) for all \(xy\in \widetilde{V^2}-E\), \(\big ({\mathbf{0}} =(0,0,\ldots ,0)\) is the smallest element in \([0,1]^m\big )\). We call A as the m-polar fuzzy vertex set of G and B as the m-polar fuzzy edge set of G.
Let \(G^*=(V, E)\) be a crisp graph where \(V=\{u_1, u_2, u_3, u_4\}\) and \(E=\{u_1u_2, u_2u_3, u_3u_4, u_4u_1\}\). Then, \(G=(V, A, B)\) be a 3-polar fuzzy graph of \(G^*\) where \(A=\left\{ \frac{\langle 0.5, 0.7, 0.8\rangle }{u_1}, \frac{\langle 0.4, 0.7, 0.8\rangle }{u_2}, \frac{\langle0.7, 0.6, 0.8\rangle}{u_3}, \frac{\langle0.3, 0.6, 0.9\rangle}{u_4}\right\} \) and \(B=\left\{ \frac{\langle 0.4, 0.6, 0.7\rangle }{u_1u_2}, \frac{\langle 0.3, 0.6, 0.5\rangle }{u_2u_3}, \frac{\langle 0.2, 0.5, 0.6\rangle }{u_3u_4}, \frac{\langle 0.2, 0.4, 0.8\rangle }{u_4u_1}, \frac{\langle 0, 0, 0\rangle }{u_1u_3}, \frac{\langle 0, 0, 0\rangle }{u_4u_2}\right\} \).
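The defining inequality of an m-polar fuzzy graph is easy to verify mechanically. The following sketch (an illustration we add, using the data of this example, with edges encoded as unordered pairs) checks \(p_i\circ B(xy)\le min\{p_i\circ A(x),p_i\circ A(y)\}\) on every edge; edges of \(\widetilde{V^2}-E\) carry \({\mathbf{0}}\) and are simply omitted from the dict.

```python
# Data of the example above: a 3-polar fuzzy graph G = (V, A, B).
A = {"u1": (0.5, 0.7, 0.8), "u2": (0.4, 0.7, 0.8),
     "u3": (0.7, 0.6, 0.8), "u4": (0.3, 0.6, 0.9)}
B = {frozenset({"u1", "u2"}): (0.4, 0.6, 0.7),
     frozenset({"u2", "u3"}): (0.3, 0.6, 0.5),
     frozenset({"u3", "u4"}): (0.2, 0.5, 0.6),
     frozenset({"u4", "u1"}): (0.2, 0.4, 0.8)}

def is_mpolar_fuzzy_graph(A, B):
    """Check p_i(B(xy)) <= min(p_i(A(x)), p_i(A(y))) for every edge and every i."""
    for edge, b in B.items():
        x, y = tuple(edge)
        if any(b[i] > min(A[x][i], A[y][i]) for i in range(len(b))):
            return False
    return True

print(is_mpolar_fuzzy_graph(A, B))  # True
```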
Ghorai and Pal (2016a) introduced many operations on m-polar fuzzy graphs such as Cartesian product, composition, union and join which are given below.
(Ghorai and Pal 2016a) The Cartesian product of two m-polar fuzzy graphs \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) of the graphs \(G^*_1\) and \(G^*_2\) respectively is denoted as a pair \(G_1\times G_2=(V_1\times V_2,A_1\times A_2,B_1\times B_2)\) such that for \(i=1,2,\ldots ,m\)
\(p_i\circ (A_1\times A_2)(x_1,x_2)= min\{p_i\circ A_1(x_1),p_i\circ A_2(x_2)\}\) for all \((x_1,x_2)\in V_1\times V_2\).
\(p_i\circ (B_1\times B_2)((x,x_2)(x,y_2))= min\{p_i\circ A_1(x),p_i\circ B_2(x_2y_2)\}\) for all \(x\in V_1\), \(x_2y_2\in E_2\).
\(p_i\circ (B_1\times B_2)((x_1,z)(y_1,z))= min\{p_i\circ B_1(x_1y_1),p_i\circ A_2(z)\}\) for all \(z\in V_2\), \(x_1y_1\in E_1\).
\(p_i\circ (B_1\times B_2)((x_1,x_2)(y_1,y_2))=0\) for all \((x_1,x_2)(y_1,y_2)\in \widetilde{(V_1\times V_2)^2}-E\).
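A sketch of the membership rules of the Cartesian product just listed (our illustration; the vertex names, 2-polar data and dict-of-frozensets encoding are ours): rule 1 takes componentwise minima on vertex pairs, while rules 2 and 3 combine a vertex membership with an edge membership.

```python
def cartesian_product(A1, B1, A2, B2):
    """Membership values of G1 x G2 following rules 1-3 of the definition above."""
    m = len(next(iter(A1.values())))
    tmin = lambda a, b: tuple(min(a[i], b[i]) for i in range(m))

    # Rule 1: vertex memberships are componentwise minima.
    A = {(x1, x2): tmin(A1[x1], A2[x2]) for x1 in A1 for x2 in A2}
    B = {}
    # Rule 2: edges (x, x2)(x, y2) for x in V1 and x2y2 in E2.
    for x in A1:
        for e2, b2 in B2.items():
            x2, y2 = tuple(e2)
            B[frozenset({(x, x2), (x, y2)})] = tmin(A1[x], b2)
    # Rule 3: edges (x1, z)(y1, z) for z in V2 and x1y1 in E1.
    for z in A2:
        for e1, b1 in B1.items():
            x1, y1 = tuple(e1)
            B[frozenset({(x1, z), (y1, z)})] = tmin(b1, A2[z])
    return A, B  # all remaining pairs implicitly carry membership 0 (rule 4)

A1 = {"a": (0.5, 0.6), "b": (0.4, 0.7)}; B1 = {frozenset({"a", "b"}): (0.3, 0.5)}
A2 = {"u": (0.8, 0.2)};                  B2 = {}
A, B = cartesian_product(A1, B1, A2, B2)
print(B[frozenset({("a", "u"), ("b", "u")})])  # (0.3, 0.2)
```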
(Ghorai and Pal 2016a) The composition of two m-polar fuzzy graphs \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) of the graphs \(G^*_1=(V_1,E_1)\) and \(G^*_2=(V_2,E_2)\) respectively is denoted as a pair \(G_1[G_2]=(V_1\times V_2,A_1\circ A_2,B_1\circ B_2)\) such that for \(i=1,2,\ldots ,m\)
\(p_i\circ (A_1\circ A_2)(x_1,x_2)= min\{p_i\circ A_1(x_1),p_i\circ A_2(x_2)\}\) for all \((x_1,x_2)\in V_1\times V_2\).
\(p_i\circ (B_1\circ B_2)((x,x_2)(x,y_2))= min\{p_i\circ A_1(x),p_i\circ B_2(x_2y_2)\}\) for all \(x\in V_1\), \(x_2y_2\in E_2\).
\(p_i\circ (B_1\circ B_2)((x_1,z)(y_1,z))= min\{p_i\circ B_1(x_1y_1),p_i\circ A_2(z)\}\) for all \(z\in V_2\), \(x_1y_1\in E_1\).
\(p_i\circ (B_1\circ B_2)((x_1,x_2)(y_1,y_2)) =min\{p_i\circ A_2(x_2),p_i\circ A_2(y_2),p_i\circ B_1(x_1y_1)\}\) for all \((x_1,x_2)(y_1,y_2)\in E^0-E\), where \(E^0=E\cup \{(x_1,x_2)(y_1,y_2): x_1y_1\in E_1, x_2\ne y_2\}\) and E denotes the edge set of the Cartesian product \(G^*_1\times G^*_2\).
\(p_i\circ (B_1\circ B_2)((x_1,x_2)(y_1,y_2))=0\) for all \((x_1,x_2)(y_1,y_2)\in \widetilde{(V_1\times V_2)^2}-E^0\).
(Ghorai and Pal 2016a) The union \(G_1\cup G_2=(V_1\cup V_2,A_1\cup A_2,B_1\cup B_2)\) of the m-polar fuzzy graphs \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) of \(G^*_1\) and \(G^*_2\) respectively is defined as follows: for \(i=1,2,\ldots ,m\)
\(p_i\circ (A_1\cup A_2)(x)=\left\{ \begin{array}{ll} p_i\circ A_1(x) &{}\quad {\text {if}}\; x\in V_1-V_2\\ p_i\circ A_2(x) &{}\quad {\text {if}}\; x\in V_2-V_1\\ max\{p_i\circ A_1(x),p_i\circ A_2(x)\} &{}\quad {\text {if}}\; x\in V_1\cap V_2. \end{array}\right. \)
\(p_i\circ (B_1\cup B_2)(xy)=\left\{ \begin{array}{ll} p_i\circ B_1(xy) &{}\quad {\text {if}}\; xy\in E_1-E_2\\ p_i\circ B_2(xy) &{}\quad {\text {if}}\; xy\in E_2-E_1\\ max\{p_i\circ B_1(xy),p_i\circ B_2(xy)\} &{}\quad {\text {if}}\; xy\in E_1\cap E_2. \end{array}\right. \)
\(p_i\circ (B_1\cup B_2)(xy)=0\) if \(xy\in \widetilde{(V_1\times V_2)^2}-E_1\cup E_2\).
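The three-case definition of the union collapses to a componentwise maximum once absent vertices and edges are read as carrying membership \({\mathbf{0}}\), which the following sketch (ours) exploits.

```python
def union(A1, B1, A2, B2, m):
    """Membership values of G1 U G2: the piecewise definition above reduces to
    a componentwise max when missing memberships are treated as 0."""
    zero = (0,) * m
    tmax = lambda a, b: tuple(max(a[i], b[i]) for i in range(m))
    A = {x: tmax(A1.get(x, zero), A2.get(x, zero)) for x in set(A1) | set(A2)}
    B = {e: tmax(B1.get(e, zero), B2.get(e, zero)) for e in set(B1) | set(B2)}
    return A, B

A1 = {"a": (0.2, 0.5)};                    B1 = {}
A2 = {"a": (0.4, 0.3), "b": (0.1, 0.1)};   B2 = {frozenset({"a", "b"}): (0.1, 0.1)}
A, B = union(A1, B1, A2, B2, 2)
print(A["a"])  # (0.4, 0.5): componentwise max on the overlap of V1 and V2
```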
(Ghorai and Pal 2016a) The join of the m-polar fuzzy graphs \(G_1= (V_1,A_1,B_1)\) and \(G_2= (V_2,A_2,B_2)\) of \(G^*_1\) and \(G^*_2\) respectively is defined as a pair \(G_1+ G_2= (V_1\cup V_2,A_1+ A_2,B_1+ B_2)\) such that for \(i=1,2,\ldots ,m\)
\(p_i\circ (A_1+A_2)(x)=p_i\circ (A_1\cup A_2)(x)\) if \(x\in V_1\cup V_2\).
\(p_i\circ (B_1+B_2)(xy)=p_i\circ (B_1\cup B_2)(xy)\) if \(xy\in E_1\cup E_2\).
\(p_i\circ (B_1+B_2)(xy)=min\{p_i\circ A_1(x),p_i\circ A_2(y)\}\) if \(xy\in E^\prime \), where \(E^\prime \) denotes the set of all edges joining the vertices of \(V_1\) and \(V_2\).
\(p_i\circ (B_1+B_2)(xy)=0\) if \(xy\in \widetilde{(V_1\times V_2)^2}-E_1\cup E_2\cup E^\prime \).
Remark 9
Later on, Akram et al. (2016) applied the concept of m-polar fuzzy sets on graph structure and also defined the above operations on them.
Different types of morphisms on m-polar fuzzy graphs were defined by Ghorai and Pal (2016a).
Definition 10
(Ghorai and Pal 2016a) Let \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) be two m-polar fuzzy graphs of the graphs \(G^*_1= (V_1,E_1)\) and \(G^*_2= (V_2,E_2)\) respectively. A homomorphism between \(G_1\) and \(G_2\) is a mapping \(\phi :V_1\rightarrow V_2\) such that for each \(i=1,2,\ldots ,m\)
\(p_i\circ A_1(x_1)\le p_i\circ A_2(\phi (x_1))\) for all \(x_1\in V_1\),
\(p_i\circ B_1(x_1y_1)\le p_i\circ B_2(\phi (x_1)\phi (y_1))\) for all \(x_1y_1\in \widetilde{V^2_1}\).
\(\phi \) is said to be an isomorphism if it is a bijective mapping and for \(i=1,2,\ldots ,m\)
\(p_i\circ A_1(x_1)= p_i\circ A_2(\phi (x_1))\) for all \(x_1\in V_1\),
\(p_i\circ B_1(x_1y_1)= p_i\circ B_2(\phi (x_1)\phi (y_1))\) for all \(x_1y_1\in \widetilde{V^2_1}\).
In this case, we write \(G_1\cong G_2\).
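Definition 10 can be checked directly for a candidate bijection \(\phi \), treating every pair of \(\widetilde{V^2_1}\) outside the edge set as carrying membership \({\mathbf{0}}\). A small sketch of such a checker (our illustration, not from the paper):

```python
def is_isomorphism(phi, A1, B1, A2, B2):
    """Check the two equalities of Definition 10 for a candidate bijection phi,
    reading absent edges of either graph as membership 0."""
    if sorted(phi.values()) != sorted(A2):          # phi must map V1 onto V2
        return False
    if any(A1[x] != A2[phi[x]] for x in A1):        # vertex memberships preserved
        return False
    m = len(next(iter(A1.values())))
    zero = (0,) * m
    verts = list(A1)
    for i, x in enumerate(verts):                   # every pair of V1^2~
        for y in verts[i + 1:]:
            b1 = B1.get(frozenset({x, y}), zero)
            b2 = B2.get(frozenset({phi[x], phi[y]}), zero)
            if b1 != b2:
                return False
    return True

A1 = {"a": (0.5, 0.7)}; A2 = {"x": (0.5, 0.7)}
print(is_isomorphism({"a": "x"}, A1, {}, A2, {}))  # True
```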
(Ghorai and Pal 2016a) A weak isomorphism between \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) is a bijective mapping \(\phi :V_1\rightarrow V_2\) such that
\(\phi \) is a homomorphism,
\(p_i\circ A_1(x_1)= p_i\circ A_2(\phi (x_1))\) for all \(x_1\in V_1\), for each \(i=1,2,\ldots ,m\).
(Ghorai and Pal 2016a) \(G=(V,A,B)\) is called strong if \(p_i\circ B(xy)=min\{p_i\circ A(x),p_i\circ A(y)\}\) for all \(xy\in E\), \(i=1,2,\ldots ,m\).
A strong m-polar fuzzy graph G is called self complementary if \(G\cong \overline{G}\).
Degree of a vertex in an m-polar fuzzy graph is defined as below.
(Akram and Younas 2015) The neighborhood degree of a vertex v in the m-polar fuzzy graph G is denoted as \(deg(v)=\big (p_1\circ deg(v), p_2\circ deg(v), \ldots , p_m\circ deg(v)\big )\) where \(p_i\circ deg(v)=\sum \nolimits _{\begin{array}{c} u\ne v\\ uv\in E \end{array}}p_i\circ {B}(uv)\), \(i=1,2,\ldots ,m\).
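A short sketch of this degree formula (ours), summing edge memberships componentwise over the edges incident to a vertex; the rounding is only to suppress floating-point noise:

```python
def degree(v, B, m):
    """Neighbourhood degree: componentwise sum of B over the edges incident to v."""
    d = [0.0] * m
    for edge, b in B.items():
        if v in edge:
            for i in range(m):
                d[i] += b[i]
    return tuple(round(x, 2) for x in d)

B = {frozenset({"u", "v"}): (0.5, 0.2, 0.2),
     frozenset({"u", "x"}): (0.6, 0.2, 0.4)}
print(degree("u", B, 3))  # (1.1, 0.4, 0.6)
```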
Remark 14
If \(G_1=(V_1, A_1, B_1)\) and \(G_2=(V_2, A_2, B_2)\) are two m-polar fuzzy graphs. Then the canonical projection maps \(\pi _1: V_1\times V_2\rightarrow V_1\) and \(\pi _2: V_1\times V_2\rightarrow V_2\) are indeed homomorphisms from \(G_1\times G_2\) to \(G_1\) and \(G_1\times G_2\) to \(G_2\) respectively. This can be seen as follows:
\(p_i\circ (A_1\times A_2)(x_1, x_2)=min\{p_i\circ A_1(x_1), p_i\circ A_2(x_2)\}\le p_i\circ A_1(x_1)=p_i\circ A_1(\pi _1(x_1, x_2))\) for all \((x_1, x_2)\in V_1\times V_2\) and \(p_i\circ (B_1\times B_2)((x_1, z)(y_1, z))=min\{p_i\circ B_1(x_1y_1), p_i\circ A_2(z)\}\le p_i\circ B_1(x_1y_1)=p_i\circ B_1(\pi _1(x_1, z)\pi _1(y_1, z))\) for all \(z\in V_2\) and \(x_1y_1\in E_1\). In a similar way we can check the other conditions also. This shows that the canonical projection maps \(\pi _1: V_1\times V_2\rightarrow V_1\) is a homomorphism from \(G_1\times G_2\) to \(G_1\).
Weak self complement m-polar fuzzy graphs
Self complement m-polar fuzzy graphs have much significance in the theory of m-polar fuzzy graphs. Even if an m-polar fuzzy graph is not self complement, we can still say that it is self complement in some weaker sense, and we can establish some results for such graphs. This motivates the definition of weak self complement m-polar fuzzy graphs.
Let \(G=(V,A,B)\) be an m-polar fuzzy graph of the crisp graph \(G^*=(V,E)\). The complement of G is an m-polar fuzzy graph \(\overline{G}=(V,\overline{A},\overline{B})\) of \(\overline{G^*}=(V,\widetilde{V^2})\) such that \(\overline{A}=A\) and \(\overline{B}\) is defined by \(p_i\circ \overline{B}(xy)=min\{p_i\circ A(x), p_i\circ A(y)\}- p_i\circ B(xy)\) for \(xy\in \widetilde{V^2}\), \(i=1,2,\ldots ,m\).
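The complement construction, together with the involution property \(\overline{\overline{G}}=G\) derived in the next example, can be sketched as follows (our illustration; the dyadic membership values are chosen so that floating-point subtraction stays exact):

```python
def complement(A, B, m):
    """Complement per the definition above: same vertex set, and for every pair
    xy, Bbar(xy) = min(A(x), A(y)) - B(xy) componentwise (zero tuples omitted)."""
    zero = (0,) * m
    verts = list(A)
    Bbar = {}
    for i, x in enumerate(verts):
        for y in verts[i + 1:]:
            b = B.get(frozenset({x, y}), zero)
            bbar = tuple(min(A[x][k], A[y][k]) - b[k] for k in range(m))
            if any(bbar):
                Bbar[frozenset({x, y})] = bbar
    return dict(A), Bbar

# Taking the complement twice returns the original graph.
A = {"u": (0.5, 0.25), "v": (0.75, 0.5)}
B = {frozenset({"u", "v"}): (0.25, 0.25)}
_, Bbar = complement(A, B, 2)
_, Bbarbar = complement(A, Bbar, 2)
print(Bbarbar == B)  # True
```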
Let \(G=(V,A,B)\) be a 3-polar fuzzy graph of the graph \(G^*=(V,E)\) where \(V=\{u,v,w,x\}\), \(E=\{uv,vw,wu,ux\}\), \(A=\left\{ \frac{\langle 0.2,0.3,0.5\rangle }{u},\frac{\langle 0.5,0.6,0.3\rangle }{v},\frac{\langle 0.7,0.2,0.3\rangle }{w},\,\frac{\langle 0.2,0.5,0.7\rangle }{x} \right\} \), \(B=\left\{ \frac{\langle 0.2,0.3,0.3\rangle }{uv},\frac{\langle 0.4,0.1,0.1\rangle }{vw},\frac{\langle 0.1,0.1,0.1\rangle }{wu}, \frac{\langle 0.1,0.2,0.4\rangle }{xu},\frac{\langle 0,0,0\rangle }{xv},\frac{\langle 0,0,0\rangle }{wx}\right\} \). Then by Definition 15, we have constructed the complement \(\overline{G}\) of G which is shown in Fig. 1.
Let \(\overline{\overline{G}}=(V,\overline{\overline{A}},\overline{\overline{B}})\) be the complement of \(\overline{G}\) where \(\overline{\overline{A}}=\overline{A}=A\) and
$$\begin{aligned} p_i\circ \overline{\overline{B}}(uv)&= {} min\{p_i\circ \overline{A}(u),p_i\circ \overline{A}(v)\}-p_i\circ \overline{B}(uv)\\&= {} min\{p_i\circ A(u),p_i\circ A(v)\}-\{min\{p_i\circ A(u), p_i\circ A(v)\}- p_i\circ B(uv)\}\\&= {} p_i\circ B(uv)\quad {\hbox {for}}\; uv\in \widetilde{V^2},\, i=1,2,\ldots ,m. \end{aligned}$$
Hence, \(\overline{\overline{G}}=G\).
The m-polar fuzzy graph \(G=(V,A,B)\) is said to be weak self complement if there is a weak isomorphism from G onto \(\overline{G}\). In other words, there exist a bijective homomorphism \(\phi : G \rightarrow \overline{G}\) such that for \(i=1,2,\ldots ,m\)
\(p_i\circ A(u)= p_i\circ \overline{A}(\phi (u))\) for all \(u\in V\),
\(p_i\circ B(uv)\le p_i\circ \overline{B}(\phi (u)\phi (v))\) for all \(uv\in \widetilde{V^2}\).
Let \(G=(V,A,B)\) be a 3-polar fuzzy graph of the graph \({G^*}=(V,E)\) where \(V=\{u,v,w\}\), \(E=\{uv,vw\}\), \(A=\left\{ \frac{\langle 0.3,0.4,0.4\rangle }{u},\frac{\langle 0.2,0.5,0.7\rangle }{v},\frac{\langle 0.3,0.6,0.7\rangle }{w}\right\} \), \(B=\left\{ \frac{\langle 0.1,0.1,0.2\rangle }{uv},\frac{\langle 0.1,0.2,0.2\rangle }{vw},\frac{\langle 0,0,0\rangle }{wu}\right\} \). Then \(\overline{G}=(V,\overline{A},\overline{B})\) is also a 3-polar fuzzy graph where \(\overline{A}=A\) and \(\overline{B}=\left\{ \frac{\langle 0.1,0.3,0.2\rangle }{uv},\frac{\langle 0.1,0.3,0.5\rangle }{vw},\frac{\langle 0.3,0.4,0.4\rangle }{wu}\right\} \). We can easily verify that the identity map is a weak isomorphism from G onto \(\overline{G}\) (see Fig. 2). Hence G is weak self complement.
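For this example, weak self-complementarity via the identity map amounts to checking \(B(xy)\le \overline{B}(xy)\) componentwise for every pair. A sketch (ours) with the example's data:

```python
# Data of the example above: identity map as a weak isomorphism onto the complement.
A = {"u": (0.3, 0.4, 0.4), "v": (0.2, 0.5, 0.7), "w": (0.3, 0.6, 0.7)}
B = {frozenset({"u", "v"}): (0.1, 0.1, 0.2), frozenset({"v", "w"}): (0.1, 0.2, 0.2)}

def is_weak_self_complement_via_identity(A, B, m):
    """B(xy) <= Bbar(xy) componentwise for all pairs, Bbar from the complement."""
    zero = (0,) * m
    verts = list(A)
    for i, x in enumerate(verts):
        for y in verts[i + 1:]:
            b = B.get(frozenset({x, y}), zero)
            bbar = tuple(min(A[x][k], A[y][k]) - b[k] for k in range(m))
            if any(b[k] > bbar[k] for k in range(m)):
                return False
    return True

print(is_weak_self_complement_via_identity(A, B, 3))  # True
```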
In Ghorai and Pal (2015a), Ghorai and Pal proved that if G is a self complementary strong m-polar fuzzy graph, then for each \(i=1,2,\ldots ,m\)
$$\begin{aligned} \sum _{x\ne y} p_i\circ B(xy)=\frac{1}{2} \sum _{x\ne y} min\{p_i\circ A(x),p_i\circ A(y)\}. \end{aligned}$$
The converse of the above result does not always hold.
For example, let us consider a 3-polar fuzzy graph \(G=(V,A,B)\) of \({G^*}=(V,E)\) where \(V=\{u,v,w\}\), \(E=\{uv,vw,wu\}\), \(A=\left\{ \frac{\langle 0.2,0.3,0.4\rangle }{u},\frac{\langle 0.4,0.5,0.6\rangle }{v},\frac{\langle 0.5,0.7,0.8\rangle }{w}\right\} \), \(B=\left\{ \frac{\langle 0.2,0.3,0.4\rangle }{uv},\frac{\langle 0.1,0.2,0.2\rangle }{vw},\frac{\langle 0.1,0.05,0.1\rangle }{wu}\right\} \). Then we have the following
$$\begin{aligned}&p_1\circ B(uv)+p_1\circ B(vw)+p_1\circ B(wu)=0.2+0.1+0.1=0.4\; {\text {and}}\\&\frac{1}{2}\left[ min\{p_1\circ A(u),p_1\circ A(v)\}+min\{p_1\circ A(v),p_1\circ A(w)\}+min\{p_1\circ A(w),p_1\circ A(u)\}\right] \\&\quad = \frac{1}{2}[min\{0.2,0.4\}+min\{0.4,0.5\}+min\{0.5,0.2\}]=\frac{1}{2}(0.2+0.4+0.2)=0.4. \end{aligned}$$
$$\begin{aligned} \sum _{u\ne v} p_1\circ B(uv)=0.4=\frac{1}{2} \sum _{u\ne v} min\{p_1\circ A(u),p_1\circ A(v)\}. \end{aligned}$$
Similarly,
$$\begin{aligned} \sum _{u\ne v} p_2\circ B(uv)=0.55=\frac{1}{2} \sum _{u\ne v} min\{p_2\circ A(u),p_2\circ A(v)\} \end{aligned}$$
$$\begin{aligned} \sum _{u\ne v} p_3\circ B(uv)=0.7=\frac{1}{2} \sum _{u\ne v} min\{p_3\circ A(u),p_3\circ A(v)\}. \end{aligned}$$
Hence for \(i=1,2,3\) we have,
$$\begin{aligned} \sum _{u\ne v} p_i\circ B(uv)=\frac{1}{2} \sum _{u\ne v} min\{p_i\circ A(u),p_i\circ A(v)\}. \end{aligned}$$
But G is not self complementary as there exists no isomorphism from G onto \(\overline{G}\) (see Fig. 3).
Figure captions (graph drawings not reproduced):
Fig. 1 G and its complement \(\overline{G}\)
Fig. 2 Weak self complement 3-polar fuzzy graphs
Fig. 3 Example of 3-polar fuzzy graph G which is not self complement
Fig. 4 Example of 3-polar fuzzy graph G which is weak self complement
Fig. 5 3-polar fuzzy graph G and busy value of its vertices
Fig. 6 Weak isomorphic 3-polar fuzzy graphs \(G_1\) and \(G_2\)
Fig. 7 Example of weak isomorphic graphs whose complements are not weak isomorphic
Fig. 8 \(G_1\), \(G_2\), \(G_1\circ G_2\) and \(\overline{G_1\circ G_2}\)
Fig. 9 Example of 3-polar fuzzy graphs \(G_1\) and \(G_2\) where \(\overline{G_1\circ G_2}\ncong \overline{G_1}\circ \overline{G_2}\)
Fig. 10 Graphical representation of tug of war
Fig. 11 5-polar fuzzy evaluation graph corresponding to the teacher's evaluation by students
Now suppose that an m-polar fuzzy graph \(G=(V,A,B)\) is weak self complement. Then the following inequality holds.
Theorem 21
Let \(G=(V,A,B)\) be a weak self complement m-polar fuzzy graph of \({G^*}\). Then for \(i=1,2,\ldots ,m\)
$$\begin{aligned} \sum _{x\ne y} p_i\circ B(xy)\le \frac{1}{2} \sum _{x\ne y} min\{p_i\circ A(x),p_i\circ A(y)\}. \end{aligned}$$
Since G is weak self complement, there exists a weak isomorphism \(\phi : V \rightarrow V\) such that \(p_i\circ A(x)= p_i\circ \overline{A}(\phi (x))\) for all \(x\in V\) and \(p_i\circ B(xy)\le p_i\circ \overline{B}(\phi (x)\phi (y))\) for all \(xy\in \widetilde{V^2}\), \(i=1,2,\ldots ,m\).
Using the above we have,
$$\begin{aligned}&p_i\circ B(xy)\le p_i\circ \overline{B}(\phi (x)\phi (y))=min\{p_i\circ A(x),p_i\circ A(y)\}- p_i\circ B(\phi (x)\phi (y))\\&{\hbox {i.e., }}p_i\circ B(xy)+p_i\circ B(\phi (x)\phi (y))\le min\{p_i\circ A(\phi (x)),p_i\circ A(\phi (y))\}. \end{aligned}$$
Therefore, for all \(xy\in \widetilde{V^2}\), \(i=1,2,\ldots ,m\)
$$\begin{aligned}&\sum _{x\ne y} p_i\circ B(xy)+\sum _{x\ne y} p_i\circ B(\phi (x)\phi (y) \\&\quad \le \sum _{x\ne y} min\{p_i\circ A(\phi (x)),p_i\circ A(\phi (y))\} \\&\quad =\sum _{x\ne y} min\{p_i\circ A(x),p_i\circ A(y)\} \end{aligned}$$
$$\begin{aligned} 2\sum _{x\ne y} p_i\circ B(xy)\le \sum _{x\ne y} min\{p_i\circ A(x),p_i\circ A(y)\} \end{aligned}$$
$$\begin{aligned} \sum _{x\ne y} p_i\circ B(xy)\le \frac{1}{2}\sum _{x\ne y} min\{p_i\circ A(x),p_i\circ A(y)\}. \end{aligned}$$
\(\square \)
The converse of the above theorem is not true in general. For example, consider the 3-polar fuzzy graph G of Fig. 3. We see that the condition of Theorem 21 is satisfied for G. But G is not weak self complementary, as there is no weak isomorphism from G onto \(\overline{G}\).
If \(p_i\circ B(xy)\le \frac{1}{2} min\{p_i\circ A(x),p_i\circ A(y)\}\) for all \(xy\in \widetilde{V^2}\), \(i=1,2,\ldots ,m\) then G is a weak self complement m-polar fuzzy graph.
Let \(\overline{G}=(V,\overline{A},\overline{B})\) be the complement of G where \(\overline{A}(x)=A(x)\) for all \(x\in V\) and \(p_i\circ \overline{B}(xy)=min\{p_i\circ A(x), p_i\circ A(y)\}- p_i\circ B(xy)\) for \(xy\in \widetilde{V^2}\), \(i=1,2,\ldots ,m\).
Let us now consider the identity map \(I: V\rightarrow V\). Then \(A(x)=A(I(x))=\overline{A}(I(x))\) for all \(x\in V\) and
$$\begin{aligned} p_i\circ \overline{B}(I(x)I(y))&= {} p_i\circ \overline{B}(xy)\\&= {} min\{p_i\circ A(x), p_i\circ A(y)\}- p_i\circ B(xy)\\&\ge {} min\{p_i\circ A(x), p_i\circ A(y)\}-\frac{1}{2} min\{p_i\circ A(x),p_i\circ A(y)\}\\&= {} \frac{1}{2} min\{p_i\circ A(x),p_i\circ A(y)\}\ge p_i\circ B(xy). \end{aligned}$$
So, \(p_i\circ B(xy)\le p_i\circ \overline{B}(I(x)I(y))\) for \(i=1,2,\ldots ,m\) and \(xy\in \widetilde{V^2}\). Hence, \(I: V\rightarrow V\) is a weak isomorphism. \(\square \)
Consider the 3-polar fuzzy graph \(G=(V,A,B)\) of \({G^*}=(V,E)\) where \(V=\{u,v,w\}\), \(E=\{uv,vw,wu\}\), \(A=\left\{ \frac{\langle 0.2,0.3,0.4\rangle }{u},\frac{\langle 0.4,0.5,0.6\rangle }{v},\frac{\langle 0.5,0.7,0.9\rangle }{w}\right\} \), \(B=\left\{ \frac{\langle 0.1,0.1,0.2\rangle }{uv},\frac{\langle 0.2,0.2,0.3\rangle }{vw},\frac{\langle 0.1,0.1,0.2\rangle }{wu}\right\} \). We see that for each \(i=1,2,3\) and \(xy\in \widetilde{V^2}\), \(p_i\circ B(xy)\le \frac{1}{2} min\{p_i\circ A(x),p_i\circ A(y)\}\).
Also, consider the complement \(\overline{G}\) of G shown in Fig. 4. Let us now consider the identity mapping \(I: G\rightarrow \overline{G}\) such that \(I(u)=u\) for all \(u\in V\). Then I is the required weak isomorphism from G onto \(\overline{G}\). Hence, G is weak self complementary.
Order, size and busy value of vertices of m-polar fuzzy graphs
In this section, the order, size, busy value of vertices of an m-polar fuzzy graph is defined.
The order of the m-polar fuzzy graph \(G=(V,A,B)\) is denoted by |V| (or O(G)) where
$$\begin{aligned} O(G)=|V|=\sum _{x\in V} \frac{1+ \sum \nolimits _{i=1}^{m} p_i\circ A(x)}{2}. \end{aligned}$$
The size of G is denoted by |E| (or S(G)) where
$$\begin{aligned} S(G)=|E|=\sum _{xy\in E} \frac{1+ \sum \nolimits _{i=1}^{m} p_i\circ B(xy)}{2}. \end{aligned}$$
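The order and size formulas translate directly into code; a sketch (ours, with illustrative 2-polar data):

```python
def order(A):
    """O(G): sum over vertices of (1 + sum_i p_i(A(x))) / 2."""
    return sum((1 + sum(a)) / 2 for a in A.values())

def size(B):
    """S(G): sum over edges of (1 + sum_i p_i(B(xy))) / 2."""
    return sum((1 + sum(b)) / 2 for b in B.values())

A = {"u": (0.2, 0.4), "v": (0.6, 0.8)}
B = {frozenset({"u", "v"}): (0.2, 0.4)}
print(order(A), size(B))  # 2.0 0.8
```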
Two isomorphic m-polar fuzzy graphs \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) of the graphs \(G^*_1=(V_1,E_1)\) and \(G^*_2=(V_2,E_2)\) have the same order and size.
Let \(\phi \) be an isomorphism from \(G_1\) onto \(G_2\). Then \(A_1(x)=A_2(\phi (x))\) for all \(x\in V_1\) and \(p_i\circ B_1(xy)=p_i\circ B_2(\phi (x)\phi (y))\) for \(i=1,2,\ldots ,m\), \(xy\in \widetilde{V^2_1}\).
$$\begin{aligned} O(G_1)&= {} |V_1|=\sum _{x\in V_1} \frac{1+ \sum \nolimits _{i=1}^{m} p_i\circ A_1(x)}{2} \\&= {} \sum _{\phi (x)\in V_2} \frac{1+ \sum \nolimits _{i=1}^{m} p_i\circ A_2(\phi (x))}{2}=O(G_2) \end{aligned}$$
$$\begin{aligned} S(G_1)&= {} |E_1|=\sum _{xy\in E_1} \frac{1+ \sum \nolimits _{i=1}^{m} p_i\circ B_1(xy)}{2} \\&= {} \sum _{\phi(x)\phi(y)\in E_2} \frac{1+ \sum \nolimits _{i=1}^{m} p_i\circ B_2(\phi (x)\phi (y))}{2}=S(G_2). \end{aligned}$$
\(\square \)
The busy value of a vertex u of an m-polar fuzzy graph G is denoted as \(D(u)=(p_1\circ D(u),p_2\circ D(u),\ldots ,p_m\circ D(u))\) where \(p_i\circ D(u)=\sum \limits _{k}min\{p_i\circ A(u),p_i\circ A(u_k)\}\); \(u_k\) are the neighbors of u. The busy value of G is denoted as D(G) where \(D(G)=\sum \limits _{k}D(u_k)\), \(u_k\in V\).
Consider the 3-polar fuzzy graph \(G=(V,A,B)\) of \(G^*=(V,E)\) where \(V=\{u,v,w,x\}\), \(E=\{uv,vw,ux,uw,vx\}\), \(A=\left\{ \frac{\langle 0.6,0.3,0.5\rangle }{u},\frac{\langle 0.8,0.4,0.3\rangle }{v},\, \frac{\langle 0.5,0.6,0.4\rangle }{w},\frac{\langle 0.7,0.5,0.6\rangle }{x}\right\} \) and \(B=\left\{ \frac{\langle 0.5,0.2,0.2\rangle }{uv},\frac{\langle 0.1,0.3,0.2\rangle }{vw},\frac{\langle 0.6,0.2,0.4\rangle }{ux},\frac{\langle 0.3,0.2,0.3\rangle }{uw},\frac{\langle 0.7,0.4,0.2\rangle }{vx}\right\} \). Then we have from Fig. 5,
$$\begin{aligned}&p_1\circ D(u) = 1.7,\quad p_2\circ D(u)=0.9,\quad p_3\circ D(u)=1.2,\\&p_1\circ D(v) = 1.8,\quad p_2\circ D(v)=1.1,\quad p_3\circ D(v)=0.9,\\&p_1\circ D(w) = 1,\quad p_2\circ D(w)=0.7,\quad p_3\circ D(w)=0.7,\\&p_1\circ D(x) = 1.3,\quad p_2\circ D(x)=0.7,\quad p_3\circ D(x)=0.8. \end{aligned}$$
So, \(D(u)=(1.7,0.9,1.2), D(v) = (1.8,1.1,0.9), D(w)=(1,0.7,0.7), D(x)=(1.3,0.7,0.8)\).
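The busy values of this example can be reproduced with a short sketch (ours; edges are listed as ordered pairs for readability, and the rounding only suppresses floating-point noise):

```python
A = {"u": (0.6, 0.3, 0.5), "v": (0.8, 0.4, 0.3),
     "w": (0.5, 0.6, 0.4), "x": (0.7, 0.5, 0.6)}
E = [("u", "v"), ("v", "w"), ("u", "x"), ("u", "w"), ("v", "x")]

def busy_value(u, A, E, m):
    """D(u): componentwise sum of min(A(u), A(u_k)) over the neighbours u_k of u."""
    d = [0.0] * m
    for a, b in E:
        if u in (a, b):
            other = b if a == u else a
            for i in range(m):
                d[i] += min(A[u][i], A[other][i])
    return tuple(round(x, 2) for x in d)

print(busy_value("u", A, E, 3))  # (1.7, 0.9, 1.2), matching the values above
```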
If \(p_i\circ A(u)\le p_i\circ deg(u)\) for \(i=1,2,\ldots ,m\), then the vertex u of G is called a busy vertex. Otherwise it is a free vertex.
If \(p_i\circ B(u_1v_1)=min\{p_i\circ A(u_1),p_i\circ A(v_1)\}\), \(i=1,2,\ldots ,m\) for \(u_1v_1\in E\), then it is called an effective edge of G.
Let \(u\in V\) be a vertex of the m-polar fuzzy graph \(G=(V,A,B)\).
u is called a partial free vertex if it is a free vertex of G and \(\overline{G}\).
u is called a fully free vertex if it is a free vertex of G and it is a busy vertex of \(\overline{G}\).
u is called a partial busy vertex if it is a busy vertex of G and \(\overline{G}\).
u is called a fully busy vertex if it is a busy vertex in G and it is a free vertex of \(\overline{G}\).
Let \(\phi \) be an isomorphism from \(G_1=(V_1,A_1,B_1)\) onto \(G_2=(V_2,A_2,B_2)\). Then \(deg(u)=deg(\phi (u))\) for all \(u\in V_1\).
Since \(\phi \) is an isomorphism between \(G_1\) and \(G_2\), we have \(p_i\circ A_1(u)=p_i\circ A_2(\phi (u))\) for all \(u\in V_1\) and \(p_i\circ B_1(x_1y_1)=p_i\circ B_2(\phi (x_1)\phi (y_1))\) for all \(x_1y_1\in \widetilde{V_1^2}\), \(i=1,2,\ldots ,m\).
Hence, \(p_i\circ deg(u)=\sum \nolimits _{\begin{array}{c} u\ne v\\ uv\in E_1 \end{array}} p_i\circ B_1(uv) =\sum \nolimits _{\begin{array}{c} \phi (u)\ne \phi (v)\\ \phi (u)\phi (v)\in E_2 \end{array}} p_i\circ B_2(\phi (u)\phi (v)) =p_i\circ deg(\phi (u))\) for \(u\in V_1\), \(i=1,2,\ldots ,m\). So, \(deg(u)=deg(\phi (u))\) for all \(u\in V_1\). \(\square \)
If \(\phi \) is an isomorphism from \(G_1\) onto \(G_2\) and u is a busy vertex of \(G_1\), then \(\phi (u)\) is a busy vertex of \(G_2\).
Since \(\phi \) is an isomorphism between \(G_1\) and \(G_2\), we have \(p_i\circ A_1(u)=p_i\circ A_2(\phi (u))\) for all \(u\in V_1\) and \(p_i\circ B_1(x_1y_1)=p_i\circ B_2(\phi (x_1)\phi (y_1))\) for all \(x_1y_1\in \widetilde{V_1^2}\), \(i=1,2,\ldots ,m\).
If u is a busy vertex of \(G_1\), then \(p_i\circ A_1(u)\le p_i\circ deg(u)\) for \(i=1,2,\ldots ,m\). Then by the above and Theorem 32, \(p_i\circ A_2(\phi (u))=p_i\circ A_1(u)\le p_i\circ deg(u)=p_i\circ deg(\phi (u))\) for \(i=1,2,\ldots ,m\). Hence, \(\phi (u)\) is a busy vertex in \(G_2\). \(\square \)
Let the two m-polar fuzzy graphs \(G_1\) and \(G_2\) be weak isomorphic. If \(u\in V_1\) is a busy vertex of \(G_1\), then the image of u under the weak isomorphism is also busy in \(G_2\).
Let \(\phi :V_1\rightarrow V_2\) be a weak isomorphism between \(G_1\) and \(G_2\).
Then, \(p_i\circ A_1(x)=p_i\circ A_2(\phi (x))\) for all \(x\in V_1\) and \(p_i\circ B_1(x_1y_1)\le p_i\circ B_2(\phi (x_1)\phi (y_1))\) for all \(x_1y_1\in \widetilde{V_1^2}\), \(i=1,2,\ldots ,m\).
Let \(u\in V_1\) be a busy vertex. Then, for \(i=1,2,\ldots ,m\), \(p_i\circ A_1(u)\le p_i\circ deg(u)\).
Now by the above for \(i=1,2,\ldots ,m\)
$$\begin{aligned} p_i\circ A_2(\phi (u))&= {} p_i\circ A_1(u)\le p_i\circ deg(u)=\sum \limits _{\begin{array}{c} u\ne v\\ uv\in E_1 \end{array}} p_i\circ B_1(uv)\\&\le {} \sum \limits _{\begin{array}{c} \phi (u)\ne \phi (v)\\ \phi (u)\phi (v)\in E_2 \end{array}} p_i\circ B_2(\phi (u)\phi (v)) =p_i\circ deg(\phi (u)). \end{aligned}$$
Hence, \(\phi (u)\) is a busy vertex in \(G_2\). \(\square \)
Complement and isomorphism in m-polar fuzzy graphs
In this section, some important properties of isomorphism, weak isomorphism and co-weak isomorphism related to the complement are discussed.
Let \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) be two m-polar fuzzy graphs of the graphs \(G^*_1=(V_1,E_1)\) and \(G^*_2=(V_2,E_2)\). If \(G_1\cong G_2\) then \(\overline{G_1}\cong \overline{G_2}\).
Let \(G_1\cong G_2\). Then there exists an isomorphism \(\phi : V_1\rightarrow V_2\) such that \(A_1(x)=A_2(\phi (x))\) for all \(x\in V_1\) and \(p_i\circ B_1(xy)=p_i\circ B_2(\phi (x)\phi (y))\), for each \(i=1,2,\ldots ,m\) and \(xy\in \widetilde{V^2_1}\).
Now, \(\overline{A_1}(x)=A_1(x)=A_2(\phi (x))=\overline{A_2}(\phi (x))\) for all \(x\in V_1\).
Also, for \(i=1,2,\ldots ,m\) and \(xy\in \widetilde{V^2_1}\) we have,
$$\begin{aligned} p_i\circ \overline{B_1}(xy)&= {} min\{p_i\circ A_1(x),p_i\circ A_1(y)\}-p_i\circ B_1(xy)\\&= {} min\{p_i\circ A_2(\phi (x),p_i\circ A_2(\phi (y)\}-p_i\circ B_2(\phi (x)\phi (y))=p_i\circ \overline{B_2}(\phi (x)\phi (y)). \end{aligned}$$
Hence, \(\phi \) is an isomorphism between \(\overline{G_1}\) and \(\overline{G_2}\) i.e., \(\overline{G_1}\cong \overline{G_2}\). \(\square \)
Suppose there is a weak isomorphism between two m-polar fuzzy graphs \(G_1\) and \(G_2\). Then there may not be a weak isomorphism between \(\overline{G_1}\) and \(\overline{G_2}\).
For example, consider two 3-polar fuzzy graphs \(G_1\) and \(G_2\) of Fig. 6. Let us now define a mapping \(\phi : V_1 \rightarrow V_2\) such that \(\phi (a)=u\), \(\phi (b)=v\), \(\phi (c)=w\). Then \(\phi \) is a weak isomorphism from \(G_1\) onto \(G_2\). But there is no weak isomorphism from \(\overline{G_1}\) onto \(\overline{G_2}\) (see Fig. 7), because \(\overline{B_2}(\phi (a)\phi (c))=\overline{B_2}(uw)={\mathbf{0}} =(0,0,\ldots ,0)<\overline{B_1}(ac)=(0.1,0.1,0.05)\) and \(\overline{B_2}(\phi (b)\phi (c))=\overline{B_2}(vw)={\mathbf{0}} <\overline{B_1}(bc)=(0.1,0.1,0.1)\).
In a similar way, we can construct example to show that if there is a co-weak isomorphism between two m-polar fuzzy graphs \(G_1\) and \(G_2\) then there may not be a co-weak isomorphism between \(\overline{G_1}\) and \(\overline{G_2}\).
Let \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) be two m-polar fuzzy graphs of the graphs \(G^*_1=(V_1,E_1)\) and \(G^*_2=(V_2,E_2)\) such that \(V_1\cap V_2= \emptyset \). Then \(\overline{G_1+G_2}\cong \overline{G_1}\cup \overline{G_2}\).
To show that \(\overline{G_1+G_2}\cong \overline{G_1}\cup \overline{G_2}\), we need to show that there exists an isomorphism between \(\overline{G_1+G_2}\) and \(\overline{G_1}\cup \overline{G_2}\).
We will show that the identity map \(I:V_1\cup V_2 \rightarrow V_1\cup V_2\) is the required isomorphism between them. For this, we will show the following:
for all \(x\in V_1\cup V_2\), \(\overline{(A_1+A_2)}(x)=(\overline{A_1}\cup \overline{A_2})(x)\),
and \(p_i\circ \overline{(B_1+B_2)}(xy)=p_i\circ (\overline{B_1}\cup \overline{B_2})(xy)\) for \(i=1,2,\ldots ,m\) and \(xy\in \widetilde{V_1\times V_2}^2\).
Let \(x\in V_1\cup V_2\).
$$\begin{aligned} \overline{(A_1+A_2)}(x)&= {} (A_1+A_2)(x)=(A_1\cup A_2)(x)\quad ({\text {by Definition }}8)\\&= {} \left\{ \begin{array}{ll} A_1(x) &{}\quad {\text {if}}\; x\in V_1-V_2\\ A_2(x) &{}\quad {\text {if}}\; x\in V_2-V_1\\ \end{array}\right. \\&= {} \left\{ \begin{array}{ll} \overline{A_1}(x) &{}\quad {\text {if}}\; x\in V_1-V_2\\ \overline{A_2}(x) &{}\quad {\text {if}}\; x\in V_2-V_1\\ \end{array}\right. =(\overline{A_1}\cup \overline{A_2})(x). \end{aligned}$$
Now for each \(i=1,2,\ldots ,m\) and \(xy\in \widetilde{V_1\times V_2}^2\) we have,
$$\begin{aligned}&p_i\circ \overline{(B_1+B_2)}(xy)\\&\quad =min\{p_i\circ (A_1+A_2)(x),p_i\circ (A_1+A_2)(y)\}-p_i\circ (B_1+B_2)(xy)\\&\quad = \left\{ \begin{array}{ll} min\{p_i\circ (A_1\cup A_2)(x),p_i\circ (A_1\cup A_2)(y)\}-p_i\circ (B_1\cup B_2)(xy), &{}\quad {\text {if}}\; xy\in E_1\cup E_2\\ min\{p_i\circ (A_1\cup A_2)(x),p_i\circ (A_1\cup A_2)(y)\}-min\{p_i\circ A_1(x),p_i\circ A_2(y)\}, &{}\quad {\text {if}}\; xy\in E^\prime \\ \end{array}\right. \\&\quad = \left\{ \begin{array}{ll} min\{p_i\circ A_1(x),p_i\circ A_1(y)\}-p_i\circ B_1(xy), &{}\quad {\text {if}}\; xy\in E_1-E_2\\ min\{p_i\circ A_2(x),p_i\circ A_2(y)\}-p_i\circ B_2(xy), &{}\quad {\text {if}}\; xy\in E_2-E_1\\ min\{p_i\circ A_1(x),p_i\circ A_2(y)\}-min\{p_i\circ (A_1)(x),p_i\circ (A_2)(y)\}, &{}\quad {\text {if}}\; xy\in E^\prime \\ \end{array}\right. \\&\quad = \left\{ \begin{array}{ll} p_i\circ \overline{B_1}(xy), &{}\quad {\text {if}}\; xy\in E_1-E_2\\ p_i\circ \overline{B_2}(xy), &{}\quad {\text {if}}\; xy\in E_2-E_1\\ 0, &{}\quad {\text {if}}\; xy\in E^\prime \\ \end{array}\right. \\&\quad = p_i\circ (\overline{B_1}\cup \overline{B_2})(xy). \end{aligned}$$
Let \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) be two m-polar fuzzy graphs of the graphs \(G^*_1=(V_1,E_1)\) and \(G^*_2=(V_2,E_2)\) such that \(V_1\cap V_2= \emptyset \). Then \(\overline{G_1\cup G_2}\cong \overline{G_1}+\overline{G_2}\).
Consider the identity map \(I:V_1\cup V_2\rightarrow V_1\cup V_2\). We will show that I is the required isomorphism between \(\overline{G_1\cup G_2}\) and \(\overline{G_1}+\overline{G_2}\).
For this, we will show the following:
for all \(x\in V_1\cup V_2\), \(\overline{(A_1\cup A_2)}(x)=(\overline{A_1}+\overline{A_2})(x)\),
and \(p_i\circ \overline{(B_1\cup B_2)}(xy)=p_i\circ (\overline{B_1}+\overline{B_2})(xy)\) for \(i=1,2,\ldots ,m\) and \(xy\in \widetilde{(V_1\cup V_2)}^2\).
$$\begin{aligned} \overline{A_1\cup A_2}(x)&= {} (A_1\cup A_2)(x)\\&= {} \left\{ \begin{array}{ll} A_1(x), &{}\quad {\text {if}}\; x\in V_1-V_2\\ A_2(x), &{}\quad {\text {if}}\; x\in V_2-V_1\\ \end{array}\right. \\&= {} \left\{ \begin{array}{ll} \overline{A_1}(x), &{}\quad {\text {if}}\; x\in V_1-V_2\\ \overline{A_2}(x), &{}\quad {\text {if}}\; x\in V_2-V_1\\ \end{array}\right. \\&= {} (\overline{A_1}\cup \overline{A_2})(x)=(\overline{A_1}+\overline{A_2})(x) \end{aligned}$$
and for \(i=1,2,\ldots ,m\), \(xy\in \widetilde{(V_1\cup V_2)}^2\) we have,
$$\begin{aligned}&p_i\circ \overline{(B_1\cup B_2)}(xy)\\&\quad =min\{p_i\circ (A_1\cup A_2)(x),p_i\circ (A_1\cup A_2)(y)\}-p_i\circ (B_1\cup B_2)(xy)\\&\quad =\left\{ \begin{array}{ll} min\{p_i\circ A_1(x),p_i\circ A_1(y)\}-p_i\circ B_1(xy), &{}\quad {\text {if}}\; xy\in E_1- E_2\\ min\{p_i\circ A_2(x),p_i\circ A_2(y)\}-p_i\circ B_2(xy), &{}\quad {\text {if}}\; xy\in E_2-E_1\\ min\{p_i\circ A_1(x),p_i\circ A_2(y)\}-0, &{}\quad {\text {if}}\; x\in V_1,y\in V_2\\ \end{array}\right. \\&\quad =\left\{ \begin{array}{ll} p_i\circ \overline{B_1}(xy), &{}\quad {\text {if}}\; xy\in E_1-E_2\\ p_i\circ \overline{B_2}(xy), &{}\quad {\text {if}}\; xy\in E_2-E_1\\ min\{p_i\circ A_1(x),p_i\circ A_2(y)\}-0, &{}\quad {\text {if}}\; x\in V_1,y\in V_2\\ \end{array}\right. \\&\quad =\left\{ \begin{array}{ll} p_i\circ \overline{B_1}(xy), &{}\quad {\text {if}}\; xy\in E_1-E_2\\ p_i\circ \overline{B_2}(xy), &{}\quad {\text {if}}\; xy\in E_2-E_1\\ min\{p_i\circ A_1(x),p_i\circ A_2(y)\}-0, &{}\quad {\text {if}}\; xy\in E^\prime \\ \end{array}\right. \\&\quad =p_i\circ (\overline{B_1}+\overline{B_2})(xy). \end{aligned}$$
This completes the proof. \(\square \)
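The two complement identities proved above are easy to check numerically. The following minimal Python sketch (not part of the original paper; the 3-polar membership values are illustrative) encodes an m-polar fuzzy graph as per-vertex and per-pair tuples and verifies both \(\overline{G_1+G_2}\cong \overline{G_1}\cup \overline{G_2}\) and \(\overline{G_1\cup G_2}\cong \overline{G_1}+\overline{G_2}\) under the identity map:

```python
from itertools import combinations

m = 3                    # number of poles
ZERO = (0.0,) * m

def tmin(*ts):
    """Componentwise minimum of m-tuples."""
    return tuple(min(v) for v in zip(*ts))

def get(B, x, y):
    """Edge membership of xy, treating absent pairs as the zero tuple."""
    return B.get((x, y), B.get((y, x), ZERO))

def complement(V, A, B):
    """p_i∘B̄(xy) = min{p_i∘A(x), p_i∘A(y)} − p_i∘B(xy) over all vertex pairs."""
    return {(x, y): tuple(b - e for b, e in zip(tmin(A[x], A[y]), get(B, x, y)))
            for x, y in combinations(V, 2)}

def union(V1, A1, B1, V2, A2, B2):
    """G1 ∪ G2 for disjoint vertex sets: no edges between the two parts."""
    return V1 + V2, {**A1, **A2}, {**B1, **B2}

def join(V1, A1, B1, V2, A2, B2):
    """G1 + G2: the union plus edges xy (x∈V1, y∈V2) valued min(A1(x), A2(y))."""
    V, A, B = union(V1, A1, B1, V2, A2, B2)
    B = dict(B)
    B.update({(x, y): tmin(A1[x], A2[y]) for x in V1 for y in V2})
    return V, A, B

# A tiny 3-polar example (membership values are illustrative only).
V1 = ['a', 'b']; A1 = {'a': (0.5, 0.4, 0.3), 'b': (0.6, 0.5, 0.2)}
B1 = {('a', 'b'): (0.4, 0.3, 0.1)}
V2 = ['u', 'v']; A2 = {'u': (0.7, 0.3, 0.4), 'v': (0.4, 0.6, 0.5)}
B2 = {('u', 'v'): (0.3, 0.2, 0.3)}

# complement of the join vs union of the complements
lhs = complement(*join(V1, A1, B1, V2, A2, B2))
_, _, Bc = union(V1, A1, complement(V1, A1, B1), V2, A2, complement(V2, A2, B2))
print(all(abs(p - q) < 1e-12 for e in lhs for p, q in zip(lhs[e], get(Bc, *e))))  # True

# complement of the union vs join of the complements
lhs2 = complement(*union(V1, A1, B1, V2, A2, B2))
_, _, Bj = join(V1, A1, complement(V1, A1, B1), V2, A2, complement(V2, A2, B2))
print(all(abs(p - q) < 1e-12 for e in lhs2 for p, q in zip(lhs2[e], get(Bj, *e))))  # True
```

Because the identity map is the candidate isomorphism, equality of the membership tuples over all vertex pairs is exactly the isomorphism condition of the two theorems.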
Let \(G_1=(V_1,A_1,B_1)\) and \(G_2=(V_2,A_2,B_2)\) be two strong m-polar fuzzy graphs of the graphs \(G^*_1=(V_1,E_1)\) and \(G^*_2=(V_2,E_2)\) respectively. Then \(\overline{G_1\circ G_2}\cong \overline{G_1}\circ \overline{G_2}\).
Let \(G_1\circ G_2=(V_1\times V_2,A_1\circ A_2,B_1\circ B_2)\) be an m-polar fuzzy graph of the graph \(G^*=(V,E)\) where \(V=V_1\times V_2\) and \(E=\{(x,x_2)(x,y_2): x\in V_1, x_2y_2\in E_2\}\cup \{(x_1,z)(y_1,z): z\in V_2, x_1y_1\in E_1\}\cup \{(x_1,x_2)(y_1,y_2): x_1y_1\in E_1, x_2\ne y_2\}\).
Consider the identity map \(I: V_1\times V_2 \rightarrow V_1\times V_2\). We show that I is the required isomorphism between the graphs \(\overline{G_1\circ G_2}\) and \(\overline{G_1}\circ \overline{G_2}\).
In order to show that I is the required isomorphism, we show that for each \(i=1,2,\ldots ,m\) and for all \(xy\in \widetilde{V_1\times V_2}^2\), \(p_i\circ \overline{(B_1\circ B_2)}(xy)=p_i\circ (\overline{B_1} \circ \overline{B_2})(xy)\). Several cases may arise.
Case (i): Let \(e=(x,x_2)(x,y_2)\) where \(x\in V_1\), \(x_2y_2\in E_2\). Then \(e\in E\).
Since \(G_1\circ G_2\) is a strong m-polar fuzzy graph, we have for each \(i=1,2,\ldots ,m\)
$$\begin{aligned}&p_i\circ \overline{(B_1\circ B_2)}(e)=0{\text { and}}\\&p_i\circ (\overline{B_1} \circ \overline{B_2})(e)=min\{p_i\circ A_1(x),p_i\circ \overline{B_2}(x_2y_2)\}=0 \end{aligned}$$
(since \(G_2\) is strong and \(x_2y_2\in E_2\), therefore for each \(i=1,2,\ldots ,m\), \(p_i\circ \overline{B_2}(x_2y_2)=0\)).
Case (ii): Let \(e=(x,x_2)(x,y_2)\) where \(x_2\ne y_2\), \(x_2y_2\notin E_2\). Then \(e\notin E\).
So for each \(i=1,2,\ldots ,m\), \(p_i\circ (B_1\circ B_2)(e)=0\) and
$$\begin{aligned} p_i\circ \overline{(B_1\circ B_2)}(e)&= {} min\{p_i\circ (A_1\circ A_2)(x,x_2),p_i\circ (A_1\circ A_2)(x,y_2)\}\\&= {} min\{p_i\circ A_1(x),p_i\circ A_2(x_2),p_i\circ A_2(y_2)\}. \end{aligned}$$
Again, since \(x_2y_2\in \overline{E_2}\), therefore for each \(i=1,2,\ldots ,m\),
$$\begin{aligned} p_i\circ (\overline{B_1} \circ \overline{B_2})(e)&= {} min\{p_i\circ A_1(x),p_i\circ \overline{B_2}(x_2y_2)\}\\&= {} min\{p_i\circ A_1(x),p_i\circ A_2(x_2),p_i\circ A_2(y_2)\}. \end{aligned}$$
Case (iii): Let \(e=(x_1,z)(y_1,z)\) where \(x_1y_1\in E_1\), \(z\in V_2\).
Then \(e\in E\). So for each \(i=1,2,\ldots ,m\), \(p_i\circ \overline{(B_1\circ B_2)}(e)=0\) as in Case (i).
Also, since \(x_1y_1\notin \overline{E_1}\), therefore for each \(i=1,2,\ldots ,m\), \(p_i\circ (\overline{B_1}\circ \overline{B_2})(e)=0\).
Case (iv): Let \(e=(x_1,z)(y_1,z)\) where \(x_1y_1\notin E_1\), \(z\in V_2\). Then \(e\notin E\).
Hence for each \(i=1,2,\ldots ,m\), \(p_i\circ (B_1\circ B_2)(e)=0\),
$$\begin{aligned} p_i\circ \overline{(B_1\circ B_2)}(e)&= {} min\{p_i\circ (A_1\circ A_2)(x_1,z),p_i\circ (A_1\circ A_2)(y_1,z)\}\\&= {} min\{p_i\circ A_1(x_1),p_i\circ A_1(y_1),p_i\circ A_2(z)\}\text { and}\\ p_i\circ (\overline{B_1} \circ \overline{B_2})(e)&= {} min\{p_i\circ A_2(z),p_i\circ \overline{B_1}(x_1y_1)\}\\&= {} min\{p_i\circ A_1(x_1),p_i\circ A_1(y_1),p_i\circ A_2(z)\} \;(G_1{\text { being strong}}). \end{aligned}$$
Case (v): Let \(e=(x_1,x_2)(y_1,y_2)\) where \(x_1y_1\in E_1\), \(x_2\ne y_2\). Then \(e\in E\). So we have for each \(i=1,2,\ldots ,m\), \(p_i\circ \overline{(B_1\circ B_2)}(e)=0\) as in Case (i).
Also, since \(x_1y_1\in E_1\), we have for each \(i=1,2,\ldots ,m\), \(p_i\circ (\overline{B_1} \circ \overline{B_2})(e)=0\).
Case (vi): Let \(e=(x_1,x_2)(y_1,y_2)\) where \(x_1y_1\notin E_1\), \(x_2\ne y_2\). Then \(e\notin E\) and hence for each \(i=1,2,\ldots ,m\), \(p_i\circ (B_1\circ B_2)(e)=0\),
$$\begin{aligned} p_i\circ \overline{(B_1\circ B_2)}(e)&= {} min\{p_i\circ (A_1\circ A_2)(x_1,x_2),p_i\circ (A_1\circ A_2)(y_1,y_2)\}\\&= {} min\{p_i\circ A_1(x_1),p_i\circ A_1(y_1),p_i\circ A_2(x_2),p_i\circ A_2(y_2)\} \end{aligned}$$
and since \(x_1y_1\in \overline{E_1}\),
$$\begin{aligned} p_i\circ (\overline{B_1} \circ \overline{B_2})(e)&= {} min\{p_i\circ A_2(x_2),p_i\circ A_2(y_2),p_i\circ \overline{B_1}(x_1y_1)\}\\&= {} min\{p_i\circ A_1(x_1),p_i\circ A_1(y_1),p_i\circ A_2(x_2),p_i\circ A_2(y_2)\}\, (\overline{G_1} \text { being strong by } [10]). \end{aligned}$$
Case (vii): Finally, let \(e=(x_1,x_2)(y_1,y_2)\) where \(x_1y_1\notin E_1\), \(x_2y_2\notin E_2\). Then \(e\notin E\) and hence for each \(i=1,2,\ldots ,m\), \(p_i\circ (B_1\circ B_2)(e)=0\),
$$\begin{aligned} p_i\circ \overline{(B_1\circ B_2)}(e)=min\{p_i\circ (A_1\circ A_2)(x_1,x_2),p_i\circ (A_1\circ A_2)(y_1,y_2)\}. \end{aligned}$$
Now, if \(x_1y_1\in \overline{E_1}\) and \(x_2=y_2=z\), then we have Case (iv).
Again, if \(x_1y_1\in \overline{E_1}\) and if \(x_2\ne y_2\), then we have Case (vi).
Thus combining all the cases we have, for each \(i=1,2,\ldots ,m\), and \(xy\in \widetilde{V_1\times V_2}^2\),
$$\begin{aligned} p_i\circ \overline{(B_1\circ B_2)}(xy)=p_i\circ (\overline{B_1} \circ \overline{B_2})(xy). \end{aligned}$$
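This theorem can likewise be checked numerically for small strong graphs. A minimal Python sketch (not from the original paper; values illustrative, with the crisp edge sets taken as the supports of \(B_1\) and \(B_2\)):

```python
from itertools import combinations

m = 3
ZERO = (0.0,) * m

def tmin(*ts):
    return tuple(min(v) for v in zip(*ts))

def get(B, x, y):
    return B.get((x, y), B.get((y, x), ZERO))

def complement(V, A, B):
    """p_i∘B̄(xy) = min{p_i∘A(x), p_i∘A(y)} − p_i∘B(xy) over all vertex pairs."""
    return {(x, y): tuple(b - e for b, e in zip(tmin(A[x], A[y]), get(B, x, y)))
            for x, y in combinations(V, 2)}

def compose(V1, A1, B1, V2, A2, B2):
    """G1∘G2 on V1×V2 with the three edge rules used in the proof above."""
    E1 = {frozenset(e) for e, v in B1.items() if any(v)}
    E2 = {frozenset(e) for e, v in B2.items() if any(v)}
    V = [(x1, x2) for x1 in V1 for x2 in V2]
    A = {(x1, x2): tmin(A1[x1], A2[x2]) for x1, x2 in V}
    B = {}
    for (x1, x2), (y1, y2) in combinations(V, 2):
        if x1 == y1 and frozenset((x2, y2)) in E2:        # (x,x2)(x,y2), x2y2 ∈ E2
            B[((x1, x2), (y1, y2))] = tmin(A1[x1], get(B2, x2, y2))
        elif x2 == y2 and frozenset((x1, y1)) in E1:      # (x1,z)(y1,z), x1y1 ∈ E1
            B[((x1, x2), (y1, y2))] = tmin(A2[x2], get(B1, x1, y1))
        elif x2 != y2 and frozenset((x1, y1)) in E1:      # (x1,x2)(y1,y2), x1y1 ∈ E1
            B[((x1, x2), (y1, y2))] = tmin(A2[x2], A2[y2], get(B1, x1, y1))
    return V, A, B

# Strong 3-polar fuzzy graphs: B is the componentwise min of A on every edge.
V1 = ['a', 'b', 'c']
A1 = {'a': (0.5, 0.4, 0.3), 'b': (0.6, 0.5, 0.2), 'c': (0.4, 0.7, 0.6)}
B1 = {('a', 'b'): tmin(A1['a'], A1['b'])}       # E1 = {ab}; ac and bc absent
V2 = ['u', 'v']
A2 = {'u': (0.7, 0.3, 0.4), 'v': (0.4, 0.6, 0.5)}
B2 = {('u', 'v'): tmin(A2['u'], A2['v'])}       # E2 = {uv}

V, A, B = compose(V1, A1, B1, V2, A2, B2)
lhs = complement(V, A, B)                                   # complement of G1∘G2
rhs = compose(V1, A1, complement(V1, A1, B1),
              V2, A2, complement(V2, A2, B2))[2]            # complement(G1) ∘ complement(G2)
print(all(abs(p - q) < 1e-12
          for e in lhs for p, q in zip(lhs[e], get(rhs, *e))))  # True, as the theorem guarantees
```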
If \(G_1\) and \(G_2\) are not strong, then \(\overline{G_1\circ G_2}\) need not be isomorphic to \(\overline{G_1}\circ \overline{G_2}\). For example, consider the two 3-polar fuzzy graphs \(G_1\) and \(G_2\) which are not strong (see Fig. 8). From Figs. 8 and 9, we see that \(\overline{G_1\circ G_2}\ncong \overline{G_1}\circ \overline{G_2}\).
Nowadays, fuzzy graphs and bipolar fuzzy graphs are the most familiar such structures, and they can be regarded as 1-polar and 2-polar fuzzy graphs, respectively. These graphs have many important applications in social networks, medical diagnosis, computer networks, database theory, expert systems, neural networks, artificial intelligence, signal processing, pattern recognition, engineering science, cluster analysis, etc. The concept of a bipolar fuzzy graph can be generalized to an m-polar fuzzy graph. For example, consider the sorting of mangoes and guavas. The different characteristics of a given fruit push the decision in the sorting process towards "mango" or towards "guava". Two poles are present in this case: one is \(100\%\) surely a mango and the other is \(100\%\) surely a guava, so the situation is bipolar. The situation can be generalized further by adding a new fruit, for example sweet lemon, to the sorting process.
Consider another example: a tug of war in which two people pull a rope in opposite directions. Whoever applies the larger force moves the center of the rope in the direction of their pull; the situation is symmetric. We now present an example in which m people pull a special rope in m different directions, and we represent it as an m-polar fuzzy graph. Assume that O is the origin and that there are m straight paths leading from O, with a wall between the paths. The special rope has one node at O and m ends going out from this node, one end corresponding to each path. Suppose that on every path a man is standing and pulling the rope in the direction of the path on which he stands. This situation can be represented as an m-polar fuzzy graph by taking the nodes as an m-polar fuzzy set and the edges between them as m-polar fuzzy relations, as shown in Fig. 10. In this context, one can ask what strength is required to pull the node O from the center into one of the paths (assuming no friction). The answer is that if the forces pulling the rope are \(F_k\), \(k=1,2,\ldots ,m\), then the node O will move to the \(j\hbox {th}\) path if \(F_j > \sum \nolimits _{\begin{array}{c} k=1,2,\ldots ,m\\ k\ne j \end{array}}{F_k}\).
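The pulling criterion is straightforward to encode; a small sketch (the force values are illustrative):

```python
def winning_path(forces):
    """Return the index j such that F_j exceeds the sum of all other forces,
    i.e., the path toward which the node O moves; None if no force dominates."""
    total = sum(forces)
    for j, f in enumerate(forces):
        if f > total - f:
            return j
    return None

print(winning_path([3.0, 1.0, 1.5]))  # 0, since 3.0 > 1.0 + 1.5
print(winning_path([1.0, 1.0, 1.0]))  # None; the node stays at the center
```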
Evaluation graph corresponding to the teachers' evaluation by the students
In this section, we present an m-polar fuzzy graph model used in the evaluation of teachers by the fourth-semester students of a university department during the session 2015–2016. Here the nodes represent the teachers of the department and the edges represent the relationships between pairs of teachers. Suppose the department has six teachers, denoted \(T=\{t_1,t_2,t_3,t_4,t_5,t_6\}\). The membership value of each node represents the students' feedback on the corresponding teacher with respect to the following criteria: {regularity of classes, style of presentation, quality of lectures, generation of interest and encouragement of further reading among students, updated information}. Since all of the above characteristics of a teacher, as judged by different students, are uncertain in real life, we consider a 5-polar fuzzy subset of the vertex set T (Fig. 11).
Table 1 gives the membership values of the teachers according to the students' evaluation.
Table 1 5-Polar fuzzy set A of T
Table 2 5-Polar fuzzy relation B on A
Table 3 Average response score of the teachers
Edge membership values, which represent the relationships between the teachers, are calculated using the constraint \(p_i\circ B(uv)\le min\{p_i\circ A(u), p_i\circ A(v)\}\) for all \(u,v\in T\), \(i=1,2,\ldots ,5\). These values are given in Table 2.
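The constraint above is the defining condition of an m-polar fuzzy relation; a minimal check in Python (the 5-polar values below are hypothetical, not those of Tables 1–2):

```python
def is_valid_relation(A, B):
    """Check p_i∘B(uv) <= min(p_i∘A(u), p_i∘A(v)) for every edge uv and pole i."""
    return all(bi <= min(ai, aj)
               for (u, v), buv in B.items()
               for bi, ai, aj in zip(buv, A[u], A[v]))

A = {'t1': (0.8, 0.7, 0.9, 0.6, 0.7), 't2': (0.7, 0.8, 0.6, 0.7, 0.8)}
B = {('t1', 't2'): (0.7, 0.7, 0.6, 0.6, 0.7)}
print(is_valid_relation(A, B))  # True
```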
We rank the teachers' performance as follows:
If a teacher's average response score is <60%, the teacher's performance according to the students is \(\mathbf{Average}\).
If the average response score is ≥60% and <70%, the performance is \(\mathbf{Good}\).
If the average response score is ≥70% and <80%, the performance is \(\mathbf {Very}\) \(\mathbf{Good}\).
If the average response score is ≥80%, the performance is \(\mathbf{Excellent}\).
From Table 3, we see that the performance of teachers \(t_1,t_2,t_5,t_6\) is very good, whereas the performance of teachers \(t_3\) and \(t_4\) is excellent. Among them, teacher \(t_3\) is the best teacher according to the response scores of the students of the department during the session 2015–2016.
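The scoring and ranking procedure can be summarized in a few lines of Python; the membership tuples below are placeholders, not the published Table 1 values:

```python
def rank(avg_percent):
    """Map an average response score (in %) to the rating scale above."""
    if avg_percent < 60:
        return 'Average'
    if avg_percent < 70:
        return 'Good'
    if avg_percent < 80:
        return 'Very Good'
    return 'Excellent'

teachers = {  # hypothetical 5-polar feedback tuples
    't1': (0.7, 0.8, 0.7, 0.7, 0.8),
    't3': (0.9, 0.8, 0.8, 0.8, 0.9),
}
for t, scores in sorted(teachers.items()):
    avg = 100 * sum(scores) / len(scores)   # average response score in percent
    print(t, f'{avg:.0f}%', rank(avg))      # t1 74% Very Good; t3 84% Excellent
```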
The theory of fuzzy graphs plays an important role in many fields, including decision making, computer networking, and management science. An m-polar fuzzy graph can be used to represent real-world problems that involve multi-agent, multi-attribute, multi-object, multi-index, multi-polar information and uncertainty. In this research paper, we have studied the isomorphic properties of m-polar fuzzy graphs with some applications. We are extending our research to m-polar fuzzy intersection graphs, m-polar fuzzy interval graphs, properties of m-polar fuzzy hypergraphs, degrees of vertices of m-polar fuzzy graphs and their application in decision making, etc.
Akram M (2011) Bipolar fuzzy graphs. Inf Sci 181(24):5548–5564
Akram M (2013) Bipolar fuzzy graphs with applications. Knowl Based Syst 39:1–8
Akram M, Younas HR (2015) Certain types of irregular \(m\)-polar fuzzy graphs. J Appl Math Comput. doi:10.1007/s12190-015-0972-9
Akram M, Akmal R, Alshehri N (2016) On \(m\)-polar fuzzy graph structures. Springerplus 5:1448. doi:10.1186/s40064-016-3066-8
Al-Hawary T (2011) Complete fuzzy graphs. Int J Math Combin 4:26–34
Bhutani KR (1989) On automorphism of fuzzy graphs. Pattern Recognit Lett 9:159–162
Bhutani KR, Mordeson J, Rosenfeld A (2004) On degrees of end nodes and cut nodes in fuzzy graphs. Iran J Fuzzy Syst 1(1):57–64
Chen J, Li S, Ma S, Wang X (2014) \(m\)-polar fuzzy sets: an extension of bipolar fuzzy sets. Sci World J. doi:10.1155/2014/416530
Ghorai G, Pal M (2015a) On some operations and density of \(m\)-polar fuzzy graphs. Pac Sci Rev A Nat Sci Eng 17(1):14–22
Ghorai G, Pal M (2015b) Certain types of product bipolar fuzzy graphs. Int J Appl Comput Math. doi:10.1007/s40819-015-0112-0
Ghorai G, Pal M (2016a) Some properties of \(m\)-polar fuzzy graphs. Pac Sci Rev A Nat Sci Eng. 18(1):38–46. doi:10.1016/j.psra.2016.06.004
Ghorai G, Pal M (2016b) A study on \(m\)-polar fuzzy planar graphs. Int J Comput Sci Math 7(3):283–292
Ghorai G, Pal M (2016c) Faces and dual of \(m\)-polar fuzzy planar graphs. J Intell Fuzzy Syst 31(3):2043–2049
Harary F (1972) Graph theory, 3rd edn. Addison-Wesley, Reading
Koczy LT (1992) Fuzzy graphs in the evaluation and optimization of networks. Fuzzy Sets Syst 46:307–319
Lee KM (2000) Bipolar valued fuzzy sets and their basic operations. In: Proceedings of the international conference, Bangkok, Thailand, pp 307–317
Lee-kwang H, Lee KM (1995) Fuzzy hypergraph and fuzzy partition. IEEE Trans Syst Man Cybernet 25:196–201
Mordeson JN, Peng CS (1994) Operations on fuzzy graphs. Inf Sci 19:159–170
Mordeson JN, Nair PS (2000) Fuzzy graphs and hypergraphs. Physica Verlag, Heidelberg
Nagoorgani A, Radha K (2008) On regular fuzzy graphs. J Phys Sci 12:33–40
Rosenfeld A (1975) Fuzzy graphs. In: Zadeh LA, Fu KS, Shimura M (eds) Fuzzy sets and their applications. Academic Press, New York, pp 77–95
Rashmanlou H, Samanta S, Pal M, Borzooei RA (2015a) A study on bipolar fuzzy graphs. J Intell Fuzzy Syst 28:571–580
Rashmanlou H, Samanta S, Pal M, Borzooei RA (2015b) Bipolar fuzzy graphs with categorical properties. Int J Comput Intell Syst 8(5):808–818
Rashmanlou H, Samanta S, Pal M, Borzooei RA (2016) Product of bipolar fuzzy graphs and their degree. Int J Gen Syst 45(1):1–14
Samanta S, Pal M (2011a) Fuzzy tolerance graphs. Int J Latest Trends Math 1(2):57–67
Samanta S, Pal M (2011b) Fuzzy threshold graphs. CIIT Int J Fuzzy Syst 3(12):360–364
Samanta S, Pal M (2012a) Bipolar fuzzy hypergraphs. Int J Fuzzy Logic Syst 2(1):17–28
Samanta S, Pal M (2012b) Irregular bipolar fuzzy graphs. Int J Appl Fuzzy Sets 2:91–102
Samanta S, Pal M (2013) Fuzzy \(k\)-competition graphs and \(p\)-competitions fuzzy graphs. Fuzzy Inf Eng 5(2):191–204
Samanta S, Pal M (2014) Some more results on bipolar fuzzy sets and bipolar fuzzy intersection graphs. J Fuzzy Math 22(2):1–10
Samanta S, Pal M (2015) Fuzzy planar graphs. IEEE Trans Fuzzy Syst 23(6):1936–1942
Sunitha MS, Vijayakumar A (2002) Complement of fuzzy graphs. Indian J Pure Appl Math 33:1451–1464
Yang HL, Li SG, Yang WH, Lu Y (2013) Notes on "bipolar fuzzy graphs". Inf Sci 242:113–121
Zadeh LA (1965) Fuzzy sets. Inf Control 8:338–353
Zhang WR (1994) Bipolar fuzzy sets and relations: a computational framework for cognitive modeling and multiagent decision analysis. In: Proceedings of IEEE conference, pp 305–309
Zhang WR (1998) Bipolar fuzzy sets. In: Proceedings of Fuzzy-IEEE, pp 835–840
Both authors have significant contributions to this paper and the final form of this paper is approved by both of them. Both authors read and approved the final manuscript.
The authors wish to express their sincere gratitude to the Editor-in-Chief and the anonymous referees for their valuable comments and helpful suggestions.
Department of Applied Mathematics with Oceanology and Computer Programming, Vidyasagar University, Midnapore, 721 102, India
Ganesh Ghorai
& Madhumangal Pal
Correspondence to Ganesh Ghorai.
Ghorai, G., Pal, M. Some isomorphic properties of m-polar fuzzy graphs with applications. SpringerPlus 5, 2104 (2016) doi:10.1186/s40064-016-3783-z
Accepted: 01 December 2016
m-Polar fuzzy graphs
Order and size
Busy and free vertices
Isomorphisms
Self complement and weak self complement
5-Polar fuzzy evaluation graph | CommonCrawl |
communications earth & environment
Rupture of wet mantle wedge by self-promoting carbonation
Atsushi Okamoto (ORCID: orcid.org/0000-0001-5757-6279),
Ryosuke Oyanagi (ORCID: orcid.org/0000-0002-2930-1064),
Kazuki Yoshida,
Masaoki Uno (ORCID: orcid.org/0000-0002-6182-0577),
Hiroyuki Shimizu &
Madhusoodhan Satish-Kumar (ORCID: orcid.org/0000-0003-3604-7399)
Communications Earth & Environment volume 2, Article number: 151 (2021)
More than one teramole of carbon per year is subducted as carbonate or carbonaceous material. However, the influence of carbonation/decarbonation reactions on seismic activity within subduction zones is poorly understood. Here we present field and microstructural observations, including stable isotope analyses, of carbonate veins within the Higuchi serpentinite body, Japan. We find that the carbon and oxygen isotope compositions of carbonate veins indicate that carbonic fluids originated from organic materials in metasediments. Thermodynamic calculations reveal that carbonation of serpentinite was accompanied by a solid volume decrease, dehydration, and high magnesium mobility. We propose that carbonation of the mantle wedge occurs episodically in a self-promoting way and is controlled by a solid volume contraction and fluid overpressure. In our conceptual model, brittle fracturing and carbonate precipitation were followed by ductile flow of carbonates and hydrous minerals; this might explain the occurrence of episodic tremor and slip in the serpentinized mantle wedge.
Subduction zones transport surface materials deep into Earth's interior. Metamorphism of subducted lithosphere releases water and CO2 to the overlying accretionary prism or mantle wedge1,2,3,4,5,6. Subduction zone fluids contain various chemical species and have a wide range of pH that varies in response to P–T conditions7,8,9. Large gradients in temperature and chemical potential along the subduction interface generate ideal conditions for mineral dissolution, precipitation, and metasomatism4,10,11,12. Such chemical reactions can change mechanical strength, permeability, and fluid pressure, and thus influence the rheological and seismological characteristics of subduction zones10. For example, silica precipitation may control the recurrence periods of ordinary earthquakes13 and slow earthquakes within subduction zones14.
Along the slab–mantle interface below the forearc Moho, hydration of mantle peridotite forms serpentine minerals and hydrous metasomatic minerals such as talc, amphibole, phlogopite, and chlorite4,10,15,16,17,18. Hydration can lower fluid pressure, and the plastic flow or frictional sliding of hydrous minerals results in steady-state slip18. Therefore, the downdip limit of large earthquakes (seismogenic zone) can be defined either by the brittle–ductile transition in crustal rocks (i.e., 350–400 °C) or the intersection of the subduction thrust and the forearc Moho10,19. Slow slip events (SSEs) are observed at depths corresponding to the transition between the seismogenic zone and the deeper stably sliding zone20,21,22,23. Slow slip events include long-term slip events that occur in relatively shallow parts of the subduction interface, and episodic tremor and slip (ETS) in relatively deep parts. In warm subduction zones such as Nankai and northern Cascadia, ETS is abundant in the corner of the mantle wedge20,21,22,23, a region that could be dominated by serpentinite formed by fluids released from the subducting slab. Several geological models involving chemical reactions have been proposed to explain ETS, including those that involve silica precipitation14,24, the formation of serpentine–brucite assemblages25, and dehydration-induced heterogeneity within eclogitic oceanic crust26. Metasomatic reaction between ultramafic rocks and crustal rocks has also been proposed as a mechanism to explain ETS27. Because most metasomatic reactions that occur in serpentinized mantle release liquid water10,12,27, mechanical instabilities might be generated by fluid overpressure. However, the build-up of fluid overpressure is controlled by various factors including the relative rates of fluid generation, pore generation, and pore collapse28,29, and the geological mechanisms controlling the interplay between these factors in the ETS source region remain debated.
Estimates of the carbon budget in subduction zones suggest that more than one teramole of carbon is subducted each year as carbonates or carbonaceous materials1,2,30. The nature of carbon-bearing fluids (e.g., carbon dioxide, methane) and their interaction with rocks are highly sensitive to redox conditions31. Graphite has been considered a sink of carbon within the subducting slab due to its low solubility in fluids31. However, recent experimental and thermodynamic modeling studies have revealed that graphite solubility is enhanced by pH and dissolved silica32, and that the dissolution of carbonaceous materials in sediments plays an essential role in generating carbon-bearing fluids in subduction zones33. Carbon-bearing fluids are also produced by the decomposition of carbonates via infiltration of H2O-rich fluids1,2,5 and by fluid-induced dissolution of carbonate minerals coupled with precipitation of silicate minerals11. In particular, mantle peridotite has the potential to influence deep carbon cycling by acting as a voluminous sink of CO2. Carbonation of exhumed oceanic mantle and carbon storage within ophicarbonate rocks are commonly reported34, whereas the behavior of carbon-bearing fluids in subduction-related serpentinites is more complex35,36,37,38,39. When carbonates are in contact with serpentinite, graphite is often formed due to the relatively reducing conditions associated with serpentinite35,36. In contrast, reports of high-pressure carbonated serpentinites related to subduction zones37,38,39 highlight the potential for long-term CO2 sequestration in the subducting slab and mantle wedge, even though some studies suggest that most subducted carbonate is recycled back to the surface5. In addition, carbonation and decarbonation reactions can induce changes to the mechanical properties of mantle rocks. For example, infiltration of reducing fluids can promote strain localization in carbonated serpentinites, implying that carbonic fluids could have an influence on earthquake processes in subduction zones36. Experiments involving in situ carbonation of antigorite indicate volume contraction during this reaction40. These results indicate that carbonation of the mantle wedge potentially affects the behavior of subduction zone thrusts. However, our understanding of carbonation processes in the mantle wedge is limited by a lack of well-characterized examples from ancient exhumed subduction zones. Carbonated serpentinites from the high-pressure (HP) Sanbagawa metamorphic belt in Japan, which represents part of an exhumed Cretaceous subduction zone, provide a unique opportunity to understand the mechanisms of carbonation within a mantle wedge corner under P–T conditions similar to those at which active ETS is reported.
Carbonation of serpentinite associated with brittle fracturing
The Sanbagawa belt is a HP metamorphic belt that extends ~800 km across Japan along the Median Tectonic Line, from the Kanto Mountains in the east to Kyushu in the west (Supplementary Fig. 1)41. It is composed mainly of metasediments and metabasalts that formed during Cretaceous subduction of an oceanic plate. The metamorphic belt also contains meter- to kilometer-scale ultramafic blocks including mantle peridotites, serpentinites, and tremolite-rich rocks42. The mineral compositions of these ultramafic blocks, and their restricted distribution to zones of higher metamorphic grade than the chlorite zone, indicate that they originated in the mantle wedge25,42,43. The Higuchi serpentinite body (15 × 8 m; Fig. 1a) is located in the Kanto Mountains near the boundary between the garnet and chlorite zones (36°07′30.7″N 139°07′00.0″E; Supplementary Fig. 1). Raman analysis of carbonaceous materials indicates peak temperatures of 400–450 °C in this area44, and mineral phase equilibria suggest peak pressures of ~0.5–0.9 GPa45. The long axis of the serpentinite body is subparallel to a mineral lineation in the surrounding pelitic schists. The Higuchi body is composed of massive or foliated antigorite (Fig. 1b, c) that is cut by dense networks of multi-generational carbonate(s) + talc veins (Fig. 1a–e and Supplementary Fig. 2), including magnesite (Fig. 1c, d), dolomite (Fig. 1a, b), and dolomite + calcite (Fig. 1e). The carbonate veins propagate from the margin to the center of the body in a branching network (Supplementary Fig. 2). Along contacts with serpentinite (Fig. 1f), the pelitic schists are progressively converted to layers of chlorite rock ~50 cm wide that preserve bands containing primary metamorphic graphite, and then to lenses of actinolite + chlorite schist up to ~30 cm wide (Supplementary Fig. 3).
Fig. 1: Field characteristics of the carbonated serpentinite body in Higuchi, Sanbagawa belt, Japan.
a Drone photograph and schematic illustration (at right) of the Higuchi serpentinite body with carbonate veins. Open rectangles indicate the locations of b, f, and e, respectively. b Dolomite veins cut through massive serpentinite blocks. c Layered dolomite veins that have experienced shear deformation. d Magnesite + talc veins. e Thick dolomite + calcite veins associated with fragments of serpentinite blocks. f Chlorite rock and actinolite–chlorite schist at the boundary between pelitic schist and the Higuchi serpentinite body. Yellow and pink arrows indicate relict bands of graphite and quartz-rich material, respectively. Mgs magnesite, Dol dolomite, Cal calcite, Chl chlorite, Act actinolite.
Blocks of massive serpentinite (Fig. 1b) are composed of randomly oriented antigorite grains (Fig. 2a, b) with minor amounts of Cr-rich spinel. Olivine, pyroxenes, and brucite are absent. Cr-rich spinel is commonly altered to magnesiochromite (Fig. 2c and Supplementary Table 1), but some Cr-rich spinel grains in the serpentinite and actinolite–chlorite schists retain unaltered cores with XCr (= Cr/(Cr+Al)) = 0.52–0.58 and XMg (= Mg/(Mg+Fe)) = 0.55–0.64. Such depleted spinel compositions in the Higuchi serpentinite are similar to those in forearc peridotites46 and those of other ultramafic bodies in the Sanbagawa belt43,47 (Fig. 2c). Carbonate veins in the Higuchi body are composed mainly of dolomite and magnesite, with lesser amounts of calcite. Magnesite commonly occurs as patches 0.2–2.0 mm in size within antigorite blocks (Fig. 2a, d, e), often accompanied by networks of talc veins <1 mm thick. Magnesite–talc layers also occur along foliation surfaces (Fig. 2a), and infill the spaces between asymmetric antigorite blocks produced by brittle shear deformation (Fig. 2e). Dolomite + talc veins are the most distinct veins in the serpentinite body because they are relatively thick (>2 cm) and can be >3 m long (Figs. 1b, c and 2f). Talc occurs along the margins of these veins (Fig. 2f). Some thick dolomite veins show a layered structure including enclaves of serpentinite (Fig. 1b, c, e). The asymmetry of serpentine fragments, the presence of asymmetric folds (Fig. 1c), and the presence of several layers composed of dynamically recrystallized dolomite grains cutting coarse-grained dolomite veins (Fig. 2g) indicate that vein formation and the development of localized shear zones were repetitive processes. In some relatively thick veins, euhedral dolomite crystals grew in the center of the veins, and anhedral calcite crystals filled the intervening pore spaces (Fig. 2h). The dolomite crystals in contact with calcite have higher-Fe rims (Supplementary Fig. 4), suggesting that such dolomite rims were in equilibrium with calcite (Supplementary Table 1). Application of calcite–dolomite solvus thermometry48 to these veins indicates a carbonation temperature of 380–400 °C (Supplementary Fig. 4).
Fig. 2: Microstructure and chemistry of minerals in the Higuchi serpentinite body.
a Photomicrograph (crossed polarized light) showing massive antigorite and magnesite patches associated with talc vein networks. b Detail of antigorite showing random grain orientations. c XCr (= Cr/(Cr+Al)) vs. XMg (= Mg/(Mg+Fe2+)) for Cr-rich spinel in the serpentinite body (OY, 04C) and actinolite–chlorite schist (03B), compared with data from abyssal peridotites and Mariana forearc peridotites46. Serpentinized peridotites in the Sanbagawa belt, central Shikoku43,47 (references therein): SH Shiraga body, HA Higashi Akaishi body, IM Imono body. d Mineral map of magnesite patches rimmed by talc. Location of map shown in a. e Asymmetric antigorite blocks infilled by magnesite + talc produced by brittle shear deformation. f Dolomite veins with talc rims from the outcrop in Fig. 1b. g Dynamically recrystallized dolomite grains within a localized shear zone from a coarse-grained dolomite vein shown in Fig. 1c. h BSE image of a thick dolomite–calcite vein. Euhedral dolomite crystals with bright rims occur along the margin of the vein, and anhedral calcite occurs in the center of the vein. The compositional profile of dolomite along line A-B is shown in Supplementary Fig. 3. i Isotopic compositions (δ13CPDB and δ18OSMOW) of carbonates (magnesite, dolomite, and calcite) in veins from the Higuchi serpentinite, compared with marbles from central Shikoku and the compositions of calcite in basic schists and pelitic schists, and associated veins in the Nagatoro area50.
Sources and compositions of carbonic fluids
Carbonic fluids in subduction zones are commonly sourced from carbonates in seafloor sediments, hydrothermal alteration of basaltic oceanic crust1,2,30, or organic materials in sediments30,33. In the Sanbagawa belt, the rare marbles have \({\delta }^{13}C\) values of 0.4–2.8‰, typical of marine limestones49 (Fig. 2i). The stable isotope compositions of calcite in pelitic and basic schists, and related veins, have relatively constant \({\delta }^{18}O\) values (15–17‰) and a large variation in \({\delta }^{13}C\) (−12–2‰), reflecting multiple sources of CO2 including oxidation of in situ biogenic carbonaceous material and metasomatic processes50. In contrast, carbonates in the Higuchi serpentinite body show a relatively narrow range of \({\delta }^{13}C\) (–10.3‰ to –9.3‰), with the exception of one dolomite sample (–7.6‰), and a wide range of \({\delta }^{18}{O}\) between 17.0‰ and 20.2‰. These C–O isotope data indicate that (1) CO2 in the fluids that carbonated the Higuchi serpentinite body was not derived from limestones, but from the degradation of organic material or carbonates derived from methanotrophic processes within the pelites, and (2) carbonic fluids were probably mixed with H2O produced by dehydration of serpentinite during carbonation reactions, such as:51,52
$$\underset{\text{antigorite}}{\mathrm{Mg_{48}Si_{34}O_{85}(OH)_{62}}}+48\,\mathrm{CO_2(aq)}\rightarrow \underset{\text{magnesite}}{48\,\mathrm{MgCO_3}}+\underset{\text{quartz}}{34\,\mathrm{SiO_2}}+31\,\mathrm{H_2O},$$
$$\underset{\text{antigorite}}{2\,\mathrm{Mg_{48}Si_{34}O_{85}(OH)_{62}}}+45\,\mathrm{CO_2(aq)}\rightarrow \underset{\text{magnesite}}{45\,\mathrm{MgCO_3}}+\underset{\text{talc}}{17\,\mathrm{Mg_3Si_4O_{10}(OH)_2}}+45\,\mathrm{H_2O}.$$
Possible tectonic setting of carbonation and serpentinization
Geological field relationships, including the chemical compositions and P–T conditions recorded by HP metamorphic rocks, indicate that the Higuchi body experienced serpentinization and carbonation at depths of 20–35 km, comparable to conditions in the corner of the mantle wedge in active warm subduction zones such as the Nankai subduction zone in SW Japan22 (Fig. 3a). Our observations suggest that the toe of the forearc mantle was initially serpentinized without macroscopic fracturing (Fig. 3b), followed by local carbonation associated with intense fracturing (Fig. 3c). The metasomatic sequences observed in the Higuchi body progress from the pelitic schists to the interior of the serpentinites as follows: Chl (after pelitic schist)/Act + Chl/Cal + Dol/Dol + Talc/Mgs + Talc (after serpentinite) (Fig. 3c). Mass balance analyses reveal that chloritization of the pelitic schists was characterized by gains in MgO and FeO, and losses of SiO2 and H2O at nearly constant Al2O3, resulting in an overall ~35% reduction in solid volume (Supplementary Table 2 and Supplementary Fig. 5). In contrast, although the quantitative mass balance analysis of the carbonation of the serpentinite body is difficult due to the heterogeneous distribution of carbonate veins, the carbonation reactions were characterized by the formation of carbonates (magnesite, dolomite, or calcite) + talc at the expense of antigorite, and accompanied by gains in CO2, SiO2, and CaO, and losses of H2O and MgO (Fig. 3c).
Fig. 3: Schematic illustration of carbonation in the mantle wedge.
a Subduction zone setting at the leading edge of the mantle wedge. b Pervasive serpentinization at the leading edge of the mantle wedge is associated with the release of H2O from the subducting slab. c Localized carbonation of serpentinized mantle wedge and serpentinite blocks associated with production of carbonic fluids.
Progress of carbonation assisted by reaction-induced fracturing and Mg-mobility
We conducted thermodynamic modeling of the interactions between pelite-derived fluids and serpentinite at 400 °C and 0.5 GPa with variable fluid–rock ratios (F/R ratio; see Methods section). The initial fluids were assumed to be in equilibrium with the graphite-bearing pelitic schists at various oxygen fugacities, fO2 (Supplementary Fig. 6). For graphite-saturated fluids, the atomic fraction of oxygen to oxygen + hydrogen in the initial fluid, XO, is determined for each fO253.
At 400 °C and 0.5 GPa, graphite-bearing assemblages appear at high F/R ratios when antigorite reacts with relatively reducing fluids (Fig. 4a). This is because the solubility of graphite decreases slightly with the addition of antigorite to reducing fluids, as the precipitation of clinopyroxene and tremolite reduces the concentration of Ca-bearing carbonic aqueous complexes, such as Ca(HCOO)+, Ca(HCO3)+, and CaCO3,aq. In cases with initial fluids at around the quartz–fayalite–magnetite (QFM) buffer (fO2 = –28.1, XO = 0.337), relatively common metasomatic minerals (tremolite, chlorite, talc)12 appear along the interfaces between ultramafic rocks and metasediments at log [F/R] of 1.5–2.5.
Fig. 4: Results of thermodynamic calculations of the interaction between pelitic-schist-derived fluids and antigorite at 400 °C and 0.5 GPa.
a Stable mineral assemblage as a function of fluid/rock mass ratio (F/R) and of the oxygen fugacity, fO2, and atomic fraction of oxygen to oxygen + hydrogen, XO, of the initial solution. Pale blue and pink shaded regions indicate carbonate- and graphite-bearing assemblages, respectively. Red dashed lines indicate QFM + 0 (XO = 0.337) and QFM + 0.3 (XO = 0.344) of the initial solution. b–f Results in the case of log fO2 = –27.8 (QFM + 0.3, XO = 0.344) of the initial solution as a function of log [F/R]. b Mole percent of product minerals. c Schematic illustration showing the relationship between the F/R ratio and mineralogical evolution of the Higuchi serpentinite body. d Change in moles of H2O (\(\triangle\)H2O) and CO2 (\(\triangle\)CO2) in the rock before and after reaction. e Total concentration (mol/kg) of individual elements (Si, Al, Fe, Mg, Ca, Na, K) in fluids. f Solid volume ratio (Vsolid/Vsolid,0) and total volume ratio (Vtotal/Vtotal,0) before and after reaction. Kfs K-feldspar, Bt biotite, Cpx clinopyroxene, Tr tremolite, Chl chlorite, Qtz quartz, Cal calcite, Dol dolomite, Mgs magnesite, Gr graphite, Pl plagioclase, Pa paragonite, Mag magnetite.
Typical carbonate-bearing mineral assemblages are found in cases with initial fluids at around QFM + 0.3 (fO2 = –27.8, XO = 0.344; Fig. 4a, b). At log [F/R] > ~2.5, plagioclase, calcite, chlorite, and quartz appear, which are typical minerals in veins within the Sanbagawa pelitic schists54. With an increase in the proportion of antigorite, the mineral assemblage evolves as follows: Chl ± Qtz ± Cpx/Chl + Tr/Cal + Talc/Dol + Talc/Mgs + Talc/Mgs + Talc + Atg (Fig. 4b, c). Such a mineralogical sequence reflects the fluid-dominated system at the boundary of the serpentinite body and close to the large veins, and the rock-dominated system in the interior of the serpentinite body with a fine vein network (Fig. 4c) observed within the Higuchi serpentinite body (Figs. 1–2). At log [F/R] < 2.0, H2O is released by tremolite, and talc and carbonates form at the expense of antigorite, while CO2 is consumed (Fig. 4d). Carbonaceous material in sedimentary rocks is initially poorly crystalline, and its crystallinity increases during prograde metamorphism in subduction zones. Therefore, relatively disordered graphite exists in the metapelites around the Higuchi body44. In the presence of disordered graphite, the CO2 concentration in the input solution could have been greater than those estimated by our thermodynamic calculations (Fig. 4d), which assumed the presence of crystalline graphite33. This could result in the formation of larger amounts of carbonates at the same fO2 conditions in the initial solution.
With a decrease in the F/R ratio (log [F/R] < 1.5; i.e., representative of the interior of the serpentinite body), carbonates + talc form by consumption of antigorite. pH increases and fO2 slightly decreases (Supplementary Fig. 7), as reported in previous studies35,36,55. Si is the dominant component in the initial fluid, as it is saturated with quartz (Fig. 4e). The aqueous CO2 species dominantly exists as the MgOSi(OH)2(HCO3)+ complex (Supplementary Fig. 7). Such effects of SiO2 on the enhancement of carbon solubility have been inferred from dissolution experiments on forsterite + enstatite + graphite32. As the amount of antigorite increases, the concentration of Si decreases and the concentration of Mg increases (Fig. 4e). For example, at log [F/R] = 1.0, the overall mass balance for the carbonation of serpentinite is:
$$\begin{aligned} &\underset{\text{antigorite}}{\mathrm{Mg_{48}Si_{34}O_{85}(OH)_{62}}} + 1.82\,\mathrm{SiO_2(aq)} + 0.07\,\mathrm{Al_2O_3(aq)} + 0.22\,\mathrm{CaO(aq)} + 12.99\,\mathrm{CO_2(aq)}\\ &\quad = \underset{\text{talc}}{8.90\,\mathrm{Mg_3Si_4O_{10}(OH)_2}} + \underset{\text{magnesite}}{12.54\,\mathrm{MgCO_3}} + \underset{\text{dolomite}}{0.22\,\mathrm{MgCa(CO_3)_2}} + \underset{\text{chlorite}}{0.07\,\mathrm{Mg_5Al_2Si_3O_{11}(OH)_7}}\\ &\qquad + 8.18\,\mathrm{MgO(aq)} + 21.85\,\mathrm{H_2O}. \end{aligned}$$
In reaction 3, a small amount of chlorite is formed. Chlorite is not found in the Higuchi serpentinite, but Al-rich antigorite often occurs with talc. Interestingly, the Mg concentration at low log [F/R] (<~1.5), which exists dominantly as Mg(OH)2,aq (Supplementary Fig. 7), is higher than the Si concentration in the initial fluid, implying relatively high mobility of Mg in the mantle wedge. Moreover, at low F/R where talc + carbonates are formed (Fig. 4f), the solid volume decreases (Vsolid/Vsolid,0 = 0.7–1.0), whereas the total volume (solid + fluid) increases (Vtotal/Vtotal,0 = 1.000–1.004). Observations from recent laboratory experiments conducted under forearc mantle conditions40 support our modeling and field observations, suggesting that the progressive formation of magnesite + talc is associated with a solid volume decrease. Although the fluid–mineral equilibria and volume changes were computed under isobaric conditions (Fig. 4f), the values of Vtotal/Vtotal,0 > 1 at low F/R suggest that carbonation reactions of serpentinized mantle tend to result in a fluid pressure rise when the system is undrained.
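The stoichiometry of reaction 3 can be checked element by element. A small Python sketch (not from the original paper; element counts hand-tallied from the formulas above, with the ~0.01 mol residuals reflecting the rounding of the published coefficients):

```python
# Element counts per formula unit, tallied from reaction 3.
ATG = {'Mg': 48, 'Si': 34, 'O': 85 + 62, 'H': 62}        # Mg48Si34O85(OH)62
TLC = {'Mg': 3, 'Si': 4, 'O': 10 + 2, 'H': 2}            # Mg3Si4O10(OH)2
MGS = {'Mg': 1, 'C': 1, 'O': 3}                          # MgCO3
DOL = {'Mg': 1, 'Ca': 1, 'C': 2, 'O': 6}                 # MgCa(CO3)2
CHL = {'Mg': 5, 'Al': 2, 'Si': 3, 'O': 11 + 7, 'H': 7}   # Mg5Al2Si3O11(OH)7
SIO2, AL2O3 = {'Si': 1, 'O': 2}, {'Al': 2, 'O': 3}
CAO, CO2 = {'Ca': 1, 'O': 1}, {'C': 1, 'O': 2}
MGO, H2O = {'Mg': 1, 'O': 1}, {'H': 2, 'O': 1}

lhs = [(1.00, ATG), (1.82, SIO2), (0.07, AL2O3), (0.22, CAO), (12.99, CO2)]
rhs = [(8.90, TLC), (12.54, MGS), (0.22, DOL), (0.07, CHL), (8.18, MGO), (21.85, H2O)]

def tally(side):
    """Total moles of each element on one side of the reaction."""
    out = {}
    for coeff, species in side:
        for el, n in species.items():
            out[el] = out.get(el, 0.0) + coeff * n
    return out

L, R = tally(lhs), tally(rhs)
for el in sorted(set(L) | set(R)):
    print(f'{el}: {L.get(el, 0):7.2f} vs {R.get(el, 0):7.2f}')
# e.g. Mg: 48.00 vs 47.99 and Si: 35.82 vs 35.81 — balanced within rounding.
```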
Fracturing induced by volume-changing reactions
To understand the effects of the solid volume change and fluid pressure increase on fracturing, we conducted numerical simulations of coupled fracturing, reaction, fluid flow, and element diffusion using a distinct element method56 (see Methods section). We consider a simple metasomatic dehydration reaction between a serpentinite body and a matrix of pelitic schist (Fig. 5a), in the cases of dilation (Fig. 5b) and contraction (Fig. 5c). In the model, the reaction proceeds along the margins and fractures within the serpentinite body in response to diffusive flux of metasomatic agents (i.e., CO2 species or silica), which are saturated in the pelitic schist. In both cases of dilation and contraction, fluid pressure increases within the serpentinite body, but different fracture patterns are produced depending on the volume change. In the case of dilation, radial cracks develop preferentially within the surrounding matrix (Fig. 5b, d and Supplementary Fig. 10a). In contrast, fracture networks are preferentially developed within the serpentinite body in the case of contraction (Fig. 5c, d and Supplementary Fig. 10b), and these networks develop from the margin with a branching structure, similar in geometry to the carbonate veins in the Higuchi body (Fig. 1a–e and Supplementary Fig. 2). These contrasting fracture patterns are consistent with previous numerical simulations56,57,58,59 and laboratory experiments28,40,58. Similar fracture patterns are also reproduced (Supplementary Fig. 11) even when fracturing is simulated in models of metasomatic reaction with dilation or contraction but without fluid flow and dehydration. The thermodynamic calculations (Fig. 4) and DEM simulations (Fig. 5) indicate that volume contraction is likely to be the main cause of fracturing during carbonation. Euhedral dolomite grains that are interpreted to have grown in open space (Fig. 2h) indicate that fluid overpressures sustained open cracks, and could have assisted the propagation of fractures. Fluid overpressure could build up when the rate of fluid production is greater than that of fluid escape through the pelitic schist60. The permeability of the pelitic schist in the Sanbagawa belt is estimated to be 2.1 × 10−20 m2 under a confining pressure of 200 MPa61, and the porosity of the pelitic schist could be low and similar to that of chlorite schist in subduction zones (0.01%–0.2%)62. Such low porosity and permeability in the pelitic schist, and the relatively high rate of carbonation with dehydration (several [tens of] percent carbonation of an antigorite block in a few days)40, could result in fluid overpressure within the serpentinite body, as discussed for the dehydration of serpentinite60. The volume contraction produces tensile cracks at an isotropic effective confining stress (Fig. 5), but mechanical instabilities associated with volume changes and fluid overpressure during reaction can trigger earthquake ruptures at high differential stresses60,63.
Fig. 5: Representative results of numerical simulations, using distinct-element techniques, of reaction-induced fracturing during carbonation of a serpentinite body in a non-reactive matrix.
a Boundary conditions used in the model. b, c Snapshots of the reaction ratio (left), fluid pressure distribution (Pf−Pmin) with the fracture pattern (middle) and the concentration of species S (right) during reaction progress. b Metasomatic dehydration with dilation at an average reaction ratio, ξAv = 17.3%, and a volume strain εv = 0.0076; and c metasomatic dehydration with contraction at ξAv = 5.5%, and the volume strain εv = –0.005. d The crack density (percentage of broken bonds with respect to total bonds) in the reactive material domain and non-reactive matrix for dilation and contraction reactions, respectively.
Consequences of heterogeneous and episodic carbonation in the mantle wedge
In the Sanbagawa belt, carbonation of some ultramafic blocks has occurred47, but the distribution and extent of carbonation reactions are restricted40,47. In the Higuchi body, carbonates were not formed during the initial stages of serpentinization (to form antigorite). These features suggest that in contrast to the relatively homogeneous serpentinization process (Fig. 3b), carbonation in the mantle wedge may involve rupturing that is heterogeneous in time and space (Fig. 3c). The reactions involving carbonic fluids are influenced by redox conditions. It is well known that peridotite has a high reducing potential64,65, but the redox conditions in a fully serpentinized body could be modified by fluid–rock interactions after serpentinization66. In the case of the Higuchi body, carbonation did not occur in the initial stage of serpentinization (massive antigorite; Fig. 2a, b), and chloritized pelitic schists at the boundary were not significantly depleted in graphite (Supplementary Figs. 3a and 5). Therefore, it is unlikely that the CO2 fluid was produced only near the serpentinite body under oxidizing conditions. Based on analysis of the Higuchi body, we infer that carbonation of the mantle wedge can be induced by episodic ingress of carbonic fluids that may be created by oxidation of carbonaceous materials with fluids passing through subducted oxidized layers, including hematite-bearing mafic schists and bedded manganese deposits67,68. In addition, we emphasize that once carbonic fluids reach the mantle wedge, carbonation can proceed in a self-promoting way via positive feedbacks between the reaction, volume contraction, fracturing, and transport of elements and CO2–H2O fluids (Figs. 3–5)40,56.
In the Nankai subduction zone in Shikoku, SW Japan, ETS is observed at the slab–mantle interface near the corner of the mantle wedge20,21,22,23,69. The frictional behavior of serpentine, as well as of metasomatic products such as talc, chlorite, and tremolite, is characterized by stable slip related to strain hardening70. A notable feature of carbonation within the Higuchi body is that networks of millimeter- to meter-scale carbonate–talc veins developed during the carbonation of serpentinized mantle (Figs. 1–2 and Supplementary Fig. 2). The total volume increase (fluid + solid) suggested by the thermodynamic modeling of carbonation (Fig. 4f) might cause non-double-couple earthquakes, as reported in swarm seismicity in volcanic zones71. However, the signal of ETS is consistent with shear slip on the plate interface69, and non-double-couple components are not resolvable due to the low signal-to-noise ratio. The high fluid pressure observed in the Higuchi body (Fig. 2) and suggested by the modeling (Fig. 4f) is consistent with the high Vp/Vs ratios associated with the ETS region21,22. The DEM modeling reveals that volume contraction in the presence of high fluid pressures tends to generate tensile fractures (Fig. 5 and Supplementary Fig. 10b), which may subsequently transform (or develop) into shear fractures under differential stress. The brittle shear failures observed in thin-sections and outcrops (Figs. 1b, c and 2e) are consistent with the mechanism of low-frequency earthquakes related to shear slip on the plate interface21,69. Following sealing of void spaces by carbonates, localized shear is concentrated within the talc-rich layers (Figs. 1b and 2a, e) and dolomite veins (Figs. 1c and 2f). We speculate that this kind of repeated brittle failure, followed by viscous flow, may represent an analog for the ETS that is observed within the relatively cold nose of the mantle wedge.
Measurements of stable isotope compositions of carbonate minerals
The chemical compositions of minerals were analyzed using an electron microprobe analyzer (EPMA, JEOL8200) at Tohoku University. The acceleration voltage was 15 kV, and the current was 12 nA or 120 nA for quantitative analyses and elemental mapping, respectively. Identification of serpentine and other minerals was performed using a Raman spectrometer (Horiba XploRa) equipped with an Olympus BX51 microscope at Tohoku University.
Oxygen (\({\delta }^{18}O\)) and carbon (\({\delta }^{13}C\)) isotope analyses were conducted on selected carbonate samples from the Higuchi serpentinite body. For comparison, we also analyzed the stable isotope compositions of marble samples from the Sanbagawa belt in central Shikoku. Samples were extracted from cut and polished slabs using a sharp knife, and then stained with Alizarin red-S to distinguish between calcite and dolomite. Staining with Alizarin red-S does not affect the C and O isotope ratios49. Sample powders for C–O isotope analyses were taken from different portions of each slab. Carbonate mineral (dolomite, magnesite, or calcite) powders were placed in small stainless steel thimbles and dropped into a reaction vessel containing pyrophosphoric acid at 60 °C (calcite) or 100 °C (dolomite and magnesite) in vacuum to produce CO2 gas. The released CO2 gas was purified of impurities such as H2O using a pentane slush and collected using liquid-nitrogen cold traps. Stable isotope measurements were carried out with a Thermo Fisher MAT-253 mass spectrometer at Niigata University. Results are reported in conventional per mil (‰) notation with respect to V-SMOW (Vienna-Standard Mean Ocean Water) for oxygen and V-PDB (Vienna-Peedee Belemnite) for carbon. The precisions of δ13C and δ18O for the laboratory standard CO2 gas were 0.04‰ and 0.06‰, respectively72.
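For reference, the per mil delta notation used above is a simple transform of isotope ratios relative to a standard; a minimal sketch (the V-PDB 13C/12C reference ratio below is a commonly cited value, quoted here for illustration only):

```python
def delta_per_mil(r_sample, r_standard):
    """delta = (R_sample / R_standard - 1) * 1000, in per mil (permil)."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB_13C = 0.011180  # commonly cited 13C/12C ratio of V-PDB (illustrative)
print(round(delta_per_mil(0.011074, R_VPDB_13C), 1))  # -9.5, within the range reported for the Higuchi veins
```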
Thermodynamic model of fluid–rock interaction
Thermodynamic calculations were carried out in the system Na–K–Ca–Fe–Mg–Al–Si–Cl–C–O–H using the Deep Earth Water (DEW) model73,74 and the software EQ3/675 with a modified Berman thermodynamic dataset76. This updated thermodynamic dataset includes H2CO30, HCO3–, and various complexes related to bicarbonic acid (Na(HCO3)0, Ca(HCO3)+, and MgOSi(OH)2(HCO3)+). For this reason, the thermodynamic calculations are not restricted to the H2O-rich system but can also model CO2-rich fluids involved in carbonation74. We treated solid solutions as ideal mixing between Mg and Fe endmembers for chlorite, talc, tremolite, biotite, and clinopyroxene, and between albite and anorthite for plagioclase. We first created an input solution using EQ3, in equilibrium with the observed mineral assemblage in the pelitic schists: muscovite + chlorite (XMg = 0.6) + quartz + albite + clinozoisite + calcite + graphite, at 400 °C and 0.5 GPa54,77. This mineral assemblage represents the most likely fluid source for the metasomatic and carbonation reactions within the serpentinite. The pressure of carbonation in the Higuchi serpentinite was not determined, but probably ranges between 0.5 and 0.9 GPa based on the P–T conditions in the chlorite and garnet zones45. At 400 °C, there is no immiscibility of CO2–H2O fluids below 1 GPa78. The log oxygen fugacity of the input solutions, log[fO2], ranged from –29.0 to –27.5, corresponding to ΔQFM (deviation from the quartz–fayalite–magnetite buffer) from –0.8 to +0.7 in log units79.
$${X}_{O}=\frac{{n}_{O}}{{n}_{O}+{n}_{H}}.$$
where nO and nH are the number of moles of oxygen and hydrogen in the fluids, respectively53. The log[fO2] range of the initial solution corresponds to XO from 0.318 to 0.354. EQ6 was then used to model the interactions (thermodynamic equilibria) between the input solution and serpentinite composed of 100% antigorite (Fig. 4). We calculated log fluid/rock mass ratios from –2.0 to 4.0. To obtain the stable mineral assemblage at various F/R and fO2 conditions for the input solution (Fig. 4a and Supplementary Fig. 6), a shell script was written to automatically generate EQ3/6 run input files with various fO2 conditions, and the output files generated by EQ3/6 were further processed with an author-generated MATLAB® script.
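The XO definition above is trivial to evaluate; a sketch (the second call uses a hypothetical O/H budget chosen only to land within the quoted XO range):

```python
def x_o(n_o, n_h):
    """Atomic fraction of oxygen relative to oxygen + hydrogen in the fluid."""
    return n_o / (n_o + n_h)

print(x_o(1.0, 2.0))   # 0.333..., i.e., pure H2O
print(x_o(1.03, 2.0))  # ~0.340, between the QFM (0.337) and QFM + 0.3 (0.344) values quoted above
```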
The changes in CO2 and H2O during the reactions (Fig. 4d) are calculated by determining the changes in hydrous minerals and carbonates. The volume change ratio of the solids (Vsolid/Vsolid,0; Fig. 4f) was calculated using the volumes of the product minerals and the consumed antigorite. The volume change ratio of solids and fluids (Vtotal/Vtotal,0) was calculated from the volumes of the product minerals, the consumed antigorite, and the changes in the amount of fluid. The molar volume of the fluids was obtained from the sum of the concentrations of carbonic species. With decreasing log [F/R], the XCO2 value decreases from 2.30 × 10–2 to 1.88 × 10–2.
We also undertook the same calculation at 400 °C and 1.0 GPa (Supplementary Fig. 8). We found that the topology of the stable mineral assemblage in a plot of log [fO2] vs. log [F/R] is largely similar to the case at 400 °C and 0.5 GPa (Fig. 4a), except that aragonite is stable instead of calcite. The chemistry of the initial solution calculated by EQ3 is also largely consistent with that calculated by Perple_X version 6.9.1.80 with the thermodynamic data of Holland and Powell81 (Supplementary Fig. 9).
Distinct element method to model metasomatic dehydration reactions
We conducted two-dimensional distinct-element numerical simulations to investigate fracture patterns induced by volume-changing dehydration reactions, following the methods of Okamoto & Shimizu56 with slight modifications to incorporate element diffusion. Okamoto & Shimizu56 treated the coupled processes of reaction (dehydration/hydration), fluid flow, and fracturing. The model consists of an aggregate of circular elements connected by elastic bonds. When the external force exceeds the tensile or shear strength of a bond, the bond is broken to form a microcrack. To treat fluid flow, we calculate the fluid pressure in each domain, which is defined by the regions surrounded by connected elements. The fluid flow in a channel is calculated by the Poiseuille equation and involves the crack aperture (w), the fluid viscosity (\(\mu\)), the length of the flow channel, and the fluid pressure difference between adjacent domains (\(\triangle\)Pf). Okamoto & Shimizu56 considered a simple hydration/dehydration reaction, Mineral A + H2O = Mineral B, with the reaction rate assumed to be a linear function of fluid pressure. They showed that contrasting fracture patterns are produced in response to solid volume changes rather than fluid pressure, similar to other studies57,58,59. Here, as a simplification of the carbonation of serpentinite, we consider a simple metasomatic dehydration reaction:
$${{{{{\mathrm{Mineral}}}}}}\,{{{{{\mathrm{A}}}}}}+{{{{{\mathrm{Aqueous}}}}}}\,{{{{{\mathrm{species}}}}}}\,{{{{{\mathrm{S}}}}}}={{{{{\mathrm{Mineral}}}}}}\,{{{{{\mathrm{B}}}}}}+{{{{{{\mathrm{H}}}}}}}_{2}{{{{{\mathrm{O}}}}}}$$
where the aqueous species S represents metasomatic agents such as CO2 species and SiO2. The reaction is characterized by the volume change factor (the volumetric ratio of a 100% reacted particle to an unreacted particle), the ratio of the changes in fluid and particle volume (fluid volume factor), and the ratio of the change in the amount of species S with respect to the particle volume change (solute factor). For this study, we used volume factors of 1.1 (dilation) or 0.9 (contraction), a fluid factor of –0.1, and a solute factor of 1.0. The reaction rate, Z, is defined as a function of the concentration of species S, Cs, as follows:
$$Z={Z}_{{{{{{\mathrm{max}}}}}}}(1-({C}_{{{{{{\mathrm{s}}}}}},{{{{{\mathrm{max}}}}}}}-{C}_{{{{{{\mathrm{s}}}}}}})/({C}_{{{{{{\mathrm{s}}}}}},{{{{{\mathrm{max}}}}}}}-{C}_{{{{{{\mathrm{s}}}}}},{{{{{\mathrm{min}}}}}}}\,))$$
Cs,max and Cs,min represent maximum and minimum concentrations of the species S in the system: the reaction rate is greatest (Zmax) at Cs = Cs,max, and the reaction stops at Cs = Cs,min. In addition to advective transport with H2O, we consider diffusional transport of aqueous species S as a function of the concentration gradient of S in each domain.
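As an illustration of this rate law (a sketch only; Zmax is a model parameter, and Cs,max = 1.0 and Cs,min = 0.0 follow the model setup described below), the linear dependence of Z on Cs can be coded directly:

# Sketch of Z = Z_max * (1 - (C_max - C_s) / (C_max - C_min)).
def reaction_rate(c_s: float, z_max: float,
                  c_max: float = 1.0, c_min: float = 0.0) -> float:
    """Linear rate law: z_max at c_s = c_max, zero at c_s = c_min."""
    return z_max * (1.0 - (c_max - c_s) / (c_max - c_min))

print(reaction_rate(1.0, z_max=1e-3))  # fastest rate: 0.001
print(reaction_rate(0.5, z_max=1e-3))  # half rate: 0.0005
print(reaction_rate(0.0, z_max=1e-3))  # reaction stops: 0.0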
The values of the parameters used in this study are summarized in Supplementary Table 3. This study used a 10 × 10 m square rock model that contains 4357 particles with diameters of 50–100 mm. This rock specimen initially contains a reactive mineral domain composed of antigorite (analogous to serpentinized mantle) and a non-reactive matrix composed of quartz (analogous to metasediment). Here we consider that the species S is saturated within the non-reactive matrix. The physical properties (particle density, ρ, and Young's modulus, E) of the matrix material are based on quartz (ρ = 2650 kg/m3; E = 140 GPa), whereas the reactive mineral properties change from antigorite (ρ = 2600 kg/m3; E = 115 GPa) to a mixture of dolomite (95%) + talc (5%) (ρ = 2830 kg/m3; E = 110 GPa), following the data of Mavko et al.82 and Abers & Hacker83 (Supplementary Table 3). There are no experimental data on the tensile and shear strengths of these minerals, but the tensile strength of rock typically ranges from ~1 to 40 MPa, and compressive strengths are typically ~10 times the tensile strengths84. Here, we set the tensile and shear strengths in the rock model to be 10 and 100 MPa, respectively, regardless of the minerals in the model. Based on these strength data, we performed preliminary simulations of uniaxial compression and tension tests to adjust the microscopic input parameters85, and decided to use a tensile spring strength of 30 MPa and a shear spring strength of 118 MPa. The effective confining pressure was set to 1 MPa. The initial fluid pressure and the fluid pressure outside the rock model were set to Pmin; as the dehydration reaction proceeds, the fluid pressure, Pf, inside the rock increases and fluid flows outward. The fluid physical properties (viscosity of 1.0 × 10–4 Pa·s and bulk modulus of 3.5 GPa) were assumed to be the same as those of water at a temperature of ~400 °C and a pressure of 0.5 GPa. The concentration of species S, Cs, is assumed to take its maximum value (Cs,max = 1.0) in the non-reactive matrix (saturated), and Cs in the reactive mineral domain is set to Cs,min (=0) at the initial stage. The apparent diffusion coefficient (=diffusion coefficient/average particle size) was set to 0 s−1 in unbroken rock and 10 s−1 along fractures. Because the diffusive flux of the species is large with respect to the fluid flow, the concentration of S is nearly equal to Cs,max.
The data of this study are available in the Methods and Supplementary Tables. The input and output files for EQ3 and EQ6 for representative cases (Fig. 4), and animations of the DEM simulation results, are available in the online repository (https://doi.org/10.6084/m9.figshare.13336850).
The DEM code used in this study is available from the corresponding author upon request with the approval of A.O. and H.S.
Kerrick, D. M. & Connolly, J. A. D. Metamorphic devolatilization of subducted oceanic basalts: implications for seismicity, arc magmatism and volatile recycling. Earth Planet. Sci. Lett. 189, 19–29 (2001).
Kerrick, D. M. & Connolly, J. A. D. Metamorphic devolatilization of subducted marine sediments and the transport of volatiles into the Earth's mantle. Nature 411, 293–296 (2001).
Hacker, B. R., Peacock, S. M., Abers, G. A. & Holloway, S. D. Subduction factory 2. Are intermediate–depth earthquakes in subducting slabs linked to metamorphic reactions? J. Geophys. Res. Solid Earth 108, 2030 (2003).
Hyndman, R. D. & Peacock, S. M. Serpentinization of the forearc mantle. Earth Planet. Sci. Lett. 212, 417–432 (2003).
Stewart, E. M. & Ague, J. J. Pervasive subduction zone devolatilization recycles CO2 into the forearc. Nat. Commun. 11, 6220 (2020).
Schmidt, M. & Poli, S. Experimentally based water budgets for dehydrating slabs and consequences for arc magma generation. Earth Planet. Sci. Lett. 163, 361–379 (1998).
Manning, C. E. The chemistry of subduction–zone fluids. Earth Planet. Sci. Lett. 223, 1–16 (2004).
Galvez, M. E., Connolly, J. A. D. & Manning, C. E. Implications for metal and volatile cycles from the pH of subduction zone fluids. Nature 539, 420–424 (2016).
Scambelluri, M., Cannao, E. & Gilio, M. The water and fluid–mobile element cycles during serpentine subduction. A review. Eur. J. Mineral. 31, 405–428 (2019).
Peacock, S. M. & Hyndman, R. D. Hydrous minerals in the mantle wedge and the maximum depth of subduction thrust earthquakes. Geophys. Res. Lett. 26, 2517–2520 (1999).
Ague, J. J. & Nicolescu, S. Carbon dioxide released from subduction zones by fluid–mediated reactions. Nat. Geosci. 7, 355–359 (2014).
Manning, C. E. Phase–equilibrium controls on SiO2 metasomatism by aqueous fluid in subduction zones: reaction at constant pressure and temperature. Int. Geol. Rev. 37, 1074–1093 (2015).
Saishu, H., Okamoto, A. & Otsubo, M. Silica precipitation potentially controls earthquake recurrence in seismogenic zones. Sci. Rep. 7, 13337 (2017).
Audet, P. & Bürgmann, R. Possible control of subduction zone slow–earthquake periodicity by silica enrichment. Nature 510, 389–392 (2014).
Fumagalli, P. & Poli, S. Experimentally determined phase relations in hydrous peridotites to 6.5 GPa and their consequences on the dynamics of subduction zones. J. Petrol. 46, 555–578 (2005).
Fumagalli, P., Zanchetta, S. & Poli, S. Alkalis in phlogopite and amphibole and their effects on phase relations in metasomatized peridotites: a high-pressure study. Contrib. Mineral. Petrol. 158, 723–737 (2009).
Tumiati, S., Fumagalli, P., Tiraboschi, C. & Poli, S. An experimental study on COH-bearing peridotite up to 3.2 GPa and implications for crust–mantle recycling. J. Petrol. 54, 453–479 (2013).
Hirauchi, K., Katayama, I., Uehara, S., Miyahara, M. & Takai, Y. Inhibition of subduction thrust earthquakes by low–temperature plastic flow in serpentine. Earth Planet. Sci. Lett. 295, 349–357 (2010).
Oleskevich, D. A., Hyndman, R. D. & Wang, K. The updip and downdip limits to great subduction earthquakes: thermal and structural models of Cascadia, south Alaska, SW Japan, and Chile. J. Geophys. Res. Solid Earth 104, 14965–14991 (1999).
Beroza, G. C. & Ide, S. Slow earthquakes and nonvolcanic tremor. Annu. Rev. Earth Planet. Sci. 39, 271–296 (2011).
Shelly, D. R., Beroza, G. C., Ide, S. & Nakamula, S. Low-frequency earthquakes in Shikoku, Japan, and their relationship to episodic tremor and slip. Nature 442, 188–191 (2006).
Kato, A. et al. Variations of fluid pressure within the subducting oceanic crust and slow earthquakes. Geophys. Res. Lett. 37, L14310 (2010).
Gao, X. & Wang, K. Rheological separation of the megathrust seismogenic zone and episodic tremor and slip. Nature 543, 416–419 (2017).
Ujiie, K. et al. An explanation of episodic tremor and slow slip constrained by crack–seal veins and viscous shear in subduction mélange. Geophys. Res. Lett. 45, 5371–5379 (2018).
Mizukami, T. et al. Two types of antigorite serpentinite controlling heterogeneous slow–slip behaviors of slab–mantle interface. Earth Planet. Sci. Lett. 401, 148–158 (2014).
Behr, W. M., Kotowski, A. J. & Ashley, K. T. Dehydration-induced rheological heterogeneity and the deep tremor source in warm subduction zones. Geology 46, 475–478 (2018).
Tarling, M. S., Smith, S. A. F. & Scott, J. M. Fluid overpressure from chemical reactions in serpentinite within the source region of deep episodic tremor. Nat. Geosci. 12, 1034–1042 (2019).
Miller, S. A., van der Zee, W., Olgaard, D. L. & Connolly, J. A. D. A fluid–pressure feedback model of dehydration reactions: experiments, modeling, and application to subduction zones. Tectonophysics 370, 241–251 (2003).
Connolly, J. A. D. Devolatilization-generated fluid pressure and deformation-propagated fluid flow during prograde regional metamorphism. J. Geophys. Res. 102, 18149–18173 (1997).
Kelemen, P. B. & Manning, C. E. Reevaluating carbon fluxes in subduction zones, what goes down, mostly comes up. Proc. Natl. Acad. Sci. USA 112, E3997–E4006 (2015).
Galvez, M. E., Manning, C. E., Connolly, J. A. D. & Rumble, D. The solubility of rocks in metamorphic fluids: a model for rock-dominated conditions to upper mantle pressure and temperature. Earth Planet. Sci. Lett. 430, 486–498 (2015).
Tumiati, S. et al. Silicate dissolution boosts the CO2 concentrations in subduction fluids. Nat. Commun. 8, 16 (2017).
Tumiati, S. et al. Dissolution susceptibility of glass-like carbon versus crystalline graphite in high-pressure aqueous fluids and implications for the behavior of organic matter in subduction zones. Geochim. Cosmochim. Acta 273, 383–402 (2020).
Kelemen, P. B. & Matter, J. In situ carbonation of peridotite for CO2 storage. Proc. Natl. Acad. Sci. USA 105, 17295–17300 (2008).
Galvez, M. E. et al. Graphite formation by carbonate reduction during subduction. Nat. Geosci. 6, 473–477 (2013).
Giuntoli, F., Brovarone, A. V. & Menegon, L. Feedback between high–pressure genesis of abiotic methane and strain localization in subducted carbonate rocks. Sci. Rep. 10, 9848 (2020).
Peacock, S. M. Serpentinization and infiltration metasomatism in the Trinity peridotite, Klamath province, northern California: implications for subduction zones. Contrib. Mineral. Petrol. 95, 55–70 (1987).
Scambelluri, M. et al. Carbonation of subduction–zone serpentinite (high–pressure ophiocarbonate; Ligurian Western Alps) and implications for the deep carbon cycling. Earth Planet. Sci. Lett. 441, 155–166 (2016).
Peng, W. et al. Multistage CO2 sequestration in the subduction zone: insights from exhumed carbonated serpentinites, SW Tianshan UHP belt, China. Geochim. Cosmochim. Acta 270, 218–243 (2020).
Sieber, M. J., Yaxley, G. M. & Hermann, J. Investigation of fluid–driven carbonation of a hydrated, forearc mantle wedge using serpentine cores in high–pressure experiments. J. Petrol. 61, egaa035 (2020).
Wallis, S. R. & Okudaira, T. in The Geology of Japan (eds Moreno, T., Wallis, S. R., Kojima, T. & Gibbons, W.) 101–124 (Geological Society of London, 2016).
Aoya, M., Endo, S., Mizukami, T. & Wallis, S. R. Paleo–mantle wedge preserved in the Sambagawa high–pressure metamorphic belt and the thickness of forearc continental crust. Geology 41, 451–454 (2013).
Hattori, K., Wallis, S., Enami, M. & Mizukami, T. Subduction of mantle wedge peridotites: evidence from the Higashi–Akaishi ultramafic body in the Sanbagawa metamorphic belt. Isl. Arc 19, 192–207 (2010).
Inui, M. & Takefuji, A. Spatial distribution of garnet indicating control of bulk rock chemistry in the Sanbagawa metamorphic rocks, Kanto Mountains, Japan. J. Mineral. Petrol. Sci. 113, 181–189 (2018).
Enami, M., Wallis, S. R. & Banno, Y. Paragenesis of sodic–pyroxene–bearing quartz schists: implications for the P–T history of the Sanbagawa belt. Contrib. Mineral. Petrol. 116, 182–198 (1994).
Ishii, T., Robinson, P. T., Maekawa, H. & Fiske, R. Petrological studies of peridotites from diapiric serpentinite seamounts in the Izu–Ogasawara–Mariana forearc, Leg 125. In Proceedings of the Ocean Drilling Program, Scientific Results Vol. 125 (eds Fryer, P., Pearce, J. A., Stokking, L. B. et al.) 445–485 (Ocean Drilling Program, College Station, TX, 1992).
Kawahara, H. et al. Brucite as an important phase of the shallow mantle wedge: evidence from the Shiraga unit of the Sanbagawa subduction zone, SW, Japan. Lithos 254–255, 53–66 (2016).
Powell, R., Condliffe, D. M. & Condliffe, E. Calcite–dolomite geothermometry in the system CaCO3–MgCO3–FeCO3: an experimental study. J. Metamorph. Geol. 2, 33–41 (1984).
Wada, H., Enami, M. & Yanagi, T. Isotopic studies of marbles in the Sanbagawa metamorphic terrain, central Shikoku, Japan. Geochem. J. 18, 61–73 (1984).
Morohashi, K., Okamoto, A., Satish–Kumar, M. & Tsuchiya, N. Variations in stable isotope compositions (δ13C, δ18O) of calcite within exhumation–related veins from the Sanbagawa metamorphic belt. J. Mineral. Petrol. Sci. 105, 361–365 (2008).
Menzel, M. D. et al. Carbonation of mantle peridotite by CO2-rich fluids: the formation of listvenites in the Advocate ophiolite complex (Newfoundland, Canada). Lithos 323, 238–261 (2018).
Hansen, L. D. & Dipple, C. M. Carbonated serpentinite (listwanite) at Atlin, British Columbia: a geological analogue to carbon dioxide sequestration. Can. Mineral. 43, 225–239 (2005).
Connolly, J. A. D. Phase diagram methods for graphitic rocks and application to the system C-O-H-FeO-TiO2-SiO2. Contrib. Mineral. Petrol. 119, 94–116 (1995).
Okamoto, A., Kikuchi, T. & Tsuchiya, N. Mineral distribution within polymineralic veins in the Sanbagawa belt, Japan: implications for mass transfer during vein formation. Contrib. Mineral. Petrol. 156, 323–336 (2008).
Brovarone, A. V. et al. Subduction hides high–pressure sources of energy that may feed the deep subsurface biosphere. Nat. Commun. 11, 3880 (2020).
Okamoto, A. & Shimizu, H. Contrasting fracture patterns induced by volume–increasing and –decreasing reactions: implications for the progress of metamorphic reactions. Earth Planet. Sci. Lett. 415, 9–18 (2015).
Jamtveit, B., Malthe–Sørenssen, A. & Kostenko, O. Reaction enhanced permeability during retrogressive metamorphism. Earth Planet. Sci. Lett. 267, 620–627 (2008).
Kuleci, H., Ulven, O. I., Rybacki, E., Wunder, B. & Abart, R. Reaction–induced fracturing in a hot pressed calcite–periclase aggregate. J. Struct. Geol. 94, 116–135 (2017).
Yoshida, K., Okamoto, A., Shimizu, H., Oyanagi, R., Tsuchiya, N. & Oman Drilling Project Phase 2 Science Party. Fluid infiltration through oceanic lower crust in response to reaction-induced fracturing: insights from serpentinized troctolite and numerical models. J. Geophys. Res. 125, e2020JB020268 (2020).
Ague, J. J., Park, J. & Rye, D. M. Regional metamorphic dehydration and seismic hazard. Geophys. Res. Lett. 25, 4221–4224 (1998).
Wibberley, C. A. J. & Shimamoto, T. Internal structure and permeability of major strike-slip fault zones: the Median Tectonic Line in Mie Prefecture, Southwest Japan. J. Struct. Geol. 25, 59–78 (2003).
Ganzhorn, A. C., Pilorge, H. & Reynard, B. Porosity of metamorphic rocks and fluid migration within subduction interfaces. Earth Planet. Sci. Lett. 522, 107–117 (2019).
Kirby, S. H., Durham, W. B. & Stern, L. A. Mantle phase changes and deep–earthquake faulting in subducting lithospheres. Science 252, 216–225 (1991).
Klein, F. & Bach, W. Fe–Ni–Co–O–S phase relations in peridotite–seawater interactions. J. Petrol. 50, 37–59 (2009).
Piccoli, F. et al. Subducting serpentinites release reduced, not oxidized, aqueous fluids. Sci. Rep. 9, 19573 (2019).
Malvoisin, B., Chopin, C., Brunet, F. & Galvez, M. E. Low-temperature wollastonite formed by carbonate reduction: a marker of serpentinite redox conditions. J. Petrol. 53, 159–176 (2012).
Nakagawa, M., Santosh, M. & Maruyama, S. Manganese formations in the accretionary belts of Japan: implications for subduction–accretion process in an active convergent margin. J. Asian Earth Sci. 42, 208–222 (2011).
Tumiati, S., Godard, G., Martin, S., Malaspina, N. & Poli, S. Ultra-oxidized rocks in subduction mélanges? Decoupling between oxygen fugacity and oxygen availability in a Mn-rich metasomatic environment. Lithos 226, 116–130 (2015).
Ide, S., Shelly, D. R. & Beroza, G. C. Mechanism of deep low frequency earthquakes: Further evidence that deep non-volcanic tremor is generated by shear slip on the plate interface. Geophys. Res. Lett. 34, L03308 (2007).
Hirauchi, K., den Hartog, S. A. M. & Spiers, C. J. Weakening of the slab–mantle wedge interface induced by metasomatic growth of talc. Geology 41, 75–78 (2013).
Hrubcová, P., Doubravová, J. & Vavryčuk, V. Non-double-couple earthquakes in 2017 swarm in Reykjanes Peninsula, SW Iceland: sensitive indicator of volcano-tectonic movements at slow-spreading rift. Earth Planet. Sci. Lett. 563, 116875 (2021).
Satish-Kumar, M., Kiran, S. & Abe, M. A new inlet system for microscale carbon and oxygen isotope analysis using dual inlet isotope ratio mass spectrometer at Niigata University, Japan. Science Reports of Niigata University. Vol. 35 (2021) (in press).
Sverjensky, D. A., Harrison, B. & Azzolini, D. Water in the deep Earth: The dielectric constant and the solubilities of quartz and corundum to 60kb and 1200°C. Geochim. Cosmochim. Acta 129, 125–145 (2014).
Huang, F. & Sverjensky, D. A. Extended deep earth water for predicting major element mantle metasomatism. Geochim. Cosmochim. Acta 254, 192–230 (2019).
Wolery T. J. EQ3NR, A Computer Program For Geochemical Aqueous Speciation–solubility Calculations: Theoretical Manual, User's Guide, And Related Documentation (version 7.0). (Lawrence Livermore National Laboratory, 1992).
Berman, R. G. Internally-consistent thermodynamic data for minerals in the system Na2O-K2O-CaO-MgO-FeO-Fe2O3-Al2O3-SiO2-TiO2-H2O-CO2. J. Petrol. 29, 445–522 (1988).
Goto, A., Kunugiza, K. & Omori, S. Evolving fluid composition during prograde metamorphism in subduction zones: a new approach using carbonate-bearing assemblages in the pelitic system. Gondwana Res. 11, 166–179 (2007).
Abramson, E. H., Bollengier, O. & Brown, M. The water–carbon dioxide miscibility surface to 450 °C and 7 GPa. Am. J. Sci. 317, 967–989 (2017).
Miozzi, F. & Tumiati, S. Aqueous concentration of CO2 in carbon-saturated fluids as a highly sensitive oxybarometer. Geochem. Perspect. Lett. 16, 30–34 (2020).
Connolly, J. A. D. Computation of phase equilibria by linear programming: a tool for geodynamic modeling and its application to subduction zone decarbonation. Earth Planet. Sci. Lett. 236, 524–541 (2005).
Holland, T. & Powell, R. An improved and extended internally consistent dataset for phases of petrological interest, involving a new equation of state for solids. J. Metamorph. Geol. 29, 333–383 (2011).
Mavko, G., Mukerji, T. & Dvorkin, J. The Rock Physics Handbook (Cambridge University Press, 2009).
Abers, G. A. & Hacker, B. R. A MATLAB toolbox and Excel workbook for calculating the densities, seismic wave speeds, and major element composition of minerals and rocks at pressure and temperature. Geochem. Geophys. Geosyst. 17, 616–624 (2016).
Pollard, D. D. & Fletcher, R. C. Fundamentals of Structural Geology (Cambridge University Press, 2006).
Shimizu, H. & Okamoto, A. The roles of fluid transport and surface reaction in reaction–induced fracturing, with implications for the development of mesh textures in serpentinites. Contrib. Mineral. Petrol. 171, 1–18 (2016).
We thank Kenichi Hirauchi and Mutsuki Aoya for useful discussions, and Shinichi Yamasaki and Otgongayar Dandar for XRF analyses. Fang Huang kindly introduced us to the DEW modeling. Discussion with J.A.D. Connolly helped in clarifying the relation between oxygen fugacity and carbonation reactions. The authors acknowledge constructive comments from two anonymous reviewers that substantially improved this manuscript. This work was financially supported by JSPS KAKENHI Grant Numbers JP16H06347, 18KK0376, and 17H02981 to A.O., and JP15H05831 and 20KK0081 to M.S.-K., and by the Earthquake Research Institute, the University of Tokyo, Joint Research Programs 2021-B-01 and 2018-B-01.
Graduate School of Environmental Studies, Tohoku University, Sendai, Japan
Atsushi Okamoto, Kazuki Yoshida & Masaoki Uno
Research Institute for Marine Geodynamics, Japan Agency for Marine-Earth Science and Technology (JAMSTEC), Yokosuka, Japan
Ryosuke Oyanagi
School of Science and Engineering, Kokushikan University, Tokyo, Japan
Geotechnical Analysis Group, Advanced Analysis Department, Civil Engineering Design Division, Kajima Corporation, Tokyo, Japan
Hiroyuki Shimizu
Faculty of Science, Department of Geology, Niigata University, Niigata, Japan
Madhusoodhan Satish-Kumar
A.O., R.O., K.Y., and M.U. carried out field work. A.O. carried out the petrological analyses. R.O. carried out the thermodynamic calculations on fluid–rock interaction, M.U. performed mass balance analyses. K.Y. and H.S. developed the DEM model and carried out simulations. M.S. and K.Y. performed the isotope analyses of carbonates. A.O. wrote the paper with inputs from all authors.
Correspondence to Atsushi Okamoto.
Peer review information Communications Earth & Environment thanks the anonymous reviewers for their contribution to the peer review of this work. Primary Handling Editors: Maria Luce Frezzotti, Joe Aslin, Heike Langenberg.
Okamoto, A., Oyanagi, R., Yoshida, K. et al. Rupture of wet mantle wedge by self-promoting carbonation. Commun Earth Environ 2, 151 (2021). https://doi.org/10.1038/s43247-021-00224-5
12/12/2019, 16:00 — 17:00 — Abreu Faro Amphitheatre
João Pimentel Nunes, Instituto Superior Técnico, CAMGSD, Universidade de Lisboa
Imaginary time flows to reality
The mathematical expressions of the idea of quantization are a source of rich interdisciplinary relations between different areas in geometry and other subjects such as analysis and representation theory. In this colloquium, after a gentle description of some structures of modern geometry and of the quantization problem, we will describe "flows in imaginary time" and will give an idea of their role in quantization (in particular, in so-called real polarizations) and Kähler geometry. Finally, we will give a light description of some recent results.
21/11/2019, 16:00 — 17:00 — Amphitheatre Pa3, Mathematics Building
Carlos Florentino, Departamento de Matemática, Faculdade de Ciências Universidade de Lisboa, CMAFcIO
Moduli Spaces and their Polynomial Invariants
The idea of symmetry, present in ancient civilizations, became part of our mathematical tools with the introduction of groups and group actions by Galois and Lie. Understanding the geometry of the spaces of orbits, and the algebra of the invariant functions on them is extremely useful both in algebraic and in geometric classification problems. These problems were greatly unified by the notion of a moduli space, introduced by Riemann and developed by Mumford.
In this colloquium, we present some classification problems in algebra and geometry which give rise to interesting moduli spaces — polygon spaces, character varieties, Higgs bundles —, and show some of the tools used in their study, such as polynomial invariants (named after Euler, Poincaré, Hodge, etc.). In some simple cases, there are explicit formulas for these polynomials (some just recently computed), and we end up announcing an interesting form of Langlands duality for character varieties of free groups.
Vítor Cardoso, CENTRA, Instituto Superior Técnico, Universidade de Lisboa
Testing General Relativity with Gravitational Waves
This year marks the centenary of a pivotal breakthrough: the confirmation that gravity can be described as spacetime curvature. Among the most outrageous predictions of the theory are the existence of black holes and gravitational waves.
Gravitational waves offer a unique glimpse into the unseen universe in different ways, and allow us to test the basic tenets of General Relativity, some of which have been taken for granted without observations: are gravitons massless? Are black holes the simplest possible macroscopic objects? Do event horizons and black holes really exist, or is their formation halted by some as yet unknown mechanism? Do singularities arise in our universe as the outcome of violent collisions? Can gravitational waves carry information about the nature of the elusive dark matter?
I will describe the science encoded in a gravitational wave signal and what the upcoming years might have in store regarding fundamental physics and gravitational waves.
09/07/2019, 16:00 — 17:00 — Room P3.10, Mathematics Building
Jose Castillo, San Diego State University
Mimetic Discretization Methods
Mimetic discretizations or compatible discretizations have been a recurrent search in the history of numerical methods for solving partial differential equations with variable degree of success. There are many researches currently active in this area pursuing different approaches to achieve this goal and many algorithms have been developed along these lines. Loosely speaking, "mimetic" or "compatible" algebraic methods have discrete structures that mimic vector calculus identities and theorems. Specific approaches to discretization have achieved this compatibility following different paths, and with diverse degree of generality in relation to the problems solved and the order of accuracy obtainable. Here, we present theoretical aspects for a mimetic method based on the extended Gauss Divergence Theorem as well as examples using this method to solve partial differential equations using the Mimetic Operators Library Enhanced (MOLE).
Manuel Cabral Morais, Instituto Superior Técnico, CEMAT - Universidade de Lisboa
On ARL-unbiased Control Charts
A control chart is a graphical device used to monitor a parameter of a measurable characteristic. An observation of the plotted control statistic beyond the control limits suggests a change in the parameter being monitored. The control limits of most charts tend to be set ignoring the skewness character of the control statistic and this may dramatically affect their average run length (ARL) performance. We derive several ARL-unbiased control charts whose ARL profiles attain a pre-specified maximum when the parameter is on-target. R is used to provide striking illustrations of how ARL-unbiased charts work in practice.
Paulo Mateus, Instituto Superior Técnico, SQIG -IT, Universidade de Lisboa
Cryptography after Quantum Computation
Due to Shor's algorithm and the imminence of quantum computers, all cryptographic standards are being rethought. In this talk, we address a security functionality associated with privacy, namely oblivious transfer. We present some cryptographic solutions proposed within the Security and Quantum Information Group at Instituto de Telecomunicações. Specifically, we discuss the proof of security for methods based on quantum information (quantum cryptography) and for classical methods, based on hardness assumptions conjectured to be robust to quantum attacks (post-quantum cryptography). We conclude by examining the advantages and disadvantages of each approach, as well as their implications in cryptography and technology.
Pedro J. Freitas, Faculdade de Ciências, CIUHCT
Francisco Gomes Teixeira and the internationalization of Portuguese mathematics
Francisco Gomes Teixeira (1851-1933) was a remarkable Portuguese mathematician, one of the greatest of the nineteenth and twentieth centuries and certainly the most prolific in that period. He maintained regular and intense correspondence with the greatest mathematicians of his time. In this lecture we will present some aspects of this correspondence.
Christopher Deninger, University of Münster
Dynamical systems for arithmetic schemes
We construct a natural infinite dimensional dynamical system whose periodic orbits come in compact packets $P$ which are in bijection with the prime numbers $p$. Here each periodic orbit in $P$ has length $\log p$. In fact a corresponding construction works more generally for finitely generated normal rings and their maximal ideals or even more generally for arithmetic schemes and their closed points. Moreover the construction is functorial for a large class of morphisms. Thus the zeta functions of analytic number theory and arithmetic geometry can be viewed as Ruelle type zeta functions of dynamical systems. We will describe the construction and what is known about these dynamical systems. The generic fibres of our dynamical systems are related to an earlier construction by Robert Kucharczyk and Peter Scholze of topological spaces whose fundamental groups realize Galois groups. There are many unproven conjectures on arithmetic zeta functions and the ultimate aim is to use analytical methods for dynamical systems to prove them.
Patrícia Gonçalves, Instituto Superior Técnico, CAMGSD - Universidade de Lisboa
From randomness to determinism
In this seminar I will describe how to derive rigorously the laws that rule the space-time evolution of the conserved quantities of a certain stochastic process. The goal is to describe the connection between the macroscopic equations and the microscopic system of random particles. The former can be either PDEs or stochastic PDEs depending on whether one is looking at the law of large numbers or the central limit theorem scaling; while the latter is a collection of particles that move randomly. Depending on the choice of the transition probability that particles obey, we will see that the macroscopic laws can be of different nature.
Adélia Sequeira, Instituto Superior Técnico, CEMAT - Universidade de Lisboa
Mathematics and Cardiovascular Diseases
Mathematical modeling and simulations of the human circulatory system is a challenging and complex wide-range multidisciplinary research field.
In this talk we will consider some mathematical models and simulations of the cardiovascular system and comment on their significance to yield realistic and accurate numerical results, using stable, reliable and efficient computational methods.
José Natário, Instituto Superior Técnico, CAMGSD - Universidade de Lisboa
Mathematical Relativity and the Cosmic Censorship Conjecture
Einstein's general theory of relativity has always been a great catalyst for mathematical development, from Riemannian geometry to partial differential equations. In this talk we give a mathematical history of the subject, aiming to explain one of its most important open problems, the strong cosmic censorship conjecture.
Carlos Rocha, Instituto Superior Técnico, CAMGSD - Universidade de Lisboa
Qualitative Theory of Differential Equations — Morse-Smale Evolution Processes
The qualitative theory of differential equations aims at the description of the asymptotic behavior of solutions of differential equations.
We survey aspects of the Morse-Smale theory, from dynamical systems generated by ordinary differential equations to evolution processes generated by non-autonomous partial parabolic differential equations.
Jorge Drumond Silva, CAMGSD, Instituto Superior Técnico - Universidade de Lisboa
The Interplay Between Dispersive Partial Differential Equations and Fourier Analysis
Partial differential equations have always been a subject of fruitful interaction with Fourier analysis, starting precisely with the study of the heat equation. In the last few decades, nonlinear partial differential equations of hyperbolic and dispersive type, in particular, have been at the center of a significant new interplay and mutual progress between these two fields, through the works of prominent mathematicians like Tosio Kato, Charles Fefferman, Jean Bourgain, Carlos Kenig and Terence Tao.
In this talk, we will review some of the basic concepts and ideas that play a central role in this connection between techniques from Fourier analysis and properties of solutions of dispersive PDEs, covering topics like Strichartz estimates, smoothing effects, local and global well posedness of initial value problems at low regularity, among others.
Cristina Sernadas, Instituto Superior Técnico, CMAF-CIO - Universidade de Lisboa
The World is Incomplete, Reducible and Real
Some results and reduction techniques for proving decidability of mathematical theories and completeness of logics are presented. The crucial role of the theory of real closed ordered fields is explained. Selected illustrations from Euclidean Geometry to Quantum Logic are discussed.
João Filipe Queiró, Universidade de Coimbra
Sums and products of equivalence orbits
Similarity and equivalence of matrices over fields are well-understood relations, and that understanding is elementary, especially in the case of equivalence. For matrices over rings – e.g. the integers – the situation is different. Equivalence is still simple and its study leads to the concept of invariant factors of a matrix. About these, several interesting basic questions can be raised. This talk will address two of those questions: how do invariant factors behave under matrix addition and multiplication? Some things are known about these problems, and the second one – already completely solved for certain classes of rings – has deep relations with other parts of mathematics.
Cristina Câmara, Instituto Superior Técnico - Universidade de Lisboa
From Toeplitz matrices to black holes, and beyond
What do Toeplitz matrices, random matrix models, orthogonal polynomials, Painlevé transcendents, the KdV equation, and black holes, seemingly very unrelated subjects, have in common? These, and a variety of other mathematical problems, can be studied by means of the so called Riemann-Hilbert method. In this talk we briefly describe what a Riemann-Hilbert problem is and present several recent applications, from the spectral properties of Toeplitz operators to exact solutions of Einstein field equations.
Cláudia Nunes Philippart, Instituto Superior Técnico, CEMAT - Universidade de Lisboa
Decide to Win
In the finance world there are many problems related with the optimal time to undertake some action. One of the most common problems is the derivation of the exercise time of an American option. But also in decisions regarding investments this question is essential. Questions like: when to adopt a new technology? When to invest in a new airport? When should suspension out of production occur? These problems have a real impact in the economy, and therefore one needs a proper mathematical formulation and solution for them.
In this talk we address such problems. They are known in the literature as optimal stopping problems, closely related with free boundary problems. One way to solve the optimisation problem is to use a variational inequality, known as the Hamilton-Jacobi-Bellman equation (HJB, for short). In the first part of the talk we present briefly the mathematical formulation and tools to solve such problems, and in the second part we show some applications, providing solution and discussion.
Pedro Lima, Instituto Superior Técnico, CEMAT - Universidade de Lisboa
The Mathematics of the Brain
With an extremely large number of functional units (neurons) and an even larger number of connections between them (synapses), the brain is perhaps the most complex system that Science has ever tried to explain and simulate. Neuroscience is nowadays a multidisciplinary field which mobilizes thousands of scientists all over the world with different profiles, from medical doctors to computer engineers, including mathematicians. The mathematical tools of Neuroscience are getting more and more complex, giving rise to new branches, such as Mathematical Neuroscience or Computational Neuroscience. In this talk we will visit some of the most well-known mathematical models, emphasizing the role that mathematical topics such as Differential Equations, Numerical Analysis or even Algebraic Topology play in the modelling of the brain and the nervous system.
José Félix Costa, IST - ULisboa
The Power of Analog-Digital Systems
An analog-digital model of computation is introduced in which the digital component is the standard model (e.g. the Turing machine) and the analog component is the result of a measurement, e.g. obtained through a sensor of a physical quantity. The measurement acts as an oracle, and the exchange of information between the digital and analog components takes place in the time intrinsic to the physical process. The measurements performed via the analog-digital coupling can be executed through different protocols, from stochastic to deterministic, namely by varying the precision of the measurement between finite and infinite. The nature of the measurements that can be carried out by analog-digital systems is discussed. It is established that the computational power of these systems operating in a polynomial number of steps is that of the computational classes $\mathit{BPP//}\log^{(k)}\!\star$. Finally, the limits of the computational simulation of physical systems are discussed, and the concepts of measurable number in the approach of Geroch and Hartle and in ours are compared.
Extended abstract
Jorge Buescu, Faculdade de Ciências, Universidade de Lisboa
The most impenetrable ABC
This colloquium addresses a famous open problem in Number Theory, known as the ABC conjecture. We show why this conjecture is probably the most important problem in the area, after the Riemann Hypothesis, by highlighting its extremely strong and surprising consequences. As an example of these consequences, we will give a proof of Fermat's Theorem (modulo ABC). Finally, we will detail the bizarre current situation of the problem, with a possible proof that the mathematical community has struggled to decipher over the last decade, without much success to date.
The Mathematics Colloquium is a series of monthly talks organized by the Department of Mathematics of IST, aiming to be a forum for the presentation of mathematical ideas or ideas about Mathematics. The Colloquium welcomes the participation of faculty, researchers and undergraduate or graduate students, of IST or other institutions, and is seen as an opportunity of bringing together and fostering the building up of ideas in an informal atmosphere.
Organizers: Conceição Amado, Lina Oliveira and Maria João Borges.
Advanced High School Statistics: Second Edition, with updates based on AP© Statistics Course Framework
David M Diez, Mine Çetinkaya-Rundel, Leah Dorazio, Christopher D Barr
Textbook overview
Examples and exercises
1 Data collection
Case study: using stents to prevent strokes
Data basics
Overview of data collection principles
Observational studies and sampling strategies
Chapter exercises
2 Summarizing data
Examining numerical data
Numerical summaries and box plots
Considering categorical data
Case study: malaria vaccine (special topic)
3 Probability
Defining probability
The binomial formula
Continuous distributions
4 Distributions of random variables
Normal distribution
Sampling distribution of a sample mean
Binomial distribution
Sampling distribution of a sample proportion
5 Foundations for inference
Estimating unknown parameters
Introducing hypothesis testing
6 Inference for categorical data
Inference for a single proportion
Difference of two proportions
Testing for goodness of fit using chi-square
Chi-square tests for two-way tables
7 Inference for numerical data
Inference for a mean with the \(t\)-distribution
Inference for paired data
Inference for the difference of two means
8 Introduction to linear regression
Line fitting, residuals, and correlation
Fitting a line by least squares regression
Inference for the slope of a regression line
Transformations for skewed data
A Data sets within the text
Data sets within the text
B Distribution Tables
Random Number Table
Normal Probability Table
\(t\) Probability Table
Chi-Square Probability Table
C Calculator reference, Formulas, and Inference guide
Calculator reference
Inference guide
Section 6.4 Chi-square tests for two-way tables
We encounter two-way tables in this section, and we learn about two new and closely related chi-square tests. We will answer questions such as the following:
Does the phrasing of the question affect how likely sellers are to disclose problems with a product?
Is gender associated with whether Facebook users know how to adjust their privacy settings?
Is political affiliation associated with support for the use of full body scans at airports?
Subsection 6.4.1 Learning objectives
Calculate the expected counts and degrees of freedom for a chi-square test involving a two-way table.
State and verify whether or not the conditions for a chi-square test for a two-way table are met.
Explain the difference between the chi-square test of homogeneity and chi-square test of independence.
Carry out a complete hypothesis test for homogeneity and for independence.
Subsection 6.4.2 Introduction
Google is constantly running experiments to test new search algorithms. For example, Google might test three algorithms using a sample of 10,000 google.com search queries. Table 6.4.1 shows an example of 10,000 queries split into three algorithm groups. 1 The group sizes were specified before the start of the experiment to be 5000 for the current algorithm and 2500 for each test algorithm.
Google regularly runs experiments in this manner to help improve their search engine. It is entirely possible that if you perform a search and so does your friend, that you will have different search results. While the data presented in this section resemble what might be encountered in a real experiment, these data are simulated.
Search algorithm current test 1 test 2 Total
Counts 5000 2500 2500 10000
Table 6.4.1. Experiment breakdown of test subjects into three search groups.
Example 6.4.2.
What is the ultimate goal of the Google experiment? What are the null and alternative hypotheses, in regular words?
The ultimate goal is to see whether there is a difference in the performance of the algorithms. The hypotheses can be described as the following:
\(H_{0}\text{:}\) The algorithms each perform equally well.
\(H_{A}\text{:}\) The algorithms do not perform equally well.
In this experiment, the explanatory variable is the search algorithm. However, an outcome variable is also needed. This outcome variable should somehow reflect whether the search results align with the user's interests. One possible way to quantify this is to determine whether (1) there was no new, related search, and the user clicked one of the links provided, or (2) there was a new, related search performed by the user. Under scenario (1), we might think that the user was satisfied with the search results. Under scenario (2), the search results probably were not relevant, so the user tried a second search.
Table 6.4.3 provides the results from the experiment. These data are very similar to the count data in Section 6.3. However, now the different combinations of two variables are binned in a two-way table. In examining these data, we want to evaluate whether there is strong evidence that at least one algorithm is performing better than the others. To do so, we apply a chi-square test to this two-way table. The ideas of this test are similar to those ideas in the one-way table case. However, degrees of freedom and expected counts are computed a little differently than before.
Search algorithm
current test 1 test 2 Total
No new search 3511 1749 1818 7078
New search 1489 751 682 2922
Total 5000 2500 2500 10000
Table 6.4.3. Results of the Google search algorithm experiment.
What is so different about one-way tables and two-way tables?
A one-way table describes counts for each outcome in a single variable. A two-way table describes counts for combinations of outcomes for two variables. When we consider a two-way table, we often would like to know, are these variables related in any way?
The hypothesis test for this Google experiment is really about assessing whether there is statistically significant evidence that the choice of the algorithm affects whether a user performs a second search. In other words, the goal is to check whether the three search algorithms perform differently.
Subsection 6.4.3 Expected counts in two-way tables
Example 6.4.4.
From the experiment, we estimate the proportion of users who were satisfied with their initial search (no new search) as \(7078/10000 = 0.7078\text{.}\) If there really is no difference among the algorithms and 70.78% of people are satisfied with the search results, how many of the 5000 people in the "current algorithm" group would be expected to not perform a new search?
About 70.78% of the 5000 would be satisfied with the initial search:
\begin{equation*} 0.7078\times 5000 = 3539\text{ users } \end{equation*}
That is, if there was no difference between the three groups, then we would expect 3539 of the current algorithm users not to perform a new search.
Checkpoint 6.4.5.
Using the same rationale described in Solution 6.4.4.1, about how many users in each test group would not perform a new search if the algorithms were equally helpful? 2
We would expect \(0.7078\times 2500 = 1769.5\text{.}\) It is okay that this is a fraction.
We can compute the expected number of users who would perform a new search for each group using the same strategy employed in Solution 6.4.4.1 and Checkpoint 6.4.5. These expected counts were used to construct Table 6.4.6, which is the same as Table 6.4.3, except now the expected counts have been added in parentheses.
No new search 3511 (3539) 1749 (1769.5) 1818 (1769.5) 7078
New search 1489 (1461) 751 (730.5) 682 (730.5) 2922
Table 6.4.6. The observed counts and the (expected counts).
The examples and exercises above provided some help in computing expected counts. In general, expected counts for a two-way table may be computed using the row totals, column totals, and the table total. For instance, if there was no difference between the groups, then about 70.78% of each column should be in the first row:
\begin{align*} 0.7078\times (\text{ column 1 total } ) \amp = 3539\\ 0.7078\times (\text{ column 2 total } ) \amp = 1769.5\\ 0.7078\times (\text{ column 3 total } ) \amp = 1769.5 \end{align*}
Looking back to how the fraction 0.7078 was computed — as the fraction of users who did not perform a new search (\(7078/10000\)) — these three expected counts could have been computed as
\begin{align*} \left(\frac{\text{ row 1 total } }{\text{ table total } }\right)\text{ (column 1 total) } \amp = 3539\\ \left(\frac{\text{ row 1 total } }{\text{ table total } }\right)\text{ (column 2 total) } \amp = 1769.5\\ \left(\frac{\text{ row 1 total } }{\text{ table total } }\right)\text{ (column 3 total) } \amp = 1769.5 \end{align*}
This leads us to a general formula for computing expected counts in a two-way table when we would like to test whether there is strong evidence of an association between the column variable and row variable.
Computing expected counts in a two-way table.
To identify the expected count for the \(i^{th}\) row and \(j^{th}\) column, compute
\begin{equation*} \text{ Expected Count } _{\text{ row } i,\text{ col } j} = \frac{(\text{ row \(i\) total } ) \times (\text{ column \(j\) total } )}{\text{ table total } } \end{equation*}
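For readers working in software rather than on a calculator, the formula above can be checked against Table 6.4.3 in a few lines; the Python sketch below uses only the observed counts from the table.

# Expected counts via (row total x column total) / table total.
observed = [
    [3511, 1749, 1818],  # no new search: current, test 1, test 2
    [1489,  751,  682],  # new search
]
row_totals = [sum(row) for row in observed]        # [7078, 2922]
col_totals = [sum(col) for col in zip(*observed)]  # [5000, 2500, 2500]
table_total = sum(row_totals)                      # 10000

expected = [[r * c / table_total for c in col_totals] for r in row_totals]
print(expected)  # [[3539.0, 1769.5, 1769.5], [1461.0, 730.5, 730.5]]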
Subsection 6.4.4 The chi-square test of homogeneity for two-way tables
The chi-square test statistic for a two-way table is found the same way it is found for a one-way table. For each table count, compute
\begin{align*} \amp \text{ General formula } \amp \amp \frac{(\text{ observed count } - \text{ expected count } )^2}{\text{ expected count } }\\ \amp \text{ Row 1, Col 1 } \amp \amp \frac{(3511 - 3539)^2}{3539} = 0.222\\ \amp \text{ Row 1, Col 2 } \amp \amp \frac{(1749 - 1769.5)^2}{1769.5} = 0.237\\ \amp \hspace{9mm}\vdots \amp \amp \hspace{13mm}\vdots\\ \amp \text{ Row 2, Col 3 } \amp \amp \frac{(682 - 730.5)^2}{730.5} = 3.220 \end{align*}
Adding the computed value for each cell gives the chi-square test statistic \(\chi^2\text{:}\)
\begin{equation*} \chi^2 = 0.222 + 0.237 + \dots + 3.220 = 6.120 \end{equation*}
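The same sum is easy to verify in software; the self-contained Python check below reuses the observed and expected counts derived above for Table 6.4.3.

# Chi-square statistic: sum of (observed - expected)^2 / expected over all cells.
observed = [3511, 1749, 1818, 1489, 751, 682]
expected = [3539, 1769.5, 1769.5, 1461, 730.5, 730.5]

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi_sq, 3))  # 6.12, matching the value above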
Just like before, this test statistic follows a chi-square distribution. However, the degrees of freedom is computed a little differently for a two-way table. 3 For two-way tables, the degrees of freedom is equal to
\begin{gather*} df = \text{ (number of rows - 1) } \times \text{ (number of columns - 1) } \end{gather*}
Recall: in the one-way table, the degrees of freedom was the number of groups minus 1.
In our example, the degrees of freedom is
\begin{gather*} df = (2-1)\times (3-1) = 2 \end{gather*}
If the null hypothesis is true (i.e. the algorithms are equally useful), then the test statistic \(\chi^2 = 6.12\) closely follows a chi-square distribution with 2 degrees of freedom. Using this information, we can compute the p-value for the test, which is depicted in Figure 6.4.7.
Computing degrees of freedom for a two-way table.
When using the chi-square test to a two-way table, we use
\begin{equation*} df = (R-1)\times (C-1) \end{equation*}
where \(R\) is the number of rows in the table and \(C\) is the number of columns.
Use two-proportion methods for 2-by-2 contingency tables.
When analyzing 2-by-2 contingency tables, use the two-proportion methods introduced in Section 6.2.
Figure 6.4.7. Computing the p-value for the Google hypothesis test.
Conditions for the chi-square test of homogeneity.
There are two conditions that must be checked before performing a chi-square test of homogeneity. If these conditions are not met, this test should not be used.
Multiple random samples or randomly assigned treatments. Data collected by multiple independent random samples or multiple randomly assigned treatments. Data can then be organized into a two-way table.
All expected counts at least 5. All of the cells in the two-way table must have at least 5 expected cases under the assumption that the null hypothesis is true.
Example 6.4.8.
Compute the p-value and draw a conclusion about whether the search algorithms have different performances.
Earlier, we found that the degrees of freedom for this \(3\times 2\) table is 2. The p-value corresponds to the area under the chi-square curve with 2 degrees of freedom to the right of \(\chi^2=6.120\text{.}\) Using a calculator, we find that the p-value = 0.047. Using an \(\alpha=0.05\) significance level, we reject \(H_0\text{.}\) That is, the data provide convincing evidence that there is some difference in performance among the algorithms.
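On a computer, the same upper-tail area can be found with a chi-square survival function; the sketch below assumes the scipy library is available.

# P-value: area under the chi-square curve (df = 2) to the right of 6.12.
from scipy.stats import chi2

p_value = chi2.sf(6.12, df=2)  # sf = survival function = upper-tail area
print(round(p_value, 3))       # ~0.047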
Notice that the conclusion of the test is that there is some difference in performance among the algorithms. This chi-square test does not tell us which algorithm performed better than the others. To answer this question, we could compare the relevant proportions or construct bar graphs. The proportion of queries in each group that resulted in a new search can be calculated as
\begin{gather*} \text{ current: } \frac{1489}{5000} = 0.298 \qquad \text{ test 1: } \frac{751}{2500} = 0.300 \qquad \text{ test 2: } \frac{682}{2500} = 0.273\text{.} \end{gather*}
This suggests that the test 2 algorithm performed better than the current and test 1 algorithms, since a new search signals dissatisfaction with the initial results; however, to formally test this specific claim we would need to use a test that includes a multiple comparisons correction, which is beyond the scope of this textbook.
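These proportions are quick to reproduce in software; the short Python check below simply mirrors the arithmetic above.

# Proportion of queries that resulted in a new search, by group.
for name, new_searches, n in [("current", 1489, 5000),
                              ("test 1", 751, 2500),
                              ("test 2", 682, 2500)]:
    print(name, round(new_searches / n, 3))  # 0.298, 0.3, 0.273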
A careful reader may have noticed that when there are exactly 2 random samples or treatments and the counts can be arranged in a \(2\times 2\) table, both a chi-square test for homogeneity and a 2-proportion Z-test could apply. In this case, the chi-square test for homogeneity and the two-sided 2-proportion Z-test are equivalent, meaning that they produce the same p-value. 4
Sometimes the success-failure condition for the Z-test is weakened to require the number of successes and failures to be at least 5, making it consistent with the chi-square condition that expected counts must at least 5.
\(\chi^2\) test of homogeneity.
When there are multiple samples or treatments and we are comparing the distribution of a categorical variable across several groups, e.g. comparing the distribution of rural/urban/suburban dwellers among 4 states,
Identify: Identify the hypotheses and the significance level, \(\alpha\text{.}\)
\(H_0\text{:}\) The distribution of [...] is the same for each population/treatment.
\(H_A\text{:}\) The distribution of [...] is not the same for each population/treatment.
Choose: Choose the correct test procedure and identify it by name.
Here we choose the \(\chi^2\) test of homogeneity.
Check: Check that the test statistic follows a chi-square distribution.
Data come from multiple random samples or from multiple randomly assigned treatments.
All expected counts are \(\ge 5\) (calculate and record expected counts).
Calculate: Calculate the \(\chi^2\)-statistic, \(df\text{,}\) and p-value.
test statistic: \(\chi^2 =\sum{ \frac{\text{ (observed } - \text{ expected } )^2}{\text{ expected } }}\)
\(df = (\# \text{ of rows } - 1) \times (\# \text{ of columns } - 1)\)
p-value = (area to the right of \(\chi^2\)-statistic with the appropriate \(df\))
Conclude: Compare the p-value to \(\alpha\text{,}\) and draw a conclusion in context.
If the p-value is \(\lt \alpha\text{,}\) reject \(H_0\text{;}\) there is sufficient evidence that [\(H_A\) in context].
If the p-value is \(> \alpha\text{,}\) do not reject \(H_0\text{;}\) there is not sufficient evidence that [\(H_A\) in context].
Example 6.4.9.
In an experiment 5 , each individual was asked to be a seller of an iPod (a product commonly used to store music before smartphones). The participant received $10 + 5% of the sale price for participating. The iPod they were selling had frozen twice in the past inexplicably but otherwise worked fine. Unbeknownst to the participants, who were the sellers in the study, the buyers were collaborating with the researchers to evaluate the influence of different questions on the likelihood of getting the sellers to disclose the past issues with the iPod. The scripted buyers started with "Okay, I guess I'm supposed to go first. So you've had the iPod for 2 years ..." and ended with one of three questions:
General: What can you tell me about it?
Positive Assumption: It doesn't have any problems, does it?
Negative Assumption: What problems does it have?
https://www.acrwebsite.org/volumes/1012889/volumes/v40/NA-40
The outcome variable is whether the participant discloses or hides the problem with the iPod.
General Positive Assump. Negative Assump.
Response Disclose 2 23 36
Hide 71 50 37
Total 73 73 73
Does the phrasing of the question affect how likely individuals are to disclose the problems with the iPod? Carry out an appropriate test at the 0.05 significance level.
Identify: We will test the following hypotheses at the \(\alpha=0.05\) significance level.
\(H_0\text{:}\) The likelihood of disclosing the problem is the same for each question type.
\(H_A\text{:}\) The likelihood of disclosing the problem is not the same for each question type.
Choose: We want to know if the distribution of disclose/hide is the same for each of the three question types, so we want to carry out a chi-square test for homogeneity.
Check: This is an experiment in which there were three randomly allocated treatments. Here a treatment corresponds to a question type. All values in the table of expected counts are \(\ge\) 5. Table of expected counts:
Response Disclose 20.3 20.3 20.3
Hide 52.7 52.7 52.7
Calculate: Using technology, we get \(\chi^2 = 40.1\)
\(df = (\# \text{ of rows } - 1) \times (\# \text{ of columns } - 1) = 2\times 1 = 2\)
The p-value is the area under the chi-square curve with 2 degrees of freedom to the right of \(\chi^2=40.1\text{.}\) Thus, the p-value is almost 0.
Conclude: Because the p-value \(\approx\) 0 \(\lt \alpha\text{,}\) we reject \(H_0\text{.}\) We have strong evidence that the likelihood of disclosing the problem is not the same for each question type.
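For readers using software instead of a calculator, the whole test in this example can be cross-checked in one call; the sketch below assumes scipy is installed, and correction=False matches the uncorrected chi-square statistic used in this section.

# Chi-square test of homogeneity on the iPod disclosure table.
from scipy.stats import chi2_contingency

ipod = [[ 2, 23, 36],  # disclose: general, positive assump., negative assump.
        [71, 50, 37]]  # hide

chi_sq, p_value, df, expected = chi2_contingency(ipod, correction=False)
print(round(chi_sq, 1), df)  # 40.1 2
print(p_value)               # ~1.9e-09, i.e. essentially 0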
Checkpoint 6.4.10.
If an error was made in the test in the previous example, would it have been a Type I error or a Type II error? 6
In this test, the p-value was less than \(\alpha\text{,}\) so we rejected \(H_0\text{.}\) If \(H_0\) is in fact true, and we reject it, that would be committing a Type I error. We could not have made a Type II error, because a Type II error involves not rejecting \(H_0\text{.}\)
Subsection 6.4.5 The chi-square test of independence for two-way tables
Often, instead of having separate random samples or treatments, we have just one sample and we want to look at the association between two variables. When these two variables are categorical, we can arrange the responses in a two-way table.
In Chapter 3 we looked at independence in the context of probability. Here we look at independence in the context of inference. We want to know if any observed association is due to random chance or if there is evidence of a real association in the population that the sample was taken from. To answer this, we use a chi-square test for independence. The chi-square test of independence applies when there is only one random sample and there are two categorical variables. The null claim is always that the two variables are independent, while the alternate claim is that the variables are dependent.
Example 6.4.11.
Table 6.4.12 summarizes the results of a Pew Research poll 7 . A random sample of adults in the U.S. was taken, and each was asked whether they approved or disapproved of the job being done by President Obama, Democrats in Congress, and Republicans in Congress. The results are shown in Table 6.4.12. We would like to determine if the three groups and the approval ratings are associated. What are appropriate hypotheses for such a test?
https://www.people-press.org/2012/03/14/romney-leads-gop-contest-trails-in-matchup-with-obama/
\(H_{0}\text{:}\) The group and their ratings are independent. (There is no difference in approval ratings between the three groups.)
\(H_{A}\text{:}\) The group and their ratings are dependent. (There is some difference in approval ratings between the three groups, e.g. perhaps Obama's approval differs from Democrats in Congress.)
             Obama   Democrats   Republicans   Total
Approve        842         736           541    2119
Disapprove     616         646           842    2104
Total         1458        1382          1383    4223
Table 6.4.12. Pew Research poll results of a March 2012 poll.
Conditions for the chi-square test of independence.
There are two conditions that must be checked before performing a chi-square test of independence. If these conditions are not met, this test should not be used.
One random sample with two variables/questions. The data must be arrived at by taking a random sample. After the data is collected, it is separated and categorized according to two variables and can be organized into a two-way table.
All expected counts at least 5. All of the cells in the two-way table must have at least 5 expected cases assuming the null hypothesis is true.
First, we observe that the data came from a random sample of adults in the U.S. Next, let's compute the expected values that correspond to Table 6.4.12, if the null hypothesis is true, that is, if group and rating are independent.
The expected count for row one, column one is found by multiplying the row one total (2119) and column one total (1458), then dividing by the table total (4223): \(\frac{2119\times 1458}{4223} = 731.6\text{.}\) Similarly for the first column and the second row: \(\frac{2104\times 1458}{4223} = 726.4\text{.}\) Repeating this process, we get the expected counts:
             Obama   Congr. Dem.   Congr. Rep.
Approve      731.6         693.5         694.0
Disapprove   726.4         688.5         689.0
The table above gives us the number we would expect for each of the six combinations if group and rating were really independent. Because all of the expected counts are at least 5 and there is one random sample, we can carry out the chi-square test for independence.
The chi-square test of independence and the chi-square test of homogeneity both involve counts in a two-way table. The chi-square statistic and the degrees of freedom are calculated in the same way.
Calculate the chi-square statistic.
We calculate \(\frac{(\text{ obs } - \text{ exp } )^2}{\text{ exp } }\) for each of the six cells in the table. Adding the results of each cell gives the chi-square test statistic.
\begin{align*} \chi^2 =\amp \sum{\frac{(\text{ obs } - \text{ exp } )^2}{\text{ exp } }}\\ =\amp \frac{(842-731.6)^2}{731.6} +\cdots\\ =\amp 16.7 + \cdots = 106.4 \end{align*}
Find the p-value for the test and state the appropriate conclusion.
We must first find the degrees of freedom for this chi-square test. Because there are 2 rows and 3 columns, the degrees of freedom is \(df=(2-1)\times (3-1) = 2\text{.}\) We find the area to the right of \(\chi^2=106.4\) under the chi-square curve with \(df=2\text{.}\) The p-value is extremely small, much less than 0.01, so we reject \(H_0\text{.}\) We have evidence that the three groups and their approval ratings are dependent.
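As a check on the arithmetic above, the expected counts and the statistic can be computed directly from the row and column totals; a short Python sketch (numpy and scipy assumed):

```python
# Sketch: expected counts and chi-square statistic for the Pew approval data.
import numpy as np
from scipy.stats import chi2

observed = np.array([[842, 736, 541],    # Approve
                     [616, 646, 842]])   # Disapprove

row_totals = observed.sum(axis=1)  # [2119, 2104]
col_totals = observed.sum(axis=0)  # [1458, 1382, 1383]
total = observed.sum()             # 4223

# expected count = (row total) x (column total) / (table total)
expected = np.outer(row_totals, col_totals) / total
print(expected.round(1))  # [[731.6 693.5 694. ] [726.4 688.5 689. ]]

stat = ((observed - expected) ** 2 / expected).sum()
print(round(stat, 1))       # 106.4
print(chi2.sf(stat, df=2))  # ~8e-24, effectively 0
```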
\(\chi^2\) test of independence.
When there is one sample and we are looking for association or dependence between two categorical variables, e.g. testing for an association between gender and political party,
\(H_0\text{:}\) [variable 1] and [variable 2] are independent.
\(H_A\text{:}\) [variable 1] and [variable 2] are dependent.
Here we choose the \(\chi^2\) test of independence.
Data come from a single random sample.
A 2011 survey asked 806 randomly sampled adult Facebook users about their Facebook privacy settings. One of the questions on the survey was, "Do you know how to adjust your Facebook privacy settings to control what people can and cannot see?" The responses are cross-tabulated based on gender.
Response   Yes          288   378    666
           No            61    62    123
           Not sure      10     7     17
Total                   359   447    806
Carry out an appropriate test at the 0.10 significance level to see if there is an association between gender and knowing how to adjust Facebook privacy settings to control what people can and cannot see.
\(H_0\text{:}\) Gender and knowing how to adjust Facebook privacy settings are independent.
\(H_A\text{:}\) Gender and knowing how to adjust Facebook privacy settings are dependent.
Choose: Two variables were recorded on the respondents: gender and response to the question regarding privacy settings. We want to know if these variables are associated / dependent, so we will carry out a chi-square test of independence.
Check: According to the problem, there was one random sample taken. All values in the table of expected counts are \(\ge\) 5. Table of expected counts:
Response   Yes        296.64   369.36
           No          54.785   68.215
           Not sure     7.572    9.428
Calculate: Using technology, we get \(\chi^2 = 3.13\text{.}\) The degrees of freedom for this test is given by: \(df = (\# \text{ of rows } - 1) \times (\# \text{ of columns } - 1) = 2\times 1 = 2\)
The p-value is the area under the chi-square curve with 2 degrees of freedom to the right of \(\chi^2=3.13\text{.}\) Thus, the p-value = 0.209.
Conclude: Because the p-value = 0.209 \(> \alpha\text{,}\) we do not reject \(H_0\text{.}\) We do not have sufficient evidence that gender and knowing how to adjust Facebook privacy settings are dependent.
In context, interpret the p-value of the test in the previous example. 8
The p-value in this test corresponds to the area to the right of \(\chi^2 = 3.13\) under the chi-square curve with 2 degrees of freedom. It is the probability of getting a \(\chi^2\)-statistic larger than 3.13 if \(H_0\) were true and assuming a chi-square model with 2 degrees of freedom holds. Equivalently, it is the probability of our observed counts being this different from the expected counts, relative to the expected counts, if gender and response really are independent (and the model holds).
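Since the p-value is just the upper-tail area under the chi-square curve, it can also be computed directly; for example (scipy assumed):

```python
# Sketch: upper-tail area under the chi-square curve with df = 2.
from scipy.stats import chi2

p_value = chi2.sf(3.13, df=2)  # area to the right of 3.13
print(round(p_value, 3))       # 0.209
```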
Subsection 6.4.6 Calculator: chi-square test for two-way tables
TI-83/84: Entering data into a two-way table.
Hit 2ND \(x^{-1}\) (i.e. MATRIX).
Right arrow to EDIT.
Hit 1 or ENTER to select matrix A.
Enter the dimensions by typing #rows, ENTER, #columns, ENTER.
Enter the data from the two-way table.
Chi-square test of homogeneity and independence.
Use STAT, TESTS, \(\chi^2\)-Test.
First enter two-way table data as described in the previous box.
Choose STAT.
Right arrow to TESTS.
Down arrow and choose C:\(\chi^2\)-Test.
Down arrow, choose Calculate, and hit ENTER, which returns
\(\chi^2\) chi-square test statistic
p p-value
df degrees of freedom
TI-83/84: Finding the expected counts
First enter two-way table data as described previously.
Carry out the chi-square test of homogeneity or independence as described in previous box.
Hit 2ND \(x^{-1}\) (i.e. MATRIX), then hit 2 to see matrix B. This matrix contains the expected counts.
Casio fx-9750GII: Chi-square test of homogeneity and independence.
Navigate to STAT (MENU button, then hit the 2 button or select STAT).
Choose the TEST option (F3 button).
Choose the CHI option (F3 button).
Choose the 2WAY option (F2 button).
Enter the data into a matrix:
Hit \(\triangleright\)MAT (F2 button).
Navigate to a matrix you would like to use (e.g. Mat C) and hit EXE.
Specify the matrix dimensions: m is for rows, n is for columns.
Enter the data.
Return to the test page by hitting EXIT twice.
Enter the Observed matrix that was used by hitting MAT (F1 button) and the matrix letter (e.g. C).
Enter the Expected matrix where the expected values will be stored (e.g. D).
Hit the EXE button, which returns \(\chi^2\) (chi-square test statistic), p (p-value), and df (degrees of freedom).
To see the expected values of the matrix, go to \(\triangleright\)MAT (F6 button) and select the corresponding matrix.
Use Table 6.4.12 and a calculator to find the expected values and the \(\chi^2\)-statistic, \(df\text{,}\) and p-value for the chi-square test for independence.
Subsection 6.4.7 Section summary
When there are two categorical variables, rather than one, the data must be arranged in a two-way table and a \(\chi^2\) test of homogeneity or a \(\chi^2\) test of independence is appropriate.
These tests use the same \(\chi^2\)-statistic as the chi-square goodness of fit test, but instead of number of categories \(-\) 1, the degrees of freedom is (\(\# \text{ of rows } - 1)\times (\# \text{ of columns } -1\)). All expected counts must be at least 5.
When working with a two-way table, the expected count for each (row, column) combination is calculated as: expected count = \(\frac{(\text{ row total } )\times (\text{ column total } )}{\text{ table total } }\text{.}\)
The \(\chi^2\) test of homogeneity and the \(\chi^2\) test of independence are almost identical. The differences lie in the data collection method and in the hypotheses.
When there are multiple samples or treatments and we are comparing the distribution of a categorical variable across several groups, e.g. comparing the distribution of rural/urban/suburban dwellers among 4 states, the hypotheses can often be written as follows:
\(H_0\text{:}\) The distribution of the variable is the same for each population/treatment.
\(H_A\text{:}\) The distribution of the variable is not the same for each population/treatment.
We test these hypotheses at the \(\alpha\) significance level using a \(\chi^2\) test of homogeneity.
When there is one sample and we are looking for association or dependence between two categorical variables, e.g. testing for an association between gender and political party, the hypotheses can be written as:
\(H_0\text{:}\) [variable 1] and [variable 2] are independent.
\(H_A\text{:}\) [variable 1] and [variable 2] are dependent.
We test these hypotheses at the \(\alpha\) significance level using a \(\chi^2\) test of independence.
Both of the \(\chi^2\) tests for two-way tables require that all expected counts are \(\ge\) 5.
The chi-square statistic is: \(\chi^2 =\sum{ \frac{(\text{ obs } - \text{ exp } )^2}{\text{ exp } }}\)
\(df =\) (# of rows \(-\) 1)(# of cols \(-\) 1)
The p-value is the area to the right of \(\chi^2\)-statistic under the chi-square curve with the appropriate \(df\text{.}\)
Exercises 6.4.8 Exercises
1. Quitters.
Does being part of a support group affect the ability of people to quit smoking? A county health department enrolled 300 smokers in a randomized experiment. 150 participants were assigned to a group that used a nicotine patch and met weekly with a support group; the other 150 received the patch and did not meet with a support group. At the end of the study, 40 of the participants in the patch plus support group had quit smoking while only 30 smokers had quit in the other group.
Create a two-way table presenting the results of this study.
Answer each of the following questions under the null hypothesis that being part of a support group does not affect the ability of people to quit smoking, and indicate whether the expected values are higher or lower than the observed values.
How many subjects in the "patch + support" group would you expect to quit?
How many subjects in the "patch only" group would you expect to not quit?
(a) Two-way table:
Treatment Yes No Total
Patch + support group 40 110 150
Only patch 30 120 150
Total 70 230 300
(b-i) \(E_{row_{1},col_{1}} = \frac{(\text{row 1 total})\times(\text{col 1 total})}{\text{table total}} = \frac{150 \times 70}{300} = 35\text{.}\) This is lower than the observed value of 40.
(b-ii) \(E_{row_{2},col_{2}} = \frac{(\text{row 2 total})\times(\text{col 2 total})}{\text{table total}} = \frac{150 \times 230}{300} = 115\text{.}\) This is lower than the observed value of 120.
2. Full body scan, Part II.
The table below summarizes a data set we first encountered in Exercise 6.2.9.10 regarding views on full-body scans and political affiliation. The differences in each political group may be due to chance. Complete the following computations under the null hypothesis of independence between an individual's party affiliation and his support of full-body scans. It may be useful to first add on an extra column for row totals before proceeding with the computations.
                                      Party Affiliation
                               Republican   Democrat   Independent
Answer   Should                       264        299           351
         Should not                    38         55            77
         Don't know/No answer          16         15            22
How many Republicans would you expect to not support the use of full-body scans?
How many Democrats would you expect to support the use of full- body scans?
How many Independents would you expect to not know or not answer?
3. Offshore drilling, Part III.
The table below summarizes a data set we first encountered in Exercise 6.2.9.7 that examines the responses of a random sample of college graduates and non-graduates on the topic of oil drilling. Complete a chi-square test for these data to check whether there is a statistically significant difference in responses from college graduates and non-graduates.
              College Grad
               Yes     No
Support        154    132
Oppose         180    126
Do not know    104    131
\(H_{0}:\) The opinion of college grads and non-grads is not different on the topic of drilling for oil and natural gas off the coast of California. \(H_{A}:\) Opinions regarding drilling for oil and natural gas off the coast of California are associated with earning a college degree.
\begin{align*} E_{row_{1},col_{1}}=151.5 \amp E_{row_{1},col_{2}}=134.5\\ E_{row_{2},col_{1}}=162.1 \amp E_{row_{2},col_{2}}=143.9\\ E_{row_{3},col_{1}}= 124.5\amp E_{row_{3},col_{2}}=110.5 \end{align*}
Independence: The samples are both random, unrelated, and from less than 10% of the population, so independence between observations is reasonable. Sample size: All expected counts are at least 5. \(\chi^2 = 11.47, df = 2 \rightarrow \text{p-value } = 0.003\text{.}\) Since the p-value \(< \alpha\text{,}\) we reject \(H_{0}\text{.}\) There is strong evidence that there is an association between support for off-shore drilling and having a college degree.
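The reported statistic and p-value can be verified with software; a short Python sketch (scipy assumed):

```python
# Sketch: chi-square test for the offshore drilling data
# (rows: Support / Oppose / Do not know; columns: college grad yes / no).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[154, 132],
                     [180, 126],
                     [104, 131]])

stat, p, df, expected = chi2_contingency(observed)
print(round(stat, 2), df, round(p, 3))  # 11.47 2 0.003
```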
4. Parasitic worm.
Lymphatic filariasis is a disease caused by a parasitic worm. Complications of the disease can lead to extreme swelling and other serious health problems. Here we consider results from a randomized experiment that compared three different drug treatment options to clear people of this parasite, which people are working to eliminate entirely. The results for the second year of the study are given below: 9
Christopher King et al. "A Trial of a Triple-Drug Treatment for Lymphatic Filariasis". In: New England Journal of Medicine 379 (2018), pp. 1801-1810.
                     Clear at Year 2   Not Clear at Year 2
Three drugs                 52                   2
Two drugs                   31                  24
Two drugs annually          42                  14
Set up hypotheses for evaluating whether there is any difference in the performance of the treatments, and also check conditions.
Statistical software was used to run a chi-square test, which output:
\begin{align*} \amp X^2 = 23.7 \amp \amp df = 2 \amp \amp \text{ p-value } = \text{ 7.2e-6 } \end{align*}
Use these results to evaluate the hypotheses from part (a), and provide a conclusion in the context of the problem.
Subsection 6.4.9 Chapter Highlights
Calculating a confidence interval or a test statistic and p-value is generally done with statistical software. It is important, then, to focus not on the calculations, but rather on
choosing the correct procedure
understanding when the procedures do or do not apply, and
interpreting the results.
Choosing the correct procedure requires understanding the type of data and the method of data collection. All of the inference procedures in Chapter 6 are for categorical variables. Here we list the five tests encountered in this chapter and when to use them.
1-proportion Z-test
1 random sample, a yes/no variable
Compare the sample proportion to a fixed / hypothesized proportion.
2-proportion Z-test
2 independent random samples or randomly allocated treatments
Compare two populations or treatments to each other with respect to one yes/no variable; e.g. comparing the proportion over age 65 in two distinct populations.
\(\chi^2\) goodness of fit test
1 random sample, a categorical variable (generally at least three categories)
Compare the distribution of a categorical variable to a fixed or known population distribution; e.g. looking at distribution of color among M&M's.
\(\chi^2\) test of homogeneity
2 or more independent random samples or randomly allocated treatments
Compare the distribution of a categorical variable across several populations or treatments; e.g. party affiliation over various years, or patient improvement compared over 3 treatments.
\(\chi^2\) test of independence
1 random sample, 2 categorical variables
Determine if, in a single population, there is an association between two categorical variables; e.g. grade level and favorite class.
Even when the data and data collection method correspond to a particular test, we must verify that conditions are met to see if the assumptions of the test are reasonable. All of the inferential procedures of this chapter require some type of random sample or process. In addition, the 1-proportion Z-test/interval and the 2-proportion Z-test/interval require that the success-failure condition is met and the three \(\chi^2\) tests require that all expected counts are at least 5.
Finally, understanding and communicating the logic of a test and being able to accurately interpret a confidence interval or p-value are essential. For a refresher on this, review Chapter 5: Foundations for inference.
arXiv:2112.07978 (quant-ph)
[Submitted on 15 Dec 2021 (v1), last revised 16 Dec 2021 (this version, v2)]
Title:Entanglement between superconducting qubits and a tardigrade
Authors:K. S. Lee, Y. P. Tan, L. H. Nguyen, R. P. Budoyo, K. H. Park, C. Hufnagel, Y. S. Yap, N. Møbjerg, V. Vedral, T. Paterek, R. Dumke
Abstract: Quantum and biological systems are seldom discussed together as they seemingly demand opposing conditions. Life is complex, "hot and wet" whereas quantum objects are small, cold and well controlled. Here, we overcome this barrier with a tardigrade -- a microscopic multicellular organism known to tolerate extreme physiochemical conditions via a latent state of life known as cryptobiosis. We observe coupling between the animal in cryptobiosis and a superconducting quantum bit and prepare a highly entangled state between this combined system and another qubit. The tardigrade itself is shown to be entangled with the remaining subsystems. The animal is then observed to return to its active form after 420 hours at sub 10 mK temperatures and pressure of $6\times 10^{-6}$ mbar, setting a new record for the conditions that a complex form of life can survive.
Subjects: Quantum Physics (quant-ph); Biological Physics (physics.bio-ph)
Cite as: arXiv:2112.07978 [quant-ph]
(or arXiv:2112.07978v2 [quant-ph] for this version)
Journal reference: New J. Phys. 24 12302 (2022)
Related DOI: https://doi.org/10.1088/1367-2630/aca81f
From: Kaisheng Lee
[v1] Wed, 15 Dec 2021 09:09:49 UTC (1,737 KB)
[v2] Thu, 16 Dec 2021 13:06:20 UTC (1,737 KB)
Chapter Review
9 Math workbook
Write as a single power. Then, evaluate the expression.
5^6 \div 5 \div 5^2
4^6 \div 4^5 \times 4^2
[(-3)^2]^3
\dfrac{(5^4)^3}{5^5 \times 5^4}
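Answers to parts like these can be checked numerically by evaluating each expression directly and as a single power; a quick Python sketch:

```python
# Sketch: checking the "write as a single power" answers numerically.
print(5**6 / 5 / 5**2, 5**3)            # 125.0  125  -> 5^3
print(4**6 / 4**5 * 4**2, 4**3)         # 64.0   64   -> 4^3
print(((-3)**2)**3, (-3)**6)            # 729    729  -> (-3)^6
print((5**4)**3 / (5**5 * 5**4), 5**3)  # 125.0  125  -> 5^3
```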
E. coli is a type of bacteria that can cause dangerous health problems. It doubles every 20 min. The initial population of a sample of E. coli is 400.
a) Copy and complete this table.
b) Construct a graph of population versus time. Use a smooth curve to connect the points. Describe the shape of the graph.
c) What will the population be after 5 h?
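Part (c) reduces to counting doublings; a quick Python check:

```python
# Sketch: E. coli population doubling every 20 min, starting at 400.
initial = 400
doublings = (5 * 60) // 20       # 5 h = 300 min -> 15 doublings
print(initial * 2 ** doublings)  # 13107200
```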
a^5b^4 \times a^3b^2
\dfrac{d^6 \times d^5}{d^7}
\dfrac{m^{10}}{m^3 \times m^5}
\dfrac{(y^6)^3}{(y^5)^2}
Identify the coefficient and variable part of each term.
-5y
4a^5b^3
\dfrac{2}{3}x^2y^3
In a hockey tournament, teams are awarded 4 points for a win and 2 points for an overtime win.
a) Write an expression that describes the number of points a team has.
b) Use your expression to find the number of points earned by a team that has five wins and two overtime wins.
State the degree of each term.
5x^4
-7m^5
a^3b^2c
State the degree of each polynomial.
5x+4
3y^4-2
5m^2+3m+6
6a^3-5a^2+4a-3
Classify each pair of terms as either like or unlike.
3x and -7x
4y and 5z
4ab and -2ab
3x^2y and 4xy^2
5xy and 3yx
5m^2 and 8m^2
Identify the like terms in each set.
5a^2, -3b, 2d, 6x^2, 7b^3, -5x^2, 4a, 6c
6y^2, 5y^2, -4y^3, 3, -4y^2, -2y^3
Simplify by collecting like terms.
3x+5y+4x+6y
5d+3m-4d-5m
2a^2-5a+3-a^2+2a-6
3w^2+2wy-y^2-2w^2-2wy+4y^2
4d-8e-6f+3d+5e-10f
6a^3-4ab+5b^2-3+5a^3-3ab
(5x+3)+(6x-4)
(4y-3)+(5y-2)
(3p^2+5p+4)+(7p^2-4p-3)
(4m^2-3mn-2n^2)-(m^2+mn-5n^2)
(6a+8b)+(3a-4b)-(5a-3b)
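Answers to simplification questions like these can be checked with a computer algebra system; a sketch using sympy (assumed installed):

```python
# Sketch: checking like-term simplification with sympy.
from sympy import symbols, expand

x, y, m, n = symbols('x y m n')
print(expand(3*x + 5*y + 4*x + 6*y))  # 7*x + 11*y
print(expand((4*m**2 - 3*m*n - 2*n**2) - (m**2 + m*n - 5*n**2)))
# 3*m**2 - 4*m*n + 3*n**2
```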
A rectangular cake has dimensions 4x by 3x+2. Find a simplified expression for its perimeter.
Expand.
5(x+2)
-4(y-3)
2m(3m+4)
-4g(2g-3)
Expand and simplify.
4(2x+3y)+5(3x+6y)
3(4y-2w)-3(2y+1)
4(3a+2b)+3(2a-3b)-(a+2b)
-4[3-2(c+5)-4c]
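The expansions can be checked the same way; a sympy sketch for a few of the parts above:

```python
# Sketch: checking the distributive-law expansions with sympy.
from sympy import symbols, expand

x, y, c, g = symbols('x y c g')
print(expand(4*(2*x + 3*y) + 5*(3*x + 6*y)))  # 23*x + 42*y
print(expand(-4*g*(2*g - 3)))                 # -8*g**2 + 12*g
print(expand(-4*(3 - 2*(c + 5) - 4*c)))       # 24*c + 28
```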
Non-malleable file encryption using AES XTS 256?
I'm looking to do file encryption of a bunch of individual files, some small, some quite large. The files will be write-once/read-many, so I could use CBC, however, since the read-access needs to have seekability, XTS seems a better fit. One issue, as I understand it, is that XTS (or CBC for that matter) is malleable... i.e. if the encrypted cipher text is modified, an attacker could potentially do some targeted things to files. So I was thinking surely this problem is already solved but it wasn't clear to me what the standard was, if any. Performance is a concern, so one thought I had was simply running a CRC on each XTS sector and including the CRC result in the plaintext for encryption of that XTS sector (e.g. just expand each sector by the 4 octets). Thus, if anything in the sector was modified by an attacker, the decrypt would fail (i.e. the CRC wouldn't likely match). Is there a smarter way, or a standard that's "fast"?
EDIT: it occurs to me that if the files are rewritable (which currently they are not) a CRC on each cluster in XTS mode is still open to an attack where an attacker can guess at the plaintext and write a cluster to a known location in the file and see if it matches the cluster that's already there. Only way to prevent that is to have a random IV per block as well, that changes per write, I think.
encryption aes modes-of-operation file-encryption xts
Actually, XTS is not what is normally named "malleable". You can change a ciphertext block and the corresponding plaintext block will change too, but the result of this change is not predictable by the attacker without knowing the key. For CBC, changing a ciphertext block will change the corresponding plaintext block in an unpredictable way and the next plaintext block in a predictable way. – Paŭlo Ebermann Jun 25 '13 at 15:12
You should use a MAC, not a CRC (which is not a cryptographic operation), and preferably on the ciphertext, not the plaintext. – Paŭlo Ebermann Jun 25 '13 at 15:13
Thanks for the comments... it was my impression that a MAC operation would be high-cost; CRC is low(er) cost, but provides no protection unless it too is protected. But to your point about malleability... perhaps MAC or CRC isn't really needed at all and it's fine how it is? – mark Jun 25 '13 at 16:05
Quite likely calculating a MAC is cheaper than the IO-operations to write the block. I'm not sure if there is any cryptanalysis of an encrypted CRC, I suppose this also depends on the mode of operation. In CBC mode you can change the first block arbitrarily by manipulating the IV. Use a MAC (which should include the IV, by the way, and everything else which might be relevant to the encryption) and you are on the safe side. – Paŭlo Ebermann Jun 25 '13 at 19:07
@PaŭloEbermann: I felt your comment above was a better answer to this old question than any of the existing answers below, so I wrote an actual answer based on it. – Ilmari Karonen Aug 7 '18 at 14:08
As Paŭlo Ebermann notes in a comment above, the time cost of a proper MAC calculation is almost certainly negligible compared to the cost of reading or writing the data on the disk.
Applying a MAC to each encrypted sector (and the sector number used to derive the IV) will fully protect your data against most forms of tampering. The main drawback of this method, of course, is the need to store the MAC outputs somewhere so that they can be verified when the data is read back. But, of course, the same drawback applies to your proposed "encrypted CRC" method as well.
As for the security of using an encrypted CRC as a MAC, it probably depends on details of the way the encryption is done (and I'm not aware of any security analyses of such a scheme in combination with XTS mode).
In particular, if the sector data was encrypted using CTR or OFB mode (or something similarly malleable), then the linearity of CRCs would allow bit-flipping attacks to be done in a way that does not change the CRC checksum. As Wikipedia notes, this was one of the main weaknesses in the (by now thoroughly broken) WEP protocol, which used an encrypted CRC for integrity protection.
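That linearity is easy to demonstrate with the standard-library CRC-32; for equal-length messages, CRC-32 is affine over XOR, which is exactly what lets an attacker flip plaintext bits and patch the checksum:

```python
# Sketch: for equal-length messages a, b and a zero string of that length,
#   crc(a ^ b) == crc(a) ^ crc(b) ^ crc(zeros)
import os
import zlib

a, b = os.urandom(64), os.urandom(64)
a_xor_b = bytes(p ^ q for p, q in zip(a, b))
zeros = bytes(64)

assert zlib.crc32(a_xor_b) == zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(zeros)
print("CRC-32 linearity holds")
```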
On the other hand, if one were to encrypt both the data and the CRC using a wide-block encryption mode like CMC or EME, this would probably be secure, at least if the CRC was long enough to resist brute force. Then again, in that case we could presumably achieve equivalent security just by replacing the CRC with a block of zero bytes of the same length, and checking that they're still zero after decryption. And it would presumably still be slower than XTS plus a proper MAC.
Note that one form of tampering that even proper per-sector MACs will not prevent is a "reversion attack" where the attacker replaces the entire contents of a sector with previously captured ciphertext from the same sector, in effect reverting the corresponding plaintext data to an earlier version. To prevent such attacks, you could for example compute an additional MAC (preferably of the Carter–Wegman type, such as VMAC or Poly1305, since those can be efficiently updated when part of the input data changes) over the per-sector MACs, and verify that global MAC when the disk is mounted.
Of course, even such a global MAC will still not protect you against whole-disk reversion attacks, where the entire content of the disk is replaced by an earlier copy of it. In general, there is no way to prevent such attacks unless some data (like a copy of the global MAC) can be securely stored somewhere outside the disk itself.
Also, avoiding accidental MAC verification failures e.g. in the event of a power loss during a write is possible but rather tricky. Basically, it would require some kind of a secure intent log where you first write an (encrypted and MACed) record of the data you intend to write and the way this will affect the global MAC, flush it to disk, and only then perform the actual write to the rest of the disk. Done properly, that should ensure that, at any given point in time, either a) the rest of the disk is in a consistent state and properly MACed, or b) the intent log is consistent and properly MACed, and the rest of the disk can be brought to a consistent and properly MACed state by replaying the planned changes recorded in the intent log. Of course, if you do all that, as a nice side effect you'll also gain extra resiliency against accidental (as well as deliberate) data corruption.
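As a rough illustration of the per-sector encrypt-then-MAC bookkeeping described above (the sector cipher itself is abstracted away here; the key handling and storage layout are illustrative, not a standard):

```python
# Sketch: per-sector encrypt-then-MAC. The XTS encryption step is left
# abstract; the point is that the MAC covers the ciphertext *and* the
# sector number, so ciphertext cannot be moved between sectors undetected.
import hmac
import hashlib

MAC_KEY = b'\x00' * 32  # illustrative only; derive real keys properly

def sector_mac(sector_index: int, ciphertext: bytes) -> bytes:
    msg = sector_index.to_bytes(8, 'little') + ciphertext
    return hmac.new(MAC_KEY, msg, hashlib.sha256).digest()

def write_sector(store, macs, index, ciphertext):
    store[index] = ciphertext
    macs[index] = sector_mac(index, ciphertext)

def read_sector(store, macs, index):
    ct = store[index]
    if not hmac.compare_digest(macs[index], sector_mac(index, ct)):
        raise ValueError("sector %d failed authentication" % index)
    return ct
```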
Ilmari Karonen
This appears to be a better question to offer this answer to, in addition to Seth's great answer to the question: "What is the advantage of XTS over CBC mode (with diffuser)?".
You'll want to add authentication to the encryption. See paragraphs 2 and 4 below. Links to an article and source code for authenticated encryption are provided at the bottom of this answer.
In reference to Seth's answer:
Wikipedia's webpage on Disk encryption theory provides an up to date answer (compared to Seth's great answer, dated Dec 5 2012) about CBC (Cypher Block Chaining) and XTS (XEX-based tweaked-codebook mode with ciphertext stealing) along with other modes.
CMC and EME have been proposed to fix the problems with XTS. XTS mode is susceptible to data manipulation and tampering, so blocks require checksums or other means to detect tampering; it is also susceptible to traffic analysis, replay and randomization attacks.
CMC requires two passes and EME is patented, so neither is being considered by NIST, which uses XTS-AES; both have, however, been considered by the Security in Storage Working Group (SISWG).
Currently Authentication Encryption Modes are being considered. Authenticated encryption with associated data (AEAD) is a variant of AE where the data to be encrypted needs both authentication and integrity as opposed to just integrity. AEAD binds associated data (AD) to the ciphertext and to the context where it's supposed to appear, so that attempts to "cut-and-paste" a valid ciphertext into a different context are detected and rejected.
Other References:
"IEEE 1619-2018 - IEEE Standard for Cryptographic Protection of Data on Block-Oriented Storage Devices" (Published Date: 2019-01-25).
"IEEE 1619.1-2018 - IEEE Standard for Authenticated Encryption with Length Expansion for Storage Devices" (Published Date: 2019-01-25).
"IEEE 1619.2-2010 - IEEE Standard for Wide-Block Encryption for Shared Storage Media" (Published Date: 2011-03-08).
$$\begin{array}{l} \text{Other Modes:} & \\ \text{CMC} & \text{CBC Mask CBC} \\ \text{EME} & \text{ECB Mix ECB} \\ \text{HCH} & \text{Hash Encrypt Hash} \\ \text{HCTR} & \text{Hash Counter Hash} \\ \text{HEH} & \text{Hash ECB Hash} \\ \text{PEP} & \text{Polynomial Hash Encrypt Polynomial Hash} \\ \text{TET} & \text{Hash ECB Hash} \\ \text{ } & \\ \text{Acronyms:} & \\ \text{CBC} & \text{Cypher Block Chaining} \\ \text{ECB} & \text{Electronic Code Book} \\ \text{XEX} & \text{Xor–Encrypt–Xor} \end{array}$$
In the article: "Practical Cryptographic Data Integrity Protection with Full Disk Encryption - Extended Version" by Milan Broz, Mikulas Patocka, Vashek Matyas (Submitted on 1 Jul 2018) they explain on page 1:
"A major shortcoming of current FDE implementations is the absence of data integrity protection. Confidentiality is guaranteed by symmetric encryption algorithms, but the nature of length-preserving encryption (a plaintext sector has the same size as the encrypted one) does not allow for any metadata that can store integrity protection information.
Cryptographic data integrity protection is useful not only for detecting random data corruption (where a CRC-like solution may suffice) but also for providing a countermeasure to targeted data modification attacks. Currently deployed FDE systems provide no means for proving that data were written by the actual user. An attacker can place arbitrary data on the storage media to later harm the user."
"We introduce an algorithm-agnostic solution that provides both data integrity and confidentiality protection at the disk sector layer. Our open-source solution is intended for drives without any special hardware extensions and is based on per-sector metadata fields implemented in software. Our implementation has been included in the Linux kernel since the version 4.12. This is extended version of our article that appears in IFIP SEC 2018 conference proceedings.".
On page 3 and 4, section 2.2, "FDE Protection Types" they further detail:
Pure FDE - Length-preserving encryption that provides confidentiality only.
Authenticated FDE - Encryption that provides both confidentiality and integrity protection, but limited by COTS devices (no hardware for authentication).
HW-trusted - The ideal solution with confidentiality and integrity protection. It stores some additional information to external trusted storage in such a way that the system can detect data replay.
"Authenticated encryption enforces that a user cannot read tampered data but will see an authentication error. It not only stops any attempts to use tampered data on higher layers, but also helps a user to realize that the device is no longer trustworthy. An overview of the features among FDE types is summarized in Table 3.".
On page 5, section 3.2, "Authenticated Encryption" they continue:
"We have two options for integrity protection combined with device encryption: either to use Authenticated Encryption with Additional Data (AEAD) or to combine length-preserving encryption with an additional cryptographic integrity operation. The major difference is that for the combined mode, we can ignore integrity tags and decrypt the data without such tags. In the AEAD mode, the authentication is an integral part of decryption. Additionally, for the combined mode, we need to provide two separate keys (encryption and authentication), whereas the AEAD mode generally derives the authentication key internally. Both mentioned integrity methods calculate an authentication tag from the final ciphertext (encrypt-then-MAC).
The encryption operation output consists of the encrypted data and the authentication tag. Authentication mode with additional data (AEAD) calculates the authentication tag not only from the input data but also from additional metadata, called additional authentication data (AAD). Table 4 summarizes examples of the encryption modes mentioned in this text.".
References from page 20 of their article:
[25] CAESAR: Competition for Authenticated Encryption: Security, Applicability, and Robustness, 2016. https://competitions.cr.yp.to/caesar.html
[26] Hongjun Wu and Bart Preneel. AEGIS, A Fast Authenticated Encryption Algorithm (v1.1). Technical report, 2016. https://competitions.cr.yp.to/round3/aegisv11.pdf
[27] Hongjun Wu and Tao Huang. The Authenticated Cipher MORUS (v2). Technical report, 2016. https://competitions.cr.yp.to/round3/morusv2.pdf
A search for AEGIS256 (AES with authentication) turns up many results, as do searches for the other AEAD algorithms. Unfortunately there is currently no COTS hardware available, whether there is a website somewhere that sells an SDD (or HD) that implements AEAD entirely on the drive is something I'll need to do further research to determine.
It shouldn't be too difficult to use an AES-encrypting drive with sedutil and add authentication support to the PBA. For further examples of using sedutil see the VxLabs article: "Use the hardware-based full disk encryption of your TCG Opal SSD with msed".
Rob
Since 2016 there exists a dm-integrity target that can do simple CRC32 verification, or full blown HMAC-SHA1/SHA-256 with encryption keys, to ensure the integrity of the volume.
It's integrated with LUKS (if you use --type luks2), see cryptsetup man page for --integrity option, or use standalone with the integritysetup tool.
You can find some benchmarks in this FOSDEM presentation
Hubert Kario
I made an XEX based WDE appliance recently and from a file modification perspective, what happens is that if an attacker modifies any bit, then the result is a scrambling of a 16 byte (that is, the block size of AES) chunk of the resulting plaintext.
That... well... kinda blunts any potential bit-flipping attacks, as it means that they can't easily construct an alteration of their choosing.
In my implementation, since I was doing WDE, I was dealing with a 512 byte block at a time in isolation. In my case, I perturbed the nonce on a per-block basis. It's not clear what your data block size is going to be, but for XEX/XTS your random-access "chunks" are going to need to be done in such a blockwise fashion, and you should similarly perturb the nonce. Your nonce perturbation need not have a ton of cleverness, as step 1 is to run the nonce through AES ECB anyway.
nsayer
Firefly Optimization Based Noise Additive Privacy-Preserving Data Classification Technique to Predict Chronic Kidney Disease
Preet Kamal Kaur* | Kanwal Preet Singh Attwal | Harmandeep Singh
Department of Computer Science and Engineering, Punjabi University, Patiala, Punjab 147002, India
[email protected]
With the continuous advancements in Information and Communication Technology, healthcare data is stored in electronic form and accessed remotely according to requirements. However, there are negative impacts like unauthorized access, misuse, and stealing of the data, which violate the privacy concerns of patients. Sensitive information, if not protected, can become the basis for linkage attacks. This paper proposes an improved Privacy-Preserving Data Classification System for the Chronic Kidney Disease dataset. The focus of the work is to predict patients' disease while preventing a privacy breach of their sensitive information. To accomplish this goal, a metaheuristic Firefly Optimization Algorithm (FOA) is deployed for random noise generation (instead of fixed noise) and this noise is added to the least significant bits of sensitive data. Then, a random forest classifier is applied on both the original and perturbed datasets to predict the disease. Even after perturbation, the technique preserves the required significance of prediction results by maintaining the balance between utility and security of data. In order to validate the results, the proposed method is compared with the existing technology on the basis of various evaluation parameters. Results show that the proposed technique is suitable for healthcare applications where both privacy protection and accurate prediction are necessary conditions.
chronic kidney disease, data perturbation, firefly optimization algorithm, privacy-preserving data classification, random forest
Data mining has played a very important role in the development of the healthcare sector. Mining helps to extract the essential outcomes which further support the decision-making process and also allow the creation of necessary plans or policies needed for trouble-free functioning of healthcare operations. Mining enables gaining insights into various diseases, predicting the diseases of patients at an earlier stage and also assisting doctors in providing timely treatment to patients. Accordingly, it can be perceived that health-related data poses high value for researchers as well as for healthcare workers. Through access to this information, a very good relation can be maintained between patient and clinician (doctor), enabling effective and correct conduction of healthcare practices. But at the same time, a patient's data may contain sensitive attributes which should not be disclosed, such as Aadhar number, age, address, hospital visits, lab results, some sensitive disease or its cause etc. So, the issue of privacy in data mining needs to be addressed and confidentiality of data should be maintained while accessing or analyzing it. To fetch required information from data, discover unknown patterns from it and also to prevent sensitive information from disclosure, mining techniques are combined with privacy preservation approaches, giving rise to the field of Privacy Preserving Data Mining (PPDM). Original data of patients is transformed in such a way that adversaries can't access it and still, it is useful enough to take out the required outcomes.
In the current scenario, healthcare organization uses information and communication technology to monitor the patients [1]. Patient's data is collected from the Electronic Health Records (EHR), sensors, Radio Frequency Identification Tag (RFID) that helps in decision making and improves the quality of service [2]. Records contain the patient's personal information (such as identity data) along with medical details. Thus, illegal access to the records affects the privacy of patients [3]. To safeguard the sensitive data from leakage, various data disturbance approaches can be utilized such as anonymization, cryptography or perturbation [4]. Anonymization method makes the individual data indistinguishable using suppression and generalization techniques. In the cryptographic technique, sensitive information is encrypted using a secret key and encryption algorithm. Either these methods don't provide the required privacy or they bring in the overhead of encryption-decryption and are less efficient for the larger data sets as data utility poses great concern [5]. Data perturbation method is more popular due to simplicity and its advantage to treat different attributes independently. Data perturbation techniques transform sensitive information with an advantage of maintaining the utility of data [6]. Thus, perturbed data can be employed for research and analysis without breaking patient privacy [7]. On the other side, patient's data is used for disease prediction but due to large amount of data processing, it manually consumes a lot of time. Thus, various machine learning classifiers are applied to predict the disease in an automated and intelligent manner.
To overcome these challenges, in this paper, focus is towards developing a Privacy Preserving Data Classification Technique for prediction of Chronic Kidney Disease. In the proposed technique, meta-heuristic Firefly Optimization Algorithm has been applied to generate random noise. Noise is added in the least significant bits of the sensitive data to provide privacy. On the other side, random forest technique is used to predict the disease and results are analyzed for both original and perturbed datasets.
Rest of the paper is as follows. Section 2 outlines existing work done in the field. Section 3 illustrates systematic flow and methodology of the proposed technique. Section 4 represents implementation results of the technique. Different performance metrics used for its evaluation as well as for its validation are discussed. Section 5 presents the conclusion and future scope of this research.
2. Related Work
In section 2.1 and 2.2, various data perturbation techniques and different classifiers are studied that have commonly been used in the domain of healthcare. Inferences and challenges drawn from the literature are elaborated in section 2.3.
2.1 Data perturbation techniques
Data perturbation method transforms the sensitive data before publishing in such a way that privacy of an individual is preserved while maintaining the important data properties. Different ways studied to perform perturbation of data are shown below:
2.1.1 Noise addition
In this technique, random noise is added in the sensitive data using Eq. (1).
$P=S+N$ (1)
where, P denotes the perturbed data, S and N define sensitive data and noise respectively.
2.1.2 Noise multiplication
In this technique, random noise is multiplied with the sensitive data using Eq. (2).
$P=S X N$ (2)
where, P denotes the perturbed data, S and N define the sensitive data and noise respectively.
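Both perturbations are one-liners in practice; a small numpy sketch (the noise distribution and attribute values here are illustrative only):

```python
# Sketch: additive (Eq. 1) and multiplicative (Eq. 2) perturbation.
import numpy as np

rng = np.random.default_rng(seed=42)
sensitive = np.array([120.0, 80.0, 140.0, 95.0])  # e.g. blood pressure values

additive = sensitive + rng.normal(0.0, 2.0, sensitive.shape)         # P = S + N
multiplicative = sensitive * rng.normal(1.0, 0.05, sensitive.shape)  # P = S x N
print(additive, multiplicative, sep='\n')
```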
2.1.3 Min-max normalization
In this technique, sensitive data is linearly transformed and normalized using Eq. (3).
$v^{\prime}=\frac{v-\min (A)}{\max (A)-\min (A)}\left(new_{\max (A)}-new_{\min (A)}\right)+new_{\min (A)}$ (3)
where, A denotes the sensitive attribute, max and min represent the maximum value and minimum value in the attribute, newmax and newmin denote the new boundary value range for A.
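A direct implementation sketch of Eq. (3) follows (attribute values are illustrative):

```python
# Sketch: min-max normalization of a sensitive attribute per Eq. (3).
import numpy as np

def min_max(values, new_min=0.0, new_max=1.0):
    v_min, v_max = values.min(), values.max()
    return (values - v_min) / (v_max - v_min) * (new_max - new_min) + new_min

ages = np.array([25.0, 40.0, 61.0, 73.0])
print(min_max(ages))  # values rescaled into [0, 1]
```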
2.1.4 Micro-aggregation
It is a technique used to satisfy the k-anonymity constraint in dataset. In k-anonymization, whole dataset is converted into different groups where each group holds at least k records such that any record in the group can't be identified from k-1 other records in it. Micro-aggregation helps to replace all the values in group with the centroid value (arithmetic mean in case of numerical attributes or it can be some other value based on the value range of an attribute). Aim should be to achieve the k-anonymous dataset while lessening the degradation of quality of data [8].
2.1.5 Data swapping
In this process, values of sensitive attributes are swapped among different records so as to increase the uncertainty in data which further provides privacy. It maintains the statistical properties of the database i.e. different knowledge discovery tasks can be performed on this transformed data.
Existing methods studied in the field of data perturbation are briefly discussed below:
Kiran and Vasumathi [9] applied the min-max normalization technique on the original dataset (Matrix M) to distort it and bring all the values into the specific range of 0 and 1. The obtained distorted data matrix M' is not the same as the real matrix. Then, it is multiplied with a negative number (shifting vector) for more security [10]. The technique is applied on 4 real-world datasets and evaluation is done by calculating values of privacy and accuracy parameters for the NBTree classifier.
Kalaivani and Chidambaram [11] proposed multilevel trust based PPDM technique in which original data is changed before it gets published. Data is perturbed and made available to various data miners according to trust level maintained by them i.e. different data miners have differently perturbed copies of similar data. Still, if all the data placed with them can be joined and reconstructed in some way, then, it can give rise to diversity attack which reveals extra information about users. To protect the privacy, Gaussian noise is added randomly to the original data [12].
Jahan et al. [13] developed multiplicative perturbation method to change the data before publishing it to the data analyst for performing data mining functions. In multiplicative perturbation, combination of fuzzy logic and random rotation works. Different multivariate datasets downloaded from UCI machine learning repository are used as input. For confidential information, fuzzy logic is computed and then, original dataset is multiplied with random rotation matrix. Random rotation preserves distance measures which further helps to maintain the utility of data. K-means clustering is used to analyze the results obtained with both original and perturbed datasets.
Balasubramaniam and Kavitha [14] used geometrical data perturbation (GDP) technique to preserve the privacy of personal health records. Different type of information about patients is stored in different tables. Any of the table and from it, any number of columns can be chosen for perturbation. While applying geometric perturbation, firstly, original dataset is converted into matrix form. As geometric perturbation can only be applied to numerical data values, ASCII values are calculated for other type of data and stored in the matrix. Random matrix is created with values in the range of 1 to 9 and then, it is rotated in clockwise direction vertically. This random rotation matrix is multiplied with original data. Gaussian noise is also generated (range from 0 to 1). Finally, addition operation is performed between initially calculated product, transpose of rotation matrix and Gaussian noise [15]. Perturbed data is now outsourced to cloud where data retrieval and query processing takes place. This technique is compared with existing Advanced Encryption Standard (AES) method on the basis of time.
2.2 Data classification
Classification is a process of categorizing data into specific classes. Classifier is trained with the help of training data and various classification rules are discovered. These rules are further used to classify unknown data/tuples for obtaining important results in different domains. In this section, various classifiers used for disease prediction are explained.
2.2.1 Decision Tree (DT) Algorithm
Decision tree is a structure similar to flow chart which is trained with the help of labelled tuples. Root (R) of a tree contains whole population which has to be further divided. Internal nodes (I) represent test that has to be conducted on a particular attribute of data, each branch (B) comes out as outcome of the test and leaves (L) give different possible solutions (class labels). While traversing from root node to leaf node of a tree, set of classification rules are obtained which can now be used to predict the class label of an unknown tuple. The algorithm belongs to supervised learning field and applied in a loop to each child node until all samples at the node are of one same class [16, 17]. To select an attribute which has to be considered for splitting the node while constructing tree, information gain is used as a measure. Selected attribute needs to minimize the information required for classification of tuples and tree should be as simple as it can be. Chaurasia et al. [18] applied decision tree algorithm on Chronic Kidney Disease dataset and achieved 93% correct predictions. Author focused on identifying the key attributes that play major role to decide whether person is suffering from kidney disease or not.
2.2.2 K-Nearest Neighbor (KNN) Algorithm
K-Nearest Neighbor is also a supervised method of classification which works using the concept of analogy i.e. comparison is done between the new test tuple and similar/alike training tuples. Each training tuple contains value for every attribute in dataset which means to define a particular tuple, n attributes are needed. Training tuples collectively form pattern space of n-dimensions. Whenever any new data tuple comes, classifier searches this pattern space to find k tuples from the training set that are nearest to the new tuple. Mostly, Euclidean distance is used as a measure to find the closeness between tuples [16, 19]. From the 'k' identified neighbors (training tuples), majority class label is selected and assigned to new (test) tuple. If value of 'k' is 1 i.e. there is only one neighbor, then, label of that neighbor (tuple) is given to test tuple. Tikariha and Richhariya [20] used KNN classifier for prediction of Chronic Kidney disease and compared it with the results obtained by Support Vector Machine for same dataset.
2.2.3 Support vector machine
Support Vector Machine is a classification algorithm used to classify the points of data in n-dimensional space. It can be useful to classify both linear and non-linear data by finding the required hyper-plane. Hyper-plane is used to separate the classes like a decision boundary. It is found with the help of support vectors and number of planes depends upon number of features in data. When there is more than one hyper-plane, aim is to find the plane with maximum margin i.e. distance from nearest element of either class to the hyper-plane should be largest. This property makes SVM robust and reduces misclassification errors. SVM also has a capability to ignore outliers [16, 20].
Linear SVM: If straight lines (hyper-plane) can be drawn to classify the data, then dataset is said to be linearly separable. Suppose, if there are 2 classes, goal is to select the one having largest margin. Data belonging to either side of line represents different classes/categories present in dataset.
Non Linear SVM: If no straight line can be drawn to classify the dataset, then, data is said to be linearly inseparable. Kernel function is used to change low dimensional space of input to higher dimensional space i.e. inseparable problems are converted into separable problems. Non-linear decision boundary (such as concentric circles) is used to separate the classes.
2.3 Inferences drawn from the literature
As healthcare data attributes such as blood pressure, sugar, age etc. play major role in deciding the disease and its outcomes, it is very necessary to maintain the utility of data. If data is changed to very large extent, then it will not be useful enough for the required purpose. Therefore, balance has to be maintained between usefulness of data and security of the patient.
It is observed from the study that most popular techniques for data perturbation are geometric transformation, Gaussian noise, K-Means Clustering and Min-Max normalization [9-15].
In geometric perturbation, sensitive data is transformed by performing various geometric functions such as rotation, translation, scaling or projection. They can't process very high volumes of data efficiently, e.g. random rotation consumes a considerable amount of time to provide better results while enforcing privacy.
Gaussian noise addition depends on original data i.e. mean and standard deviation is drawn from the given input data for noise generation. In K- Means Clustering technique, sensitive data is changed into different clusters. Internal to each cluster, there is similar type of information regarding an attribute. For example, in the healthcare data, patients are grouped into different clusters based on age attribute. Main limitation of K-Means Clustering technique is that it forms many clusters if there is high variability in the attributes.
Min-max normalization technique modifies the sensitive data and takes it between the specified ranges. However, this transformation is constant and breaks the relationship between sensitive and non-sensitive data. Besides, it also provides lesser accuracy.
On the other side, various machine learning techniques are being used to classify the healthcare data. Support vector machine (SVM), K-nearest neighbor, and decision tree are mostly used for classification [16, 20].
Out of these, the decision tree algorithm is the most preferred classifier to predict the disease but it also has a large number of limitations such as over-fitting and no global solution [18, 21, 22]. Small change in the input data can lead to significant changes in the decision tree.
In K-NN algorithm, if chosen value of 'k' is incorrect, it leads to over-fitting or under-fitting of data to the model. Moreover, classifier doesn't perform well on unbalanced data (e.g. if instances of some class is more than the other, biased results may be produced). K-NN can't assume anything about distribution or discriminative functions from the training data and completely relies on memorizing all of the training instances. Generalization of data is done only when prediction query is submitted to the algorithm [16, 20, 23].
In SVM, while dealing with non-linear data, kernel function needs to be selected. Matrix of kernel grows in quadratic manner with increase in size of training dataset. If data is high dimensional, many support vectors are generated. Due to this, high memory is required and training time is also increased. Also, interpretation of final SVM model is difficult for humans [16, 20, 24].
To deal with above said limitations, firefly algorithm is deployed for data perturbation in this research and instead of fixed noise, random noise is generated. To sustain the utility of healthcare data, noise is added to the least significant bit of sensitive attribute. Random Forest is going to be applied as a classification method in the work. It is an ensemble learning technique which has a potential to deal with large amounts of data. When combination of decision trees works together, prediction accuracy improves. Subset of features (not all) is considered for splitting the node at each level of tree and significance of each feature can also be explained by this classifier. Detailed functioning of Firefly algorithm and Random Forest for Chronic Kidney Disease prediction is elaborated in the subsequent section.
3. Proposed Technique
The proposed technique adds random noise to confidential information (which could otherwise become a medium for linkage attacks when combined with other databases) and precisely predicts the disease of patients. Disease prediction is done by applying the random forest classifier first on the original dataset and then on the perturbed dataset. Performance is examined on the basis of the evaluation parameters discussed in the next section, and the proposed technique is compared with existing techniques in this domain. A flowchart of the proposed technique is shown in Figure 1.
Figure 1. Flow chart for the proposed technique
Original Healthcare Dataset: the Chronic_Kidney_Disease dataset is downloaded from the UC Irvine Machine Learning Repository. It contains 25 attributes in total (24 feature attributes + 1 class attribute) [25].
Data Pre-processing: the chronic kidney dataset is pre-processed, i.e., cleaned and formatted, so that it can be used for prediction and analysis.
Identification of sensitive and normal attributes: the attributes that could violate patients' privacy are identified in the dataset. Noise is added to these sensitive attributes; the other, normal attributes remain as they are.
Data Perturbation Technique: in data perturbation, data are transformed by adding noise. In the proposed technique, random noise is generated by the firefly algorithm and added to the sensitive attributes of the dataset so that patients' privacy is not breached.
Classifier: the Random Forest classifier is used to predict the patient's disease by randomly picking samples for tree construction, using both sensitive and non-sensitive attributes. The classifier is applied to the perturbed as well as the original dataset.
Performance Analysis: the change in the dataset after noise addition, and the variability in prediction results when the classifier is applied to the perturbed and original datasets, are measured in terms of Accuracy, Precision, Recall, F-score, Mean Square Error, Peak Signal-to-Noise Ratio, Structural Similarity Index, and Execution Time.
3.1 Firefly Optimization Algorithm and how it works
Figure 2. Flow chart for firefly optimization algorithm
The firefly algorithm is modeled on fireflies, which produce light while flying at night [26]. The light, produced in the firefly's lower abdomen, is known as bioluminescence and is used to attract mates or prey. Yang formalized this mechanism with the following postulates:
Fireflies are unisex; a firefly can therefore attract any other firefly irrespective of sex.
Attractiveness between fireflies is directly proportional to light intensity; a low-intensity firefly is attracted toward a high-intensity one.
If the brightness of two fireflies is equal, they move randomly.
New solutions are generated on the basis of two components: the random walk and the glow of the fireflies. The light of the fireflies is the fundamental quantity, which must be linked to the objective function of the problem at hand. Initially, a random population of fireflies is generated. Then the fitness of every firefly is evaluated against the objective function, which guides the search toward the optimal solution. Next, the fireflies' light intensities are updated, the fireflies are ranked, and their positions are updated. The whole procedure is iterated a fixed number of times and the optimal result is returned [27].
The updated position of a firefly is determined using Eq. (4):
$X^{\prime}=X+\beta e^{-\gamma r^{2}}(Y-X)+\alpha \epsilon$ (4)
where $X^{\prime}$ denotes the new position of a firefly, $X$ and $Y$ denote the two fireflies between which attractiveness is calculated, $r$ denotes the distance between them, $\beta$ represents the attractiveness factor, $\gamma$ the light absorption factor, $\alpha$ the randomization parameter, and $\epsilon$ a random deviation vector. A flowchart of the firefly algorithm is shown in Figure 2.
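A direct transcription of Eq. (4) into NumPy reads as follows; the parameter defaults are placeholders, not the values used in the paper:

```python
import numpy as np

def firefly_step(x, y, beta=1.0, gamma=1.0, alpha=0.2, rng=None):
    # Eq. (4): X' = X + beta * exp(-gamma * r^2) * (Y - X) + alpha * epsilon
    rng = rng or np.random.default_rng(0)
    r = np.linalg.norm(y - x)                       # distance between the two fireflies
    attract = beta * np.exp(-gamma * r ** 2)        # attractiveness decays with distance
    epsilon = rng.uniform(-0.5, 0.5, size=x.shape)  # random deviation vector
    return x + attract * (y - x) + alpha * epsilon
```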
3.2 Deployment of Firefly Algorithm for noise generation
In the proposed work, noise is added to the sensitive attribute in order to secure it. The generation of random noise using the firefly algorithm proceeds as follows:
Step 1: First, the initial population of fireflies is defined, along with the number of iterations, the light absorption factor, and the random deviation factor.
Step 2: Next, initial random positions of fireflies are determined.
Step 3: After that, the Euclidean distance between the fireflies is calculated.
Step 4: The fitness function takes two arguments: the positions of the fireflies and the sensitive attribute of the dataset. Each sensitive attribute value is read and converted into an 8-bit binary number. The 2 least significant bits of a firefly's position are XORed with the 2 LSBs of the sensitive attribute, and fitness is evaluated using the Mean Square Error (MSE). MSE is taken as the objective function in order to generate the maximum difference between the original and noisy attribute values.
Step 5: The positions of the fireflies are updated using Eq. (4), and the whole process is repeated a fixed number of times.
Step 6: The firefly that yields the maximum MSE is selected as the optimal solution, and its position is taken as the key to perturb the sensitive data.
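Putting Steps 1-6 together, a simplified implementation might look like the sketch below. Population size, iteration count, and the coefficient values are illustrative assumptions, and positions are reduced to one-dimensional keys in [0, 255] for readability:

```python
import numpy as np

def perturb(attr, key):
    # Step 4 / Step 6: XOR the 2 LSBs of each 8-bit value with the key's 2 LSBs
    return attr.astype(np.uint8) ^ (np.uint8(int(key)) & 0b11)

def mse(attr, key):
    return float(np.mean((attr.astype(float) - perturb(attr, key).astype(float)) ** 2))

def firefly_key_search(attr, n=15, iters=40, beta=1.0, gamma=0.01, alpha=2.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, 255, size=n)                 # Steps 1-2: random scalar positions
    for _ in range(iters):                            # Step 5: fixed number of iterations
        fit = np.array([mse(attr, p) for p in pos])   # Steps 3-4: fitness = MSE
        for i in range(n):
            for j in range(n):
                if fit[j] > fit[i]:                   # dimmer firefly i moves toward brighter j
                    r = abs(pos[j] - pos[i])
                    pos[i] += beta * np.exp(-gamma * r ** 2) * (pos[j] - pos[i]) \
                              + alpha * rng.uniform(-0.5, 0.5)
        pos = np.clip(pos, 0, 255)
    best = max(pos, key=lambda p: mse(attr, p))       # Step 6: key with maximum MSE
    return int(best), mse(attr, best)

ages = np.array([25, 40, 33, 61, 47])
print(firefly_key_search(ages))
```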
3.3 Random Forest Classifier
The Random Forest classifier is based on ensemble learning, in which many algorithms or classifiers, of the same or different types, are combined to produce the required decision. Random Forest works with a collection of decision trees, each built from a randomly selected subset of the training data. Every decision tree makes a prediction; voting is then performed, and the prediction with the most votes is selected. The basic concept of the Random Forest classifier is shown in Figure 3.
Figure 3. Basic working scheme of Random Forest Classifier
The Random Forest classifier provides higher prediction accuracy because a union of models works in combination rather than a single model. It is based on the concept of bagging and is in fact an extension of it. Bagging (bootstrap aggregation) considers all features when splitting a node while building a tree, whereas in a random forest a subgroup of the features is selected randomly and the best feature from this subset is used to split the node. This helps to improve performance and reduce the variance of the results. For example, if there are 30 features, the random forest may use only a fixed number of them in each model, say five, so 25 potentially useful features are missed by that tree. However, since many decision trees make up the forest, increasing the number of trees means that all or most features eventually get used. Therefore, errors due to bias and errors due to variance can both be reduced by the use of the maximum number of features across trees [28-30]. It can be concluded that a random forest based on ensemble learning is more powerful than a single decision tree in reducing bias error and limiting over-fitting.
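The feature-subset idea maps directly onto the `max_features` parameter of scikit-learn's random forest. A minimal sketch follows; the synthetic data merely stands in for the 24 CKD attributes:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a 24-attribute dataset such as the CKD data
X, y = make_classification(n_samples=400, n_features=24, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Each split considers only 5 of the 24 features, as in the example above
rf = RandomForestClassifier(n_estimators=100, max_features=5, random_state=0)
rf.fit(X_tr, y_tr)
print(rf.score(X_te, y_te))          # majority-vote accuracy
print(rf.feature_importances_[:5])   # per-feature significance scores
```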
3.4 Random Forest Classifier for Chronic Kidney Disease
Globally, Chronic Kidney Disease (CKD) is becoming a common health issue, affecting roughly 10% of the world's population. There are few direct testimonials on analyzing CKD in a systematic and automatic way, so this paper highlights how machine learning (ML) can be used to diagnose CKD. ML algorithms can detect abnormalities in various physiological data with great success. Research shows that the random forest (RF) classifier attains near-optimal performance in identifying CKD subjects [29]. Thus, ML algorithms play an important role in CKD diagnosis with adequate reliability, and RF in particular can be used to diagnose this kind of disease, as proposed in our research.
4. Experimental Results and Discussion
In this section, the results obtained through experimentation on the proposed technique are presented to validate it against existing techniques. The standard chronic kidney disease dataset is taken from the UCI Machine Learning Repository [25]. The dataset contains 25 attributes and was collected from a hospital over a period of nearly two months to predict chronic kidney disease. The attributes are: age, blood pressure, specific gravity, albumin, sugar, red blood cells, pus cell, pus cell clumps, bacteria, blood glucose random, blood urea, serum creatinine, sodium, potassium, hemoglobin, packed cell volume, white blood cell count, red blood cell count, hypertension, diabetes mellitus, coronary artery disease, appetite, pedal edema, anemia, and class. Simulations are performed in MATLAB 2017 on a system with an i5 processor, 8GB RAM, and 1.80GHz operating frequency.
4.1 Performance metrics
Mean Square Error: the MSE is computed by averaging the squared differences between the original values and the perturbed values (after noise addition) of an attribute [31].
It is calculated using Eq. (5) given below:
$MSE=\frac{\sum_{i=1}^{\text{length(attribute)}}\left(\text{Original attribute value}_{i}-\text{Perturbed attribute value}_{i}\right)^{2}}{\text{length(attribute)}}$ (5)
Peak Signal-to-Noise Ratio (PSNR): PSNR measures how much noise is added in the attribute and it is calculated using Eq. (6) [32].
$P S N R=10 \log _{10} \frac{P e a k^{2}}{M S E}$ (6)
where Peak denotes the maximum value that can be represented for the particular attribute; if an attribute is stored in 8 bits, the peak value is 255.
Structural Similarity Index (SSIM): the SSIM describes the similarity between the original and perturbed datasets. In the proposed technique, this measure is used to quantify the similarity between the initial dataset and the dataset obtained after adding noise to the sensitive attribute. The built-in MATLAB command 'ssim' is used to compute this parameter.
Accuracy: accuracy is one of the most important metrics for evaluating classification models. It is the fraction of predictions the model got right, i.e., for which the classifier gave correct results. It is calculated using Eq. (7) [16].
$Accuracy =\frac{\text { True Positive+True Negative }}{\text { True Positive + False Positive+True Negative+False Negative }}$ (7)
True Positive: an instance correctly predicted as positive
False Positive: an instance incorrectly predicted as positive
True Negative: an instance correctly predicted as negative
False Negative: an instance incorrectly predicted as negative
Precision: Precision is defined as the ratio of number of observations which are correctly predicted as positive to the total number of observations that are predicted positive [16, 33].
$\text { Precision }=\frac{\text { True Positive }}{\text { True Positive }+\text { False Positive }}$ (8)
Recall: recall is defined as the percentage of observations of the positive class that are actually predicted as belonging to that class [33].
$\text { Recall }=\frac{\text { True Positive }}{\text { True Positive }+\text { False Negative }}$ (9)
F-score: F-score is defined as the harmonic mean of precision and recall [33].
$F-\text { score }=\frac{2 \times \text { Precision } \times \text { Recall }}{\text { Precision }+\text { Recall }}$ (10)
Execution Time: the total time taken by the proposed technique to obtain the required results. In MATLAB, the tic and toc commands are used to measure it.
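All of the above metrics reduce to a few lines of NumPy; the confusion-matrix counts and the example arrays below are hypothetical. (For SSIM, scikit-image's `skimage.metrics.structural_similarity` plays the role of MATLAB's `ssim`.)

```python
import numpy as np

def mse(orig, pert):                                   # Eq. (5)
    return np.mean((orig.astype(float) - pert.astype(float)) ** 2)

def psnr(orig, pert, peak=255.0):                      # Eq. (6), peak = 255 for 8-bit data
    return 10 * np.log10(peak ** 2 / mse(orig, pert))

def classification_metrics(tp, fp, tn, fn):
    accuracy  = (tp + tn) / (tp + fp + tn + fn)        # Eq. (7)
    precision = tp / (tp + fp)                         # Eq. (8)
    recall    = tp / (tp + fn)                         # Eq. (9)
    f_score   = 2 * precision * recall / (precision + recall)  # Eq. (10)
    return accuracy, precision, recall, f_score

orig = np.array([25, 40, 33, 61, 47], dtype=np.uint8)
pert = orig ^ 0b11                                     # flip the 2 LSBs, as in the proposal
print(mse(orig, pert), psnr(orig, pert))
print(classification_metrics(tp=64, fp=2, tn=51, fn=3))
```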
Results obtained after each step of experimentation are shown below:
Step 1: First, the dataset is pre-processed to deal with missing and inconsistent values, and irrelevant information is removed.
Step 2: The dataset is divided in the proportion 70:30, where 70% is training data and the remaining 30% is test data.
Step 3: The random forest technique is applied to the training data for disease prediction, and the accuracy, precision, recall, and F-score of the classifier are measured.
Step 4: Next, the attributes are divided into two groups, sensitive and non-sensitive. In the chronic kidney disease dataset, according to the proposed work, the age attribute is sensitive and the remaining attributes are non-sensitive. A leaked patient age can give rise to linkage attacks: combining this dataset with another dataset sharing a common attribute can reveal an individual's identity, which violates research limits and privacy requirements.
Step 5: The integer value of age (the sensitive attribute) is converted into an 8-bit binary value and its 2 least significant bits (LSBs) are extracted.
Step 6: The firefly algorithm is used for noise generation. Fitness is evaluated using the Mean Square Error, and the position of the firefly that yields the maximum MSE is taken as the key for adding noise to the age attribute. The 2 LSBs of the key are XORed with the 2 LSBs of the binary age to preserve the patient's privacy without negatively impacting disease prediction (a bit-level sketch follows the step list below).
Initialization of various parameters for Firefly algorithm is shown in Table 1.
Table 1. Firefly Algorithm Parameters
Alpha (Randomization Parameter)
Gamma (Light Absorption Coefficient)
Step 7: The sensitive attribute (after noise addition) is concatenated with the non-sensitive attributes, and the random forest classifier is applied to the perturbed dataset.
Step 8: Parameters such as accuracy, precision, recall and F-score are again calculated. Also, PSNR (Peak Signal to Noise Ratio), SSIM (Structural Similarity Index) and MSE (Mean Square Error) are measured.
Step 9: Performance of the firefly algorithm for noise generation and of the random forest technique for disease prediction is analyzed using the parameters above, and a comparative analysis against existing techniques validates the proposed technique.
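As promised in Step 6, here is the bit-level effect of the LSB XOR on a single age value; the key is a hypothetical firefly position:

```python
age = 47                            # sensitive attribute value
key = 0b10110110                    # hypothetical key (best firefly's position)
noisy_age = age ^ (key & 0b11)      # XOR only the 2 least significant bits
print(f"{age:08b} -> {noisy_age:08b}  ({age} -> {noisy_age})")
```

The perturbed value differs from the original by at most 3, which is why the utility of the attribute is largely preserved.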
Figure 4. Fitness function to achieve high MSE
Fitness function for the Firefly algorithm to achieve high MSE is shown in Figure 4.
Experimental values of the evaluation parameters calculated for the proposed technique are shown in Tables 2 and 3. The results show that the proposed technique achieves good classification results while preserving privacy of patients.
Table 2. Calculation of various performance parameters for proposed technique
PSNR (in dB)
Execution Time (in seconds)
Proposed Technique
Table 3. Comparison of classification results for original dataset and perturbed dataset
Original Dataset
Perturbed Dataset
Accuracy Percentage (%)
The Random Forest classifier is applied first to the original dataset (without noise added to the sensitive attribute) and then to the perturbed dataset (with noise added using the firefly algorithm). The classification results are compared in terms of Recall, Precision, and F-score in Figure 5; the accuracy comparison is depicted in Figure 6.
Figure 5. Comparison of classification results for original and perturbed dataset on the basis of recall, precision and F-score
Figure 6. Comparison of classification accuracy for original and perturbed dataset
4.2 Comparative analysis
The proposed technique adds noise to the sensitive attribute using the firefly algorithm and predicts the disease using the random forest technique. The comparison of results for the original and perturbed datasets is discussed above. In this section, the experimental results of the proposed technique are compared with existing techniques: the firefly algorithm is compared with existing noise addition techniques on the basis of MSE, PSNR, and SSIM, and the accuracy of the random forest technique (combined with privacy preservation) is compared with existing classification techniques for disease prediction. Table 4 presents the comparison of noise addition methods.
The Mean Square Error of the firefly algorithm is higher, while its Peak Signal-to-Noise Ratio and Structural Similarity Index are lower, than those of the Gaussian noise addition and min-max normalization methods. In the proposed technique, MSE is the objective function and is maximized: the highest MSE is chosen so that the difference between the original and perturbed age becomes maximal, which is what the work requires. The more the data vary, the more security can be provided for patients' sensitive information. In parallel with the perturbation, it is ensured that the data are changed only to the extent that their utility is maintained for accurate results, i.e., the proposed work also manages the intrinsic balance between privacy and usefulness of the data.
Various classification techniques, such as Support Vector Machine, K-Nearest Neighbor, and the Decision Tree algorithm [18, 20], have mostly been used to predict chronic kidney disease, but these existing methods do not take privacy into consideration. In the proposed work, ensemble learning (Random Forest) is applied after noise has been added to the data for privacy preservation. The classification accuracy of the above-mentioned existing techniques is compared with that of the proposed technique; the results are given in Table 5.
Comparison of the proposed technique (Random Forest with Privacy Preservation) with different classification methods is shown graphically in Figure 7.
As presented above, the proposed technique (using the firefly algorithm and Random Forest classifier) gives better accuracy than the SVM, KNN, and decision tree classifiers [18, 20], even after the dataset has been perturbed by noise addition. It can be concluded that the technique provides the required privacy to patients while delivering significant disease prediction results.
Table 4. Comparison of Firefly Algorithm with the existing noise addition techniques
Noise Addition Technique
Gaussian Noise
Min-Max Normalization
Firefly Algorithm
Table 5. Accuracy comparison of the proposed technique with the existing classification techniques
Classification Technique
Support Vector Machine
K-Nearest Neighbor
Decision Tree Algorithm
Random Forest Technique with Privacy Preservation (Proposed)
Figure 7. Accuracy comparison of proposed technique with various classification methods
4.3 Other facets of proposed research
Nature-inspired algorithms are emerging as an advanced means of solving complex engineering problems: they seek a solution by optimizing an objective function while imitating the behavior of natural organisms living together. Bio-inspired algorithms need to be explored further for privacy-preserving data mining in healthcare applications. In this research, meta-heuristic optimization has been merged with data classification and machine learning to predict patients' diseases in a secure manner, with the optimizer driving random noise addition. The firefly algorithm uses MSE as the fitness parameter to select the key with which the values of the sensitive attribute are perturbed. Perturbation is applied only to the least significant bits, so the original (private) information is concealed but the classification results are not adversely affected.

Prediction with Random Forest has a high training speed because each split deals with a subset of features rather than all of them; variance gets averaged out (ensemble learning) and the total error rate is minimized. The firefly algorithm, for its part, divides the whole population into subgroups because of strong local attraction; it can handle multi-modal problems with high efficiency and success rate, and it resists premature convergence. Although the random initial positions of the fireflies and the random deviation parameter improve exploration of the search space, helping the search move from local to global optima, the algorithm can in some instances still fall into a local optimum.

A further aim is to apply and test different variants of the firefly algorithm in the healthcare field. This can be accomplished by parameter tuning or by modifying the position-update formula; instead of a straight walk, the paths of the fireflies can be guided by different distributions such as Brownian or logarithmic ones [26, 27]. This research can become the basis for real-time applications in which data are continuously accessed for data mining and knowledge discovery tasks but security remains a major challenge.
5. Conclusion and Future Scope
A healthcare dataset contains a large amount of heterogeneous data, and data mining is applied to extract useful information from it. The data contain both sensitive and non-sensitive attributes. This research focuses on applying a classification technique to healthcare data and reaching decisions about patients' health. In addition to decision making, care is taken that patients' sensitive information is protected and the probability of linkage attacks is reduced. Therefore, before knowledge is discovered from the raw data, the data are changed in certain ways to disguise the sensitive information while preserving the particular data properties that are critical for building a meaningful data-mining model. Perturbation techniques have to handle the intrinsic trade-off between data privacy and data utility, since perturbing data usually reduces its utility.
The experimental results of applying the classifier to both datasets are compared on the basis of four performance parameters: Accuracy, Recall, Precision, and F-score. For the original dataset, an accuracy of 97%, recall of 0.97015, precision of 0.98485, and F-score of 0.97744 are achieved. The perturbed dataset gives 95% accuracy, 0.95522 recall, 0.96970 precision, and 0.96241 F-score. These results show that, even after perturbation of the sensitive data, the technique provides good accuracy for classification/disease prediction. To validate the proposed technique, it is compared with existing data perturbation and classification techniques. The firefly algorithm is compared with the min-max normalization and Gaussian noise addition methods on the basis of Mean Square Error, Structural Similarity Index, and Peak Signal-to-Noise Ratio. The SSIM and PSNR values show that when either of those two methods is used to add noise to the sensitive attribute, the original and perturbed datasets remain very similar, so necessary and sufficient privacy cannot be provided to the patients. With the firefly algorithm, noise is added to the required extent, and the MSE between the original and perturbed age (the sensitive attribute) is the maximum among the noise addition techniques discussed. For disease prediction with a 70:30 training/testing split, the proposed Privacy Preserving Data Classification technique (using the firefly algorithm and Random Forest classifier) gives better accuracy, 95%, compared with 73.75%, 78.75%, and 93% for the SVM, K-NN, and decision tree algorithms, respectively.
The key benefits of the proposed technique are random noise generation, better accuracy, and secure prediction. In this research, MSE is the fitness parameter used to generate the optimal key for adding noise to the sensitive data. In future work, more than one parameter can be used in the objective/fitness function to select the key, and convolutional neural networks (CNNs) can be explored to further improve the accuracy of the proposed technique.
[1] Vitabile, S., Marks, M., Stojanovic, D., Pllana, S., Molina, J.M., Krzyszton, M., Salomie, I. (2019). Medical data processing and analysis for remote health and activities monitoring. In High-Performance Modelling and Simulation for Big Data Applications, pp. 186-220. https://doi.org/10.1007/978-3-030-16272-6_7
[2] Sharan Vinothraj, A., Hariraj, L.K., Selvarajah, V. (2020). Implementation of RFID Technology in Managing Health Information in a Hospital. Int J Cur Res Rev, 12(20): 177-182. http://dx.doi.org/10.31782/IJCRR.2020.122029
[3] Sohail, M.N., Jiadong, R., Uba, M.M., Irshad, M. (2019). A comprehensive looks at data mining techniques contributing to medical data growth: A survey of researcher reviews. In Recent Developments in Intelligent Computing, Communication and Devices, 752: 21-26. https://doi.org/10.1007/978-981-10-8944-2_3
[4] Jiang, L., Chen, L., Giannetsos, T., Luo, B., Liang, K., Han, J. (2019). Toward practical privacy-preserving processing over encrypted data in IoT: An assistive healthcare use case. IEEE Internet of Things Journal, 6(6): 10177-10190. https://doi.org/10.1109/JIOT.2019.2936532
[5] Jin, H., Luo, Y., Li, P., Mathew, J. (2019). A review of secure and privacy-preserving medical data sharing. IEEE Access, 7: 61656-61669. https://doi.org/10.1109/ACCESS.2019.2916503
[6] Kumar, A., Kumar, R. (2020). Privacy preservation of electronic health record: Current status and future direction. In Handbook of Computer Networks and Cyber Security, 715-739. https://doi.org/10.1007/978-3-030-22277-2_28
[7] Kundalwal, M.K., Chatterjee, K., Singh, A. (2019). An improved privacy preservation technique in health-cloud. ICT Express, 5(3): 167-172. https://doi.org/10.1016/j.icte.2018.10.002
[8] Rodríguez-Hoyos, A., Estrada-Jiménez, J., Rebollo-Monedero, D., Parra-Arnau, J., Forné, J. (2018). Does $k$-anonymous microaggregation affect machine-learned macrotrends? IEEE Access, 6: 28258-28277. https://doi.org/10.1109/ACCESS.2018.2834858
[9] Kiran, A., Vasumathi, D. (2020). Data mining: min–max normalization based data perturbation technique for privacy preservation. In Proceedings of the Third International Conference on Computational Intelligence and Informatics. Singapore: Springer, pp. 723-34. https://doi.org/10.1007/978-981-15-1480-7_66
[10] Jain, Y.K., Bhandare, S.K. (2011). Min max normalization based data perturbation method for privacy protection. International Journal of Computer & Communication Technology, 2(8): 45-50. https://doi.org/10.47893/ijcct.2013.1201
[11] Kalaivani, R., Chidambaram, S. (2014). Additive Gaussian noise based data perturbation in multi-level trust privacy preserving data mining. International Journal of Data Mining & Knowledge Management Process, 4(3): 21-29. https://doi.org/10.5121/IJDKP.2014.4303
[12] Hu, Z., Luo, Y., Zheng, X., Zhao, Y. (2020). A novel privacy-preserving matrix factorization recommendation system based on random perturbation. Journal of Intelligent & Fuzzy Systems, 38(4): 4525-4535. https://doi.org/10.3233/JIFS-191287
[13] Jahan, T., Narasimha, G., Rao, V.G. (2016). A multiplicative data perturbation method to prevent attacks in privacy preserving data mining. International Journal of Computer Science and Innovation, 1(1): 45-51.
[14] Balasubramaniam, S., Kavitha, V. (2015). Geometric data perturbation-based personal health record transactions in cloud computing. The Scientific World Journal, 2015: 927867. https://doi.org/10.1155/2015/927867
[15] Krishnan, C., Lalitha, T. (2020). Attribute-Based Encryption for Securing Healthcare Data in Cloud Environment. PalArch's Journal of Archaeology of Egypt/Egyptology, 17(9): 10134-10143.
[16] Han, J., Kamber, M., Pei, J. (2011). Data Mining Concepts and Techniques (3rd ed.). Elsevier. https://doi.org/10.1016/C2009-0-61819-5
[17] Tangirala, S. (2020). Evaluating the impact of GINI index and information gain on classification using decision tree classifier algorithm. International Journal of Advanced Computer Science and Applications, 11(2): 612-619. https://doi.org/10.14569/IJACSA.2020.0110277
[18] Chaurasia, V., Pal, S., Tiwari, B.B. (2018). Chronic kidney disease: A predictive model using decision tree. International Journal of Engineering Research and Technology, 11(11): 1781-1794. https://ssrn.com/abstract=3298343
[19] Mittal, K., Aggarwal, G., Mahajan, P. (2019). Performance study of K-nearest neighbor classifier and K-means clustering for predicting the diagnostic accuracy. International Journal of Information Technology, 11(3): 535-540. https://doi.org/10.1007/s41870-018-0233-x
[20] Tikariha, P., Richhariya, P. (2018). Comparative study of chronic kidney disease prediction using different classification techniques. In Proceedings of International Conference on Recent Advancement on Computer and Communication, 34: 195-203. https://doi.org/10.1007/978-981-10-8198-9_20
[21] Zhou, X., Lu, P., Zheng, Z., Tolliver, D., Keramati, A. (2020). Accident prediction accuracy assessment for highway-rail grade crossings using random forest algorithm compared with decision tree. Reliability Engineering & System Safety, 200: 106931. https://doi.org/10.1016/j.ress.2020.106931
[22] Biplob, M.B., Sheraji, G.A., Khan, S.I. (2018). Comparison of different extraction transformation and loading tools for data warehousing. In 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET), pp. 262-267. https://doi.org/10.1109/ICISET.2018.8745574
[23] Cunningham, P., Delany, S.J. (2021). k-nearest neighbour classifiers-A tutorial. ACM Computing Surveys (CSUR), 54(6): 1-25. https://doi.org/10.1145/3459665
[24] Cervantes, J., Garcia-Lamont, F., Rodríguez-Mazahua, L., Lopez, A. (2020). A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing, 408: 189-215. https://doi.org/10.1016/j.neucom.2019.10.118
[25] Asuncion, A., Newman, D. (2007). UCI machine learning repository. Chronic_Kidney_Disease Data Set. https://archive.ics.uci.edu/ml/datasets/chronic_kidney_disease, accessed on Jul. 03, 2015.
[26] Yang, X.S. (2010). Firefly algorithm, Levy flights and global optimization. In Research and development in intelligent systems XXVI: 209-218. https://doi.org/10.1007/978-1-84882-983-1_15
[27] Kumar, V., Kumar, D. (2021). A systematic review on firefly algorithm: past, present, and future. Archives of Computational Methods in Engineering, 28(4): 3269-3291. https://doi.org/10.1007/s11831-020-09498-y
[28] Jabbar, M.A., Deekshatulu, B.L., Chandra, P. (2016). Intelligent heart disease prediction system using random forest and evolutionary approach. Journal of Network and Innovative Computing, 4(2016): 175-184.
[29] Subasi, A., Alickovic, E., Kevric, J. (2017). Diagnosis of chronic kidney disease by using random forest. In CMBEBIH 2017, 589-594. https://doi.org/10.1007/978-981-10-4166-2_89
[30] Yadav, D.C., Pal, S. (2020). Prediction of heart disease using feature selection and random forest ensemble method. International Journal of Pharmaceutical Research, 12(4): 56-66. https://doi.org/10.31838/ijpr/2020.12.04.013
[31] Abdulkareem, N.M., Abdulazeez, A.M., Zeebaree, D.Q., Hasan, D.A. (2021). COVID-19 world vaccination progress using machine learning classification algorithms. Qubahan Academic Journal, 1(2): 100-105. https://doi.org/10.48161/qaj.v1n2a53
[32] Singh, S.S., Sachdeva, R., Singh, A. (2020). An optimized approach for underwater image dehazing and colour correction. In Proceedings of the International Conference on Innovative Computing & Communications (ICICC). http://dx.doi.org/10.2139/ssrn.3565932
[33] Attwal, K.P.S., Dhiman, A.S. (2020). Investigation and comparative analysis of data mining techniques for the prediction of crop yield. International Journal of Sustainable Agricultural Management and Informatics, 6(1): 43-74. https://dx.doi.org/10.1504/IJSAMI.2020.106540
Computer Science > Data Structures and Algorithms
[Submitted on 20 Jan 2011]
Title: Estimating the Average of a Lipschitz-Continuous Function from One Sample
Authors: Abhimanyu Das, David Kempe
Abstract: We study the problem of estimating the average of a Lipschitz continuous function $f$ defined over a metric space, by querying $f$ at only a single point. More specifically, we explore the role of randomness in drawing this sample. Our goal is to find a distribution minimizing the expected estimation error against an adversarially chosen Lipschitz continuous function. Our work falls into the broad class of estimating aggregate statistics of a function from a small number of carefully chosen samples. The general problem has a wide range of practical applications in areas as diverse as sensor networks, social sciences and numerical analysis. However, traditional work in numerical analysis has focused on asymptotic bounds, whereas we are interested in the \emph{best} algorithm. For arbitrary discrete metric spaces of bounded doubling dimension, we obtain a PTAS for this problem. In the special case when the points lie on a line, the running time improves to an FPTAS. Both algorithms are based on approximately solving a linear program with an infinite set of constraints, by using an approximate separation oracle. For Lipschitz-continuous functions over $[0,1]$, we calculate the precise achievable error as $1-\frac{\sqrt{3}}{2} \approx 0.134$, which improves upon the $\frac{1}{4}$ which is best possible for deterministic algorithms.
Subjects: Data Structures and Algorithms (cs.DS); Combinatorics (math.CO)
Journal reference: European Symposium on Algorithms 2010
Cite as: arXiv:1101.3804 [cs.DS]
A fourth order accurate approximation of the first and pure second derivatives of the Laplace equation on a rectangle
Adiguzel A. Dosiyev & Hamid M.M. Sadeghi
In this paper, we discuss an approximation of the first and pure second order derivatives for the solution of the Dirichlet problem on a rectangular domain. The boundary values on the sides of the rectangle are supposed to have the sixth derivatives satisfying the Hölder condition. On the vertices, besides the continuity condition, the compatibility conditions, which result from the Laplace equation for the second and fourth derivatives of the boundary values, given on the adjacent sides, are also satisfied. Under these conditions a uniform approximation of order \(O(h^{4})\) (h is the grid size) is obtained for the solution of the Dirichlet problem on a square grid, its first and pure second derivatives, by a simple difference scheme. Numerical experiments are illustrated to support the analysis made.
Since the operation of differentiation is ill conditioned, finding a highly accurate approximation for the derivatives of the solution of a differential equation becomes problematic, especially when the smoothness is restricted.
In [1], it was proved that the higher order difference derivatives uniformly converge to the corresponding derivatives of the solution of the Laplace equation in any strictly interior subdomain, with the same order of h as that with which the difference solution converges on the given domain. In [2], by using the difference solution of the Dirichlet problem for the Laplace equation on a rectangle, the uniform convergence of its first and pure second divided differences over the whole grid domain to the corresponding derivatives of the exact solution with the rate \(O(h^{2})\) is proved. In [3], difference schemes on a rectangular parallelepiped were constructed, whose solutions approximate the Dirichlet problem for the Laplace equation and its first and second derivatives. Under the assumptions that the boundary functions belong to \(C^{4,\lambda }\), \(0<\lambda<1\), on the faces, are continuous on the edges, and their second-order derivatives satisfy the compatibility condition, the solutions of these difference schemes converge uniformly on the grid with the rate \(O(h^{2})\).
In this paper, we consider the Dirichlet problem for the Laplace equation on a rectangle, when the boundary values belong to \(C^{6,\lambda }\), \(0<\lambda <1\), on the sides of the rectangle and as a whole are continuous on the vertices. Moreover, the derivatives of order 2τ, \(\tau=1,2\), satisfy the compatibility conditions on the vertices which result from the Laplace equation. Under these conditions, we construct difference problems whose solutions converge to the first and pure second derivatives of the exact solution with the order \(O(h^{4})\). Numerical experiments are given in the last part of the paper to support the theoretical results.
The Dirichlet problem on rectangular domains
Let \(\Pi= \{ (x,y):0< x<a,0<y<b \} \) be a rectangle, \(a/b\) be rational, \(\gamma_{j}\) (\(\gamma_{j}^{\prime}\)), \(j=1,2,3,4\), be the sides, including (excluding) the ends, enumerated counterclockwise starting from the left side (\(\gamma_{0}\equiv\gamma_{4}\), \(\gamma_{5}\equiv\gamma _{1}\)), and let \(\gamma=\bigcup_{j=1}^{4}\gamma_{j}\) be the boundary of Π. Denote by s the arclength, measured along γ, and by \(s_{j}\) the value of s at the beginning of \(\gamma_{j}\). We say that \(f\in C^{k,\lambda}(D)\), if f has kth derivatives on D satisfying a Hölder condition with exponent \(\lambda\in(0,1)\).
We consider the boundary value problem
$$ \Delta u=0 \quad\text{on } \Pi,\quad\quad u=\varphi_{j}(s) \quad\text{on } \gamma _{j}, j=1,2,3,4, $$
where \(\Delta\equiv\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}\), \(\varphi_{j}\) are given functions of s. Assume that
$$\begin{aligned}& \varphi_{j} \in C^{6,\lambda}(\gamma_{j}),\quad 0< \lambda <1, j=1,2,3,4, \end{aligned}$$
$$\begin{aligned}& \varphi_{j}^{(2q)}(s_{j}) = (-1)^{q} \varphi_{j-1}^{(2q)}(s_{j}),\quad q=0,1,2. \end{aligned}$$
The solution u of problem (1) is from \(C^{5,\lambda }(\overline{\Pi})\).
The proof of Lemma 2.1 follows from Theorem 3.1 in [4].
The inequality is true
$$ \max_{0\leq p\leq3}\sup_{(x,y)\in\Pi}\biggl\vert \frac{\partial^{6}u}{\partial x^{2p}\,\partial y^{6-2p}}\biggr\vert < \infty, $$
where u is the solution of problem (1).
From Lemma 2.1 it follows that the functions \(\frac{\partial ^{4}u}{\partial x^{4}}\) and \(\frac{\partial^{4}u}{\partial y^{4}}\) are continuous on \(\overline{\Pi}\). We put \(w=\frac{\partial^{4}u}{\partial x^{4}}\). The function w is harmonic in Π and is the solution of the problem
$$ \Delta w=0 \quad\text{on } \Pi,\quad\quad w=\Phi_{j}\quad\text{on } \gamma _{j}, j=1,2,3,4, $$
$$\begin{aligned}& \Phi_{\tau}=\frac{\partial^{4}\varphi_{\tau}}{\partial y^{4}},\quad \tau=1,3, \\& \Phi_{\nu}=\frac{\partial^{4}\varphi_{\nu}}{\partial x^{4}},\quad \nu=2,4. \end{aligned}$$
From the conditions (2) and (3) it follows that
$$ \Phi_{j}\in C^{2,\lambda}(\gamma_{j}),\quad 0< \lambda<1,\quad\quad \Phi_{j}(s_{j})=\Phi_{j-1}(s_{j}),\quad j=1,2,3,4. $$
Hence, on the basis of Theorem 6.1 in [5], we have
$$\begin{aligned}& \sup_{(x,y)\in\Pi}\biggl\vert \frac{\partial^{2}w}{\partial x^{2}}\biggr\vert = \sup_{(x,y)\in\Pi}\biggl\vert \frac{\partial ^{6}u}{\partial x^{6}}\biggr\vert < \infty, \end{aligned}$$
$$\begin{aligned}& \sup_{(x,y)\in\Pi}\biggl\vert \frac{\partial^{2}w}{\partial y^{2}}\biggr\vert = \sup_{(x,y)\in\Pi}\biggl\vert \frac{\partial ^{6}u}{\partial x^{4}\,\partial y^{2}}\biggr\vert < \infty. \end{aligned}$$
Similarly, it is proved that
$$ \sup_{(x,y)\in\Pi} \biggl\{ \biggl\vert \frac{\partial ^{6}u}{\partial y^{6}}\biggr\vert ,\biggl\vert \frac{\partial^{6}u}{\partial y^{4}\,\partial x^{2}}\biggr\vert \biggr\} < \infty. $$
From (5)-(7), estimation (4) follows. □
Let \(\rho(x,y)\) be the distance from a current point of the open rectangle Π to its boundary and let \(\partial/\partial l\equiv \alpha \partial/\partial x+\beta\partial/\partial y\), \(\alpha^{2}+\beta^{2}=1\). Then the next inequality holds:
$$ \biggl\vert \frac{\partial^{8}u}{\partial l^{8}}\biggr\vert \leq c\rho^{-2}, $$
where c is a constant independent of the direction of the derivative \(\partial/\partial l\), u is a solution of problem (1).
According to Lemma 2.2, we have
$$ \max_{0\leq p\leq3}\sup_{(x,y)\in\Pi}\biggl\vert \frac{\partial^{6}u}{\partial x^{2p}\,\partial y^{6-2p}}\biggr\vert \leq c< \infty. $$
Since any eighth order derivative can be obtained by two times differentiating some of the derivatives \(\partial^{6}/\partial x^{2p}\,\partial y^{6-2p}\), \(0\leq p\leq3\), on the basis of estimations (29) and (30) from [6], we obtain
$$ \max_{\nu+\mu=8}\biggl\vert \frac{\partial^{8}u}{\partial x^{\nu }\,\partial y^{\mu}}\biggr\vert \leq c_{1}\rho^{-2}(x,y)< \infty. $$
From (9), inequality (8) follows. □
Let \(h>0\), and \(a/h\geq6\), \(b/h\geq6\) be integers. We assign \(\Pi ^{h}\), a square net on Π, with step h, obtained by the lines \(x,y=0,h,2h,\ldots\) . Let \(\gamma_{j}^{h}\) be a set of nodes on the interior of \(\gamma_{j}\), and let
$$ \gamma^{h}=\bigcup_{j=1}^{4} \gamma_{j}^{h},\quad\quad \dot{\gamma_{j}}=\gamma _{j-1}\cap\gamma_{j},\quad\quad\overline{\gamma}^{h}= \bigcup_{j=1}^{4}\bigl(\gamma _{j}^{h}\cup\dot{\gamma_{j}}\bigr), \quad\quad\overline{ \Pi}^{h}=\Pi^{h}\cup\overline{\gamma}^{h}. $$
Let the operator B be defined as follows:
$$\begin{aligned} Bu(x,y) =&\bigl(u(x+h,y)+u(x-h,y)+u(x,y+h)+u(x,y-h)\bigr)/5 \\ &{}+\bigl(u(x+h,y+h)+u(x+h,y-h) \\ &{}+u(x-h,y+h)+u(x-h,y-h)\bigr)/20. \end{aligned}$$
We consider the classical 9-point finite difference approximation of problem (1):
$$ u_{h}=Bu_{h}\quad\text{on }\Pi^{h},\quad\quad u_{h}=\varphi_{j}\quad\text{on }\gamma _{j}^{h}\cup\dot{\gamma_{j}}, j=1,2,3,4. $$
By the maximum principle, problem (11) has a unique solution.
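For readers who want to experiment, here is a minimal NumPy sketch of the operator B from (10) and a plain fixed-point iteration for problem (11); the boundary data and grid size are our own test choices, not from the paper:

```python
import numpy as np

def B(u):
    # 9-point averaging operator (10) evaluated at all interior nodes at once
    return ((u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]) / 5.0
            + (u[2:, 2:] + u[2:, :-2] + u[:-2, 2:] + u[:-2, :-2]) / 20.0)

def solve_dirichlet(u, iters=5000):
    # Fixed-point iteration u_h <- B u_h with boundary values held fixed
    for _ in range(iters):
        u[1:-1, 1:-1] = B(u)
    return u

n = 33                                    # grid with h = 1/32 on the unit square
t = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(t, t, indexing="ij")
boundary = (X == 0) | (X == 1) | (Y == 0) | (Y == 1)
u = np.where(boundary, X**2 - Y**2, 0.0)  # boundary data from a harmonic polynomial
u = solve_dirichlet(u)
print(np.abs(u - (X**2 - Y**2)).max())    # tiny: the scheme is exact on x^2 - y^2
```

In practice one would solve (11) with a faster method (SOR, multigrid), but the iteration above already exposes the averaging structure of B.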
In what follows, and for simplicity, we will denote by \(c,c_{1},c_{2},\ldots\) constants which are independent of h and the nearest factor, and identical notation will be used for various constants.
Let \(\Pi^{1h}\) be the set of nodes of the grid \(\Pi^{h}\) that are at a distance h from γ, and let \(\Pi^{2h}=\Pi^{h}\backslash\Pi ^{1h}\).
The following inequality holds:
$$ \max_{(x,y)\in ( \Pi^{1h}\cup\Pi^{2h} ) }\vert Bu-u\vert \leq ch^{6}, $$
where u is a solution of problem (1).
Let \((x_{0},y_{0})\) be a point of \(\Pi^{1h}\), and let
$$ R_{0}= \bigl\{ (x,y): \vert x-x_{0}\vert < h,\vert y-y_{0}\vert <h \bigr\} $$
be an elementary square, some sides of which lie on the boundary of the rectangle Π. On the vertices of \(R_{0}\) and on the mid-points of its sides lie the nodes of which the function values are used to evaluate \(Bu(x_{0},y_{0})\).
We represent a solution of problem (1) in some neighborhood of \((x_{0},y_{0})\in\Pi^{1h}\) by Taylor's formula
$$ u(x,y)=p_{7}(x,y)+r_{8}(x,y), $$
where \(p_{7}(x,y)\) is the seventh order Taylor's polynomial, \(r_{8}(x,y)\) is the remainder term. Taking into account that the function u is harmonic, by exhaustive calculations, we have
$$ Bp_{7}(x_{0},y_{0})=u(x_{0},y_{0}) . $$
Now, we estimate \(r_{8}\) at the nodes of the operator B. We take a node \((x_{0}+h,y_{0}+h)\) which is one of the eight nodes of B, and we consider the function
$$ \widetilde{u}(s)=u \biggl( x_{0}+\frac{s}{\sqrt{2}},y_{0}+ \frac{s}{\sqrt{2}}\biggr) , \quad-\sqrt{2}h\leq s\leq\sqrt{2}h $$
of one variable s. By virtue of Lemma 2.3, we have
$$ \biggl\vert \frac{d^{8}\widetilde{u}(s)}{ds^{8}}\biggr\vert \leq c(\sqrt{2}h-s)^{-2},\quad 0\leq s< \sqrt{2}h. $$
We represent function (16) around the point \(s=0\) by Taylor's formula
$$ \widetilde{u}(s)=\widetilde{p}_{7}(s)+\widetilde{r}_{8}(s), $$
where \(\widetilde{p}_{7}(s)\equiv p_{7} ( x_{0}+\frac{s}{\sqrt{2}},y_{0}+\frac{s}{\sqrt{2}} ) \) is the seventh order Taylor's polynomial of the variable s, and
$$ \widetilde{r}_{8}(s)\equiv r_{8} \biggl( x_{0}+ \frac{s}{\sqrt{2}},y_{0}+\frac{s}{\sqrt{2}} \biggr) ,\quad 0\leq \vert s \vert < \sqrt{2}h , $$
is the remainder term.
On the basis of (17) and the integral form of the remainder term of Taylor's formula, we have
$$ \bigl\vert \widetilde{r}_{8}(\sqrt{2}h-\varepsilon)\bigr\vert \leq c\frac{1}{7!}\int_{0}^{\sqrt{2}h-\varepsilon} ( \sqrt{2}h- \varepsilon-t ) ^{7}(\sqrt{2}h-t)^{-2}\,dt\leq c_{1}h^{6}, \quad 0< \varepsilon\leq\frac{h}{\sqrt{2}}. $$
Taking into account the continuity of the function \(\widetilde {r}_{8}(s)\) on \([ -\sqrt{2}h,\sqrt{2}h ] \), from (18) and (19), we obtain
$$ \bigl\vert r_{8} ( x_{0}+h,y_{0}+h ) \bigr\vert \leq c_{1}h^{6}, $$
where \(c_{1}\) is a constant independent of the point taken, \((x_{0},y_{0})\) on \(\Pi^{1h}\).
Estimation (20) is obtained analogously for the remaining seven nodes of the operator B. Since the norm of the operator is equal to 1 in the uniform metric, by using (20), we have
$$ \bigl\vert Br_{8} ( x_{0},y_{0} ) \bigr\vert \leq c_{2}h^{6}. $$
Hence, on the basis of (14), (15), (17), and linearity of the operator B, we obtain
$$ \bigl\vert Bu(x_{0},y_{0})-u ( x_{0},y_{0} ) \bigr\vert \leq ch^{6}, $$
for any \((x_{0},y_{0})\in\Pi^{1h}\).
Now, let \((x_{0},y_{0})\) be a point of \(\Pi^{2h}\), and in the Taylor formula (14) corresponding to this point let the remainder term \(r_{8}(x,y)\) be represented in Lagrange form. Then \(Br_{8}(x_{0},y_{0})\) contains eighth order derivatives of the solution of problem (1) at some points of the open square \(R_{0}\) defined by (13), when \((x_{0},y_{0})\in\Pi^{2h}\). The square \(R_{0}\) lies at a distance of at least h from the boundary γ of the rectangle Π. Therefore, on the basis of Lemma 2.3, we obtain
$$ \bigl\vert Br_{8} ( x_{0},y_{0} ) \bigr\vert \leq c_{3}h^{6}, $$
where \(c_{3}\) is a constant independent of the point \(( x_{0},y_{0} ) \in\Pi^{2h}\). Again, from (14), (15), and (22), estimation (12) follows at any point \((x_{0},y_{0})\in\Pi^{2h}\). Lemma 2.4 is proved. □
We present two more lemmas. Consider the following systems:
$$\begin{aligned}& q_{h} = Bq_{h}+g_{h}\quad\text{on }\Pi ^{h},\quad\quad q_{h}=0\quad\text{on }\gamma^{h}, \end{aligned}$$
$$\begin{aligned}& \overline{q}_{h} = B\overline{q}_{h}+ \overline{g}_{h}\quad\text{on }\Pi^{h}, \quad\quad \overline{q}_{h}\geq0\quad\text{on }\gamma^{h}, \end{aligned}$$
where \(g_{h}\) and \(\overline{g}_{h}\) are given functions, and \(\vert g_{h}\vert \leq\overline{g}_{h}\) on \(\Pi^{h}\).
The solutions \(q_{h}\) and \(\overline{q}_{h}\) of systems (23) and (24) satisfy the inequality
$$ \vert q_{h}\vert \leq\overline{q}_{h}\quad\textit{on } \overline{\Pi}^{h}. $$
The proof of Lemma 2.5 follows from the comparison theorem (see Chapter 4 in [7]).
For the solution of the problem
$$ q_{h}=Bq_{h}+h^{6}\quad\textit{on }\Pi ^{h}, \quad\quad q_{h}=0\quad\textit{on }\gamma^{h}, $$
$$ q_{h}\leq\frac{5}{3}\rho dh^{4}\quad\textit{on } \overline{\Pi}^{h}, $$
where \(d=\max\{a,b\}\), \(\rho=\rho(x,y)\) is the distance from the current point \((x,y)\in\overline{\Pi}^{h}\) to the boundary of the rectangle Π.
We consider the functions
$$ \overline{q}_{h}^{(1)}(x,y)=\frac{5}{3}h^{4} \bigl(ax-x^{2}\bigr)\geq0,\quad\quad \overline{q}_{h}^{(2)}(x,y)= \frac{5}{3}h^{4}\bigl(by-y^{2}\bigr)\geq0\quad\text{on }\overline{\Pi}, $$
which are solutions of the equation \(\overline{q}_{h}=B\overline{q}_{h}+h^{6} \) on \(\Pi^{h}\). By virtue of Lemma 2.5, we obtain
$$ q_{h}\leq\min_{i=1,2}\overline{q}_{h}^{(i)}(x,y) \leq\frac{5}{3}\rho dh^{4}\quad\text{on }\overline{\Pi }^{h}. $$
Assume that the boundary functions \(\varphi_{j}\), \(j=1,2,3,4\) satisfy conditions (2) and (3). Then
$$ \max_{\overline{\Pi}^{h}}\vert u_{h}-u\vert \leq c\rho h^{4}, $$
where \(u_{h}\) is the solution of the finite difference problem (11), and u is the exact solution of problem (1).
$$ \varepsilon_{h}=u_{h}-u\quad\text{on }\overline{\Pi }^{h}. $$
It is obvious that
$$ \varepsilon_{h}=B\varepsilon_{h}+(Bu-u)\quad\text{on }\Pi ^{h},\quad\quad \varepsilon_{h}=0\quad\text{on }\gamma^{h}. $$
By virtue of estimation (12) for \((Bu-u)\) and by applying Lemma 2.5 to the problems (25) and (28), on the basis of Lemma 2.6 we obtain
$$ \max_{\overline{\Pi}^{h}}\vert \varepsilon_{h}\vert \leq c \rho h^{4}. $$
From (27) and (29) follows the proof of Theorem 2.7. □
Approximation of the first derivative
We denote \(\Psi_{j}=\frac{\partial u}{\partial x}\) on \(\gamma_{j}\), \(j=1,2,3,4\), and consider the boundary value problem:
$$ \Delta v=0\quad\text{on }\Pi, \quad\quad v=\Psi_{j}\quad\text{on }\gamma _{j}, j=1,2,3,4, $$
where u is a solution of the boundary value problem (1).
We put
$$\begin{aligned}& \Psi_{1h}(u_{h}) = \frac{1}{12h} \bigl( -25\varphi _{1}(y)+48u_{h}(h,y)-36u_{h}(2h,y) \\& \hphantom{\Psi_{1h}(u_{h}) =}{}+ 16u_{h}(3h,y)-3u_{h}(4h,y) \bigr) \quad\text{on }\gamma _{1}^{h}, \end{aligned}$$
$$\begin{aligned}& \Psi_{3h}(u_{h}) = \frac{1}{12h} \bigl( 25\varphi _{3}(y)-48u_{h}(a-h,y)+36u_{h}(a-2h,y) \\& \hphantom{\Psi_{3h}(u_{h}) =}{} - 16u_{h}(a-3h,y)+3u_{h}(a-4h,y) \bigr) \quad\text{on } \gamma_{3}^{h}, \end{aligned}$$
$$\begin{aligned}& \Psi_{ph}(u_{h}) = \frac{\partial\varphi_{p}}{\partial x}\quad\text{on }\gamma_{p}^{h}, p=2,4, \end{aligned}$$
where \(u_{h}\) is the solution of the finite difference boundary value problem (11).
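The one-sided formulas (31) and (32) are fourth-order difference stencils (with (32) the mirror image of (31)); a quick numerical check on a smooth test function of our own choosing shows the expected accuracy:

```python
import numpy as np

def forward_diff_4th(f0, f1, f2, f3, f4, h):
    # Matches (31): fourth-order one-sided approximation of u_x at the boundary
    return (-25*f0 + 48*f1 - 36*f2 + 16*f3 - 3*f4) / (12*h)

h = 0.01
u = np.sin                                    # u'(0) = cos(0) = 1
approx = forward_diff_4th(*(u(k*h) for k in range(5)), h)
print(approx, abs(approx - 1.0))              # error is O(h^4)
```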
The following inequality is true:
$$ \bigl\vert \Psi_{kh}(u_{h})-\Psi_{kh}(u)\bigr\vert \leq c_{3}h^{4}, \quad k=1,3, $$
where \(u_{h}\) is the solution of problem (11), u is the solution of problem (1).
On the basis of (31), (32), and Theorem 2.7, we have
$$\begin{aligned} \bigl\vert \Psi_{kh}(u_{h})-\Psi_{kh}(u)\bigr\vert \leq&\frac{1}{12h}\bigl( 48 ( ch ) h^{4}+36 ( c2h ) h^{4} + 16 ( c3h ) h^{4}+3 ( c4h ) h^{4} \bigr) \\ \leq &c_{3}h^{4},\quad k=1,3. \end{aligned}$$
The following inequality holds
$$ \max_{(x,y)\in\gamma_{k}^{h}}\bigl\vert \Psi_{kh}(u_{h})- \Psi_{k}\bigr\vert \leq c_{4}h^{4},\quad k=1,3. $$
From Lemma 2.1 it follows that \(u\in C^{5,0}(\overline{\Pi})\). Then, at the end points \((0,\nu h)\in\gamma_{1}^{h}\) and \((a,\nu h)\in\gamma _{3}^{h}\) of each line segment \(\{ (x,y):0\leq x\leq a,0< y=\nu h<b \} \), (31) and (32) give the fourth order approximation of \(\frac{\partial u}{\partial x}\), respectively. From the truncation error formulas (see [8]) it follows that
$$ \max_{(x,y)\in\gamma_{k}^{h}}\bigl\vert \Psi_{kh}(u)-\Psi _{k}\bigr\vert \leq\frac{h^{4}}{5}\max_{(x,y)\in\overline{\Pi}} \biggl\vert \frac{\partial ^{5}u}{\partial x^{5}}\biggr\vert \leq c_{5}h^{4},\quad k=1,3. $$
On the basis of Lemma 3.1 and estimation (36), estimation (35) follows. □
We consider the finite difference boundary value problem
$$ v_{h}=Bv_{h}\quad\text{on }\Pi^{h},\quad\quad v_{h}=\Psi_{jh}\quad\text{on }\gamma _{j}^{h}, j=1,2,3,4, $$
where \(\Psi_{jh}\), \(j=1,2,3,4\), are defined by (31)-(33).
The estimation is true
$$ \max_{(x,y)\in\overline{\Pi}^{h}}\biggl\vert v_{h}-\frac{\partial u}{\partial x} \biggr\vert \leq ch^{4}, $$
where u is the solution of problem (1), \(v_{h}\) is the solution of the finite difference problem (37).
$$ \epsilon_{h}=v_{h}-v\quad\text{on }\overline{\Pi }^{h}, $$
where \(v=\frac{\partial u}{\partial x}\). From (37) and (39), we have
$$ \begin{aligned} &\epsilon_{h}=B\epsilon_{h}+(Bv-v)\quad\text{on }\Pi ^{h},\quad\quad \epsilon_{h}=\Psi_{kh}(u_{h})-v \quad\text{on }\gamma_{k}^{h}, k=1,3, \\ &\epsilon _{h}=0\quad\text{on }\gamma_{p}^{h}, p=2,4. \end{aligned} $$
$$ \epsilon_{h}=\epsilon_{h}^{1}+\epsilon _{h}^{2}, $$
$$\begin{aligned}& \epsilon_{h}^{1} = B\epsilon_{h}^{1} \quad\text{on }\Pi^{h}, \end{aligned}$$
$$\begin{aligned}& \epsilon_{h}^{1} = \Psi_{kh}(u_{h})-v \quad\text{on }\gamma_{k}^{h}, k=1,3, \quad\quad\epsilon _{h}^{1}=0\quad\text{on }\gamma_{p}^{h}, p=2,4, \end{aligned}$$
$$\begin{aligned}& \epsilon_{h}^{2} = B\epsilon_{h}^{2}+(Bv-v) \quad\text{on }\Pi^{h},\quad\quad \epsilon_{h}^{2}=0\quad\text{on }\gamma_{j}^{h}, j=1,2,3,4. \end{aligned}$$
By Lemma 3.2 and by the maximum principle, for the solution of system (42), (43), we have
$$ \max_{(x,y)\in\overline{\Pi}^{h}}\bigl\vert \epsilon_{h}^{1} \bigr\vert \leq\max_{q=1,3}\max_{(x,y)\in\gamma_{q}^{h}}\bigl\vert \Psi_{qh}(u_{h})-v\bigr\vert \leq c_{4}h^{4}. $$
The solution \(\epsilon_{h}^{2}\) of system (44) is the error of the approximate solution obtained by the finite difference method for problem (30), when the boundary values satisfy the conditions
$$\begin{aligned}& \Psi_{j} \in C^{4,\lambda}(\gamma_{j}),\quad 0< \lambda <1, j=1,2,3,4, \end{aligned}$$
$$\begin{aligned}& \Psi_{j}^{(2q)}(s_{j}) = (-1)^{q}\Psi _{j-1}^{(2q)}(s_{j}), \quad q=0,1. \end{aligned}$$
Since the function \(v=\frac{\partial u}{\partial x}\) is harmonic on Π with the boundary functions \(\Psi_{j}\), \(j=1,2,3,4\), on the basis of (46), (47), and Remark 15 in [9], we have
$$ \max_{(x,y)\in\overline{\Pi}^{h}}\bigl\vert \epsilon_{h}^{2} \bigr\vert \leq c_{6}h^{4}. $$
By (41), (45), and (48) inequality (38) follows. □
Approximation of the pure second derivatives
We denote \(\omega=\frac{\partial^{2}u}{\partial x^{2}}\). The function ω is harmonic on Π, on the basis of Lemma 2.1 is continuous on \(\overline{\Pi}\) and is a solution of the following Dirichlet problem:
$$ \Delta\omega=0\quad\text{on }\Pi,\quad\quad \omega=\digamma_{j}\quad\text{on } \gamma_{j}, j=1,2,3,4, $$
$$\begin{aligned}& \digamma_{\tau} = \frac{\partial^{2}\varphi_{\tau}}{\partial x^{2}},\quad \tau=2,4, \end{aligned}$$
$$\begin{aligned}& \digamma_{\nu} = -\frac{\partial^{2}\varphi_{\nu}}{\partial y^{2}},\quad\nu=1,3. \end{aligned}$$
From the continuity of the function ω on \(\overline{\Pi}\) and from (2), (3) and (50), (51) it follows that
$$\begin{aligned}& \digamma_{j} \in C^{4,\lambda}(\gamma_{j}),\quad 0< \lambda<1, j=1,2,3,4, \end{aligned}$$
$$\begin{aligned}& \digamma_{j}^{(2q)}(s_{j}) = (-1)^{q} \digamma_{j-1}^{(2q)}(s_{j}),\quad q=0,1, j=1,2,3,4. \end{aligned}$$
Let \(\omega_{h}\) be a solution of the finite difference problem
$$ \omega_{h}=B\omega_{h}\quad\text{on }\Pi^{h},\quad\quad \omega_{h}=\digamma_{j}\quad\text{on }\gamma _{j}^{h}\cup\dot{\gamma_{j}}, j=1,2,3,4, $$
where \(\digamma_{j}\), \(j=1,2,3,4\), are the functions determined by (50) and (51).
The following estimation holds:
$$ \max_{\overline{\Pi}^{h}}\vert \omega_{h}-\omega \vert \leq ch^{4}, $$
where \(\omega=\frac{\partial^{2}u}{\partial x^{2}}\), u is the solution of problem (1) and \(\omega_{h}\) is the solution of the finite difference problem (54).
On the basis of conditions (52) and (53), the exact solution of problem (49) belongs to the class of functions \(\widetilde{C}^{4,\lambda}(\overline{\Pi})\) (see [9]). Therefore, inequality (55) follows from the results in [9] (see Remark 15), as the case of the Dirichlet problem. □
Numerical example
Let \(\Pi= \{ ( x,y ) :-1< x<1,0<y<1 \} \), and let γ be the boundary of Π. We consider the following problem:
$$ \Delta u=0 \quad\text{on } \Pi,\quad\quad u=p(x,y)\quad\text{on } \gamma_{j}, j=1,2,3,4, $$
where
$$ p(x,y)= \bigl( x^{2}+y^{2} \bigr) ^{\frac{181}{60}}\cos \biggl( \frac{181}{30}\arctan\biggl( \frac{y}{x} \biggr) \biggr) $$
is the exact solution of this problem.
Let U be the exact solution and \(U_{h}\) be its approximate values on \(\overline{\Pi}^{h}\) of the Dirichlet problem on the rectangular domain Π. We denote \(\Vert U-U_{h}\Vert _{\overline{\Pi}^{h}}= \max_{\overline{\Pi}^{h}}\vert U-U_{h}\vert \), \(\Re _{U}^{m}=\frac{\Vert U-U_{2^{-m}}\Vert _{\overline{\Pi}^{h}}}{\Vert U-U_{2^{- ( m+1 ) }}\Vert _{\overline{\Pi}^{h}}}\).
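The convergence order behind \(\Re_{U}^{m}\) can be estimated from errors at successive grid sizes; the error values below are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np

errors = {4: 3.1e-6, 5: 1.9e-7, 6: 1.2e-8}   # hypothetical max-errors at h = 2^-m
for m in (4, 5):
    ratio = errors[m] / errors[m + 1]
    print(f"R^{m} = {ratio:.1f}, observed order = {np.log2(ratio):.2f}")  # ~16 and ~4 for O(h^4)
```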
In Table 1 and in Table 2, the maximum errors and the convergence order of the approximations of the first and pure second derivatives of problem (56) for different step sizes h are presented.
Table 1 The approximate results for the first derivative
Table 2 The approximate results for the pure second derivative
The results show that the approximate solutions converge as \(O(h^{4})\).
The shapes of \(\frac{\partial u}{\partial x}\) and \(\frac{\partial ^{2}u}{\partial x^{2}}\) and their approximations are demonstrated in Figure 1 and Figure 2, respectively.
The graph of the approximate (a) and exact (b) solutions of \(\pmb{\frac{\partial u}{\partial x}}\) .
The graph of the approximate (a) and exact (b) solutions of \(\pmb{\frac{\partial^{2}u}{\partial x^{2}}}\) .
The obtained results can be used to compute highly accurate approximations of the derivatives of the solution of Laplace's equation by the finite difference method, in some versions of domain decomposition methods, in composite grid methods, and in combined methods for solving Laplace's boundary value problems on polygons (see [10–13]).
Lebedev, VI: Evaluation of the error involved in the grid method for Newmann's two dimensional problem. Sov. Math. Dokl. 1, 703-705 (1960)
Volkov, EA: On convergence in \(c_{2}\) of a difference solution of the Laplace equation on a rectangle. Russ. J. Numer. Anal. Math. Model. 14(3), 291-298 (1999)
Volkov, EA: On the grid method for approximating the derivatives of the solution of the Dirichlet problem for the Laplace equation on the rectangular parallelepiped. Russ. J. Numer. Anal. Math. Model. 19(3), 269-278 (2004)
Volkov, EA: Differentiability properties of solutions of boundary value problems for the Laplace and Poisson equations on a rectangle. Proc. Steklov Inst. Math. 77, 101-126 (1965)
Volkov, EA: On differential properties of solutions of the Laplace and Poisson equations on a parallelepiped and efficient error estimates of the method of nets. Proc. Steklov Inst. Math. 105, 54-78 (1969)
Volkov, EA: On the solution by the grid method of the inner Dirichlet problem for the Laplace equation. Transl. Am. Math. Soc. 24, 279-307 (1963)
Samarskii, AA: The Theory of Difference Schemes. Marcel Dekker, New York (2001)
Burden, RL, Faires, JD: Numerical Analysis. Brooks/Cole, Cengage Learning, Boston (2011)
Dosiyev, AA: On the maximum error in the solution of Laplace equation by finite difference method. Int. J. Pure Appl. Math. 7(2), 229-241 (2003)
Dosiyev, AA: The high accurate block-grid method for solving Laplace's boundary value problem with singularities. SIAM J. Numer. Anal. 42(1), 153-178 (2004)
Volkov, EA, Dosiyev, AA: A high accurate composite grid method for solving Laplace's boundary value problems with singularities. Russ. J. Numer. Anal. Math. Model. 22(3), 291-307 (2007)
Dosiyev, AA: The block-grid method for the approximation of the pure second order derivatives for the solution of Laplace's equation on a staircase polygon. J. Comput. Appl. Math. 259, 14-23 (2014)
Volkov, EA: Grid approximation of the first derivatives of the solution to the Dirichlet problem for the Laplace equation on a polygon. Proc. Steklov Inst. Math. 255, 92-107 (2006)
Department of Mathematics, Eastern Mediterranean University, Famagusta, KKTC, Mersin 10, Turkey
Adiguzel A Dosiyev
& Hamid MM Sadeghi
Correspondence to Adiguzel A Dosiyev.
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Dosiyev, A.A., Sadeghi, H.M. A fourth order accurate approximation of the first and pure second derivatives of the Laplace equation on a rectangle. Adv Differ Equ 2015, 67 (2015) doi:10.1186/s13662-015-0408-8
finite difference method
approximation of derivatives
uniform error
Laplace equation
Proceedings of the International Congress in Honour of Professor Ravi P. Agarwal
Intuition behind impulse response terms in convolution
We're learning about convolution in my signals and systems class right now. I have been able to do all of the problems by simply working out the respective sum/integral, but I'm still having trouble gaining the intuition behind it.
Consider the following example. Let $x[n]$ be a discrete-time signal and input it into some LTI system with impulse response $h[n]$. Then,
$$ y[n] = \sum_{k=-\infty}^{\infty} x[k] h[n-k]. $$
Let's plug in some values to make this more concrete. Suppose that we want to compute $y[6]$. Well,
$$ y[6] = \sum_{k=-\infty}^{\infty} x[k] h[6-k] = \cdots + x[4]h[2] + x[5]h[1] + x[6]h[0] + x[7]h[-1] + \cdots. $$
I understand the shifting, but I feel as if the multiplications should be in a different order. Namely, why are we multiplying $x[7]$ by $h[-1]$? I feel as if we should be multiplying it by $h[1]$, since we've essentially shifted everything to the right by $6$ units, so $6$ is the new $0$, which would mean that $7$ is the new $1$ (under the shifting). I know I have it backwards, and the math works out when I do it, but I don't understand why.
More concretely, my question is the following:
With respect to the above example, what exactly is the meaning of $h[2]$? $h[-1]$? $h[k]$ in general?
convolution linear-systems
Ryan
$\begingroup$ Perhaps reading this answer might help you. $\endgroup$ – Dilip Sarwate Feb 9 '16 at 20:36
The coefficient $h[n]$ is the value of the system's response at time $n$ when the input signal was an impulse at $n=0$. Obviously, that's why we call $h[n]$ the impulse response. From this you can see that for a system to be causal, $h[n]$ must be zero for $n<0$, otherwise the system would "know" in advance that an impulse will come at $n=0$.
Note that any discrete-time signal $x[n]$ can be written as a sum of unit impulses:
$$x[n]=\sum_{k=-\infty}^{\infty}x[k]\delta[n-k]\tag{1}$$
Since the response to a shifted impulse $\delta[n-k]$ is $h[n-k]$, and since the system is linear and time-invariant, from $(1)$ the output signal must be given by
$$y[n]=\sum_{k=-\infty}^{\infty}x[k]h[n-k]\tag{2}$$
which is of course the discrete-time convolution. Consequently, positive indices of $h[n]$ correspond to the memory of the system, that's why $h[1]$ is multiplied with the past input sample $x[n-1]$, etc.
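To see the indexing concretely, here is a minimal NumPy sketch: $h[n-k]$ evaluates the impulse response at the age $n-k$ of the input sample $x[k]$, and the direct sum reproduces the library convolution.

```python
import numpy as np

def conv_direct(x, h, n):
    """y[n] = sum_k x[k] * h[n-k], with both signals taken as zero
    outside their stored samples (both assumed to start at index 0)."""
    return sum(x[k] * h[n - k]
               for k in range(len(x))
               if 0 <= n - k < len(h))

x = np.array([1.0, 2.0, 3.0])   # input samples x[0..2]
h = np.array([0.5, 0.25])       # causal impulse response h[0..1]

y = [conv_direct(x, h, n) for n in range(len(x) + len(h) - 1)]
assert np.allclose(y, np.convolve(x, h))  # matches the built-in convolution
```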
answered Feb 9 '16 at 9:20
Matt L.
$\begingroup$ Nice (+1), but it begs the question: why does $(1)$, which is just like a convolution, hold? $\endgroup$ – Dilip Sarwate Feb 9 '16 at 20:40
$\begingroup$ @DilipSarwate: You don't need to see it as a convolution, but just as a decomposition of the signal into shifted and weighted impulses (which is of course essentially the same, but conceptually it's different). Eq. (1) can be understood, even if one doesn't know what convolution is. That was at least the idea here. $\endgroup$ – Matt L. Feb 9 '16 at 20:46
Consider if you had a polynomial with coefficients $x[n]$, i.e. $\sum_n x[n]z^n$, and a polynomial with coefficients $h[n]$ and you multiplied them together... what would the coefficients of the result be?
Derek Elkins
$\begingroup$ The coefficients would be the discrete convolution of $x[n]$ and $h[n]$ (we did this as an exercise in class). I don't quite see what you're getting at though. $\endgroup$ – Ryan Feb 9 '16 at 8:25
$\begingroup$ Another way to write the convolution of $x$ and $h$ is: $\sum_{i+j=n} x[i]h[j]$ $\endgroup$ – Derek Elkins Feb 9 '16 at 8:58
$\begingroup$ If instead of $z$ we used $D$ in the polynomials above, where $(Dx)[n] = x[n-1]$ then the polynomial would be a weighted sum of these delays. Multiplying the polynomials would, as you say, convolve the coefficients. The convolution formula says the weight at some point in time is the product of the weights of the delays that, combined, would shift us to this point in time. $\endgroup$ – Derek Elkins Feb 9 '16 at 9:04
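To make the polynomial analogy concrete, here is a minimal NumPy sketch: multiplying the two polynomials produces exactly the convolution of their coefficient sequences.

```python
import numpy as np

# Coefficients stored lowest degree first: x(z) = 1 + 2z + 3z^2, h(z) = 4 + 5z.
x = [1, 2, 3]
h = [4, 5]

# The coefficient of z^n in x(z)*h(z) is sum_{i+j=n} x[i]*h[j],
# which is exactly the convolution sum.
product = np.polymul(x[::-1], h[::-1])[::-1]  # np.polymul expects highest degree first
assert np.array_equal(product, np.convolve(x, h))
print(product)  # [ 4 13 22 15 ]
```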
December 2017, 22(10): 3629-3651. doi: 10.3934/dcdsb.2017143
Pullback attractor in $H^{1}$ for nonautonomous stochastic reaction-diffusion equations on $\mathbb{R}^n$
Linfang Liu 1, Xianlong Fu 1 and Yuncheng You 2,*
Department of Mathematics, Shanghai Key Laboratory of PMMP, East China Normal University, Shanghai 200241, China
Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620, USA
* Corresponding author: Yuncheng You
Received: October 2016. Revised: January 2017. Published: April 2017.
Fund Project: The second author is supported by NSF grant of China (Nos. 11671142 and 11371087), Science and Technology Commission of Shanghai Municipality (STCSM) (grant No. 13dz2260400) and Shanghai Leading Academic Discipline Project (No. B407)
In this paper we study the asymptotic dynamics of the weak solutions of nonautonomous stochastic reaction-diffusion equations driven by a time-dependent forcing term and multiplicative noise. By conducting uniform estimates, we show that the cocycle generated by this SRDE has a pullback $(L^2, H^1)$ absorbing set and is pullback asymptotically compact via the pullback flattening approach. The existence of a pullback $(L^2, H^1)$ random attractor for this random dynamical system in the space $H^{1}(\mathbb{R}^{n})$ is proved.
Keywords: Pullback random attractor, stochastic reaction-diffusion equation, pullback asymptotic compactness, pullback flattening property.
Mathematics Subject Classification: 35B40, 35B41, 35R60, 37L30.
Citation: Linfang Liu, Xianlong Fu, Yuncheng You. Pullback attractor in $H^{1}$ for nonautonomous stochastic reaction-diffusion equations on $\mathbb{R}^n$. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 3629-3651. doi: 10.3934/dcdsb.2017143
Comparison of machine learning techniques to predict all-cause mortality using fitness data: the Henry Ford ExercIse Testing (FIT) project
Sherif Sakr1,
Radwa Elshawi2,
Amjad M. Ahmed1,
Waqas T. Qureshi3,
Clinton A. Brawner4,
Steven J. Keteyian1,
Michael J. Blaha5 &
Mouaz H. Al-Mallah ORCID: orcid.org/0000-0003-2348-0484 1,4
Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that assign the data to predetermined categories. The aim of this study is to evaluate and compare how machine learning techniques can be applied to medical records of cardiorespiratory fitness, and how the various techniques differ in their ability to predict medical outcomes (e.g., mortality).
We use data of 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health System between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). In order to handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) is used.
Two sets of experiments were conducted, with and without the SMOTE sampling technique. On average across the different evaluation metrics, the SVM classifier showed the lowest performance, while models such as BN, BC and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using SMOTE sampling.
The results show that the various ML techniques can vary significantly in their performance across the different evaluation metrics. A more complex ML model does not necessarily achieve higher prediction accuracy. The prediction performance of all models trained with SMOTE is much better than that of the models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
Using data to make decisions and predictions is not new. However, the nature of data availability is changing, and the changes bring with them complexity in managing the volumes and analysis of these data. The marriage between mathematics and computer science is driven by the unique computational challenges of building predictive models from large data sets and uncovering untapped hidden knowledge. Machine learning (ML) [1, 2] is a modern data analysis technique with the unique ability to learn and improve its performance without being explicitly programmed and without human instruction. The main goal of supervised ML classification algorithms [3] is to explain the dependent variable in terms of the independent variables. The algorithms are adjusted based on the training sample and the error signal. In general, conventional statistical techniques commonly rely on the process of hypothesis testing. This process is very user-driven: the user specifies variables, functional form and type of interaction, so user intervention may influence the resulting models. With ML techniques, the primary hypothesis is that there is a pattern (rather than an association) in the set of predictor variables that will identify the outcome. ML algorithms automatically scan and analyze all predictor variables in a way that prevents overlooking potentially important predictors, even unexpected ones. Therefore, ML has been acknowledged as a powerful tool that is dramatically changing the mode and accessibility of science, research and practice in all domains [4]. Medicine and healthcare are no different [5,6,7].
The Henry Ford ExercIse Testing (FIT) Project [8] is a retrospective cohort that included 69,985 patients who had undergone exercise cardiopulmonary treadmill stress testing at Henry Ford Health System in Detroit, MI from January 1, 1991 to May 28, 2009. Briefly, the study population was limited to patients over 18 years of age at the time of stress testing and excluded patients undergoing modified or non-Bruce protocol [9] exercise stress tests. Information regarding a patient's medical history, demographics, medications and cardiovascular disease risk factors was obtained at the time of testing by nurses and exercise physiologists, as well as through searches of the electronic medical records. For the full details of the FIT Project, we refer to prior work by Al-Mallah et al. [8]. Several studies [10,11,12,13] have used conventional statistical techniques to predict various medical outcomes using the FIT Project data. In general, ML is an exploratory process, where there is no one-model-fits-all solution. In particular, there is no model that is known to achieve the highest accuracy for all domains, problem types or datasets [14]. The best performing model varies from one problem to another based on the characteristics of the variables and observations. In this study, we evaluate and compare seven popular supervised ML algorithms in terms of their accuracy in predicting mortality based on exercise capacity (e.g., fitness) data. In particular, we conducted experiments using the following ML techniques: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN), and Random Forest (RF). We applied the 10-fold cross-validation evaluation method for all techniques, and several evaluation metrics are compared and reported. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
Cohort study
In this study, we have excluded from the original registry of the FIT Project the patients with known coronary artery disease (n = 10,190) or heart failure (n = 1162) at the time of the exercise test, or with less than 10-year follow-up (n = 22,890). Therefore, a total of 34,212 patients were included in this study. The baseline characteristics of the included cohort are shown in Table 1 and indicate a high prevalence of traditional risk factors for cardiovascular disease. After a follow-up duration of 10 years, a total of 3921 patients (11.5%) died, as verified by the national Social Security Death Index. All included patients had a social security number and were accounted for. In this study, we have classified the patients into two categories: low risk of all-cause mortality (ACM) and high risk of ACM. In particular, patients were considered to be at high risk for ACM if the predicted event rate was greater than or equal to 3%.
Table 1 Baseline Characteristics for Included Study Cohort
Data preprocessing is a crucial step in ML. Data that have not been preprocessed carefully may lead to misleading prediction results. In our study, we have conducted the following preprocessing steps.
Outliers: The dataset has been preprocessed by removing outliers (values that deviate markedly from the expected value for a given attribute) using the inter-quartile range (IQR) [15]. The authors in [1] compare different outlier detection methods on biomedical datasets; their results show that the IQR is the fastest method for detecting all outliers correctly. Since the dataset used in this study is nearly symmetric (its mean, median and midrange are approximately equal), the IQR is a good choice for handling outliers. The IQR measure is used to preprocess the training dataset and flag values that fall far outside the inter-quartile range. The IQR is computed as IQR = Q3 − Q1, where Q3 and Q1 are the upper and lower quartiles, respectively. A total of 808 records were identified as outliers and removed.
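As a concrete illustration of this rule, the following is a minimal Python sketch of IQR-based outlier removal, assuming the data live in a pandas DataFrame; the column names and the conventional fence factor k = 1.5 are our assumptions, not taken from the paper.

```python
import pandas as pd

def drop_iqr_outliers(df: pd.DataFrame, cols, k: float = 1.5) -> pd.DataFrame:
    """Drop rows whose value in any of `cols` falls outside the fence
    [Q1 - k*IQR, Q3 + k*IQR], where IQR = Q3 - Q1."""
    keep = pd.Series(True, index=df.index)
    for col in cols:
        q1, q3 = df[col].quantile(0.25), df[col].quantile(0.75)
        iqr = q3 - q1
        keep &= df[col].between(q1 - k * iqr, q3 + k * iqr)
    return df[keep]

# Hypothetical usage on two continuous FIT variables (column names assumed):
# cleaned = drop_iqr_outliers(fit_df, ["METS", "pct_hr_achieved"])
```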
Missing values: It has been noted that some attributes, such as the Percentage of Achieved Heart Rate and Metabolic Equivalent (METS), have missing values. Missing data for such attributes were handled by replacing the missing values with the attribute mean.
The FIT Project dataset includes 49 demographic and clinical variables.Footnote 1 In general, it is common that a few or several of the variables used in ML predictive models are in fact not associated with the response. In practice, including such irrelevant variables leads to unnecessary complexity in the resulting model. Therefore, before developing our model, we utilized a popular R-based automated feature selection algorithm, information gain [16], to choose the attributes most effective in classifying the training data. In particular, this algorithm assesses the weight of each variable by evaluating the entropy gain with respect to the outcome, and then ranks the variables according to their weights. Only attributes with information gain > 0 were subsequently used in model building.
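For illustration, below is a minimal Python sketch of entropy-based information gain for a categorical predictor against the binary outcome; the paper used an R implementation, and the variable names here are hypothetical. Continuous predictors would first need to be discretized.

```python
import numpy as np
import pandas as pd

def entropy(labels: pd.Series) -> float:
    """Shannon entropy of a label distribution, in bits."""
    p = labels.value_counts(normalize=True).to_numpy()
    return float(-(p * np.log2(p)).sum())

def information_gain(df: pd.DataFrame, feature: str, target: str) -> float:
    """H(target) - H(target | feature): how much knowing `feature`
    reduces uncertainty about the outcome."""
    h_target = entropy(df[target])
    weights = df[feature].value_counts(normalize=True)
    h_cond = sum(w * entropy(df.loc[df[feature] == v, target])
                 for v, w in weights.items())
    return h_target - h_cond

# Hypothetical usage: rank features and keep those with gain > 0, as in the text.
# gains = {f: information_gain(fit_df, f, "high_risk_ACM") for f in candidate_features}
```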
One of the main issues we encountered with the dataset used in this study is that it is imbalanced. In particular, the dataset included 3946 records with class label Yes (high risk of all-cause mortality) and 30,985 records with class label No (low risk of all-cause mortality). In general, prediction accuracy is significantly affected by imbalanced data [17]. In practice, there are two ways to handle the imbalanced class problem. One way is to assign distinct costs to examples in the training dataset [18]. The other is to either oversample the minority class or under-sample the majority class [19,20,21,22]. In order to handle the imbalanced dataset used in this study, we use the Synthetic Minority Over-sampling Technique (SMOTE) [23]. It is an over-sampling technique in which the minority class is over-sampled by creating synthetic examples rather than by over-sampling with replacement. SMOTE selects minority class samples and creates "synthetic" samples along the line segments joining some or all of the K nearest neighbors belonging to the minority class [24, 25]. In other words, the oversampling is done as follows:
Take a sample from the dataset and find its nearest neighbors.
To create a synthetic data point, take the vector between a data point P in the sample dataset and one of P's k-nearest neighbors.
Multiply this vector by a random number x which lies between 0 and 1.
Add this to P to create the new synthetic data point.
The percentage of SMOTE instances created in our experiment is 300% (11,838 records from the minority class).
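A minimal sketch of this interpolation step follows, using scikit-learn's NearestNeighbors for the neighbour search; this is an illustrative re-implementation under our own assumptions, not the SMOTE code used in the study (which could equally be the implementation in Weka or the imbalanced-learn package).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(minority: np.ndarray, n_synthetic: int, k: int = 5, seed: int = 0) -> np.ndarray:
    """Generate synthetic minority samples by interpolating between each
    chosen point and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(minority)  # +1: each point is its own neighbour
    _, idx = nn.kneighbors(minority)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(minority))            # a minority sample P
        j = idx[i][rng.integers(1, k + 1)]         # one of P's k nearest neighbours
        gap = rng.random()                         # random x in [0, 1)
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.asarray(synthetic)

# 300% oversampling as in the study would be: n_synthetic = 3 * len(minority)
```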
Machine learning classification techniques
In our experiments, we studied the following seven popular ML classification techniques: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). We explore the space of parameters and common variations for each machine learning algorithm as thoroughly as is computationally feasible.
Decision Tree (DT) [26] is a model that uses a tree-like graph to predict the value of a target variable by learning simple decision rules inferred from the data features. We use J48 decision tree algorithm (Weka implementation of C4.5 [27]). We tested the J48 classifier with confidence factor of 0.1, 0.25, 0.5, 0.75 and 1. The confidence factor parameter tests the effectiveness of post-pruning and lowering the confidence factor decreases the amount of post-pruning.
Support Vector Machine (SVM) [28] represents the instances as a set of points of two types in an N-dimensional space and generates an (N − 1)-dimensional hyperplane to separate those points into two groups. SVM attempts to find a separating hyperplane that is situated as far as possible from the points of both types. Training the SVM is done using the Sequential Minimal Optimization algorithm [2]. We used the Weka implementation of SMO [29]. We tested SVM using polynomial, normalized polynomial and Puk kernels and varied the complexity parameter {0.1, 10, and 30}. The value of the complexity parameter controls the tradeoff between fitting the training data and maximizing the separating margin.
Artificial Neural Network (ANN) [30] attempts to mimic the human brain in order to learn complex tasks. It is modeled as an interconnected group of nodes in a way that is similar to the vast network of neurons in the human brain. Each node of the network receives inputs from other nodes, combines them in some way, performs a generally nonlinear operation on the result and outputs the final result. We trained the Neural Networks with gradient descent backpropagation. We varied the number of hidden units {1, 2, 4, 8, 32} and the momentum {0,0.2,0.5,0.9}.
Naïve Bayesian Classifier [31] applies Bayes' theorem [32] with the naive assumption of independence between every pair of features. We use the Weka implementation of Naïve Bayes. We try three different Weka options for handling continuous attributes: modeling them as a single normal, modeling them with kernel estimation, or discretizing them using supervised discretization. Bayesian Network [34] is designed for modeling under uncertainty, where the nodes represent variables and arcs represent direct connections between them. The BN model allows probabilistic beliefs about the variables to be updated automatically as new information becomes available. We tried different search algorithms including K2 [33], Hill Climbing [35], Repeated Hill Climber, LAGD Hill Climbing, TAN [36], Tabu search [53] and Simulated annealing [37].
K-Nearest Neighbors (KNN) [38] identifies the K points in the training data that are closest to the test observation and classifies the observation by estimating the conditional probability of belonging to each class and choosing the class with the largest probability. We varied the number of neighbors k over {1, 3, 5, 10}. We considered three distance functions: Euclidean distance, Manhattan distance and Minkowski distance.
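As a small illustration, the Minkowski distance generalizes the other two: p = 1 gives the Manhattan distance and p = 2 the Euclidean distance. The sketch below is illustrative only; the study itself used Weka's KNN implementation, and a scikit-learn equivalent would be KNeighborsClassifier(n_neighbors=k, p=1 or 2).

```python
import numpy as np

def minkowski(a: np.ndarray, b: np.ndarray, p: float) -> float:
    """Minkowski distance: p = 1 is Manhattan, p = 2 is Euclidean."""
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

a, b = np.array([1.0, 2.0]), np.array([4.0, 6.0])
assert minkowski(a, b, 2) == 5.0   # Euclidean: sqrt(9 + 16)
assert minkowski(a, b, 1) == 7.0   # Manhattan: 3 + 4
```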
Random Forest (RF) [39, 40] is a classification algorithm that works by forming multiple decision trees during training and, at testing, outputs the class that is the mode of the classes predicted by the individual trees. A decision tree works by learning simple decision rules extracted from the data features; the deeper the tree, the more complex the decision rules and the fitter the model. Random decision forests overcome the overfitting problem of individual decision trees. We use the Random Forest Weka implementation. We varied the forests to have 10, 50, and 100 trees. The size of the feature set considered at each split is 1, 2, 4, 8, and 12.
Model evaluation and validation
In order to evaluate our models, we used the 10-fold cross-validation [39] evaluation method, where the data are randomly partitioned into 10 mutually exclusive subsets {D1, D2, …, D10} of approximately equal size. The testing operation is then repeated 10 times, where at the ith evaluation iteration the Di subset is used as the test set and the others as the training set. In general, a main advantage of the 10-fold cross-validation evaluation method is that it has a lower variance than a single hold-out set evaluator. In particular, it reduces this variance by averaging over 10 different partitions, and is therefore less sensitive to any partitioning bias in the training or testing data. For each iteration of the evaluation process, the following metrics are calculated:
Sensitivity: True Positive recognition rate
Sensitivity = TP / (TP + FN)
Specificity: True Negative recognition rate
Specificity = TN / (TN + FP)
Precision: It represents the percentage of tuples that the classifier has labeled as positive are actually positive
Precision = TP / (TP + FP)
F-score: It represents the harmonic mean of precision and sensitivity
F-score = 2TP / (2TP + FP + FN)
Root Mean Squared Error (RMSE): It is defined as the square root of the mean square error that measures the difference between values predicted by the model and the actual values observed, where y ′ is a vector of n predictions and y is the vector of n observed (actual) values
$$ \mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}^{\prime}-y_{i}\right)^{2}} $$
ROC: Receiver Operating Characteristic (ROC) Curve [40] is a way to quantify the diagnostic value of a test over its whole range of possible cutoffs for classifying patients as positive vs. negative. In each possible cutoff, the true positive rate and false positive rate is calculated as the X and Y coordinates in the ROC Curve.
True Positive (TP) refers to the number of high risk patients who are classified as high risk, whereas False Negative (FN) refers to the number of high risk patients who are classified as low risk. False Positive (FP) refers to the number of low risk patients who are classified as high risk, and True Negative (TN) refers to the number of low risk patients who are classified as low risk. The results of the different metrics are then averaged over the folds to return the final result.
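The following is a minimal Python sketch that computes these metrics from the confusion-matrix counts; the counts shown are hypothetical. AUC is not included because it is computed from the ranked prediction scores across all cutoffs (e.g., with a library routine such as scikit-learn's roc_auc_score), not from a single confusion matrix.

```python
import math

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    precision   = tp / (tp + fp)
    f_score     = 2 * tp / (2 * tp + fp + fn)  # harmonic mean of precision and sensitivity
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f_score": f_score}

def rmse(y_pred, y_true) -> float:
    """Root mean squared error between predictions and observed values."""
    n = len(y_true)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / n)

# Hypothetical fold counts:
print(classification_metrics(tp=350, fp=40, tn=2900, fn=60))
```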
As an outcome of the feature selection process, the ML models have been developed using only 15 variables, with Age, METS, Percentage of HR Achieved, History of Hypertension, and Reason for Test ranked as the five most significant variables. The full list of the selected variables is presented in Fig. 1.
The ranking of the variables based on the outcome of the Feature Selection Process
Tables 2 and 3 show the performance of the DT classifier, with confidence parameter (Conf) equals 0.1, 0.25, 0.5, 0.75 and 1, using sampling and without using sampling, respectively. The results show that the AUC increased as the confidence factor increased up to about 0.75 at a peak of 0.88 AUC using sampling and up to about 0.25 at a peak of 0.73 AUC without using sampling, after which the classifier exhibited effects of over-training. These effects are seen by a decrease in the AUC value with a confidence factor above 0.75 using sampling and above 0.25 without using sampling.
Table 2 Comparison of the performance of Decision Tree (DT) classifier with sampling using confidence parameter (Conf) equals 0.1, 0.25, 0.5, 0.75 and 1
Table 3 Comparison of the performance of Decision Tree (DT) classifier without sampling using confidence parameter (Conf) equals 0.1, 0.25, 0.5, 0.75 and 1
The results of the SVM classifier using sampling and without using sampling are reported in Tables 4 and 5, respectively. Different kernels (polynomial kernel, normalized polynomial kernel and puk kernel) and complexity parameters (C) (0.1, 10 and 30) are tested. The results show that the AUC increased as the complexity parameter increased up to 30 using sampling. In addition, the SVM using puk kernel outperforms the SVM using other kernels achieving AUC of 0.80 using sampling and 0.59 without using sampling with complexity parameter C = 30.
Table 4 Comparison of the performance of Support Vector Machine (SVM) classifier with sampling using polynomial, normalized polynomial and puk kernels using complexity parameters 0.1, 10 and 30
Table 5 Comparison of the performance of Support Vector Machine (SVM) classifier without sampling using polynomial, normalized polynomial and puk kernels using complexity parameters 0.1, 10 and 30
Tables 6 and 7 show the performance of Neural Networks with gradient descent backpropagation using hidden units H = {1, 2, 4, 8, 32} and the momentum M = {0, 0.2, 0.5, 0.9} using sampling and without using sampling, respectively. The number of hidden units and momentum rate that gives better AUC value is considered here. For neural networks, the highest performance is achieved when H = 4 and M = 0.5 for the case of using sampling (AUC = 0.82) while when H = 8 and M = 0 for the case of not using sampling (AUC = 0.80).
Table 6 Comparison of the performance of Artificial Neural Networks (ANN) classifier with gradient descent backpropagation using hidden units {1, 2, 4, 8, 32} and the momentum {0,0.2,0.5,0.9} using sampling
Table 7 Comparison of the performance of Artificial Neural Networks (ANN) classifier with gradient descent backpropagation using hidden units {1, 2, 4, 8, 32} and the momentum {0,0.2,0.5,0.9} without using sampling
The performance of the Naïve Bayesian Classifier using sampling and without using sampling is reported in Tables 8 and 9, respectively. Three different Weka options for handling continuous attributes are explored (single normal, kernel estimation and supervised discretization). Results show that BC using supervised discretization achieves the highest AUC value of 0.82 using sampling and without using sampling. The performance results of the Bayesian Network classifier with different search algorithms (K2, Hill Climbing, Repeated Hill Climber, LAGD Hill Climbing, TAN, Tabu and Simulated Annealing) using sampling and without using sampling are reported in Tables 10 and 11, respectively. Bayesian Network classifier using Tan search algorithm achieves the highest AUC value of 0.84 using Sampling and 0.83 without using sampling.
Table 8 Comparison of the performance of Naïve Bayesian classifier (BC) using three different Weka options for handling continuous attributes: single normal, kernel estimation and supervised discretization using Sampling
Table 9 Comparison of the performance of Naïve Bayesian classifier (BC) using three different Weka options for handling continuous attributes: single normal, kernel estimation and supervised discretization without using Sampling
Table 10 Comparison of the performance of Bayesian Network classifier (BN) using different search algorithms: K2, Hill Climbing, Repeated Hill Climber, LAGD Hill Climbing, TAN, Tabu and Simulated Annealing using Sampling
Table 11 Comparison of the performance of Bayesian Network classifier (BN) using different search algorithms: K2, Hill Climbing, Repeated Hill Climber, LAGD Hill Climbing, TAN, Tabu and Simulated Annealing without using Sampling
Tables 12 and 13 report the performance of the KNN classifier, with different values of k {1, 3, 5, 10} neighbors, and using sampling and without using sampling. In our experiments, we used different distance functions; Euclidean distance, Manhattan distance and Minkowski distance. The results show that the KNN classifier using sampling has its best performance (AUC = 0.88) with K value equals 1 using any of the three distance functions while the KNN classifier without using sampling has its best performance (AUC = 0.74) with K value equals 10 using any of the three distance functions.
Table 12 Comparison of the performance K-Nearest Neighbor classifier (KNN) using different values of k {1, 3, 5, 10} neighbors and using different distance functions; Euclidean distance, Manhattan distance and Minkowski distance using sampling
Table 13 Comparison of the performance K-Nearest Neighbor classifier (KNN) using different values of k {1, 3, 5, 10} neighbors and using different distance functions; Euclidean distance, Manhattan distance and Minkowski distance without using sampling
Tables 14 and 15 report the performance of the Random Forest (RF) classifier using 10, 50 and 100 trees. The size of the feature set (F) considered at each split is 1, 2, 4, 8, and 12. The results show that the highest AUC (0.97) is achieved using a forest of 50 trees with a feature set of 1, 2, 4, 8 or 12 using sampling whereas the highest AUC (0.82) is achieved using a forest of 100 trees with a feature set of 4.
Table 14 Comparison of the performance of Random Forest (RF) classifier having 10, 50 and 100 trees with different feature set considered at each split (1, 2, 4, 8, and 12) using sampling
Table 15 Comparison of the performance of Random Forest (RF) classifier having 10, 50 and 100 trees with different feature set considered at each split (1, 2, 4, 8, and 12) without using sampling
We compared the impact of using different percentages of synthetic examples of the class "yes" (patients who are considered to be at high risk for ACM). Figure 2 shows the area under the curve of seven different machine learning models trained using Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). All the models have been evaluated using datasets with 100%, 200% and 300% of synthetic examples created using the SMOTE sampling technique on the training dataset and evaluated using 10-fold cross validation. The results show that increasing the percentage of synthetic examples improves the prediction accuracy for all models except the BC. For example, the SVM model achieves an AUC of 0.62 using the sampled dataset with 100% synthetic examples compared to 0.72 using the sampled dataset with 200% synthetic examples. Increasing the percentage of synthetic examples to 300% improves the AUC of the BN to 0.8. The performance of the KNN, DT and RF models using SMOTE has shown great improvement. The RF has shown the best improvement using SMOTE, achieving 0.83 using 100% synthetic examples compared to 0.95 and 0.97 using 200% and 300% synthetic examples, respectively. In our experiments, further increasing the synthetic examples to 400% and 500% did not show any improvement in the performance of the prediction models.
AUC of different models with different percentage of synthetic examples created using SMOTE
In order to evaluate the impact of using the SMOTE sampling technique in handling the problem of the imbalanced dataset, we built different prediction models with and without SMOTE. Tables 16 and 17 show the prediction performance of the different prediction models using various evaluation metrics without and with the SMOTE sampling technique (300%), respectively. For each metric (row), we highlighted the highest value in bold font and underlined the lowest value. As shown in Tables 16 and 17, after applying the 10-fold cross-validation on the training dataset, the AUC and sensitivity for all models using SMOTE have been significantly improved over the training results without SMOTE, except for the BC. In addition, the performance of each model can differ from one metric to another. In general, the Random Forest (RF) classifier using SMOTE sampling achieves the best performance improvement. In particular, it achieves the best performance in terms of Sensitivity (95.07%), RMSE (0.18), F-Score (84.55%) and AUC (0.97). The same model without using SMOTE achieves a Sensitivity of 59.09%, RMSE of 0.29, F-Score of 29.35% and AUC of 0.82. The KNN model using SMOTE achieves the best performance in terms of Specificity (96.98%) and Precision (77.18%), while the KNN model without SMOTE achieves a Specificity of 89.31% and Precision of 11.12%. This improved performance of the prediction models is due to the imbalanced data size. It is noted that all the models with SMOTE achieve a more balanced sensitivity. Figure 3(a) and (b) illustrate the ROC curves for the different ML models without and with using SMOTE, respectively.
Table 16 Comparison of the performance of the different classification models without using the SMOTE sampling method
Table 17 Comparison of the performance of the different classification models using the SMOTE sampling methods. The models are: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF)
The ROC curves of the different machine learning classification models. The models are: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN) and K-Nearest Neighbor (KNN). The results show that without using the SMOTE sampling method (a), BC and BN achieve the highest AUC (0.81), while with the SMOTE sampling method (b), the KNN model achieves the highest AUC (0.94)
Using machine learning methods to predict different medical outcomes (e.g., diabetes, hypertension and death) from medical datasets is gaining increasing attention in the medical domain. This study was designed to take advantage of the unique opportunity provided by our access to a large and rich clinical research dataset, covering a total of 34,212 patients, collected by the FIT Project, to investigate the relative performance of various machine learning classification methods for predicting all-cause mortality (ACM) using medical records of cardiorespiratory fitness. The large number of attributes in the dataset, 49 in total, is used to uncover new potential predictors of ACM. To the best of our knowledge, this is the first study that compares the performance of ML models for predicting ACM using cardiorespiratory fitness data. We have evaluated seven models trained with and without SMOTE using various evaluation metrics.
Knuiman et al. [41] presented an empirical comparison of four different techniques for estimating the risk of death using mortality follow-up data on 1701 men. The four techniques used are binary tree, logistic regression, survival tree and Cox regression. Cox regression outperformed the other three techniques, achieving an AUC of 0.78, followed by logistic regression (AUC = 0.72), survival tree (AUC = 0.71) and binary tree (AUC = 0.66). Vomlel et al. [42] presented a predictive model for mortality using five different machine learning techniques on data of 603 patients from the University Hospital in Olomouc. The machine learning techniques used are logistic regression, decision tree, Naive Bayes classifier, Artificial Neural Network and Bayesian Network classifier. Using 10-fold cross validation, logistic regression achieves the highest area under the curve of 0.82, whereas the decision tree has the lowest AUC value of 0.61. Allyn et al. [43] compared the performance of a logistic regression model and different machine learning models to predict in-hospital mortality after elective cardiac surgery. The study includes a database of 6520 patients from December 2005 to December 2012, from a cardiac surgical center at a university hospital. Five different machine learning models have been evaluated: logistic regression, gradient boosting machine, random forest, support vector machine and naive Bayes. The area under the ROC curve for the machine learning model (AUC = 0.795) was significantly higher than for the logistic regression model (AUC = 0.742). Taylor et al. [44] studied the prediction of mortality of 4676 patients with sepsis at the emergency department using a logistic regression model and a machine learning model. The machine learning model (AUC 0.86) outperforms the logistic regression model (AUC 0.76). Sherri [45] studied the Physical Performance and Age-Related Changes in Sonomans (SPPARCS) cohort to predict death among 2066 residents of Sonoma, California over the period between 1993 and 1995. In this study, a super learner has been used for death prediction. A super learner is an ensembling machine learning approach that combines multiple machine learning algorithms into a single algorithm and returns a prediction function with the best cross-validated mean squared error. The super learner outperformed all single algorithms in the collection, although its performance was quite similar to that of some of them; it outperformed the worst algorithm (neural networks) by 44% with respect to estimated cross-validated mean squared error. In principle, the datasets of the studies above (e.g., Knuiman et al. [41] and Allyn et al. [43]) are relatively small in comparison to the number of patients in our dataset. In general, in machine learning, the bigger the dataset, the higher the accuracy and robustness of the developed prediction models. In these studies, the highest AUC achieved by the developed prediction models is 0.86. In our experiments, the Random Forest (RF) model using SMOTE sampling achieved an AUC of 0.97, which significantly outperforms the models of these studies.
Sullivan et al. [46] investigated the literature related to comparisons between established risk prediction models for perioperative mortality used in the setting of cardiac surgery. A meta-analysis was conducted to calculate a summary estimate of the difference in AUCs between models. The comparisons include 22 studies. The authors noted that all the investigated studies relied on relatively small datasets. This highlights the strength and uniqueness of our study, which relies on a large dataset, reflected in both the number of patients and the number of variables.
In general, an important observation from the results of our experiments is that, across all metrics, more complex ML models (e.g., Support Vector Machine (SVM), Artificial Neural Networks (ANN)) do not always outperform simpler models (e.g., the Decision Tree (DT) model [47]). In particular, the Decision Tree (DT) model outperformed the more complex models on all evaluation metrics. The RF and KNN classifiers are considered to be less complex than SVM and ANN, yet they achieved the best performance across all metrics for models trained using SMOTE. In general, KNN is a non-linear classifier and therefore tends to perform very well with a lot of data points. It is also very sensitive to bad features (variables). Therefore, effective feature selection [27] is an important step before using the KNN classifier and tends to improve its results. The Decision Tree (DT) model benefits from the feature selection and removal of collinear variables as well. In general, decision trees do not require any assumptions of linearity in the data and thus work well for nonlinearly related variables.
On the other hand, the SVM model tends to perform well in high-dimensional classification problems that may have hundreds of thousands of dimensions, which is not the case in this study. In addition, the SVM model does not tend to perform well if the classes of the problem strongly overlap. In general, parametric models (e.g., SVM, Bayesian Network) can suffer from overlooking local groupings because, by their nature, they summarize information in some way. ANN can usually outperform other methods if the dataset is very large and the structure of the data is complex (e.g., networks with many layers). This is an advantage for the KNN classifier, which makes the fewest assumptions about the input data.
The results also show that the performance of the KNN and ANN classifiers, like that of the other models, can be very sensitive to the values of their parameters; these parameters therefore need to be carefully explored and tuned in order to reach an adequate configuration. For example, the results show that setting the K parameter to the value of 1 achieves the best performance on all the evaluation metrics. In particular, for K = 1 the model achieves an AUC of 0.94, while for K = 3, 5 and 10 the model achieves AUC values of 0.93, 0.91 and 0.90, respectively. In general, increasing the value of the K parameter has a mostly negative impact on the performance of the classifier for all metrics. The risk of model overfitting from using a low K value has been mitigated by using the 10-fold cross-validation evaluation method. Clearly, however, the optimal value of the K parameter can differ significantly from one problem to another.
ML techniques have shown solid prediction capabilities in various application domains, including medicine and healthcare. In this study, we presented an evaluation and comparison of seven popular ML techniques for predicting all-cause mortality (ACM) using medical records of cardiorespiratory fitness from the Henry Ford ExercIse Testing (FIT) Project. The results show that the various ML techniques can vary significantly in their performance across the different evaluation metrics. A more complex ML model does not necessarily achieve higher prediction accuracy; simpler models can perform better in some cases. Therefore, there is no one-size-fits-all model that performs well for all domains or datasets. Each problem and dataset needs to be carefully evaluated, modeled and studied in order to reach an effective predictive model design. The results have also shown that it is critical to carefully explore and evaluate the performance of the ML models using various tuned values for their parameters. These results confirm the explorative nature of the ML process, which requires iterative and explorative experiments in order to discover the model design that can achieve the target accuracy.
The detailed descriptions of the variables of the dataset are available in the appendix of the article.
ACM: All-cause mortality
BC: Naïve Bayesian Classifier
BN: Bayesian Network
CRF: Cardiorespiratory Fitness
DT: Decision Tree
KNN: K-Nearest Neighbor
ML: Machine Learning
RF: Random Forest
SMOTE: Synthetic Minority Over-Sampling Technique
SVM: Support Vector Machine
Alpaydin E. Introduction to machine learning. MIT Press; 2014. https://mitpress.mit.edu/books/introduction-machine-learning-0.
Marsland S. Machine learning: an algorithmic perspective. CRC Press; 2015. https://www.crcpress.com/Machine-Learning-An-Algorithmic-Perspective-Second-Edition/Marsland/p/book/9781466583283.
Aggarwal CC. Data classification: algorithms and applications. CRC Press; 2014. https://www.crcpress.com/Data-Classification-Algorithms-and-Applications/Aggarwal/p/book/9781466586741.
Mayer-Schonberger V, Cukier K. Big data: a revolution that will transform how we live, work, and think. Houghton Mifflin Harcourt; 2013. https://www.amazon.com/Big-Data-Revolution-Transform-Think/dp/0544227751.
Waljee AK, Higgins PD. Machine learning in medicine: a primer for physicians. Am J Gastroenterol. 2010;105(6):1224.
Kayyali B, Knott D, Van Kuiken S. "The big-data revolution in us health care: Accelerating value and innovation," Mc Kinsey & Company; 2013. https://www.mckinsey.com/industries/healthcare-systems-and-services/our-insights/the-big-data-revolution-in-us-health-care.
Burke J. Health analytics: gaining the insights to transform health care, 1st ed. Wiley; 2013. http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118383044.html.
Al-Mallah MH, Keteyian SJ, Brawner CA, Whelton S, Blaha MJ. Rationale and design of the henry ford exercise testing project (the fit project). Clin Cardiol. 2014;37(8):456–61.
Bruce R, Kusumi F, Hosmer D. Maximal oxygen intake and nomographic assessment of functional aerobic impairment in cardiovascular disease. Am Heart J. 1973;85(4):546–62.
Juraschek SP, Blaha MJ, Whelton SP, Blumenthal R, Jones SR, Keteyian SJ, Schairer J, Brawner CA, Al-Mallah MH. Physical fitness and hypertension in a population at risk for cardiovascular disease: the henry ford exercise testing (fit) project. J Am Heart Assoc. 2014;3(6):e001268.
Hung RK, Al-Mallah MH, McEvoy JW, Whelton SP, Blumenthal RS, Nasir K, Schairer JR, Brawner C, Alam M, Keteyian SJ, et al. Prognostic value of exercise capacity in patients with coronary artery disease: the fit (henry ford exercise testing) project. Mayo Clin Proc. 2014;89(12. Elsevier):1644–54.
Juraschek SP, Blaha MJ, Blumenthal RS, Brawner C, Qureshi W, Keteyian SJ, Schairer J, Ehrman JK, Al-Mallah MH. Cardiorespiratory fitness and incident diabetes: the fit (henry ford exercise testing) project. Diabetes Care. 2015;38(6):1075–81.
Qureshi WT, Alirhayim Z, Blaha MJ, Juraschek SP, Keteyian SJ, Brawner CA, Al-Mallah MH. Cardiorespiratory fitness and risk of incident atrial fibrillation: results from the Henry Ford exercise testing (FIT) project. Circulation. 2015. https://www.ncbi.nlm.nih.gov/pubmed/25904645.
Austin PC, Tu JV, Ho JE, Levy D, Lee DS. Using methods from the data-mining and machine-learning literature for disease classification and prediction: a case study examining classification of heart failure subtypes. J Clin Epidemiol. 2013;66(4):398–407.
Bu Y, Howe B, Balazinska M, Ernst MD. The HaLoop approach to large-scale iterative data analysis. VLDB J. 2012;21(2):169.
Batista GE, Prati RC, Monard MC. A study of the behavior of several methods for balancing machine learning training data. ACM Sigkdd Explorations Newsletter. 2004;6(1):20–9.
Pazzani MJ, Merz CJ, Murphy PM, Ali KM, Hume T, Brunk C. Reducing misclassification costs. In: Machine Learning, Proceedings of the Eleventh International Conference. New Brunswick: Rutgers University; 1994. p. 217–25.
Kubat M, Matwin S. Addressing the curse of imbalanced training sets: one-sided selection. In: Proceedings of the Fourteenth International Conference on Machine Learning, vol. 97. Nashville: ICML; 1997. p. 179–86.
Japkowicz N. The class imbalance problem: significance and strategies. In: Proceedings of the 2000 International Conference on Artificial Intelligence (ICAI); 2000. p. 111–7.
Lewis DD, Catlett J. Heterogeneous uncertainty sampling for supervised learning. In: Machine Learning, Proceedings of the Eleventh International Conference. New Brunswick: Rutgers University; 1994. p. 148–56.
Ling CX, Li C. Data mining for direct marketing: problems and solutions. In: Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining (KDD-98). New York City; 1998. p. 73–9. Available: http://www.aaai.org/Library/KDD/1998/kdd98-011.php. Accessed 1 May 2017.
Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res (JAIR). 2002;16:321–57.
Li D-C, Liu C-W, Hu SC. A learning method for the class imbalance problem with medical data sets. Comput Biol Med. 2010;40(5):509–18.
Ramentol E, Caballero Y, Bello R, Herrera F. SMOTE-RSB*: a hybrid preprocessing approach based on oversampling and undersampling for high imbalanced data-sets using SMOTE and rough sets theory. Knowl Inf Syst. 2012;33(2):245–65.
Quinlan JR. Induction of decision trees. Mach Learn. 1986;1(1):81–106.
Hearst MA, Dumais ST, Osman E, Platt J, Scholkopf B. Support vector machines. IEEE Intelligent Systems and their Applications. 1998;13(4):18–28.
Platt J. "Fast Training of Support Vector Machines using Sequential Minimal Optimization." In Advances in Kernel Methods - Support Vector Learning. MIT Press; 1998. https://dl.acm.org/citation.cfm?id=299094.299105.
Arbib MA. The handbook of brain theory and neural networks. MIT press; 2003. https://mitpress.mit.edu/books/handbook-brain-theory-and-neural-networks.
Cooper G, Herskovits E. A Bayesian method for the induction of probabilistic networks from data. Mach Learn. 1992;9:309–47.
Murphy KP. "Naive bayes classifiers," University of British Columbia; 2006. https://datajobsboard.com/wp-content/uploads/2017/01/Naive-Bayes-Kevin-Murphy.pdf.
Bernardo JM, Smith AF. Bayesian theory; 2001.
Friedman N, Geiger D, Goldszmidt M. Bayesian network classifiers. Mach Learn. 1997;29(2-3):131–63.
Buntine WL. A guide to the literature on learning probabilistic networks from data. IEEE Trans Knowl Data Eng. 1996;8:195–210.
Cunningham P, Delany SJ. K-nearest neighbour classifiers. Multiple Classifier Systems. 2007;34:1–17.
Cheng J, Greiner R. "Comparing bayesian network classifiers." Proceedings UAI, 101–107; 1999. https://dl.acm.org/citation.cfm?id=2073808.
Bouckaert RR. "Bayesian Belief Networks: from Construction to Inference." Ph.D. thesis. University of Utrecht; 1995. https://dspace.library.uu.nl/handle/1874/845.
Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
Ho TK. Random decision forests. In: Proceedings of the Third International Conference on Document Analysis and Recognition, Volume 1. ICDAR '95. Washington, DC: IEEE Computer Society; 1995. p. 278.
Prasad AM, Iverson LR, Liaw A. Newer classification and regression tree techniques: bagging and random forests for ecological prediction. Ecosystems. 2006;9(2):181–99.
Refaeilzadeh P, Tang L, Liu H. Cross-validation. In: Encyclopedia of Database Systems. Springer; 2009. p. 532–8.
Knuiman MW, Vu HT, Segal MR. An empirical comparison of multivariable methods for estimating risk of death from coronary heart disease. J Cardiovasc Risk. 1997;4(2):127–34.
Vomlel J, Kruzik H, Tuma P, Precek J, Hutyra M. Machine learning methods for mortality prediction in patients with ST elevation myocardial infarction. Proceedings of WUPES. 2012;2012:204–13.
Allyn J, Allou N, Augustin P, Philip I, Martinet O, Belghiti M, Provenchere S, Montravers P, Ferdynus C. A comparison of a machine learning model with euroscore ii in predicting mortality after elective cardiac surgery: a decision curve analysis. PLoS One. 2017;12(1):e0169772.
Taylor RA, Pare JR, Venkatesh AK, Mowafi H, Melnick ER, Fleischman W, Hall MK. Prediction of in-hospital mortality in emergency department patients with sepsis: a local big data-driven, machine learning approach. Acad Emerg Med. 2016;23(3):269–78.
Rose S. Mortality risk score prediction in an elderly population using machine learning. Am J Epidemiol. 2013;177(5):443.
Sullivan PG, Wallach JD, Ioannidis JP. Meta-analysis comparing established risk prediction models (euroscore ii, sts score, and acef score) for perioperative mortality during cardiac surgery. Am J Cardiol. 2016;118(10):1574–82.
Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (roc) curve. Radiology. 1982;143(1):29–36.
The authors thank the staff and patients involved in the FIT project for their contributions.
Availability of data and material
The FIT project includes data from a single institution, collected under IRB approval without public funding or resources. Resources from Henry Ford Hospital were utilized in this project. The IRB approval clearly stated that the data will remain with the PI (Dr. Mouaz Al-Mallah - [email protected]) and the study investigators. We would like to note that there are many ongoing analyses from the project. Data sharing will be only on a collaborative basis, after the approval of all the investigators who have invested time and effort in this project, and is subject to IRB approval from Henry Ford Hospital and data sharing agreements.
Funding was provided by King Abdullah International Medical Research Center (grant number SP16/100). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
King AbdulAziz Cardiac Center, Ministry of National Guard, Health Affairs, King Abdulaziz Medical City for National Guard - Health affairs, King Abdullah International Medical Research Center, King Saud bin Abdulaziz University for Health Sciences, Department Mail Code: 1413, P.O. Box 22490, Riyadh, 11426, Kingdom of Saudi Arabia
Sherif Sakr, Amjad M. Ahmed, Steven J. Keteyian & Mouaz H. Al-Mallah
Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
Radwa Elshawi
Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC, USA
Waqas T. Qureshi
Division of Cardiovascular Medicine, Henry Ford Hospital, Detroit, MI, USA
Clinton A. Brawner & Mouaz H. Al-Mallah
Johns Hopkins University, Baltimore, MD, USA
Michael J. Blaha
Sherif Sakr
Amjad M. Ahmed
Clinton A. Brawner
Steven J. Keteyian
Mouaz H. Al-Mallah
SS: Data analysis and manuscript drafting. RE: Data analysis and manuscript drafting. AA: Data collection, critical review of manuscript. WQ: Data collection, critical review of manuscript. CB: Data collection, critical review of manuscript. SK: Data collection, critical review of manuscript. MB: Data analysis, critical review of manuscript. MA: Data collection, Data analysis and critical review of manuscript. All authors read and approved the final manuscript.
Correspondence to Mouaz H. Al-Mallah.
This article does not contain any studies with human participants or animals performed by any of the authors. The FIT project is approved by the IRB (ethics committee) of Henry Ford Hospital (IRB #5812). Informed consent was waived due to the retrospective nature of the study, so consent to participate was not required.
Not applicable. The manuscript does not contain any individually identifying data.
Sakr, S., Elshawi, R., Ahmed, A.M. et al. Comparison of machine learning techniques to predict all-cause mortality using fitness data: the Henry Ford exercIse testing (FIT) project. BMC Med Inform Decis Mak 17, 174 (2017). https://doi.org/10.1186/s12911-017-0566-6
FIT (Henry Ford ExercIse Testing) project
Standards, technology, machine learning, and modeling | CommonCrawl |
The Annals of Mathematical Statistics
Ann. Math. Statist.
A Generalization of Wald's Identity with Applications to Random Walks
H. D. Miller
Let $S_m = X_1 + \cdots + X_m$, where the $X_j$ are independent random variables with common m.g.f. $\phi(t)$ which is assumed to exist in a real interval containing $t = 0$. Let the random variable $n$ be defined as the smallest integer $m$ for which either $S_m \geqq \alpha$ or $S_m \leqq -\beta$ ($\alpha > 0$, $\beta > 0$). Thus $n$ can be regarded as the time to absorption for the random walk $S_m$ with absorbing barriers at $\alpha$ and $-\beta$. Let $S = S_n$ and let $F_m(x) = P(-\beta < S_k < \alpha \text{ for } k = 1, 2, \cdots, m - 1 \text{ and } S_m \leqq x)$. The main result of the paper is the identity \begin{equation*}\tag{0.1}E(e^{tS}z^n) = 1 + \lbrack z\phi (t) - 1\rbrack F(z, t),\end{equation*} where $F(z, t) = \sum^\infty_{m = 0} z^m \int^\alpha_{-\beta} e^{tx} \, dF_m(x)$. Wald's identity follows formally from (0.1) by setting $z = \lbrack\phi(t)\rbrack^{-1}$. Regions of validity of (0.1) and of Wald's identity are discussed, and it is shown that the latter holds for a larger range of values of $t$ than is usually supposed. In Section 5 there are three examples. In the first we consider the case where there is a single absorbing barrier and where the $X_j$ are discrete and bounded. This is a gambler's ruin problem, and we obtain an expression for the probability of ruin. In the second we use the classical random walk to illustrate the region of validity of (0.1). In the third we obtain the Laplace transform of the distribution of the time to absorption in a random walk in which steps of $+1$ and $-1$ occur at random in continuous time.
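Wald's identity implies, in particular, the first-moment relation $E(S_n) = E(n)E(X_1)$ for the stopped sum. A Monte Carlo sketch of that corollary for the two-barrier walk is below; the step distribution, barriers and sample size are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative step distribution: X = +1 with prob p, -1 with prob 1 - p.
p, alpha, beta = 0.45, 5.0, 5.0
mean_step = 2 * p - 1  # E(X)

def stopped_walk():
    """Run S_m until S_m >= alpha or S_m <= -beta; return (n, S_n)."""
    s, n = 0.0, 0
    while -beta < s < alpha:
        s += 1.0 if rng.random() < p else -1.0
        n += 1
    return n, s

samples = [stopped_walk() for _ in range(50_000)]
mean_n = np.mean([n for n, _ in samples])
mean_s = np.mean([s for _, s in samples])

# Wald's first-moment identity: E(S_n) = E(n) * E(X).
print(f"E(S_n) ~ {mean_s:.4f}, E(n)*E(X) ~ {mean_n * mean_step:.4f}")
```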
Ann. Math. Statist., Volume 32, Number 2 (1961), 549-560.
https://projecteuclid.org/euclid.aoms/1177705060
doi:10.1214/aoms/1177705060
Miller, H. D. A Generalization of Wald's Identity with Applications to Random Walks. Ann. Math. Statist. 32 (1961), no. 2, 549--560. doi:10.1214/aoms/1177705060. https://projecteuclid.org/euclid.aoms/1177705060
The Institute of Mathematical Statistics
Application of Methods in Sequential Analysis to Dam Theory
Phatarfod, R. M., The Annals of Mathematical Statistics, 1963
Characterizations of Independence in Certain Families of Bivariate and Multivariate Distributions
Jogdeo, Kumar, The Annals of Mathematical Statistics, 1968
The Probability in the Extreme Tail of a Convolution
Blackwell, David and Hodges, J. L., The Annals of Mathematical Statistics, 1959
High Level Occupation Times for Continuous Gaussian Processes
Marlow, Norman A., The Annals of Probability, 1973
On the Glivenko-Cantelli Theorem for Weighted Empiricals Based on Independent Random Variables
Singh, Radhey S., The Annals of Probability, 1975
Bernard Friedman's Urn
Freedman, David A., The Annals of Mathematical Statistics, 1965
A Characterization of the Inverse Gaussian Distribution
Khatri, C. G., The Annals of Mathematical Statistics, 1962
Functional Central Limit Theorems for Random Walks Conditioned to Stay Positive
Iglehart, Donald L., The Annals of Probability, 1974
On Generalized Renewal Measures and Certain First Passage Times
Alsmeyer, Gerold, The Annals of Probability, 1992
Limit Theorems for Stopped Random Walks
Farrell, R. H., The Annals of Mathematical Statistics, 1964
euclid.aoms/1177705060 | CommonCrawl |
Joachim, C. & Gimzewski, J. K. An electromechanical amplifier using a single molecule. Chemical Physics Letters 265, 353–357 (1997).
Aitken, E. J. et al. Electron spectroscopic investigations of the influence of initial-and final-state effects on electronegativity. Journal of the American Chemical Society 102, 4873–4879 (1980).
Joachim, C., Gimzewski, J. K. & Aviram, A. Electronics using hybrid-molecular and mono-molecular devices. Nature 408, 541–548 (2000).
Gimzewski, J. K., Sass, J. K., Schlitter, R. R. & Schott, J. Enhanced photon emission in scanning tunnelling microscopy. EPL (Europhysics Letters) 8, 435 (1989).
Fabian, D. J., Gimzewski, J. K., Barrie, A. & Dev, B. Excitation of Fe 1s core-level photoelectrons with synchrotron radiation. Journal of Physics F: Metal Physics 7, L345 (1977).
Dürig, U., Gimzewski, J. K. & Pohl, D. W. Experimental observation of forces acting during scanning tunneling microscopy. Physical review letters 57, 2403 (1986).
Bomben, K. D., Bahl, M. K., Gimzewski, J. K., Chambers, S. A. & Thomas, T. D. Extended-x-ray-absorption fine-structure amplitude attenuation in Br 2: Relationship to satellites in the x-ray photoelectron spectrum. Physical Review A 20, 2405 (1979).
Bomben, K. D., Gimzewski, J. K. & Thomas, T. D. Extra-atomic relaxation in HCl, ClF, and Cl2 from x-ray photoelectron spectroscopy. The Journal of Chemical Physics 78, 5437–5442 (1983).
Reihl, B. & Gimzewski, J. K. Field emission scanning Auger microscope (FESAM). Surface Science 189, 36–43 (1987).
Coombs, J. H. & Gimzewski, J. K. Fine structure in field emission resonances at surfaces. Journal of Microscopy 152, 841–851 (1988).
Stieg, A. Z., Rasool, H. I. & Gimzewski, J. K. A flexible, highly stable electrochemical scanning probe microscope for nanoscale studies at the solid-liquid interface. Review of Scientific Instruments 79, 103701 (2008).
Dürig, U., Gimzewski, J. K., Pohl, D. W. & Schlittler, R. Force Sensing in Scanning Tunneling Microscopy. IBM, Rüschlikon 1 (1986).
Loppacher, C. et al. Forces with submolecular resolution between the probing tip and Cu-TBPP molecules on Cu (100) observed with a combined AFM/STM. Applied Physics A 72, S105–S108 (2001).
Stoll, E. P. & Gimzewski, J. K. Fundamental and practical aspects of differential scanning tunneling microscopy. Journal of Vacuum Science & Technology B 9, 643–647 (1991).
Tang, H., Cuberes, M. T., Joachim, C. & Gimzewski, J. K. Fundamental considerations in the manipulation of a single C60 molecule on a surface with an STM. Surface Science 386, 115–123 (1997).
Yamashita, K., Gimzewski, J. K. & Veprek, S. Hydrogen trapping in zirconium under plasma conditions. Journal of Nuclear Materials 128, 705–707 (1984).
Reed, J. et al. Identifying individual DNA species in a complex mixture by precisely measuring the spacing between nicking restriction enzymes with atomic force microscope. Journal of The Royal Society Interface 9, 2341–2350 (2012).
Gimzewski, J. K. et al. Impurity deposition profiles in the plasma edge of the TCA Tokamak. Physica Scripta 30, 271 (1984).
Gimzewski, J. K. et al. Impurity recycling and retention on Au and C surfaces exposed to the scrape-off layer of the TCA tokamak. Journal of Vacuum Science & Technology A 4, 90–96 (1986).
Sharma, S. et al. Influence of substrates on hepatocytes: a nanomechanical study. Journal of Scanning Probe Microscopy 4, 7–16 (2009).
Berndt, R. & Gimzewski, J. K. Injection luminescence from CdS (11$\bar{2}$0) studied with scanning tunneling microscopy. Physical Review B 45, 14095 (1992).
Berger, R. et al. Integration of silicon micromechanical arrays with molecular monolayers for miniaturized sensor systems. Sensors and Their Applications VIII, Proceedings of the eighth conference on Sensors and their Applications, held in Glasgow, UK, 7-10 September 1997 7, 71 (1997).
Gimzewski, J. K., Donnelly, T. & Affrossman, S. Interaction of ozone with nickel ions adsorbed on alumina. Journal of Catalysis 47, 79–84 (1977).
Reed, J., Schmit, J., Han, S., Wilkinson, P. & Gimzewski, J. K. Interferometric profiling of microcantilevers in liquid. Optics and Lasers in Engineering 47, 217–222 (2009).
Gimzewski, J. K. & Veprek, S. Investigation of impurity retention, implantation and sputtering phenomena on au and c surfaces exposed to the scrape-off-layer. Journal of Nuclear Materials 128, 703–704 (1984).
Gimzewski, J. K. & Veprek, S. Investigation of the initial stages of oxidation of microcrystalline silicon by means of X-ray photoelectron spectroscopy. Solid state communications 47, 747–751 (1983).
Gimzewski, J. K. et al. Investigations of the surface of the amorphous alloy Fe80B20 by STM, XPS and AES. Journal of Non-Crystalline Solids 116, 253–261 (1990).
Reihl, B., Coombs, J. H. & Gimzewski, J. K. Local inverse photoemission with the scanning tunneling microscope. Surface Science 211, 156–164 (1989).
Gaisch, R. et al. Low-temperature ultra-high-vacuum scanning tunneling microscope. Ultramicroscopy 42, 1621–1626 (1992).
Cuberes, M. T., Schlittler, R. R. & Gimzewski, J. K. Manipulation of C 60 molecules on Cu (111) surfaces using a scanning tunneling microscope. Applied Physics A: Materials Science & Processing 66, S669–S673 (1998).
Scandella, L. et al. Micromechanical Thermal Gravimetry Performed on one Single Zeolite Crystal. Helvetica Physica Acta 71, 3–4 (1998).
Berger, R. et al. Micromechanical thermogravimetry. Chemical Physics Letters 294, 363–369 (1998).
Gimzewski, J. K. & Veprek, S. A novel method for the determination of the energies of impurity ions bombarding a solid surface exposed to a low pressure plasma. Journal of Vacuum Science & Technology A 2, 35–39 (1984).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of local photoemission using a scanning tunneling microscope. Ultramicroscopy 42, 366–370 (1992).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of mass transport on Au (110)-(1$\times$ 2) reconstructed surfaces using scanning tunneling microscopy. Surface Science 247, 327–332 (1991).
Gimzewski, J. K., Berndt, R. & Schlittler, R. R. Observation of mass transport on Au (110)-(1$\times$ 2) reconstructed surfaces using scanning tunneling microscopy. Surface Science Letters 247, A213 (1990). | CommonCrawl |
Hasib Khan 1, Cemil Tunc 2,*, and Aziz Khan 3
Department of Mathematics, Shaheed BB University, Sheringal, Dir Upper 18000, Khybar Pakhtunkhwa, Pakistan
Department of Mathematics, Faculty of Sciences, Van Yuzuncu Yil University, 65080 Van, Turkey
Prince Sultan University, P.O. Box 66833, 11586 Riyadh, Saudi Arabia
* Corresponding author: Cemil Tunc
In this paper, we are dealing with singular fractional differential equations (DEs) having delay and $\mho_p$ (the $p$-Laplacian operator). In our problem, we contemplate two fractional order differential operators, that is, Riemann–Liouville and Caputo's, with fractional integral and fractional differential initial-boundary conditions. The SFDE is given by
$ \begin{equation*} \left\{\begin{split} &\mathcal{D}^{\gamma}\big[\mho^*_p[\mathcal{D}^{\kappa}x(t)]\big]+\mathcal{Q}(t)\zeta_1(t, x(t-\varrho^*)) = 0, \\& \mathcal{I}_0^{1-\gamma}\big(\mho^*_p[\mathcal{D}^{\kappa}x(t)]\big)|_{t = 0} = 0 = \mathcal{I}_0^{2-\gamma}\big(\mho^*_p[\mathcal{D}^{\kappa}x(t)]\big)|_{t = 0}, \\& \mathcal{D}^{\delta^*}x(1) = 0, \, \, x(1) = x'(0), \, \, x^{(k)}(0) = 0\text{ for $k = 2, 3, \ldots, n-1$}, \end{split}\right. \end{equation*} $
where $\zeta_1$ is a continuous function that is singular at $t$ and $x(t)$ for some values of $t\in [0, 1]$. The operator $\mathcal{D}^{\gamma}$ is the Riemann–Liouville fractional derivative, while $\mathcal{D}^{\delta^*}$ and $\mathcal{D}^{\kappa}$ stand for Caputo fractional derivatives, with $\delta^*, \gamma\in(1, 2]$, $n-1<\kappa\leq n$ and $n\geq 3$. For the study of the EUS (existence and uniqueness of solutions), a fixed point approach is followed in this paper and an application is given to explain the findings.
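As a purely illustrative aside (not from the article), Caputo derivatives of order $0<\kappa<1$ such as those above are commonly approximated numerically with the L1 scheme; the sketch below implements it and checks against the known closed form $\mathcal{D}^{\kappa} t = t^{1-\kappa}/\Gamma(2-\kappa)$. All function names are our own choices.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, t, kappa):
    """L1 approximation of the Caputo derivative D^kappa f, 0 < kappa < 1,
    on the uniform grid t (f_vals are the samples of f on t)."""
    h = t[1] - t[0]
    out = np.zeros_like(f_vals)
    c = h ** (-kappa) / gamma(2 - kappa)
    for k in range(1, len(t)):
        j = np.arange(k)
        w = (k - j) ** (1 - kappa) - (k - j - 1) ** (1 - kappa)
        out[k] = c * np.sum(w * np.diff(f_vals[: k + 1]))
    return out

t = np.linspace(0.0, 1.0, 201)
kappa = 0.5
approx = caputo_l1(t, t, kappa)              # f(t) = t
exact = t ** (1 - kappa) / gamma(2 - kappa)  # known Caputo derivative of t
print(np.max(np.abs(approx - exact)))        # small discretisation error
```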
Keywords: Fractional differential equations with singularity, existence of positive solution, Hyers-Ulam stability, Caputo's fractional derivative.
Citation: Hasib Khan, Cemil Tunc, Aziz Khan. Green function's properties and existence theorems for nonlinear singular-delay-fractional differential equations. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020139
B. Ahmad, A. Alsaedi, R. P. Agarwal and A. Alsharif, On sequential fractional integro-differential equations with nonlocal integral boundary conditions, Bull. Malays. Math. Sci. Soc., 41 (2018), 1725-1737. doi: 10.1007/s40840-016-0421-4.
A. Atangana and J. F. Gómez-Aguilar, Hyperchaotic behaviour obtained via a nonlocal operator with exponential decay and Mittag-Leffler laws, Chaos Solitons Fractals, 102 (2017), 285-294. doi: 10.1016/j.chaos.2017.03.022.
A. Atangana and J. F. Gómez-Aguilar, A new derivative with normal distribution kernel: Theory, methods and applications, Phys. A, 476 (2017), 1-14. doi: 10.1016/j.physa.2017.02.016.
A. Atangana and J. F. Gómez-Aguilar, Decolonisation of fractional calculus rules: Breaking commutativity and associativity to capture more natural phenomena, The European Physical Journal Plus, 133 (2018), 1-22. doi: 10.1140/epjp/i2018-12021-3.
A. Atangana and J. F. Gómez-Aguilar, Numerical approximation of Riemann-Liouville definition of fractional derivative: From Riemann-Liouville to Atangana-Baleanu, Numer. Methods Partial Differential Equations, 34 (2018), 1502-1523. doi: 10.1002/num.22195.
T. Abdeljawad, F. Jarad and D. Baleanu, On the existence and the uniqueness theorem for fractional differential equations with bounded delay within Caputo derivatives, Sci. China Ser. A, 51 (2008), 1775-1786. doi: 10.1007/s11425-008-0068-1.
T. Abdeljawad, D. Baleanu and F. Jarad, Existence and uniqueness theorem for a class of delay differential equations with left and right Caputo fractional derivatives, J. Math. Phys., 49 (2008), 083507, 11 pp. doi: 10.1063/1.2970709.
T. Abdeljawad and Q. M. Al-Mdallal, Discrete Mittag-Leffler kernel type fractional difference initial value problems and Gronwall's inequality, J. Comput. Appl. Math., 339 (2018), 218-230. doi: 10.1016/j.cam.2017.10.021.
T. Abdeljawad and J. Alzabut, On Riemann-Liouville fractional q-difference equations and their application to retarded logistic type model, Math. Methods Appl. Sci., 41 (2018), 8953-8962. doi: 10.1002/mma.4743.
B. Ahmad and R. Luca, Existence of solutions for sequential fractional integro-differential equations and inclusions with nonlocal boundary conditions, Appl. Math. Comput., 339 (2018), 516-534. doi: 10.1016/j.amc.2018.07.025.
J. Alzabut, T. Abdeljawad and D. Baleanu, Nonlinear delay fractional difference equations with application on discrete fractional Lotka-Volterra model, J. Comput. Anal. Appl., 25 (2018), 889-898.
T. Abdeljawad, Fractional operators with exponential kernels and a Lyapunov type inequality, Adv. Difference Equ., 2017 (2017), 11 pp. doi: 10.1186/s13662-017-1285-0.
A. Babakhani and T. Abdeljawad, A Caputo fractional order boundary value problem with integral boundary conditions, J. Comput. Anal. Appl., 15 (2013), 753-763.
Y. K. Chang and R. Ponce, Uniform exponential stability and applications to bounded solutions of integro-differential equations in Banach spaces, J. Integral Equations Appl., 30 (2018), 347-369. doi: 10.1216/JIE-2018-30-3-347.
A. Coronel-Escamilla, J. F. Gómez-Aguilar, M. G. López-López, V. M. Alvarado-Martínez and G. V. Guerrero-Ramírez, Triple pendulum model involving fractional derivatives with different kernels, Chaos Solitons Fractals, 91 (2016), 248-261. doi: 10.1016/j.chaos.2016.06.007.
J. Henderson and R. Luca, Systems of Riemann-Liouville fractional equations with multi-point boundary conditions, Appl. Math. Comput., 309 (2017), 303-323. doi: 10.1016/j.amc.2017.03.044.
L. Guo, L. Liu and Y. Wu, Iterative unique positive solutions for singular p-Laplacian fractional differential equation system with several parameters, Nonlinear Anal., Model. Control, 23 (2018), 182-203. doi: 10.15388/NA.2018.2.3.
A. Ghanmi, M. Kratou and K. Saoudi, A multiplicity results for a singular problem involving a Riemann-Liouville fractional derivative, Filomat, 32 (2018), 653-669. doi: 10.2298/FIL1802653G.
J. F. Gómez-Aguilar and A. Atangana, New insight in fractional differentiation: Power, exponential decay and Mittag-Leffler laws and applications, The European Physical Journal Plus, 132 (2017), 13 pp.
J. F. Gómez-Aguilar, L. Torres, H. Yépez-Martínez, D. Baleanu, J. M. Reyes and I. O. Sosa, Fractional Liénard type model of a pipeline within the fractional derivative without singular kernel, Adv. Difference Equ., 2016 (2016), Paper No. 173, 13 pp. doi: 10.1186/s13662-016-0908-1.
R. Hilfer, Application of Fractional Calculus in Physics, World Scientific Publishing Co., Inc., River Edge, NJ, 2000. doi: 10.1142/9789812817747.
S. Hristova and C. Tunc, Stability of nonlinear Volterra integro-differential equations with Caputo fractional derivative and bounded delays, Electron. J. Differential Equations, 2019 (2019), Paper No. 30, 11 pp.
D. Ji, Positive solutions of singular fractional boundary value problem with p-Laplacian, Bull. Malays. Math. Sci. Soc., 41 (2018), 249-263. doi: 10.1007/s40840-015-0276-0.
E. T. Karimov and K. Sadarangani, Existence of a unique positive solution for a singular fractional boundary value problem, Carpathian J. Math., 34 (2018), 57-64.
A. A. Kilbas, H. M. Srivastava and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, North-Holland Mathematics Studies, 204, Elsevier Science B.V., Amsterdam, 2006.
A. Khan, Y. Li, K. Shah and T. S. Khan, On coupled p-Laplacian fractional differential equations with nonlinear boundary conditions, Complexity, 2017 (2017), Art. ID 8197610, 9 pp. doi: 10.1155/2017/8197610.
H. Khan, C. Tunc, W. Chen and A. Khan, Existence theorems and Hyers-Ulam stability for a class of hybrid fractional differential equations with p-Laplacian operator, J. Appl. Anal. Comput., 8 (2018), 1211-1226.
H. Khan, W. Chen and H. Sun, Analysis of positive solution and Hyers-Ulam stability for a class of singular fractional differential equations with p-Laplacian in Banach space, Math. Methods Appl. Sci., 41 (2018), 3430-3440. doi: 10.1002/mma.4835.
B. López, J. Harjani and K. Sadarangani, Existence of positive solutions in the space of Lipschitz functions to a class of fractional differential equations of arbitrary order, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM, 112 (2018), 1281-1294. doi: 10.1007/s13398-017-0426-3.
R. Luca, On a class of nonlinear singular Riemann-Liouville fractional differential equations, Results Math., 73 (2018), Art. 125, 15 pp. doi: 10.1007/s00025-018-0887-5.
I. Podlubny, Fractional Differential Equations, Mathematics in Science and Engineering, 198, Academic Press, Inc., San Diego, CA, 1999.
S. G. Samko, A. A. Kilbas and O. I. Marichev, Fractional Integrals and Derivatives: Theory and Applications, Gordon and Breach Science Publishers, Yverdon, 1993.
K. Saoudi, A critical fractional elliptic equation with singular nonlinearities, Fract. Calc. Appl. Anal., 20 (2017), 1507-1530. doi: 10.1515/fca-2017-0079.
H. Srivastava, A. El-Sayed and F. Gaafar, A class of nonlinear boundary value problems for an arbitrary fractional-order differential equation with the Riemann-Stieltjes functional integral and infinite-point boundary conditions, Symmetry, 10 (2018), 508. doi: 10.3390/sym10100508.
S. Xie and Y. Xie, Nonlinear solutions of nonlocal boundary value problems for nonlinear higher-order singular fractional differential equations, J. Appl. Anal. Comput., 8 (2018), 938-953.
F. Yan, M. Zuo and X. Hao, Positive solution for a fractional singular boundary value problem with p-Laplacian operator, Bound. Value Probl., 2018 (2018), Paper No. 51, 10 pp. doi: 10.1186/s13661-018-0972-4.
H. Yépez-Martínez, J. F. Gómez-Aguilar, I. O. Sosa, J. M. Reyes and J. Torres-Jiménez, The Feng's first integral method applied to the nonlinear mKdV space-time fractional partial differential equation, Rev. Mexicana Fís., 62 (2016), 310-316.
X. Zhang and Q. Zhong, Triple positive solutions for nonlocal fractional differential equations with singularities both on time and space variables, Appl. Math. Lett., 80 (2018), 12-19. doi: 10.1016/j.aml.2017.12.022.
L. Zhang, Z. Sun and X. Hao, Positive solutions for a singular fractional nonlocal boundary value problem, Adv. Difference Equ., 2018 (2018), Paper No. 381, 8 pp. doi: 10.1186/s13662-018-1844-z.
C. J. Zuñiga-Aguilar, J. F. Gómez-Aguilar, R. F. Escobar-Jiménez and H. M. Romero-Ugalde, Robust control for fractional variable-order chaotic systems with non-singular kernel, The European Physical Journal Plus, 133 (2018), 13 pp.
Roberto Garrappa, Eleonora Messina, Antonia Vecchio. Effect of perturbation in the numerical solution of fractional differential equations. Discrete & Continuous Dynamical Systems - B, 2018, 23 (7) : 2679-2694. doi: 10.3934/dcdsb.2017188
Daria Bugajewska, Mirosława Zima. On positive solutions of nonlinear fractional differential equations. Conference Publications, 2003, 2003 (Special) : 141-146. doi: 10.3934/proc.2003.2003.141
Golamreza Zamani Eskandani, Hamid Vaezi. Hyers--Ulam--Rassias stability of derivations in proper Jordan $CQ^{*}$-algebras. Discrete & Continuous Dynamical Systems - A, 2011, 31 (4) : 1469-1477. doi: 10.3934/dcds.2011.31.1469
Christina A. Hollon, Jeffrey T. Neugebauer. Positive solutions of a fractional boundary value problem with a fractional derivative boundary condition. Conference Publications, 2015, 2015 (special) : 615-620. doi: 10.3934/proc.2015.0615
Sertan Alkan. A new solution method for nonlinear fractional integro-differential equations. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1065-1077. doi: 10.3934/dcdss.2015.8.1065
Ramasamy Subashini, Chokkalingam Ravichandran, Kasthurisamy Jothimani, Haci Mehmet Baskonus. Existence results of Hilfer integro-differential equations with fractional order. Discrete & Continuous Dynamical Systems - S, 2020, 13 (3) : 911-923. doi: 10.3934/dcdss.2020053
Ndolane Sene. Fractional diffusion equation described by the Atangana-Baleanu fractional derivative and its approximate solution. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020173
Na An, Chaobao Huang, Xijun Yu. Error analysis of discontinuous Galerkin method for the time fractional KdV equation with weak singularity solution. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 321-334. doi: 10.3934/dcdsb.2019185
Yirong Jiang, Nanjing Huang, Zhouchao Wei. Existence of a global attractor for fractional differential hemivariational inequalities. Discrete & Continuous Dynamical Systems - B, 2020, 25 (4) : 1193-1212. doi: 10.3934/dcdsb.2019216
Alexander Quaas, Aliang Xia. Existence and uniqueness of positive solutions for a class of logistic type elliptic equations in $\mathbb{R}^N$ involving fractional Laplacian. Discrete & Continuous Dynamical Systems - A, 2017, 37 (5) : 2653-2668. doi: 10.3934/dcds.2017113
Pablo Raúl Stinga, Chao Zhang. Harnack's inequality for fractional nonlocal equations. Discrete & Continuous Dynamical Systems - A, 2013, 33 (7) : 3153-3170. doi: 10.3934/dcds.2013.33.3153
Hasib Khan Cemil Tunc Aziz Khan | CommonCrawl |
Advanced Search Results For "Thiele H"
"Thiele H"
In situ observations of the Swiss periglacial environment using GNSS instruments
Source(s): Earth System Science Data, Vol 14, Pp 5061-5091 (2022)
A. Cicoira
S. Weber
A. Biri
Abstract: Monitoring of the periglacial environment is relevant for many disciplines including glaciology, natural hazard management, geomorphology, and geodesy. Since October 2022, Rock Glacier Velocity (RGV) is a new Essential Climate Variable (ECV) product wi...
GE1-350
QE1-996.5
Anesthetic Management of Laparoscopic Adrenalectomy for a Patient with Concomitant Pheochromocytoma and Bilateral Carotid Artery Stenosis
Source(s): Case Reports in Anesthesiology, Vol 2023 (2023)
Kristina L. Michaud
Robert H. Thiele
Katherine T. Forkin
Abstract: Symptomatic carotid stenosis and pheochromocytoma both require timely surgical intervention. Following a transient ischemic attack (TIA), a 46-year-old man was diagnosed with bilateral carotid artery stenosis and scheduled for carotid endarterectomy. H...
RD78.3-87.3
Two-particle Bose–Einstein correlations in $pp$ collisions at $\sqrt{s} = 13$ TeV measured with the ATLAS detector at the LHC
Source(s): European Physical Journal C: Particles and Fields, Vol 82, Iss 7, Pp 1-38 (2022)
G. Aad
B. Abbott
D. C. Abbott
Abstract: Abstract This paper presents studies of Bose–Einstein correlations (BEC) in proton–proton collisions at a centre-of-mass energy of 13 TeV, using data from the ATLAS detector at the CERN Large Hadron Collider. Data were collected in a special low-lumino...
QB460-466
Nuclear and particle physics. Atomic energy. Radioactivity
QC770-798
LiverScreen project: study protocol for screening for liver fibrosis in the general population in European countries
Source(s): BMC Public Health, Vol 22, Iss 1, Pp 1-10 (2022)
Isabel Graupera
Maja Thiele
Ann T. Ma
Abstract: Abstract Background The development of liver cirrhosis is usually an asymptomatic process until late stages when complications occur. The potential reversibility of the disease is dependent on early diagnosis of liver fibrosis and timely targeted treat...
Liver fibrosis
Anesthetic Management of Laparoscopic Adrenalectomy for a Patient with Concomitant Pheochromocytoma and Bilateral Carotid Artery Stenosis.
Source(s): Case Reports in Anesthesiology. 1/7/2023, p1-4. 4p.
Michaud, Kristina L.
Thiele, Robert H.
Forkin, Katherine T.
CAROTID artery stenosis
TRANSIENT ischemic attack
MEDICAL specialties & specialists
ADRENALECTOMY
CAROTID endarterectomy
Alveolar macrophages in early stage COPD show functional deviations with properties of impaired immune activation
Source(s): Frontiers in Immunology, Vol 13 (2022)
Kevin Baßler
Wataru Fujii
Theodore S. Kapellos
Abstract: Despite its high prevalence, the cellular and molecular mechanisms of chronic obstructive pulmonary disease (COPD) are far from being understood. Here, we determine disease-related changes in cellular and molecular compositions within the alveolar spac...
bronchoalveolar lavage
monocyte
impaired immune activation
Immunologic diseases. Allergy
RC581-607
Molecular and Cellular Response of the Myocardium (H9C2 Cells) Towards Hypoxia and HIF-1α Inhibition
Source(s): Frontiers in Cardiovascular Medicine, Vol 9 (2022)
Hari Prasad Osuru
Matthew Lavallee
Abstract: Introduction: Oxidative phosphorylation is an essential feature of animalian life. Multiple adaptations have developed to protect against hypoxia, including hypoxia-inducible-factors (HIFs). The major role of HIFs may be in protecting against oxidative s...
intracellular calcium
hypoxia inducible factor
Beclin-1
H9c2 cells
hypoxia and HIF-1α inhibition
Diseases of the circulatory (Cardiovascular) system
Analysis of accumulated SARS-CoV-2 seroconversion in North Carolina: The COVID-19 Community Research Partnership.
Source(s): PLoS ONE, Vol 17, Iss 3, p e0260574 (2022)
John C Williamson
Thomas F Wierzba
Michele Santacatterina
Abstract: Introduction: The COVID-19 Community Research Partnership is a population-based longitudinal syndromic and sero-surveillance study. The study includes over 17,000 participants from six healthcare systems in North Carolina who submitted over 49,000 serolo...
Prognostic impact of acute pulmonary triggers in patients with takotsubo syndrome: new insights from the International Takotsubo Registry
Source(s): ESC Heart Failure, Vol 8, Iss 3, Pp 1924-1932 (2021)
Ken Kato
Victoria L. Cammann
L. Christian Napp
Abstract: Abstract Aims Acute pulmonary disorders are known physical triggers of takotsubo syndrome (TTS). This study aimed to investigate prevalence of acute pulmonary triggers in patients with TTS and their impact on outcomes. Methods and results Patients with...
Takotsubo syndrome
Broken heart syndrome
Acute respiratory insufficiency
InterTAK Registry
Uncovering the Contribution of Moderate-Penetrance Susceptibility Genes to Breast Cancer by Whole-Exome Sequencing and Targeted Enrichment Sequencing of Candidate Genes in Women of European Ancestry
Source(s): Cancers, Vol 14, Iss 3363, p 3363 (2022)
Martine Dumont
Nana Weber-Lassalle
Charles Joly-Beauparlant
Abstract: Rare variants in at least 10 genes, including BRCA1, BRCA2, PALB2, ATM, and CHEK2, are associated with increased risk of breast cancer; however, these variants, in combination with common variants identified through genome-wide association studies, exp...
genetic susceptibility
whole-exome sequencing
moderate-penetrance genes
Neoplasms. Tumors. Oncology. Including cancer and carcinogens | CommonCrawl |
Algebra and Trigonometry 10th Edition
by Larson, Ron
Published by Cengage Learning
Chapter 10 - 10.1 - Matrices and Systems of Equations - 10.1 Exercises - Page 712: 87
$f(x) = -x^{2} + x + 1$
Work Step by Step
We must first write the given function values as equations in a, b, and c:
f(1) = a + b + c = 1
f(2) = 4a + 2b + c = -1
f(3) = 9a + 3b + c = -5
We can then form the augmented matrix and use Gaussian elimination and back-substitution to solve it:
$\begin{bmatrix} 1 & 1 & 1 & |1\\ 4 & 2 & 1 & |-1\\ 9 & 3 & 1 & |-5\\ \end{bmatrix}$ ~ $\begin{bmatrix} 1 & 1 & 1 & |1\\ 0 & -2 & -3 & |-5\\ 0 & -6 & -8 & |-14\\ \end{bmatrix}$ ~ $\begin{bmatrix} 1 & 1 & 1 & |1\\ 0 & 2 & 3 & |5\\ 0 & -3 & -4 & |-7\\ \end{bmatrix}$ ~ $\begin{bmatrix} 1 & 1 & 1 & |1\\ 0 & 2 & 3 & |5\\ 0 & 0 & 1 & |1\\ \end{bmatrix}$
Back-substitution then gives:
c: c = 1
b: 2b + 3(1) = 5, so 2b = 2 and b = 1
a: a + 1 + 1 = 1, so a = -1
Using the solution above, the quadratic function is: $f(x) = -x^{2} + x + 1$
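The same $3\times 3$ system can also be checked numerically; a minimal sketch (not part of the textbook's solution) using NumPy:

```python
import numpy as np

# f(x) = a x^2 + b x + c with f(1) = 1, f(2) = -1, f(3) = -5.
A = np.array([[1, 1, 1],
              [4, 2, 1],
              [9, 3, 1]], dtype=float)
rhs = np.array([1, -1, -5], dtype=float)

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)  # -1.0 1.0 1.0, i.e. f(x) = -x^2 + x + 1
```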
| CommonCrawl
Edge-Based Compartmental Modelling of an SIR Epidemic on a Dual-Layer Static–Dynamic Multiplex Network with Tunable Clustering
Rosanna C. Barnard (ORCID: orcid.org/0000-0001-7602-0401)1,
Istvan Z. Kiss (ORCID: orcid.org/0000-0003-1473-6644)1,
Luc Berthouze (ORCID: orcid.org/0000-0003-3774-2369)2 &
Joel C. Miller (ORCID: orcid.org/0000-0003-4426-0405)3
Bulletin of Mathematical Biology volume 80, pages 2698–2733 (2018)
The duration, type and structure of connections between individuals in real-world populations play a crucial role in how diseases invade and spread. Here, we incorporate the aforementioned heterogeneities into a model by considering a dual-layer static–dynamic multiplex network. The static network layer affords tunable clustering and describes an individual's permanent community structure. The dynamic network layer describes the transient connections an individual makes with members of the wider population by imposing constant edge rewiring. We follow the edge-based compartmental modelling approach to derive equations describing the evolution of a susceptible–infected–recovered epidemic spreading through this multiplex network of individuals. We derive the basic reproduction number, measuring the expected number of new infectious cases caused by a single infectious individual in an otherwise susceptible population. We validate model equations by showing convergence to pre-existing edge-based compartmental model equations in limiting cases and by comparison with stochastically simulated epidemics. We explore the effects of altering model parameters and multiplex network attributes on resultant epidemic dynamics. We validate the basic reproduction number by plotting its value against associated final epidemic sizes measured from simulation and predicted by model equations for a number of set-ups. Further, we explore the effect of varying individual model parameters on the basic reproduction number. We conclude with a discussion of the significance and interpretation of the model and its relation to existing research literature. We highlight intrinsic limitations and potential extensions of the present model and outline future research considerations, both experimental and theoretical.
The continual design and development of mathematical models describing epidemic processes on large, complex populations improves our understanding of how diseases and individuals behave during an epidemic, and how preventative measures can be implemented for the greater good. With ever-increasing computational power, models can incorporate increasingly complex features, and model predictions may become more valuable. Nonetheless, any model must tread a careful balance between capturing observed real-world complexity and enabling calculations and conclusions to be drawn with ease. The ultimate epidemiological model must therefore incorporate the behavioural and structural features which significantly influence disease dynamics, whilst being analytically tractable.
Social heterogeneity describes the propensity for a social group to be diverse in character or content and is an important determinant when studying the dynamics and control of infectious diseases (Arthur et al. 2017). In a social group, heterogeneity encompasses many descriptive elements, such as variations in individuals' behaviour or in susceptibility across group members. In network theory, social heterogeneity can also describe variations in the types of connections an individual makes. For example, an individual can be connected to other individuals in distinct groups, such as workplace or community groups.
Structured populations with multiple connection types are well described by multiplex networks, where a population of individuals partakes in multiple network layers. Each network layer describes a specific type of interaction between members of the population, and network structure in one layer is allowed to overlap with network structure in another layer. A pair of individuals in a multiplex network can share more than one connection. In a multiplex network, an individual is present in every network layer, but may or may not partake in connections in individual network layers.
Existing multiplex modelling studies have shown that single-layer approximations or aggregations of multiplex networks are not accurate enough to describe the epidemic process (Diakonova et al. 2016; Zhuang et al. 2017; Gomez et al. 2013; Cozzo et al. 2013), and further that an epidemic can spread on a multiplex network even if the individual layers are well below their respective epidemic thresholds (Zhao et al. 2014). A global cascades model generalised for multiplex networks was used to show that multiplexes are more vulnerable to global cascades than single-layer networks (Brummitt et al. 2012). These studies highlight the importance of accounting for heterogeneity in connection type by considering multiplex network models.
Another determinant of infectious disease dynamics is heterogeneity in the structural connections between individuals, within a single type of connection. Real-world networks often exhibit community structure, with a high density of connections within communities and a low density of connections between communities. They are also considered to exhibit other structural characteristics such as network transitivity or clustering, described in social network theory as the propensity for an individual to be connected to a friend of a friend (Newman 2003).
Community structure has been shown to affect disease dynamics on single-layered (uniplex) networks, where on average, epidemics occurring on networks with community structure exhibit greater variance in final epidemic size, a greater number of small, local outbreaks that do not develop into epidemics and higher variance in the duration of the epidemic (Salathé and Jones 2010). Network quality functions able to detect community structure in multiplex networks have been developed (Mucha et al. 2010). Further, results such as the large graph limit of a susceptible–infected–recovered (SIR) epidemic process on a dynamic multilayer network, where one network layer represents community links and another represents connections in healthcare settings, have been derived (Jacobsen et al. 2016).
In network models, increased clustering is generally considered to slow an epidemic by increasing the epidemic threshold (Miller 2009). However, this relationship is not always monotonic. Higher clustering in a multiplex study of information propagation led to an increase in the epidemic threshold and a decrease in final epidemic size (Zhuang and Yagan 2016). Increased clustering in a study of Watt's threshold model generalised for a multiplex network comprised of clustered network layers led to a decrease in the probability of a global cascade and its size (Zhuang et al. 2017). However, the authors also discovered a critical threshold for the average degree, above which clustering was shown to facilitate global cascades (Zhuang et al. 2017). A uniplex network study found that simultaneously increasing clustering and the variance of the degree distribution led to an increase in final epidemic size (Volz et al. 2011). Moreover, clustering can lead to correlations where high-degree individuals are more likely to connect with other high-degree individuals. It is clear that the effect of clustering is complex and should be considered in the design of network models.
In epidemiology it is also important to consider heterogeneity across contact duration. In human populations, links between individuals may be long-lasting (persistent), e.g. between an infant child and their caregiver; temporary (transient), e.g. between workplace colleagues; or more short-lived (fleeting), e.g. between strangers coming into close proximity on public transport. In a study using a year's mobile phone data as a proxy for the structure and dynamics of a large social network, researchers found that persistent links tend to be reciprocal and are more common for individuals with low degree and high clustering (Hidalgo and Rodriguez-Sickert 2008). Many network-based studies in the past have considered fully static network structures and hence solely investigate the effects of persistent connections between individuals, see Keeling and Eames (2005) for a review of differing approaches.
Later studies of epidemic processes on networks have incorporated persistent and transient connections into their models by imposing rewiring rules on static networks. Rewiring rules considered include spatially constrained rewiring (Rattana et al. 2014), random link activation and deletion (Taylor et al. 2012; Sélley et al. 2015; Kiss et al. 2012) and temporary link deactivation (Tunc et al. 2013; Shkarayev et al. 2014). On the other hand, epidemic processes with fleeting contact duration can be well described via the mass action model, which assumes all pairs of individuals contact one another at the same rate, the mean-field social heterogeneity model (also known as the degree-based mean-field model), which generalises the mass action model by allowing for variations in contact rate across the population, and the dynamic fixed- and dynamic variable-degree models, where edges are swapped at a given rate, or edges are broken and created at given rates, respectively (Miller and Volz 2013; Miller et al. 2012).
Here, we suppose that static and dynamic connections coexist in any complex population. We aim to derive a network model describing an SIR epidemic process spreading through a population where each individual has two types of connections: persistent links to individuals in their household, constituting a static network layer with community structure, and transient connections to strangers in the wider population, where all such edges rewire at a constant rate, constituting a dynamic network layer with conserved degrees.
In what follows, we utilise the edge-based compartmental modelling (EBCM) approach (Volz 2008; Miller 2011, 2014; Miller et al. 2012), deriving equations which describe the time evolution of classical quantities of interest, where the underlying dual-layered static–dynamic network has heterogeneity in contact type, contact duration, and contact structure. We derive the associated basic reproduction number \(R_0\), following the next-generation matrix approach (Diekmann et al. 2009). We describe the implementation of the EBCM model and of statistically correct Gillespie simulations of the epidemic process (Gillespie 1976). The new model is validated, firstly by showing that collapsing either the static or dynamic network layers leads model equations to converge to existing equivalent model equations, and secondly by comparing the dynamics predicted by model equations to those from exact simulations. We explore how various combinations of model parameters and network layers influence global dynamics, uncover behavioural regimes that the model can achieve for specific combinations of infection and rewiring rates, and show that our derived \(R_0\) behaves as expected. The paper concludes with a discussion of potential implications of the work as well as possible extensions.
We work with the class of undirected random graphs (networks) in which each node is a member of a random number of static lines (2-vertex cliques), static triangles (3-vertex cliques) and dynamic lines (2-vertex cliques). The probability that a node has s static line stubs, t static triangle corners and d dynamic line stubs is described by the probability mass function \(p_{s,t,d}\). The model captures network structure using the probability generating function (PGF)
$$\begin{aligned} g(x,y,z)=\sum _{s,t,d}p_{s,t,d}x^{s}y^{t}z^{d}. \end{aligned}$$
When differentiating PGF (1), we use superscripts such that \(g^{(x)}\) denotes the first (partial) derivative of g with respect to x and \(g^{(y,y)}\) denotes the second (partial) derivative of g with respect to y. Equation (1) can be used to calculate useful properties of the multiplex network. For example, \(M_\mathrm{s}\), the expected number of static line stubs that belong to a randomly selected individual, \(M_\mathrm{t}\), the expected number of static triangle corners that belong to a randomly selected individual, and \(M_\mathrm{d}\), the expected number of dynamic line stubs that belong to a randomly selected individual, are calculated as follows:
$$\begin{aligned} M_\mathrm{s}&= \sum _{s,t,d}s p_{s,t,d}=g^{(x)}(1,1,1), \\ M_\mathrm{t}&= \sum _{s,t,d}t p_{s,t,d} = g^{(y)}(1,1,1), \\ M_\mathrm{d}&= \sum _{s,t,d} d p_{s,t,d} = g^{(z)}(1,1,1). \end{aligned}$$
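As a quick numerical illustration (a sketch only; the joint distribution below is hypothetical, not one used in this work), the PGF and its first partial derivatives can be evaluated directly from \(p_{s,t,d}\), and \(g^{(x)}(1,1,1)\) recovers \(M_\mathrm{s}\):

```python
# Hypothetical joint distribution p[(s, t, d)] over static line stubs s,
# triangle corners t and dynamic line stubs d (illustrative values only).
p = {(2, 1, 1): 0.4, (3, 0, 2): 0.35, (1, 2, 0): 0.25}

def g(x, y, z):
    """PGF g(x, y, z) = sum_{s,t,d} p_{s,t,d} x^s y^t z^d."""
    return sum(w * x**s * y**t * z**d for (s, t, d), w in p.items())

def g_x(x, y, z):
    """First partial derivative of g with respect to x."""
    return sum(w * s * x**(s - 1) * y**t * z**d for (s, t, d), w in p.items())

M_s = g_x(1, 1, 1)  # expected number of static line stubs per node
assert abs(M_s - sum(s * w for (s, t, d), w in p.items())) < 1e-12
```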
We consider a basic SIR compartmental model. Infections occur across edges on the static network layer at a constant rate \(\beta _{\mathrm{s}}\), whilst infections occur across edges on the dynamic network layer at a constant rate \(\beta _{\mathrm{d}}\). Infected individuals recover at a constant rate \(\gamma \). Once recovered, a node cannot be reinfected, and can no longer transmit infection to its neighbours. A comprehensive list of model variables and parameters is given in Table 1.
Table 1 Definitions for model variables and parameters
Edge-Based Compartmental Model Derivation
We follow the edge-based compartmental modelling approach by considering the fate of a randomly selected test node u, which is prevented from transmitting infection. This assumption is a useful tool that eliminates conditional probability arguments that would need to be considered otherwise (Miller et al. 2012). It does not introduce any approximation. At time zero, infection is introduced to a fraction \(\rho \) of the population chosen uniformly at random, comprising the initial condition of the system. We assume that the test node u is a member of s static line stubs, t static triangle corners and d dynamic line stubs. Then the probability that u is susceptible is \((1-\rho )\theta _{2}^{s}\theta _{3}^{t}\theta _{4}^{d}\), where \(\theta _{2}\) is the probability that a random line (2-clique) on the static network layer has not transmitted infection to the test node, \(\theta _{3}\) is the probability that neither of the other nodes in a random triangle on the static network layer has transmitted infection to the test node, and \(\theta _{4}\) is the probability that a random stub connected to u on the dynamic network layer has never been involved in transmitting infection to the test node. Assuming we are able to calculate \(\theta _{2}\), \(\theta _{3}\) and \(\theta _{4}\) as functions of time, we are able to calculate the proportion of susceptible individuals S as a function of time. Given S(t), we use \(I(t)=1-S(t)-R(t)\) and \(\dot{R}(t)=\gamma I(t)\) to calculate I(t) and R(t), completing the system.
Considering \(\theta _{2}\)
We divide \(\theta _{2}\) into \(\phi _\mathrm{S}\), \(\phi _\mathrm{I}\) and \(\phi _\mathrm{R}\), the probabilities that a random neighbour along a line on the static network layer has not transmitted infection to u, and is susceptible, infected or recovered, respectively. The probability the neighbour has not transmitted infection to u is \(\theta _{2}=\phi _\mathrm{S}+\phi _\mathrm{I}+\phi _\mathrm{R}\), and \((1-\theta _{2})\) is the probability that it has transmitted infection to u. The fluxes between these quantities are shown in Fig. 1. The fluxes from \(\phi _\mathrm{I}\) to \(\phi _\mathrm{R}\) and from \(\phi _\mathrm{I}\) to \((1-\theta _{2})\) are proportional to one another. Both \(\phi _\mathrm{R}\) and \((1-\theta _{2})\) are equal to zero at time zero since we assume that no infection or recovery events can occur prior to time zero. By integrating the relation \(\frac{\mathrm{d}\phi _\mathrm{R}}{\mathrm{d}t}=\frac{\gamma }{\beta _{\mathrm{s}}} \frac{\mathrm{d}(1-\theta _{2})}{\mathrm{d}t}\), and using the initial condition \(\phi _\mathrm{R}(0)=(1-\theta _{2}(0))=0\), we find the relation
$$\begin{aligned} \phi _\mathrm{R}=\frac{\gamma }{\beta _{\mathrm{s}}}(1-\theta _{2}). \end{aligned}$$
Fig. 1 Flow diagram for the flux of a static line partner through different states. The flux between the probabilities that the test node u is connected by a line (2-clique) on the static network layer to a node v that has not transmitted infection to u and is susceptible (\(\phi _\mathrm{S}\)), infectious (\(\phi _\mathrm{I}\)) or recovered (\(\phi _\mathrm{R}\)), and the probability that v has transmitted infection to u, equal to \((1-\theta _2)\)
Next, we must calculate an expression for \(\phi _\mathrm{S}\). Consider the number of static line stubs attached to an individual that we reach by following a randomly chosen static line. Similarly, consider the number of static triangle corners attached to an individual reached by following a randomly chosen static triangle edge, and the number of dynamic line stubs attached to an individual we reach by following a randomly chosen dynamic line. Following edges in this way means we are more likely to arrive at an individual with a higher degree, in direct proportion to that individual's degree (Meyers et al. 2006). The random number of such lines and triangle corners is described by the excess degree distribution, and we calculate the associated probability mass functions for each edge type as follows. Denote \(q_{s-1,t,d}\propto sp_{s,t,d}\) as the probability of there being \((s-1)\) static line stubs, t triangle corners and d dynamic line stubs connected to a susceptible node that we reach by following a static line, not counting the line by which we arrived. Similarly, denote \(r_{s,t-1,d}\propto tp_{s,t,d}\) as the probability that if we follow a triangle edge to a susceptible node, there are s static line stubs, \((t-1)\) triangle corners and d dynamic line stubs connected to that node, not counting the triangle edge by which we arrived, and \(w_{s,t,d-1}\propto dp_{s,t,d}\) as the probability that if we follow a dynamic edge to a susceptible node, there are s static line stubs, t triangle corners and \((d-1)\) dynamic line stubs connected to that node, not counting the dynamic edge by which we arrived.
From above, we note that the probability that there are s static line stubs, t triangle corners and d dynamic line stubs attached to a random neighbour of u across a static line (not counting the line it was reached across) is \(q_{s-1,t,d}\propto sp_{s,t,d}\). A neighbour reached by following a static line connected to u is susceptible with probability \((1-\rho )\theta _{2}^{s-1}\theta _{3}^{t}\theta _{4}^{d}\) (recall that u cannot transmit infection), where s, t and d are realisations of the excess degree distribution. We calculate \(\phi _\mathrm{S}\) by multiplying the probability that a random neighbour across a static line has (s, t, d) neighbours, with the probability the random neighbour is susceptible, summing over all possible values of (s, t, d), and dividing by \(M_\mathrm{s}=g^{(x)}(1,1,1)\), the expected number of static lines a randomly selected node belongs to. We find
$$\begin{aligned} \phi _\mathrm{S} = \frac{(1-\rho )\sum _{s,t,d}s p_{s,t,d} \theta _{2}^{s-1}\theta _{3}^{t}\theta _{4}^{d}}{M_\mathrm{s}} =\frac{(1-\rho )g^{(x)}(\theta _{2},\theta _{3}, \theta _{4})}{g^{(x)}(1,1,1)}. \end{aligned}$$
From the original definition of \(\theta _{2}\) we have
$$\begin{aligned} \phi _\mathrm{I}=\theta _{2}-\phi _\mathrm{S}-\phi _\mathrm{R}. \end{aligned}$$
We are now able to calculate an expression for \(\theta _{2}\) using Eqs. (2)–(4), and noting from Fig. 1 that \(\dot{\theta _{2}}=-\beta _{\mathrm{s}} \phi _\mathrm{I}\):
$$\begin{aligned} \dot{\theta _{2}}=-\beta _{\mathrm{s}} \theta _{2} +\beta _{\mathrm{s}} \frac{(1-\rho )g^{(x)}(\theta _{2},\theta _{3}, \theta _{4})}{g^{(x)}(1,1,1)} +\gamma (1-\theta _{2}). \end{aligned}$$
Considering \(\theta _{3}\)

Since \(\theta _{3}\) denotes the probability that neither of the other nodes in a triangle has transmitted infection to the test node, we must divide \(\theta _{3}\) into six quantities \(\phi _\mathrm{SS}\), \(\phi _\mathrm{SI}\), \(\phi _\mathrm{SR}\), \(\phi _\mathrm{II}\), \(\phi _\mathrm{IR}\) and \(\phi _\mathrm{RR}\) in order to consider all possible disease status combinations for two individuals. For example, \(\phi _\mathrm{SI}\) denotes the probability that one triangle neighbour of u is susceptible, whilst the other is infectious, and neither has transmitted infection to u. The flux between the various compartments can be seen in Fig. 2. There is no simple relation between \(\phi _\mathrm{RR}\) and \(\theta _{3}\), so we take a different approach from the one used for \(\theta _{2}\). We start with \(\dot{\theta _{3}}\), which satisfies
$$\begin{aligned} \dot{\theta _{3}}=-\beta _{\mathrm{s}} \phi _\mathrm{SI}-2\beta _{\mathrm{s}} \phi _\mathrm{II}-\beta _{\mathrm{s}} \phi _\mathrm{IR}. \end{aligned}$$
Fig. 2 Flow diagram for the flux of two triangle neighbours through different states. The flux between the probabilities that the test node u is connected in a triangle to two nodes in all possible disease status configurations, where neither triangle neighbour has transmitted infection to u, as well as the probability \((1-\theta _{3})\) that a node \(v \ne u\) in the triangle has transmitted infection to the test node u
To calculate the terms on the right-hand side of (6), we must first obtain an expression for \(\phi _\mathrm{SS}\), the probability that both neighbours in a triangle are still susceptible. Under the assumption that no transmission events have occurred in the triangle, the probability that a single triangle neighbour of u is susceptible is
$$\begin{aligned} (1-\rho )\sum _{s,t,d}tp_{s,t,d}\theta _{2}^{s}\theta _{3}^{t-1} \theta _{4}^{d}/M_\mathrm{t}=(1-\rho )g^{(y)}(\theta _{2},\theta _{3}, \theta _{4})/g^{(y)}(1,1,1), \end{aligned}$$
where \(M_\mathrm{t}\) is the expected number of static triangle corners belonging to a randomly chosen individual. Since we require both triangle neighbours of u to be susceptible, we have
$$\begin{aligned} \phi _\mathrm{SS}=\left( \frac{(1-\rho )g^{(y)}(\theta _{2}, \theta _{3},\theta _{4})}{g^{(y)}(1,1,1)} \right) ^{2}. \end{aligned}$$
Let A denote the rate at which a single triangle neighbour of u becomes infected from outside the triangle. From Fig. 2 we know that \(\frac{\mathrm{d}\phi _\mathrm{SS}}{\mathrm{d}t}=-2A\phi _\mathrm{SS}\), which implies \(A=-\frac{\mathrm{d}\phi _\mathrm{SS}}{\mathrm{d}t}/2\phi _\mathrm{SS}\). To arrive at an explicit formula for A, we begin by calculating \(\frac{\mathrm{d}\phi _\mathrm{SS}}{\mathrm{d}t}\) via the chain rule:
$$\begin{aligned} \frac{\mathrm{d}\phi _\mathrm{SS}}{\mathrm{d}t}&=2\left( \frac{(1-\rho )g^{(y)}(\theta _{2}, \theta _{3},\theta _{4})}{g^{(y)}(1,1,1)} \right) \frac{\mathrm{d}}{\mathrm{d}t} \left( \frac{(1-\rho )g^{(y)}(\theta _{2},\theta _{3}, \theta _{4})}{g^{(y)}(1,1,1)} \right) \\&=\frac{2(1-\rho )^{2}g^{(y)}(\theta _{2},\theta _{3}, \theta _{4})}{g^{(y)}(1,1,1)}\\&\quad \times \left( \frac{g^{(y)}(1,1,1) \left( g^{(y)}(\theta _{2},\theta _{3},\theta _{4})\right) '-g^{(y)} (\theta _{2},\theta _{3},\theta _{4})\left( g^{(y)}(1,1,1) \right) '}{\left( g^{(y)}(1,1,1) \right) ^{2}} \right) . \end{aligned}$$
We know that \(\left( g^{(y)}(1,1,1) \right) '=0\), since \(g^{(y)}(1,1,1)=\sum _{s,t,d}tp_{s,t,d} \in \mathbb {R}\). Hence
$$\begin{aligned} \frac{\mathrm{d}\phi _\mathrm{SS}}{\mathrm{d}t}&=\frac{2(1-\rho )^{2}g^{(y)}(\theta _{2}, \theta _{3},\theta _{4})}{g^{(y)}(1,1,1)} \left( \frac{g^{(y)}(1,1,1)\left( g^{(y)} (\theta _{2},\theta _{3},\theta _{4}) \right) '}{\left( g^{(y)}(1,1,1) \right) ^{2}} \right) \\&=\frac{2(1-\rho )^{2}g^{(y)}(\theta _{2},\theta _{3}, \theta _{4})}{g^{(y)}(1,1,1)}\left( \frac{\left( g^{(y)}(\theta _{2},\theta _{3},\theta _{4}) \right) '}{g^{(y)}(1,1,1)} \right) . \end{aligned}$$
Next, we calculate \(\left( g^{(y)}(\theta _{2},\theta _{3},\theta _{4}) \right) '\ \) using \(\frac{\mathrm {d}g(x,y,z)}{\mathrm {d}t}=\frac{\partial g}{\partial x}\frac{\mathrm {d}x}{\mathrm {d}t}+\frac{\partial g}{\partial y}\frac{\mathrm {d}y}{\mathrm {d}t} +\frac{\partial g}{\partial z}\frac{\mathrm {d}z}{\mathrm {d}t}\) to obtain
$$\begin{aligned} \left( g^{(y)}(\theta _{2},\theta _{3},\theta _{4}) \right) ' = g^{(y,x)}(\theta _{2},\theta _{3},\theta _{4})\dot{\theta _{2}} +g^{(y,y)}(\theta _{2},\theta _{3},\theta _{4})\dot{\theta _{3}} +g^{(y,z)}(\theta _{2},\theta _{3},\theta _{4})\dot{\theta _{4}}. \end{aligned}$$
Using \(A=-\frac{\mathrm{d}\phi _\mathrm{SS}}{\mathrm{d}t}/2\phi _\mathrm{SS}\) and some simplification, we find an explicit formula for A:
$$\begin{aligned} A=-\left( \frac{g^{(y,x)}(\theta _2,\theta _3,\theta _4) \dot{\theta _2}+g^{(y,y)}(\theta _2,\theta _3,\theta _4) \dot{\theta _3}+g^{(y,z)}(\theta _2,\theta _3,\theta _4) \dot{\theta _4}}{g^{(y)}(\theta _2,\theta _3,\theta _4)}\right) . \end{aligned}$$
Now we are ready to calculate equations for \(\phi _\mathrm{SI}\), \(\phi _\mathrm{II}\) and \(\phi _\mathrm{IR}\). We also require \(\phi _\mathrm{SR}\), but do not require a formula for \(\phi _\mathrm{RR}\). Using the flow diagram in Fig. 2, we have
$$\begin{aligned} \dot{\phi _\mathrm{SI}}&=2A\phi _\mathrm{SS}-(A+2\beta _{\mathrm{s}}+\gamma )\phi _\mathrm{SI}, \end{aligned}$$
$$\begin{aligned} \dot{\phi _\mathrm{SR}}&=\gamma \phi _\mathrm{SI}-A\phi _\mathrm{SR}, \end{aligned}$$
$$\begin{aligned} \dot{\phi _\mathrm{II}}&=(A+\beta _{\mathrm{s}})\phi _\mathrm{SI} -2(\beta _{\mathrm{s}}+\gamma )\phi _\mathrm{II}, \end{aligned}$$
$$\begin{aligned} \dot{\phi _\mathrm{IR}}&=A\phi _\mathrm{SR}+2\gamma \phi _\mathrm{II}-(\beta _{\mathrm{s}}+\gamma )\phi _\mathrm{IR}. \end{aligned}$$
Considering \(\theta _{4}\)

To take into account the dynamic rewiring of edges, we introduce \(\theta _{4}=\psi _\mathrm{S}+\psi _\mathrm{I}+\psi _\mathrm{R}\), where \(\psi _\mathrm{I}\) denotes the probability that a random dynamic stub belonging to the test node u has never been involved in transmitting infection to u and is currently connected to an infectious node; \(\psi _\mathrm{S}\) and \(\psi _\mathrm{R}\) are defined analogously for currently susceptible and recovered partners. Two further assumptions govern dynamic-edge rewiring: edges break at rate \(\eta \), and when one partnership ends, a new partnership forms immediately, neglecting any between-partner period. The flux between the various compartments of interest can be seen in Fig. 3.
Fig. 3 Flow diagram for the flux of a dynamic-edge partner through different states. The flux between the probabilities \(\theta _{4}=\psi _\mathrm{S}+\psi _\mathrm{I}+\psi _\mathrm{R}\) that a random stub currently connected to u on the dynamic network layer has never been involved in transmitting infection to u. Note that the compartment denoted \(\eta \theta _{4}\) is not a compartment in the typical sense. When edges break (at rate \(\eta \)) in the model, moving into 'compartment' \(\eta \theta _{4}\), new edges are formed immediately without delay, moving straight back into compartments \(\psi _\mathrm{S}\), \(\psi _\mathrm{I}\) or \(\psi _\mathrm{R}\). \(\pi _\mathrm{S}\), \(\pi _\mathrm{I}\) and \(\pi _\mathrm{R}\) denote the probabilities that a randomly chosen dynamic stub belongs to a susceptible, infected or recovered node, respectively
Previously, \(\phi _\mathrm{S}\) (which corresponds to \(\psi _\mathrm{S}\) in this subsection) was calculated explicitly as the probability that the neighbour is susceptible. With dynamic-edge rewiring, an edge that previously transmitted infection may later become connected to a susceptible node, so the previous calculation of \(\phi _\mathrm{S}\) does not apply here. To find \(\psi _\mathrm{S}\), we need to calculate the probability that a newly formed edge connects to a susceptible, infectious or recovered individual. We call these probabilities \(\pi _\mathrm{S}\), \(\pi _\mathrm{I}\) and \(\pi _\mathrm{R}\) and note that they are equivalent to the probabilities that a randomly chosen dynamic stub belongs to a node in each disease compartment. The flux between these probabilities can be seen in Fig. 4.
Fig. 4 Flow diagram for the flux of a dynamic line stub through different states. The flux between \(\pi _\mathrm{S}\), \(\pi _\mathrm{I}\) and \(\pi _\mathrm{R}\), the probabilities that a randomly chosen dynamic stub belongs to a susceptible, infected or recovered node, respectively
First, we calculate the values \(\pi _\mathrm{S}\), \(\pi _\mathrm{I}\) and \(\pi _\mathrm{R}\), beginning with \(\pi _\mathrm{S}\). If we select a dynamic stub at random, the probability that it belongs to an individual partaking in s static lines, t triangles and d dynamic stubs is \(dp_{s,t,d}/M_\mathrm{\mathrm{d}}\), where \(M_\mathrm{d}=g^{(z)}(1,1,1)\) is the expected number of dynamic edges that a random individual belongs to. At time zero, infection is introduced at random to a proportion \(\rho \) of the population. Thus the probability of any node being susceptible at time zero is \((1-\rho )\). The probability of a node with degree (s, t, d) being susceptible after some time, given that it was susceptible at time zero, is \(\theta _{2}^{s}\theta _{3}^{t}\theta _{4}^{d}\). Hence \(\pi _\mathrm{S}=(1-\rho )\sum _{s,t,d}p_{s,t,d}d\theta _{2}^{s} \theta _{3}^{t}\theta _{4}^{d}/M_\mathrm{d}\), with the summation taken over all degree possibilities described by the probability mass function \(p_{s,t,d}\). Stubs belonging to infected nodes become stubs belonging to recovered nodes at rate \(\gamma \); hence, \(\dot{\pi _\mathrm{R}}=\gamma \pi _\mathrm{I}\), and \(\pi _\mathrm{I}=1-\pi _\mathrm{S}-\pi _\mathrm{R}\). The equation for \(\pi _\mathrm{S}\) can be condensed using PGF (1), so we have
$$\begin{aligned} \pi _\mathrm{S}&=\frac{(1-\rho )\sum _{s,t,d}p_{s,t,d}d\theta _{2}^{s} \theta _{3}^{t}\theta _{4}^{d}}{\sum _{s,t,d}p_{s,t,d}d} =\frac{(1-\rho )\theta _{4}g^{(z)}(\theta _{2},\theta _{3}, \theta _{4})}{g^{(z)}(1,1,1)},\end{aligned}$$
$$\begin{aligned} \pi _\mathrm{I}&= 1 - \pi _\mathrm{S} - \pi _\mathrm{R}, \end{aligned}$$
$$\begin{aligned} \dot{\pi _\mathrm{R}}&=\gamma \pi _\mathrm{I}. \end{aligned}$$
To complete the system we need to calculate the flux \(B\psi _\mathrm{S}\) from \(\psi _\mathrm{S}\) to \(\psi _\mathrm{I}\) by solving a differential equation for \(\psi _\mathrm{S}\). B describes the rate at which a susceptible dynamic-edge neighbour v of u becomes infected from outside the dynamic edge joining u and v. Consider a random test node u and a random dynamic-edge neighbour v of u, at some time t. Let \(\zeta \) denote the probability that the two stubs joining u and v have not previously been involved in transmitting infection to u or to v, prior to the \(u-v\) edge forming. The probability that v is susceptible and that u's stub has not previously transmitted to u is \(\zeta (1-\rho )\theta _{2}^{s}\theta _{3}^{t} \theta _{4}^{d-1}\), where s is the number of static lines v partakes in, t is the number of triangles v partakes in, and d is the dynamic line stub degree of v. Since we do not know the values (s, t, d) for v, we must consider all possible combinations of degrees. The probability of a randomly chosen dynamic stub belonging to a node with degree (s, t, d) is \(dp_{s,t,d}/g^{(z)}(1,1,1)\). We conclude that
$$\begin{aligned} \psi _\mathrm{S}=\frac{\zeta (1-\rho )\sum _{s,t,d}p_{s,t,d}d\theta _{2}^{s} \theta _{3}^{t}\theta _{4}^{d-1}}{g^{(z)}(1,1,1)} =\frac{\zeta (1-\rho )g^{(z)}(\theta _{2},\theta _{3}, \theta _{4})}{g^{(z)}(1,1,1)}. \end{aligned}$$
To calculate the derivative of \(\psi _\mathrm{S}\), we first consider the derivative of \(\zeta \). This is given by subtracting the rate at which such edges break, \(\eta \zeta \), from the rate at which such edges form, \(\eta \theta _{4}^{2}\) (one \(\theta _{4}\) for u's stub and one for v's stub). We have
$$\begin{aligned} \dot{\zeta }=\eta \theta _{4}^{2} -\eta \zeta . \end{aligned}$$
We have an expression for \(\dot{\zeta }\), so the derivative of \(\psi _\mathrm{S}\) can be found via the chain rule:
$$\begin{aligned} \dot{\psi _\mathrm{S}}&=\dot{\zeta } \frac{(1-\rho )g^{(z)}(\theta _{2}, \theta _{3},\theta _{4})}{g^{(z)}(1,1,1)} +\frac{\zeta (1-\rho )}{g^{(z)}(1,1,1)}\left( g^{(z)}(\theta _{2}, \theta _{3},\theta _{4}) \right) ' \\&=\frac{\eta \theta _{4}^{2}(1-\rho )g^{(z)}(\theta _{2},\theta _{3}, \theta _{4})}{g^{(z)}(1,1,1)}-\frac{\eta \zeta (1-\rho )g^{(z)} (\theta _{2},\theta _{3},\theta _{4})}{g^{(z)}(1,1,1)}\\&\quad +\frac{\zeta (1-\rho )}{g^{(z)}(1,1,1)}\left( g^{(z)}(\theta _{2},\theta _{3},\theta _{4}) \right) ' \\&=\eta \theta _{4}\pi _\mathrm{S}-\eta \psi _\mathrm{S}\\&\quad +\frac{\zeta (1-\rho )}{g^{(z)}(1,1,1)}\left( g^{(z,x)}(\theta _{2},\theta _{3}, \theta _{4})\dot{\theta _{2}}+g^{(z,y)}(\theta _{2},\theta _{3}, \theta _{4})\dot{\theta _{3}}+g^{(z,z)}(\theta _{2},\theta _{3}, \theta _{4})\dot{\theta _{4}} \right) \\&=\eta \theta _{4}\pi _\mathrm{S}-\eta \psi _\mathrm{S}\\&\quad +\frac{\psi _\mathrm{S} \left( g^{(z,x)}(\theta _{2},\theta _{3},\theta _{4}) \dot{\theta _{2}}+g^{(z,y)}(\theta _{2},\theta _{3}, \theta _{4})\dot{\theta _{3}}+g^{(z,z)}(\theta _{2}, \theta _{3},\theta _{4})\dot{\theta _{4}} \right) }{g^{(z)}(\theta _{2},\theta _{3},\theta _{4})}, \end{aligned}$$
with simplifications achieved by utilising \(\pi _\mathrm{S}=(1-\rho )\theta _{4}g^{(z)}(\theta _{2}, \theta _{3},\theta _{4})/g^{(z)}(1,1,1)\) and \(\psi _\mathrm{S}=\zeta (1-\rho )g^{(z)}(\theta _{2}, \theta _{3},\theta _{4})/g^{(z)}(1,1,1)\). From Fig. 3 we have \(\dot{\psi _\mathrm{S}}=\eta \theta _{4}\pi _\mathrm{S}-\eta \psi _\mathrm{S}-B\psi _\mathrm{S}\), so we calculate the flux between compartments \(\psi _\mathrm{S}\) and \(\psi _\mathrm{I}\) using the rate
$$\begin{aligned} B=-\left( \frac{ g^{(z,x)}(\theta _{2},\theta _{3},\theta _{4}) \dot{\theta _{2}}+g^{(z,y)}(\theta _{2},\theta _{3},\theta _{4}) \dot{\theta _{3}}+g^{(z,z)}(\theta _{2},\theta _{3},\theta _{4}) \dot{\theta _{4}}}{g^{(z)}(\theta _{2},\theta _{3},\theta _{4})}\right) . \end{aligned}$$
The \(\psi _\mathrm{S}\) to \(\psi _\mathrm{I}\) flux is the product of \(\psi _\mathrm{S}\), the probability that a random dynamic stub has not transmitted infection to the test node u and is currently connected to a susceptible node, with rate B, the rate that a neighbouring susceptible node v becomes infected from outside the dynamic edge, given that the stub has not transmitted and connects u to a susceptible node. Following the flow diagram in Fig. 3, we have the differential equations
$$\begin{aligned} \dot{\theta _{4}}&= -\beta _{\mathrm{d}} \psi _\mathrm{I}, \end{aligned}$$
$$\begin{aligned} \dot{\psi _\mathrm{S}}&= \eta \theta _{4} \pi _\mathrm{S} - (B+\eta )\psi _\mathrm{S}, \end{aligned}$$
$$\begin{aligned} \dot{\psi _\mathrm{I}}&= B \psi _\mathrm{S}+\eta \theta _{4}\pi _\mathrm{I}-(\eta +\gamma +\beta _{\mathrm{d}})\psi _\mathrm{I}, \end{aligned}$$
$$\begin{aligned} \dot{\psi _\mathrm{R}}&= \gamma \psi _\mathrm{I}+\eta \theta _{4}\pi _\mathrm{R}-\eta \psi _\mathrm{R}. \end{aligned}$$
Population-Level Equations
We began the EBCM derivation by considering the probability of a randomly selected test node u (which is prevented from transmitting infection) being susceptible as \((1-\rho )\theta _{2}^{s}\theta _{3}^{t}\theta _{4}^{d}\), given that the node has degree (s, t, d). Since we have calculated formulae for \(\theta _{2}\), \(\theta _{3}\) and \(\theta _{4}\), we can derive population-level equations describing the proportion of the population in each disease compartment at each point in time:
$$\begin{aligned} S(t)&= (1-\rho )g(\theta _{2}(t),\theta _{3}(t), \theta _{4}(t))=(1-\rho )\sum _{s,t,d}p_{s,t,d} \theta _{2}(t)^{s}\theta _{3}(t)^{t}\theta _{4}(t)^{d}, \end{aligned}$$
$$\begin{aligned} I(t)&= 1 - S(t) - R(t), \end{aligned}$$
$$\begin{aligned} \dot{R}(t)&= \gamma I(t). \end{aligned}$$
Equations (1)–(23) form a complete system describing an SIR epidemic spreading across a dual-layer multiplex network consisting of a static network layer constructed from line stubs and triangle corners and a dynamic network layer constructed from line stubs only, where edges rewire and degrees are conserved.
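To make the structure of the complete system concrete, the following is a minimal sketch of its numerical solution. It rests on a simplifying assumption of ours: the three stub counts follow independent Poisson distributions with means `ls`, `lt` and `ld`, chosen only because every partial derivative of the PGF then reduces to a constant prefactor times \(g\) itself. All parameter values are illustrative, and the BDF method stands in for a variable-order stiff solver of the kind used in the implementation described below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not a parameter set from this work)
beta_s, beta_d, gamma, eta, rho = 0.25, 0.25, 1.0, 0.01, 0.05
ls, lt, ld = 4.0, 2.0, 3.0  # Poisson means: line stubs, corners, dynamic stubs

def g(x, y, z):
    # Independent Poisson stubs: g(x,y,z) = exp(ls(x-1) + lt(y-1) + ld(z-1))
    return np.exp(ls*(x - 1) + lt*(y - 1) + ld*(z - 1))

def rhs(t, u):
    th2, th3, th4, pSI, pSR, pII, pIR, psS, psI, piR, R = u
    G = g(th2, th3, th4)
    phiS = (1 - rho)*G                  # (1-rho) g^(x)(theta)/g^(x)(1,1,1)
    dth2 = -beta_s*th2 + beta_s*phiS + gamma*(1 - th2)   # Eq. (5)
    dth3 = -beta_s*(pSI + 2*pII + pIR)                   # Eq. (6)
    dth4 = -beta_d*psI
    # For this PGF the rates A and B coincide, since each mixed second
    # derivative divided by the corresponding first derivative is constant:
    A = B = -(ls*dth2 + lt*dth3 + ld*dth4)
    phiSS = ((1 - rho)*G)**2            # both triangle neighbours susceptible
    piS = (1 - rho)*th4*G
    piI = 1 - piS - piR
    dpSI = 2*A*phiSS - (A + 2*beta_s + gamma)*pSI
    dpSR = gamma*pSI - A*pSR
    dpII = (A + beta_s)*pSI - 2*(beta_s + gamma)*pII
    dpIR = A*pSR + 2*gamma*pII - (beta_s + gamma)*pIR
    dpsS = eta*th4*piS - (B + eta)*psS
    dpsI = B*psS + eta*th4*piI - (eta + gamma + beta_d)*psI
    dpiR = gamma*piI
    dR = gamma*(1 - (1 - rho)*G - R)    # Rdot = gamma*I with I = 1 - S - R
    return [dth2, dth3, dth4, dpSI, dpSR, dpII, dpIR, dpsS, dpsI, dpiR, dR]

# t = 0 conditions implied by seeding a fraction rho uniformly at random
u0 = [1, 1, 1, 2*rho*(1 - rho), 0, rho**2, 0, 1 - rho, rho, 0, 0]
sol = solve_ivp(rhs, (0, 10), u0, method="BDF", rtol=1e-8, atol=1e-10)
S = (1 - rho)*g(*sol.y[:3]); R = sol.y[10]; I = 1 - S - R
print(f"final epidemic size ~ {R[-1]:.4f}")
```

For any other choice of \(p_{s,t,d}\), only g and the derivative ratios entering A and B need changing.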
Deriving the Basic Reproduction Number \(R_{0}\)
The basic reproduction number \(R_{0}\) is defined as the average number of infections caused by a single infectious individual, early in an epidemic process, in an otherwise susceptible population. In the model, a multiplex network structure is generated using three distinct edge distributions (static line stubs, static triangle corners and dynamic line stubs). To compute \(R_{0}\) we must consider the average number of infections caused across each type of edge, whilst also accounting for the type of edge across which the infection was originally received. With three edge types, this gives nine values, grouped together to form the next-generation matrix
$$\begin{aligned} \varvec{G}= \left( \begin{array}{ccc} G_{ss} &\quad G_{st} &\quad G_{sd} \\ G_{ts} &\quad G_{tt} &\quad G_{td} \\ G_{ds} &\quad G_{dt} &\quad G_{dd} \end{array}\right) , \end{aligned}$$
where matrix element \(G_{ij}\) describes the average number of infections caused across edges of type j by an infector who received infection across an edge of type i. Following the next-generation matrix approach (Diekmann et al. 2009), the value of \(R_{0}\) is found as the leading eigenvalue of the matrix \(\varvec{G}\), i.e. the eigenvalue with greatest magnitude. We note that the matrix \(\varvec{G}\) defined here is the transpose of the next-generation matrix as defined in Diekmann et al. (2009); this discrepancy does not affect the eigenvalues, and therefore does not affect the value of \(R_{0}\).
To find \(R_{0}\), we begin by deriving expressions for values in the first column of \(\varvec{G}\). Firstly, consider the non-diagonal matrix entries \(G_{ts}\) and \(G_{ds}\). We want to compute the expected number of infection events occurring across static lines, when individuals contracted infection across a triangle edge or a dynamic line. In both cases, we require the expected static line stub degree, multiplied by the expected number of infections caused across a single static line attached to the infectious individual. Let the expected static line stub degree be denoted \(\langle k_\mathrm{s} \rangle \). Now we require the expected number of infections caused across a single static edge attached to an infectious individual, in an otherwise susceptible population. A single static edge joining a susceptible and an infectious individual, in an otherwise susceptible population, has two event possibilities: a single recovery, or a single infection. Denote by X the random variable describing the number of infection events occurring across a single static line joining a susceptible to an infectious individual, in an otherwise susceptible population. Since at most one infection event can occur across such an edge, the expectation formula gives the expected number of infections across it simply as \(\mathbb {P}(X=1)\). The probability of a single infection occurring across such a static edge, prior to any recovery, is \(\frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}} +\gamma }\). Thus we can say that \(G_{ts}=G_{ds}=\langle k_\mathrm{s}\rangle \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\).
Finally for this column, we calculate an expression for the diagonal matrix element \(G_{ss}\) by multiplying the expected excess static line stub degree, denoted \(\langle s\rangle \), by the expected number of infections caused across a single static line joining a susceptible individual to an infectious individual in an otherwise susceptible population. Following the same argument as for \(G_{ts}\) and \(G_{ds}\), the expected number of infection events for \(G_{ss}\) is \(\frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\), and we obtain \(G_{ss}=\langle s\rangle \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\).
Next we derive expressions for the values \(G_{st}\), \(G_{tt}\) and \(G_{dt}\) in the second column of the matrix \(\varvec{G}\). We firstly consider the non-diagonal elements \(G_{st}\) and \(G_{dt}\). Both \(G_{st}\) and \(G_{dt}\) are calculated by multiplying the expected triangle corner degree, denoted \(\langle k_\mathrm {t}\rangle \), by the expected number of infection events caused within a single triangle attached to an infectious node in an otherwise susceptible population. In a single triangle comprised of two susceptible individuals attached to an infectious individual, there are a finite number of infection event possibilities: either no further infections occur (the infectious individual recovers), one infection event occurs, or two infection events occur. Define Y as the random variable describing the number of infection events within such a triangle. Using the expectation formula, we find the expected number of infection events within a triangle comprised of two susceptible individuals and an infective, in an otherwise susceptible population, as \(\mathbb {P}(Y=1)+2\cdot \mathbb {P}(Y=2)\). To continue, we must compute the probabilities \(\mathbb {P}(Y=1)\) and \(\mathbb {P}(Y=2)\) explicitly. \(\mathbb {P}(Y=1)\) describes the probability that the original infective infects one out of two triangle neighbours. In this case, either one of the two susceptible neighbours can become infectious, and both infectious triangle members must then recover, so that it is impossible for any more than one infection event to occur. In a triangle comprised of a single infective and two susceptible nodes, there are four distinct nodal orders in which a single infection event is followed by the recovery of both infectious nodes. We find
$$\begin{aligned} \mathbb {P}(Y=1)= & {} \frac{4\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}} +\gamma }\left( \frac{\gamma }{2\beta _{\mathrm{s}}+2\gamma }\right) \left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) \\= & {} \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) ^{2}. \end{aligned}$$
Considering \(\mathbb {P}(Y=2)\) is more complex, as there are two distinct ways in which two infection events can occur in a triangle between an infective and two susceptible individuals. Firstly, the original infective can infect both of its triangle neighbours consecutively, prior to any recovery events. The probability of both triangle infection events occurring in succession is given by \(\left( \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma }\right) \left( \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+2\gamma }\right) =\left( \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma }\right) \left( \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\right) \). Secondly, the original infective can cause two triangle infections via three consecutive events. In this case, the originally infectious triangle member firstly infects one susceptible triangle neighbour, with probability \(\frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma }\). The triangle is now comprised of two infectious individuals attached to a single susceptible individual. The second event to occur is a recovery of either the original infector or its first infectee, with probability \(\frac{2\gamma }{2\beta _{\mathrm{s}}+2\gamma } =\frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\). The triangle is now comprised of a susceptible, an infective, and a recovered individual, in an otherwise susceptible population. Following the recovery event, the final event is an infection of the remaining susceptible triangle member, with probability \(\frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\). The probability of all three events occurring in succession is thus \(\frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma }\left( \frac{\gamma }{\beta _{\mathrm{s}} +\gamma }\right) \left( \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\right) \).
In the latter case of an infection, followed by a recovery, followed by another infection within a triangle originally composed of an infective and two susceptible individuals in an otherwise susceptible population, the original infector may not be directly involved in every single infection event. However, for the purposes of deriving \(R_{0}\), we say that the original infector caused these infections, regardless of the order in which triangle members recover and infect one another.
Since there are two distinct ways in which two infections can take place within a triangle comprised of an infective and two susceptible individuals, we take the sum of both individual probabilities to obtain \(\mathbb {P}(Y=2)\):
$$\begin{aligned} \mathbb {P}(Y=2)= & {} \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}} +\gamma }\left( \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\right) +\frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) \left( \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\right) \\= & {} \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left( \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\right) \left[ 1+\frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right] . \end{aligned}$$
We find the expected number of infection events within a triangle comprised of two susceptible individuals and an infective, in an otherwise susceptible population, as
$$\begin{aligned} \mathbb {P}(Y=1)+2\cdot \mathbb {P}(Y=2)= & {} \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) ^{2}\\&+\, \frac{4\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left( \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma }\right) \left[ 1+\frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right] \\= & {} \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left[ 2-\left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) ^{2}\right] . \end{aligned}$$
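As a quick sanity check of this simplification, set \(\beta _{\mathrm{s}}=\gamma \): then \(\mathbb {P}(Y=1)=\frac{2}{3}\left( \frac{1}{2}\right) ^{2}=\frac{1}{6}\) and \(\mathbb {P}(Y=2)=\frac{2}{3}\cdot \frac{1}{2}\cdot \frac{3}{2}=\frac{1}{2}\), so the expectation is \(\frac{1}{6}+2\cdot \frac{1}{2}=\frac{7}{6}\), in agreement with the closed form \(\frac{2}{3}\left[ 2-\left( \frac{1}{2}\right) ^{2}\right] =\frac{2}{3}\cdot \frac{7}{4}=\frac{7}{6}\).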
Then we have \(G_{st}=\langle k_\mathrm {t}\rangle \frac{2\beta _{\mathrm {s}}}{2\beta _{\mathrm {s}}+\gamma }\left[ 2-\left( \frac{\gamma }{\beta _{\mathrm {s}}+\gamma } \right) ^{2}\right] =G_{dt}\), where \(\langle k_\mathrm {t}\rangle \) denotes the expected static triangle corner degree. Finally, we have \(G_{tt}=\langle t\rangle \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left[ 2-\left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) ^{2}\right] \), where \(\langle t\rangle \) denotes the expected excess static triangle corner degree.
We conclude by deriving elements from the third column of \(\varvec{G}\), starting with non-diagonal matrix elements \(G_{sd}\) and \(G_{td}\). In both cases, we multiply the expected dynamic line stub degree, denoted \(\langle k_\mathrm{d}\rangle \), by the expected number of infection events occurring across a single dynamic line stub attached to an infectious individual, in an otherwise susceptible population.
The probability of a dynamic stub attached to an infective in an otherwise susceptible population transmitting infection at least once is \(\frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma }\). If such an infection occurs, the I–S pairing becomes an I–I pairing with a dynamic edge joining the two individuals. The probability of a dynamic I–I edge rewiring, prior to any recovery event, is \(\frac{\eta }{\eta +\gamma }\). We can assume that any I–I edge rewires to become an I–S edge in the limit of large population size, since we are early on in an epidemic process, and we began with an otherwise susceptible population. The probability that an infectious dynamic stub infects its new susceptible neighbour is \(\frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma }\). This rewiring and infecting process can occur an arbitrary number of times in the model. The expected number of infections of this type can be calculated by taking the sum
$$\begin{aligned} \sum _{n=0}^{\infty }\frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma }r^{n} = \frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma }\left( \frac{1}{1-r}\right) , \end{aligned}$$
by the geometric series, where r is defined as \(\frac{\eta \beta _{\mathrm{d}}}{(\eta +\gamma )(\beta _{\mathrm{d}}+\gamma )}\), the probability that an infectious individual's dynamic edge rewires and its dynamic stub then infects the new (susceptible) neighbour across the rewired edge. We obtain the matrix values \(G_{sd}=\langle k_\mathrm{d}\rangle \frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma }\left( \frac{1}{1-r}\right) =G_{td}\).
Finally, we compute \(G_{dd}\), defined as the expected number of infections caused across dynamic edges, where the infector received infection across a dynamic edge itself. Firstly, consider the single dynamic I–I edge which originally infected our individual. The probability of the edge rewiring, leaving our infective in an I–S dynamic-edge pairing, is \(\frac{\eta }{\eta +\gamma }\). The probability of the infectious dynamic stub infecting the new susceptible neighbour is \(\frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma }\). Thus the probability that the dynamic stub which originally contracted infection infects \(\ge n\) individuals is \(r^{n}\), where \(r=\frac{\eta \beta _{\mathrm{d}}}{(\eta +\gamma ) (\beta _{\mathrm{d}}+\gamma )}\). We compute the expected number of infections of this type by taking the sum of \(r^{n}\) over all \(n\ge 1\),
$$\begin{aligned} \sum _{n=1}^{\infty }r^{n} = \frac{r}{1-r}, \end{aligned}$$
by the geometric series. Now consider the remaining dynamic edges associated with our infectious individual. We require the expected number of infections caused by a single edge of this type. Using the same argument as for \(G_{sd}\) and \(G_{td}\), we find the expected number of infections caused by one dynamic edge attached to our infectious individual as \(\frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma } \left( \frac{1}{1-r}\right) \). Thus we find \(G_{dd}=\frac{r}{1-r}+\langle d\rangle \frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma } \left( \frac{1}{1-r}\right) \), where \(\langle d\rangle \) is the expected excess dynamic line stub degree.
Written out in full, the next-generation matrix \(\varvec{G}\) takes the form
$$\begin{aligned} \left( \begin{array}{ccc} \langle s\rangle \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma } &\quad \langle k_\mathrm{t}\rangle \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left( 2-\left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) ^{2}\right) &\quad \langle k_\mathrm{d}\rangle \frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma }\left( \frac{1}{1-r}\right) \\ \langle k_\mathrm{s}\rangle \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma } &\quad \langle t\rangle \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left( 2-\left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) ^{2}\right) &\quad \langle k_\mathrm{d}\rangle \frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma }\left( \frac{1}{1-r}\right) \\ \langle k_\mathrm{s}\rangle \frac{\beta _{\mathrm{s}}}{\beta _{\mathrm{s}}+\gamma } &\quad \langle k_\mathrm{t}\rangle \frac{2\beta _{\mathrm{s}}}{2\beta _{\mathrm{s}}+\gamma } \left( 2-\left( \frac{\gamma }{\beta _{\mathrm{s}}+\gamma }\right) ^{2}\right) &\quad \frac{r}{1-r}+\langle d\rangle \frac{\beta _{\mathrm{d}}}{\beta _{\mathrm{d}}+\gamma } \left( \frac{1}{1-r}\right) \end{array}\right) , \end{aligned}$$
where \(\langle k_\mathrm{s}\rangle \), \(\langle k_\mathrm{t}\rangle \) and \(\langle k_\mathrm{d}\rangle \) denote the expected static line stub, static triangle corner and dynamic line stub degrees, \(\langle s\rangle \), \(\langle t\rangle \) and \(\langle d\rangle \) denote the expected excess static line stub, static triangle corner and dynamic line stub degrees, and \(r=\frac{\eta \beta _{\mathrm{d}}}{(\eta +\gamma )(\beta _{\mathrm{d}}+\gamma )}\). The basic reproduction number \(R_{0}\) is the eigenvalue of next-generation matrix (24) with greatest magnitude.
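Numerically, the matrix is cheap to assemble. The sketch below (the joint distribution `p` is hypothetical and the rates are illustrative) builds matrix (24) entry by entry from the formulae above, with excess degrees computed by size-biasing within each stub type, and extracts the eigenvalue of greatest magnitude:

```python
import numpy as np

# Hypothetical joint degree distribution and illustrative rates
p = {(2, 1, 1): 0.4, (3, 0, 2): 0.35, (1, 2, 0): 0.25}
beta_s, beta_d, gamma, eta = 0.25, 0.25, 1.0, 0.01

def mean(f):
    return sum(w * f(s, t, d) for (s, t, d), w in p.items())

k_s = mean(lambda s, t, d: s)   # expected static line stub degree
k_t = mean(lambda s, t, d: t)   # expected triangle corner degree
k_d = mean(lambda s, t, d: d)   # expected dynamic line stub degree
# Expected excess degrees, e.g. <s> = E[s(s-1)] / E[s]
ex_s = mean(lambda s, t, d: s*(s - 1)) / k_s
ex_t = mean(lambda s, t, d: t*(t - 1)) / k_t
ex_d = mean(lambda s, t, d: d*(d - 1)) / k_d

Ts = beta_s / (beta_s + gamma)                        # infections per static line
Tt = 2*beta_s/(2*beta_s + gamma) * (2 - (gamma/(beta_s + gamma))**2)  # per triangle
r = eta*beta_d / ((eta + gamma)*(beta_d + gamma))     # rewire-then-infect probability
Td = beta_d/(beta_d + gamma) / (1 - r)                # per dynamic stub

G = np.array([[ex_s*Ts, k_t*Tt, k_d*Td],
              [k_s*Ts, ex_t*Tt, k_d*Td],
              [k_s*Ts, k_t*Tt, r/(1 - r) + ex_d*Td]])
R0 = max(abs(np.linalg.eigvals(G)))
print(f"R0 = {R0:.4f}")
```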
Model Implementation
A variable-order stiff differential equation solver (ode15s in the MATLAB environment) was used to solve all relevant systems of equations. For each edge-based compartmental model type, the appropriate degree distributions, model parameters and initial conditions were specified, together with a user-specified end time for the computation.
Solutions to Eqs. (1)–(23) were found using both interdependent and independent distributions for the three edge types. For interdependent distributions, a single probability distribution governed the number of pairs of edge stubs, and additional model parameters \(p_{s}\), \(p_{t}\) and \(p_{d}\) (satisfying \(p_{s}+p_{t}+p_{d}=1\)) were used to distribute each pair of stubs into two static line stubs (with probability \(p_{s}\)), a single static triangle corner (with probability \(p_{t}\)) or two dynamic line stubs (with probability \(p_{d}\)). In such cases we used a negative binomial distribution for pairs of edge stubs with parameters p and r, describing the probability of success in a single trial and the required number of successes, respectively; the distribution is generated by \(g_{nb}(x;r,p)=(\frac{p}{1-(1-p)x})^r\) and models the number of failures before a specified number of successes is reached in a series of identical, independent Bernoulli trials. We also utilised a discrete homogeneous distribution for pairs of edge stubs where all individuals had identical degree. For independent distributions, we used three separate binomial distributions for the number of static line stubs, static triangle corners and dynamic line stubs.
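A sketch of the interdependent construction in Python follows (all values are illustrative; note that numpy's `negative_binomial` counts failures before a given number of successes, matching the convention above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p_nb, r_nb = 5000, 0.5, 10     # population size; negative binomial parameters
p_s, p_t, p_d = 0.3, 0.3, 0.4     # pair-splitting probabilities, summing to 1

# Number of edge-stub *pairs* per individual ~ NegBin(r, p)
pairs = rng.negative_binomial(r_nb, p_nb, size=N)

s_stubs = np.zeros(N, dtype=int)    # static line stubs
t_corners = np.zeros(N, dtype=int)  # static triangle corners
d_stubs = np.zeros(N, dtype=int)    # dynamic line stubs
for i, m in enumerate(pairs):
    kinds = rng.choice(3, size=m, p=[p_s, p_t, p_d])
    s_stubs[i] = 2*np.sum(kinds == 0)   # a pair becomes two static line stubs,
    t_corners[i] = np.sum(kinds == 1)   # a single triangle corner,
    d_stubs[i] = 2*np.sum(kinds == 2)   # or two dynamic line stubs
```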
Simulation Implementation
To test the validity of solutions to Eqs. (1)–(23), found in the MATLAB environment, Gillespie simulations (Gillespie 1976) were implemented to produce statistically correct trajectories of SIR epidemic processes occurring on equivalent static–dynamic multiplex networks. Prior to each simulation, static and 'dynamic' adjacency matrices were generated according to a configuration model approach, described as follows: for a population of N individuals, three vectors of length N were generated to record the number of static line stubs, static triangle corners and dynamic line stubs associated with each individual, according to user-specified degree distributions provided to the script. The script ensured that the total number of static line stubs was even, the total number of dynamic line stubs was even, and that the total number of static triangle corners was a multiple of three.
Firstly, the static network layer was generated using vectors containing the number of static line stubs and triangle corners each individual partook in. Pairs of static line stubs and triples of static triangle corners were selected at random. Provided potential static lines and triangles did not generate self-loops (where an individual is joined to itself with an edge) or double edges (where an edge exists more than once within the static network layer), they were added to the static adjacency matrix. The unmatched static line stubs and static triangle corners lists were updated, and the process continued until all static line stubs and triangle corners were successfully matched.
Secondly, the initial structure of the dynamic network layer was generated using the vector storing the number of dynamic line stubs each individual partook in. Pairs of dynamic line stubs were selected at random. Provided a potential dynamic edge did not generate a self-loop or a double edge within the dynamic network layer, it was added to the dynamic adjacency matrix. Successfully paired dynamic stubs were removed from the unmatched stubs list, and the process continued until all dynamic line stubs were successfully matched.
The nature of this configuration model approach meant that the wiring processes for the static and dynamic network layers may have had to be restarted multiple times in order to achieve final network structures. Once all static line stubs, static triangle corners and dynamic line stubs had been wired up, the configuration process was complete. Although the script prevented double edges from occurring within each network layer, it was possible for double edges to occur across the network layers, i.e. for two individuals to share both a static and a dynamic connection simultaneously.
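One way to realise this wiring in code is sketched below; the helper `wire_layer` and its shuffle-and-restart strategy are our own illustration of the approach described above, and for brevity duplicate edges are checked only within each call, so the cross-check between the line and triangle portions of the static layer is omitted:

```python
import numpy as np

def wire_layer(stubs, clique_size, rng, max_restarts=100):
    """Match stubs uniformly at random into cliques of the given size
    (2 = lines, 3 = triangles), rejecting self-loops and double edges and
    restarting on failure. Assumes sum(stubs) is divisible by clique_size."""
    for _ in range(max_restarts):
        pool = np.repeat(np.arange(len(stubs)), stubs)
        rng.shuffle(pool)
        edges, ok = set(), True
        for i in range(0, len(pool), clique_size):
            clique = pool[i:i + clique_size]
            if len(set(clique)) < clique_size:          # self-loop
                ok = False; break
            new = {tuple(sorted(e)) for e in zip(clique, np.roll(clique, 1))}
            if new & edges:                             # double edge
                ok = False; break
            edges |= new
        if ok:
            return edges
    raise RuntimeError("configuration-model wiring failed")

# Usage with the stub vectors from the previous sketch; line-stub totals are
# automatically even (two per pair), but triangle corners must be padded to a
# multiple of three before wiring:
t_corners[0] += (-t_corners.sum()) % 3
rng = np.random.default_rng(1)
static_edges = wire_layer(s_stubs, 2, rng) | wire_layer(t_corners, 3, rng)
dynamic_edges = {frozenset(e) for e in wire_layer(d_stubs, 2, rng)}
```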
Given static and dynamic adjacency matrices describing the multiplex network structure, simulated epidemic processes were implemented. In each Gillespie simulation, \(\rho N\) initially infectious individuals were selected at random from the population. At each time step, a vector of length \((N+1)\) recorded the state transition rate (infection or recovery) for all N individuals, followed by a single edge-swapping rate, \(\frac{\eta M}{2}\), where M denotes the total number of edges in the dynamic network layer. Inter-event times followed an exponential distribution with scale parameter \(\frac{1}{R}\), where R denotes the sum of the rates vector at the current time step. Each event occurring was either an infection, a recovery or an edge swap. Uniformly distributed random numbers were generated at each time step to determine the next event to occur. When an edge swap event occurred, the script selected two dynamic edges at random, ensuring that all four nodes involved in these edges were unique. The script also ensured that the proposed new dynamic edges did not already exist within the dynamic network layer. Given these conditions, an edge swap occurred and the Gillespie process continued. The process terminated once the user-specified end time was reached.
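The event loop itself might look like the following deliberately naive sketch (our own illustration: rates are rebuilt from scratch at every step for clarity, whereas an efficient implementation would update them incrementally after each event):

```python
import numpy as np

def gillespie_sir(static_nbrs, dyn_edges, beta_s, beta_d, gamma, eta,
                  rho, t_max, rng):
    """One statistically correct trajectory of the SIR + edge-swap process.
    static_nbrs: list of neighbour sets (lines and triangle edges combined);
    dyn_edges: set of frozensets for the dynamic layer."""
    N = len(static_nbrs)
    status = np.zeros(N, dtype=int)               # 0 = S, 1 = I, 2 = R
    status[rng.choice(N, size=int(rho*N), replace=False)] = 1
    t, history = 0.0, [(0.0, rho)]
    while t < t_max:
        dyn_nbrs = [[] for _ in range(N)]
        for e in dyn_edges:
            a, b = tuple(e)
            dyn_nbrs[a].append(b); dyn_nbrs[b].append(a)
        rates = np.zeros(N + 1)                   # N node rates + 1 swap rate
        for i in range(N):
            if status[i] == 1:
                rates[i] = gamma                  # recovery
            elif status[i] == 0:                  # infection pressure on i
                rates[i] = (beta_s*sum(status[j] == 1 for j in static_nbrs[i])
                            + beta_d*sum(status[j] == 1 for j in dyn_nbrs[i]))
        rates[N] = eta*len(dyn_edges)/2           # single edge-swap rate
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0/total)           # inter-event time
        k = rng.choice(N + 1, p=rates/total)      # which event fires
        if k == N:                                # swap two dynamic edges
            E = list(dyn_edges)
            while True:
                i1, i2 = rng.choice(len(E), size=2, replace=False)
                (a, b), (c, d) = tuple(E[i1]), tuple(E[i2])
                if len({a, b, c, d}) < 4:         # require four unique nodes
                    continue
                new = {frozenset((a, c)), frozenset((b, d))}
                if new & dyn_edges:               # proposed edges exist already
                    continue
                dyn_edges -= {E[i1], E[i2]}
                dyn_edges |= new
                break
        else:
            status[k] = 2 if status[k] == 1 else 1
        history.append((t, np.mean(status == 1)))
    return history

# Usage with the layers wired in the previous sketch:
static_nbrs = [set() for _ in range(len(s_stubs))]
for a, b in static_edges:
    static_nbrs[a].add(b); static_nbrs[b].add(a)
traj = gillespie_sir(static_nbrs, dynamic_edges, 0.25, 0.25, 1.0, 0.01,
                     0.05, 10.0, rng)
```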
In what follows, we assess the validity of Eqs. (1)–(23) and of the basic reproduction number \(R_{0}\), obtained via next-generation matrix (24). We firstly consider two extreme cases of the multiplex model: when either the static or the dynamic network layers are negligible (close to zero). In such cases, we show that predictions made by Eqs. (1)–(23) resolve to predictions made by existing uniplex EBCM equations. When the full multiplex model is considered, with static and dynamic network elements present, there exists no basis for comparison other than generating exact simulations of the epidemic process. To this end, we utilise Gillespie simulations to demonstrate the validity of Eqs. (1)–(23) in predicting the epidemic process for a number of multiplex network configurations. By solely considering the predictions of Eqs. (1)–(23), we explore the consequences of varying individual model parameters and of considering various combinations of model parameters \((p_{s}+p_{t}+p_{d})\equiv 1\), governing the contributions of each edge type. Further, we explore the contributions of each edge type and how the resulting final epidemic size is altered within a systematic consideration of combinations of model parameters \(\beta _{\mathrm{s}}\), \(\beta _{\mathrm{d}}\) and \(\eta \). Finally, we test the performance of the derived basic reproduction number \(R_{0}\) in predicting the outcome of an epidemic and we explore variations in the value of \(R_{0}\) and the associated final epidemic size predicted by Eqs. (1)–(23) when altering the rate of rewiring, the extent of clustering and the average degree in the multiplex model.
Model Convergence to Existing Uniplex Model Equations
Model Without Dynamic Layer
When the dynamic component of the dual-layer static–dynamic multiplex is removed, the model reduces to describe an SIR epidemic on a static uniplex network generated by lines and triangles. Biologically speaking, this reduced model tracks the epidemic as it spreads across persistent connections in a population with community structure. The EBCM approach has been followed to derive equations describing an SIR epidemic on such a network (Volz et al. 2011).
By comparing predictions made by uniplex model equations in Volz et al. (2011) with those of multiplex model equations (1)–(23) when dynamic network elements are close to zero, we were able to test the multiplex model's convergence (Fig. 5). Excellent agreement was observed between multiplex model equations where dynamic network elements are negligible, uniplex model equations (Volz et al. 2011) and Gillespie simulated epidemics on equivalent multiplex networks, for a number of scenarios with varying forces of infection.
Fig. 5 Multiplex model convergence—no dynamic layer, with simulation. The time evolution of infection prevalence for the original EBCM of an SIR epidemic on a static uniplex network (solid black line), for the proposed EBCM of an SIR epidemic on a dual-layer multiplex with the dynamic network layer being close to zero (thick dashed red line) and for 10 Gillespie simulations of the SIR epidemic on a single network of size \(N=5000\) (solid blue lines). In all panels \(\gamma =1\) and \(\rho =0.05\), and \(p=0.5\), \(r=10\) generate a negative binomial distribution for pairs of edge stubs. For the original static derivation (solid black line) \(p_{s}=0.5=p_{t}\), describing the proportion of edge pairs that are split into two single lines or remain as a triangle corner, respectively. For the multiplex derivation (thick dashed red line) \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}\), \(\eta =0.01\), and \(p_{s}=0.4999999\), \(p_{t}=0.5\) and hence \(p_{d}=10^{-7}\) describe the proportion of edge pairs that become two static lines, a static triangle corner or two dynamic edges, respectively. a \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=1\), \(C=0.02677\), b \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=0.5\), \(C=0.02670\), c \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=0.25\), \(C=0.02658\), d \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=0.125\), \(C=0.02685\), where C denotes the global clustering coefficient of each static network layer generated for simulation (Color figure online)
Model Without Static Layer
When the static component of the dual-layer static–dynamic multiplex is removed, the model describes an SIR epidemic on a dynamic uniplex network generated by lines, where edges rewire at constant rate \(\eta \) and degrees are conserved. Biologically, the reduced model describes an epidemic spreading through a population where connections between pairs of individuals are temporary, but the number of connections an individual partakes in remains fixed. The EBCM approach was followed to derive equations describing an SIR epidemic spreading on such a network in Miller et al. (2012).
Excellent agreement was observed between predictions made by Eqs. (1)–(23) when static network elements are close to zero, output from the dynamic fixed-degree derivation in Miller et al. (2012) and Gillespie simulations describing the SIR epidemic and edge rewire processes occurring simultaneously on equivalent multiplex networks, for a number of set-ups with varying forces of infection (Fig. 6).
Fig. 6 Multiplex model convergence—no static layer, with simulation. The time evolution of infection prevalence for the original EBCM of an SIR epidemic on a dynamic uniplex network with conserved degrees and edge rewiring (solid black line), for the proposed multiplex EBCM of an SIR epidemic with the static network layer being close to zero (thick dashed red line) and for 10 Gillespie simulations of the process on a single network of size \(N=5000\) (solid blue lines). In all panels \(\gamma =1\) and \(\rho =0.05\), and \(p=0.5\), \(r=10\) generate a negative binomial distribution for pairs of edge stubs. For the original conserved-degree derivation (solid black line) \(p_{d}=1\), indicating that all edge pairs become two disjoint dynamic edges. For the multiplex derivation (thick dashed red line), \(\eta =0.01\) and \(p_{s}=p_{t}=10^{-7}\) and \(p_{d}=0.9999998\) describe the proportion of edge pairs that become two static lines, single triangle corners or two dynamic edges, respectively. a \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=1\), \(C=0.004944\), b \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=0.5\), \(C=0.005285\), c \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=0.25\), \(C=0.005344\), d \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=0.125\), \(C=0.005127\), where C denotes the global clustering coefficient of each dynamic network layer generated for simulation, at time zero (Color figure online)
Model Validation by Comparison with Simulation
We have observed excellent agreement between multiplex model predictions, uniplex model predictions and Gillespie simulated epidemics in extreme cases where either static or dynamic network elements are negligible (Figs. 5, 6). When multiplex network elements are non-negligible, static and dynamic network layers coexist in the model. In such cases, Gillespie simulated epidemics become the sole basis for assessing the validity of multiplex model equations (1)–(23).
A number of comparisons have been made between multiplex model predictions and Gillespie simulations when static and dynamic network elements coexist (Figs. 7, 8). Excellent agreement was observed for a number of comparisons with various average degrees (imposed via negative binomial parameters p and r, describing the distribution governing pairs of edge stubs) and various levels of clustering (imposed by varying parameter \(p_{t}\) with the constraint \((p_{s}+p_{t}+p_{d})\equiv 1\)) (Fig. 7). Excellent agreement was also observed for a number of comparisons with various combinations of the multiplex model's infection parameters \(\beta _{\mathrm{s}}\) and \(\beta _{\mathrm{d}}\) (Fig. 8).
Fig. 7 Multiplex model prediction versus simulation—varying clustering and average degree. Plotting the dynamics of the proportion of infected individuals over time. Each panel contains 25 Gillespie simulations on a single multiplex network comprised of \(N=1000\) individuals (blue lines) and the associated EBCM prediction (black line). All networks are generated using a negative binomial distribution for pairs of edge stubs with parameters \(p=0.5\) and various values for r. Networks in column 1 (counting from left to right) have average degree 10 (achieved via \(r=5\)), networks in column 2 have average degree 20 (achieved via \(r=10\)) and networks in column 3 have average degree 30 (achieved via \(r=15\)). Networks in row 1 (counting from top to bottom) have minimised clustering via values \(p_{s}=0.99999998\) and \(p_{t}=10^{-8}\). Networks in row 2 have the values \(p_{s}=0.49999999=p_{t}\). Networks in row 3 have maximised clustering via the values \(p_{s}=10^{-8}\) and \(p_{t}=0.99999998\). Counting panels from left to right and top to bottom, starting with the upper-left panel, static networks have the following clustering coefficients: \(C=0.0161\), \(C=0.0267\), \(C=0.0370\), \(C=0.0535\), \(C=0.0473\), \(C=0.0493\), \(C=0.0898\), \(C=0.0662\), \(C=0.0629\). In all panels, \(t_\mathrm{max}=10\), \(\rho =0.05\), \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}=0.25\), \(\gamma =1\), \(\eta =0.01\) (Color figure online)
Multiplex model prediction versus simulation—varying infection parameters \(\beta _{\mathrm{s}}\) and \(\beta _{\mathrm{d}}\). Plotting the dynamics of the proportion of infected individuals over time. Each panel contains 100 Gillespie simulations (10 simulations on 10 multiplex networks comprised of \(N=5000\) individuals) (blue lines) and the associated EBCM prediction (black line). All multiplex networks follow a negative binomial distribution for pairs of edge stubs with parameters \(p=0.5\) and \(r=10\), which were split into three edge types via \(p_{s}=0.3=p_{t}\) and thus \(p_{d}=0.4\). In all panels \(t_\mathrm{max}=10\), \(\rho =0.05\), \(\gamma =1\), \(\eta =0.01\). Across the panels, different values for \(\beta _{\mathrm{s}}\) and \(\beta _{\mathrm{d}}\) have been used in the range [0.125, 0.25, 0.5], indicated by individual column and row headings (Color figure online)
A Brief Exploration of Parameter Spaces
Having observed excellent agreement between simulated epidemic processes and equivalent predictions made by multiplex model equations, we investigated the effects of varying single parameters on the dynamics of epidemics predicted by Eqs. (1)–(23). In total, 9 individual model parameters were varied systematically, whilst all (or the majority of) other parameters were held constant (Fig. 9). Across all parameters being varied, an identical baseline parameter set was utilised, with the resulting prediction made by Eqs. (1)–(23) plotted in black to enable ease of comparison between different parameter scenarios.
Multiplex model predictions. Plotting the dynamics of the proportion of infected individuals over time, for a number of different parameter sets. In all panels, a baseline parameter set (\(p=0.5\), \(r=10\), \(p_s=0.3=p_t\), \(p_d=0.4\), \(\beta _{\mathrm{s}}=0.05\), \(\beta _{\mathrm{d}}=0.2\), \(\gamma =1\), \(\eta =0.01=\rho \), \(t_\mathrm{max}=10\) \(\Rightarrow \) \(R_{0}=1.076\)) is used to plot dynamics predicted by multiplex model equations (1)–(23) (thick black line). In each panel, a single parameter is varied and the resultant predictions are plotted in various colours, indicated by individual panel legends. In the bottom row of panels, parameters \(p_{s}\), \(p_{t}\) and \(p_{d}\) are being varied. Since the model has the constraint \((p_{s}+p_{t}+p_{d})\equiv 1\), we alter the triplet values in each panel in the following way. Assume we are varying the parameter \(p_{s}\). If the new \(p_{s}\) is larger than the baseline \(p_{s}\), we subtract \(\tfrac{1}{2}\) the difference from the remaining baseline parameters \(p_{t}\) and \(p_{d}\). Conversely, if the new \(p_{s}\) is smaller than the baseline \(p_{s}\), \(\tfrac{1}{2}\) the difference is added to each of the values \(p_{t}\) and \(p_{d}\) (Color figure online)
This brief exploration highlights the effect that increasing or decreasing a single parameter has on the global dynamics of an SIR epidemic spreading across a dual-layer static–dynamic multiplex. Larger values of p, where p describes the probability of success in a single Bernoulli trial, generate a negative binomial distribution with smaller average degree and reduced variance, slowing the epidemic's spread. Larger values of r, where r denotes the number of successful Bernoulli trials that must be reached before the experiment is stopped, lead the epidemic to spread more rapidly, due to an increase in the average degree and variance of the negative binomial distribution for pairs of edge stubs. Varying the rewiring rate \(\eta \) leads to less pronounced differences, with larger values of \(\eta \) producing a slight increase in the speed at which the epidemic spreads through the population. Increasing a single infection parameter \(\beta _{\mathrm{s}}\) or \(\beta _{\mathrm{d}}\) increases the rate of epidemic spread. Altering the parameter \(\rho \) changes the number of individuals who are infectious at the start of an epidemic process. Increasing the value of \(\rho \) changes the shape of the curve I(t), describing the prevalence of infection at time t, and causes the epidemic process to finish sooner. Altering the values \(p_{s}\), \(p_{t}\) and \(p_{d}\), subject to the constraint \((p_{s}+p_{t}+p_{d})\equiv 1\), demonstrates the range of dynamics that can be achieved using a fixed distribution for pairs of edge stubs with additional parameters to distribute edge pairs into three edge types. Baseline infection parameters are used across all three panels, thus \(\beta _{\mathrm{s}}=0.05<0.2=\beta _{\mathrm{d}}\), meaning that an increase in the proportion of dynamic edges increases the speed of the epidemic, whilst any increase in the proportion of static edges decreases the rate of epidemic spread.
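As a concrete illustration of how the negative binomial parameters p and r control the degree distribution described above, the short sketch below (not from the paper) samples pairs of edge stubs and reports the resulting mean and variance of the total degree; the theoretical mean number of pairs is r(1-p)/p, so the mean degree is twice that.

import numpy as np

rng = np.random.default_rng(0)
for p, r in [(0.5, 5), (0.5, 10), (0.5, 15), (0.8, 10)]:
    pairs = rng.negative_binomial(r, p, size=100_000)  # failures before r-th success
    degree = 2 * pairs                                 # each pair contributes 2 edges
    print(f"p={p}, r={r}: mean degree {degree.mean():.2f} "
          f"(theory {2 * r * (1 - p) / p:.2f}), variance {degree.var():.2f}")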
Contribution of Network Layers via \((p_{s}+p_{t}+p_{d})\equiv 1\)
When degree distributions are interdependent, the parameters \(p_{s}\), \(p_{t}\) and \(p_{d}\), subject to the constraint \((p_{s}+p_{t}+p_{d})\equiv 1\), afford the ability to investigate the effects on epidemic dynamics of altering the proportion of edges of each type. Previously, we observed changes in the dynamics of I(t), caused by altering the contributions of each edge type (Fig. 9), where \(\beta _{\mathrm{d}}>\beta _{\mathrm{s}}\), rewiring was slow, and pairs of edge stubs were governed by a negative binomial distribution.
In this multiplex setting, increasing the force of infection on one network layer effectively reduces the force of infection on remaining network layers. Thus the value of parameters \(\beta _{\mathrm{s}}\), \(\beta _{\mathrm{d}}\) and \(\eta \), and the ratios between them, bias the effect of varying model parameters \(p_{s}\), \(p_{t}\) and \(p_{d}\). To take this into account, we allowed parameters \(\beta _{\mathrm{s}}\), \(\beta _{\mathrm{d}}\) and \(\eta \) to take three distinct values (specifically \(\beta _{\mathrm{s}}\in [0.55,0.6,0.65]\), \(\beta _{\mathrm{d}}\in [\frac{\beta _{\mathrm{s}}}{2},\beta _{\mathrm{s}},2\beta _{\mathrm{s}}]\) and \(\eta \in [0.01,1,100]\)), and we considered all 27 combinations of their values, before varying the contributions of each edge type and recording the final epidemic size predicted by Eqs. (1)–(23) in each case (Fig. 10). This approach enabled isolation of the effects of changing single infection or rewiring parameters and exploration of the contributions made by various combinations of edge proportions \(p_{s}\), \(p_{t}\) and \(p_{d}\) in distinct parameter settings.
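The sweep over the 27 parameter combinations can be organised as three nested loops over (eta, beta_s, beta_d) around a grid over the edge-type proportions. The sketch below assumes a hypothetical function final_epidemic_size(...) that integrates Eqs. (1)–(23) to t_max and returns the final epidemic size; it is meant only to show the structure of the experiment, not the paper's actual implementation.

import numpy as np

def sweep(final_epidemic_size, steps=21):
    results = {}
    grid = np.linspace(0.0, 1.0, steps)
    for eta in (0.01, 1, 100):
        for beta_s in (0.55, 0.60, 0.65):
            for beta_d in (beta_s / 2, beta_s, 2 * beta_s):
                panel = np.full((steps, steps), np.nan)
                for i, p_s in enumerate(grid):
                    for j, p_d in enumerate(grid):
                        p_t = 1.0 - p_s - p_d
                        if p_t < 0:          # outside the simplex p_s + p_t + p_d = 1
                            continue
                        panel[i, j] = final_epidemic_size(
                            p_s=p_s, p_t=p_t, p_d=p_d, beta_s=beta_s,
                            beta_d=beta_d, eta=eta, gamma=1.0, rho=0.01, t_max=25)
                results[(eta, beta_s, beta_d)] = panel
    return results  # 27 heat-map panels, one per (eta, beta_s, beta_d) combination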
Multiplex model layer contributions. Heat map plots depicting the final epidemic size (equal to the fraction of the population who are either infectious or recovered at the end of the epidemic process) predicted by Eqs. (1)–(23) for a multiplex network of various proportions \(p_{s}\), \(p_{t}\) (y-axes) and \(p_{d}\) (x-axes), with the model constraint \((p_{s}+p_{t}+p_{d})\equiv 1\). For all set-ups \(\gamma =1\), \(\rho =0.01\), \(t_\mathrm{max}=25\) and pairs of edge stubs followed a discrete homogeneous distribution where all individuals had 2 edge pairs (and hence total degree 4). The values of remaining model parameters \(\eta \), \(\beta _{\mathrm{s}}\) and \(\beta _{\mathrm{d}}\) are indicated above each panel, with \(\eta \in [0.01,1,100]\), \(\beta _{\mathrm{s}}\in [0.55,0.6,0.65]\) and \(\beta _{\mathrm{d}}\in [\beta _{\mathrm{s}}/2, \beta _{\mathrm{s}},2\beta _{\mathrm{s}}]\). All 27 possible combinations of the parameters \(\eta \), \(\beta _{\mathrm{s}}\) and \(\beta _{\mathrm{d}}\) are considered. Prior to implementation, a number of set-ups across the \((p_{s},p_{t},p_{d})\) parameter spaces in each panel were tested by hand to ensure that the epidemic process had concluded by time \(t_\mathrm{max}=25\) (Color figure online)
Increasing the proportion of triangle corners via \(p_{t}\) consistently led to decreases in final epidemic size, suggesting that clustering slows the epidemic process regardless of the choice of parameters \(\beta _{\mathrm{s}}\), \(\beta _{\mathrm{d}}\) and \(\eta \) (Fig. 10). Generally, increasing the value of \(\eta \) resulted in an increase in final epidemic size when comparing identical edge contributions. Likewise, increasing the value of infection parameters \(\beta _{\mathrm{s}}\) or \(\beta _{\mathrm{d}}\) led to an increase in final epidemic size. Depending on the combination of parameters \(\beta _{\mathrm{s}}\), \(\beta _{\mathrm{d}}\) and \(\eta \), different behavioural regimes emerge, indicated by the orientation of colours and the direction in which they change in individual panels. We observe that a single edge proportion can have a more or less dominant effect on the outcome, dependent on the particular parameter set. For example, when \(\eta =0.01\) and \(\beta _{\mathrm{s}}=0.55=\beta _{\mathrm{d}}\), changing the proportion of dynamic edges \(p_{d}\) has little effect on the final epidemic size. However, when \(\eta =100\), \(\beta _{\mathrm{s}}=0.65\) and \(\beta _{\mathrm{d}}=1.3\), altering the parameter \(p_{d}\) leads to more extreme changes in final epidemic size, a result of \(\beta _{\mathrm{d}}\) dominating \(\beta _{\mathrm{s}}\) and an increased rate of dynamic-edge rewiring.
Validation of Basic Reproduction Number \(R_{0}\)
Next-generation matrix \(\varvec{G}\) (24) and the value \(R_{0}\) can be validated by testing whether the final epidemic size changes abruptly as \(R_{0}\) crosses the epidemic threshold (\(R_{0}=1\)). When the basic reproduction number is subthreshold (\(R_{0}<1\)), the associated epidemic process is expected to die out. However, when \(R_{0}>1\), the epidemic is expected to take hold and spread within a population.
For a number of set-ups, we recorded the final epidemic size predicted by Eqs. (1)–(23), the final epidemic size of a single Gillespie simulation of the same process and the associated \(R_{0}\) value (Fig. 11). To obtain a suitable range of \(R_{0}\) values, we systematically increased \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}\) from subthreshold values, whilst all other parameters were held constant. Independent binomial distributions were used for static line stubs, static triangle corners and dynamic line stubs. In Gillespie simulations where \(R_{0}>1\), we imposed an additional constraint requiring the number of infectives to reach at least ten times the initial number of infected individuals; otherwise, a new Gillespie simulation was implemented. As \(R_{0}\) exceeded the epidemic threshold, the final epidemic size predicted by model equations (1)–(23) and from individual simulations increased rapidly, suggesting that the derivation of the next-generation matrix \(\varvec{G}\) and the associated \(R_{0}\) is correct.
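Computationally, obtaining R_0 from a next-generation matrix reduces to a spectral-radius calculation. The fragment below (with placeholder entries rather than the actual matrix (24)) shows the step in numpy.

import numpy as np

G = np.array([[1.2, 0.3, 0.1],
              [0.4, 0.8, 0.2],
              [0.1, 0.2, 0.5]])       # placeholder next-generation matrix
R0 = max(abs(np.linalg.eigvals(G)))  # leading eigenvalue = spectral radius
print(f"R0 = {R0:.3f} -> {'supercritical' if R0 > 1 else 'subcritical'}")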
Validation of the basic reproduction number \({R}_{{0}}\). Plotting values of the basic reproduction number \(R_0\) (x-axis), found via the leading eigenvalue of matrix (24), against the associated final epidemic sizes (y-axis) predicted by multiplex equations (1)–(23) (red line) and recorded by single statistically correct Gillespie simulations (blue circles). Static and dynamic line stubs follow binomial distributions with parameters \(n=20\) and \(p=0.5\). The distribution of triangle corners follows a binomial distribution with parameters \(n=1\) and \(p=0.001\) to minimise clustering. Fixed parameters were \(\gamma =1\), \(\rho =0.001\), \(\eta =0.01\), \(t_\mathrm{max}=10\), \(N=1000\). In each set-up \(\beta _{\mathrm{s}}=\beta _{\mathrm{d}}\). One hundred transmission rates were tested, from \(\beta _{\mathrm {s}}=\beta _{\mathrm {d}}=0.01\) up to \(\beta _{\mathrm {s}}=\beta _{\mathrm {d}}=0.3\), in equal-sized increments. In Gillespie simulations where \(R_0 > 1\), if the number of infected individuals did not reach 10 times the initial number of infectives, all data were discarded and the Gillespie script restarted from initial conditions at time zero (Color figure online)
Effects of rewiring, average degrees and clustering. Plotting the value of \(R_0\) and the associated final epidemic size found using EBCM equations (1)–(23), for a number of different set-ups. Upper-left panels: testing 100 evenly spaced values for \(\eta \) in the range [0.01, 50]. Remaining model parameters were \(p_{s}=0.3=p_{t}\), \(\beta _{\mathrm{s}}=0.1=\beta _{\mathrm{d}}\), \(\gamma =1\), \(\rho =0.01\) and \(t_\mathrm{max}=25\). Pairs of edge stubs followed a negative binomial distribution with parameters \(p=0.5\) and \(r=5\). Upper-right panels: testing 15 evenly spaced values for \(\langle k\rangle \in [2,30]\), generated using a negative binomial distribution for pairs of edge stubs with fixed \(p=0.5\) and \(r \in [1,15]\). Remaining model parameters were \(p_{s}=0.3=p_{t}\), \(\beta _{\mathrm{s}}=0.0625=\beta _{\mathrm{d}}\), \(\gamma =1\), \(\eta =0.1\), \(\rho =0.01\), \(t_\mathrm{max}=25\). Lower-left panels: testing 100 evenly spaced values for \(p_{t}\) in the range [0.01, 0.99]. The proportion \((1-p_{t})\) was split equally between parameters \(p_{s}\) and \(p_{d}\). Remaining model parameters were \(\beta _{\mathrm{s}}=0.5=\beta _{\mathrm{d}}\), \(\gamma =1\), \(\rho =0.01\), \(\eta =0.1\) and \(t_\mathrm{max}=25\). Pairs of edge stubs followed a discrete homogeneous distribution where all individuals had 2 edge pairs. Lower-right panels: testing 15 evenly spaced values for \(\langle k\rangle \in [2,30]\), generated using a discrete homogeneous distribution for pairs of edge stubs where all individuals have identical degree. Remaining model parameters were \(p_{s}=0.3=p_{t}\), \(\beta _{\mathrm{s}}=0.0625=\beta _{\mathrm{d}}\), \(\gamma =1\), \(\rho =0.01\), \(\eta =0.1\), \(t_\mathrm{max}=25\) (Color figure online)
We plotted \(R_{0}\) and the associated final epidemic size predicted by Eqs. (1)–(23) for a number of scenarios to investigate the impact on their values of varying specific multiplex network attributes (rewiring, clustering and average degree) and to explore the relationship between \(R_{0}\) and final epidemic size (Fig. 12). Varying the rewiring rate \(\eta \) demonstrates that \(R_{0}\) and the associated final epidemic size increase with the value of \(\eta \). Varying \(\eta \) can also move the system below or above the epidemic threshold \(R_{0}=1\). However, there is a limit to this relationship; as \(\eta \) increases above 20, the changes in \(R_{0}\) and final epidemic size are negligible. We have seen previously that larger values of \(p_{t}\) result in smaller final epidemic sizes, suggesting that increased clustering slows epidemic processes on multiplex networks (Fig. 10). Here, we find that increasing \(p_{t}\) leads to decreases in both \(R_{0}\) and the associated final epidemic size (Fig. 12). The relationship between \(p_{t}\) and final epidemic size appears to be linear. For smaller \(p_{t}\), the relationship with \(R_{0}\) also appears to be linear, but as \(p_{t}\) tends towards its maximal value, \(R_{0}\) falls off more steeply.
An increase in average degree \(\langle k\rangle \), where pairs of edge stubs follow a negative binomial distribution, led to increases in \(R_{0}\) and final epidemic size (Fig. 12). The relationship between \(\langle k\rangle \) (negative binomial) and \(R_{0}\) appears to be linear. However, the relationship between \(\langle k\rangle \) and final epidemic size differs. The final epidemic size increases at a faster rate above some critical average degree, say \(\langle k\rangle =12\). A similar pattern emerges in the relationship between the average degree, \(R_{0}\) and final epidemic size when pairs of edge stubs follow a discrete homogeneous distribution. This is not surprising, as we saw previously that the relationship between \(R_{0}\) and final epidemic size is nonlinear (Fig. 11). However, these results show that small average degrees make it hard for the epidemic to take hold in the population. Potentially, this is a result of the multiplex network becoming divided into more than one connected component, meaning the disease can get trapped within smaller subpopulations of individuals, limiting its effect.
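The fragmentation argument above can be checked directly on sampled networks. The sketch below (illustrative only, using networkx rather than the paper's code) builds configuration-model graphs from negative binomially distributed degrees and reports component sizes; for small r the largest component covers only part of the population.

import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
for r in (1, 5, 15):  # with p = 0.5, the mean degree is 2 r (1 - p) / p = 2 r
    degrees = 2 * rng.negative_binomial(r, 0.5, size=1000)
    if degrees.sum() % 2:       # a configuration model needs an even stub count
        degrees[0] += 1
    g = nx.configuration_model(degrees.tolist(), seed=0)
    sizes = sorted((len(c) for c in nx.connected_components(g)), reverse=True)
    print(f"r={r}: largest component {sizes[0]}/1000, {len(sizes)} components")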
We have proposed a model describing the time evolution of an SIR epidemic spreading through a population of individuals in a multiplex network consisting of two layers: a static network layer representing persistent human connections and a dynamic network layer representing temporary human interactions made outside of a typical household. The model incorporates heterogeneity in the structure, type and duration of connections between individuals, and the number of model equations remains fixed regardless of population size. We designed the multiplex model to afford control of network transitivity (clustering), on the static layer only, by generating the associated network structure using a combination of 2-vertex and 3-vertex cliques, referred to here as static lines and triangles. The dynamic network layer was generated via a single distribution for 2-vertex cliques. Following the EBCM approach (Miller 2014), we obtained expressions for time-evolving quantities of interest, such as the infectious proportion of the population I(t). We have also applied the next-generation matrix method (Diekmann et al. 2009) to compute the basic reproduction number \(R_{0}\), a measure of the expected number of infections a typical infectious individual will cause during an epidemic.
Multiplex model equations (1)–(23) were validated, first by testing convergence of epidemic dynamics to predictions made by existing uniplex edge-based compartmental model equations, when either network layer (static or dynamic) was eliminated, and second by comparing full model (with static and dynamic elements) predictions to the dynamics of corresponding statistically correct Gillespie simulations (Gillespie 1976).
The multiplex model's parameter space was explored by varying individual parameters and plotting the resulting epidemic dynamics, and by mapping the outcome on final epidemic size of having various proportions of each edge type when considering different combinations of model parameters \(\beta _{\mathrm{s}}\), \(\beta _{\mathrm{d}}\) and \(\eta \). The basic reproduction number \(R_{0}\), found via the leading eigenvalue of next-generation matrix \(\varvec{G}\) (24), was validated by demonstrating that continually incrementing infection parameters \(\beta _{\mathrm{s}}\) and \(\beta _{\mathrm{d}}\), with all else held constant, led to a rapid increase in final epidemic size as \(R_{0}\) exceeded its epidemic threshold. Finally, we explored the effect on \(R_{0}\) and the associated final epidemic size predicted by Eqs. (1)–(23) of altering specific multiplex network attributes governing the rate of rewiring, the extent of clustering and the average degree.
Our unique contribution towards the literature is a model with a combination of static and dynamic network elements, derived by combining the EBCM approach to modelling an SIR epidemic on a static network with tunable clustering (Volz et al. 2011) with the EBCM approach to modelling an SIR epidemic on a dynamic fixed-degree network (Miller et al. 2012), under the framework of a dual-layer multiplex network.
The EBCM approach allows us to model variations in contact structure, contact type and contact duration simultaneously. Modelling such heterogeneities via EBCM provides an opportunity to investigate the effects of heterogeneities observed in real-world networks (Perry-Smith and Shalley 2003; Komurov and White 2007; Vernon and Keeling 2009), alongside consideration of common network attributes such as clustering and degree distributions. EBCM also affords a huge reduction in the number of equations required to track the epidemic, compared with full simulation.
This work progresses the effort to derive population models that capture reasonable levels of complexity and heterogeneity whilst exhibiting a tractable number of equations. By providing a clear and concise 'walkthrough' of the derivation and validation of our desired model, we hope to inspire future researchers to build on these results by designing and implementing novel models, modelling approaches and computational algorithms.
The work here extends previous research following the edge-based compartmental modelling approach. Prior EBCM approaches derived model equations describing the SIR epidemic process on wholly static or wholly dynamic uniplex networks. For example, EBCM has been utilised to describe the SIR epidemic on static actual-degree configuration model (CM) networks (Miller et al. 2012), static CM networks with tunable clustering (Volz et al. 2011) and static expected degree mixed Poisson (MP) networks (Miller et al. 2012).
Dynamic uniplex networks have also been considered via the EBCM approach. Namely, CM networks with mean-field social heterogeneity (edges are broken and rewired at a very fast rate, meaning all pairs of individuals contact each other at the same rate, and edge durations are fleeting), dynamic fixed-degree CM networks (edges are rewired, but edge durations are finite), dormant contact CM networks (existing edges are broken and remain dormant for some time, before being re-established), MP networks with mean-field social heterogeneity (fleeting edge duration) and dynamic variable-degree MP networks (finite edge duration) (Miller et al. 2012).
Existing modelling approaches incorporating heterogeneity include the consideration of an epidemic with two 'levels' of mixing between individuals (but no network structure) (Ball et al. 1997), and the later considerations of epidemic processes occurring on structured populations with two levels of mixing (Zhang et al. 2015), and with two routes of transmission (Zhao et al. 2014). Recently, the EBCM approach was used to derive equations describing an SIR epidemic process with non-sexual and sexual transmission routes, a characteristic of diseases such as Ebola and Zika (Miller 2017).
Other modelling approaches have incorporated dynamicity of connections between individuals (and hence heterogeneity in contact duration) by, e.g. considering an SIR epidemic on a network with intermittent social distancing, where susceptible individuals break links with infectious individuals for some time \(t_{b}\), after which the connection is re-established (Valdez et al. 2013). Another approach considered the effects of constrained rewiring during an SIS epidemic, whereby susceptible individuals cut links to infectious individuals regardless of distance, and rewire to a susceptible individual within a given radius, where the nodes of the network were embedded in Euclidean space (Rattana et al. 2014).
Research considering the large graph limit of an SIR epidemic on a dynamic multilayer network affords heterogeneity in contact type and in contact duration by allowing individual network layers to contain either activating or de-activating edges and by allowing edges in different layers to correspond to different types of contacts (Jacobsen et al. 2016). Although Jacobsen et al. (2016) consider the SIR epidemic spreading on a multiplex network, including providing a dual-layer multiplex example where edge types correspond to community and healthcare contacts, they do not consider any fully static network components.
There are a number of adaptations that can be made to the proposed model. The model considers a heterogeneous contact structure between N individuals. However, the locations of N individuals are not taken into account. Real-world networks occur in space (Barthélemy 2011), and thus, it is important to investigate the effects of considering node locations. In this study, we have chosen to disregard the spatial locations of individuals. A more realistic model of an SIR epidemic spreading on a multiplex network of individuals would be achieved by embedding the locations of each individual into Euclidean space. Even more complex models could consider dynamic node locations or a combination of static and dynamic node locations.
Another potential adaptation is considering weighted network connections. In the proposed model, all connections are considered to be unweighted or equivalently to share equal weight (homogeneity). The model could be adapted by, e.g. making the weight of each connection proportional to the Euclidean distance between the two node locations (given spatial embedding), by imposing a distribution of connection weights or by assigning weights at random. Then, the probability of contracting disease across a connection can be made proportional to the weight of that connection.
In the present model, the population of N individuals is fixed. We do not consider the effect of flux in or out of the population, e.g. by births, deaths or migration events. An important next step is to adapt the model presented here to consider in- and outflow of members of the population or at least to consider whether such in- and outflows significantly influence disease dynamics.
Another model limitation concerns the assumptions made surrounding edge rewire events on the dynamic network layer. Here, we assume that when one partnership ends, a new partnership forms immediately. Thus, given a nonzero degree on the dynamic network layer, individuals remain connected to strangers from the wider population (via dynamic network connections) at all times. In reality, the fleeting connections an individual makes with strangers are temporary, and individuals can remain disconnected from these connections for some time. An improvement to the model could thus be achieved by allowing for gaps to occur between partnerships by implementing the dormant contact approach on the dynamic network layer [e.g. see section 3.3 of Miller et al. (2012), Valdez et al. (2013), Shkarayev et al. (2014) and Tunc et al. (2013)]. The immediate implication of such an approach is a more accurate model in relation to observed human behaviours. However, the dynamics of the epidemic process will be slowed, especially if the duration of the gap (in time) between partnerships is comparable to or longer than the typical time it takes to transmit infection to a partner. Alternative rules for edge dynamicity can also be considered, such as constrained rewiring (Rattana et al. 2014) and edge activation and deletion (Jacobsen et al. 2016; Sélley et al. 2015; Taylor et al. 2012). Other model adaptations include allowing for tunable clustering on all network layers (and thus imposing two edge distributions on each network layer), implementing more complex distributions governing the degrees of each node and biasing initially infectious individuals instead of selecting them at random.
The multiplex model affords tunable clustering on the static network layer by generating its contact structure using a distribution of line stubs and a distribution of triangle corners. However, the configuration model wiring process requires that any two individuals share at most one connection within a single network layer. Double edges can occur across network layers (i.e. when the same edge is present in both network layers), but not within them. This constraint greatly reduces the possibilities for placing triangles suitably into the network, meaning the configuration process is slowed down and the extent of clustering that can be achieved is reduced. Greater control over clustering could be achieved by adapting the model to allow for overlapping triangles (and either allowing double edges to occur in single network layers, or amalgamating any double edges that occur into single edges, or doubly weighted edges).
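To make the wiring procedure concrete, the following sketch (which deliberately omits the double-edge and self-loop checks discussed above) pairs line stubs uniformly at random and matches triangle corners in triples, so that each matched triple closes a triangle on the static layer.

import numpy as np

def clustered_cm(line_stubs, tri_corners, rng=np.random.default_rng(0)):
    # line_stubs[i], tri_corners[i]: stub counts drawn for node i
    edges = []
    stubs = np.repeat(np.arange(len(line_stubs)), line_stubs)
    rng.shuffle(stubs)
    if len(stubs) % 2:                      # drop one stub if the count is odd
        stubs = stubs[:-1]
    edges += [tuple(e) for e in stubs.reshape(-1, 2)]
    corners = np.repeat(np.arange(len(tri_corners)), tri_corners)
    rng.shuffle(corners)
    corners = corners[:3 * (len(corners) // 3)]
    for a, b, c in corners.reshape(-1, 3):  # each triple closes one triangle
        edges += [(a, b), (b, c), (a, c)]
    return edges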
Other than making adaptations to the proposed model, there are a number of tests and analyses which are beyond the scope of this work. Firstly, a comprehensive exploration of the entire parameter space would elucidate the behavioural 'envelope' of the model and uncover any parameter regions where the model poorly predicts the SIR epidemic process, compared with simulation. A more thorough understanding of the impact of degree and degree heterogeneity on the relationship between parameters and system behaviour will require consideration of additional edge distributions with various levels of heterogeneity and average degrees. Secondly, the model's utility can be investigated by using real-world data from historical epidemics or similar processes, e.g. livestock herd contact tracing data or Twitter data tracking the prevalence of a hashtag over time. Using real data, model parameters could be estimated using Bayesian estimation techniques and the resulting model predictions compared with prior knowledge of what occurred. The basic reproduction number \(R_{0}\) can be tested in the same way.
This work considers an SIR compartmental model in the context of a disease spreading through a networked population. Thought must be given to which other real-world processes can be well described by the SIR compartmental model, such as opinion formation, rumour spreading or the uptake of fashion trends. Further, a two-layer multiplex like the proposed model could be used to investigate the dynamics of two interacting SIR-type processes, such as a physical disease-spreading process occurring on one network layer in combination with a disease-awareness process occurring on the opposing network layer, using similar approaches to those of Funk et al. (2009) and Li et al. (2015).
Future research can build on these observations by considering similar modelling approaches that account for compartmental models other than the SIR type. For example, the SIS model (describing infections that do not confer lasting immunity, such as the common cold) and the SEIR model (describing infections with incubation periods, where individuals have contracted a disease but are not yet infectious and hence are in the 'exposed' disease state) are not considered here. Modelling an SEIR infection may require simple adaptation of the existing EBCM approach. However, consideration of an SIS-type epidemic process requires an altogether new modelling approach. A key assumption of the present approach is that all neighbours of the test node u are treated as independent. This assumption breaks down under SIS dynamics, where individuals can repeatedly reinfect their neighbours, a consequence which is discussed in Miller and Kiss (2014) and Miller et al. (2012).
Experiments that can be performed to improve and inform future modelling approaches include: quantifying the levels of heterogeneity in existing populations, including behavioural and structural heterogeneity, gaining a deeper understanding of the biological processes underlying disease spreading processes, improving on existing algorithmic and analytic approaches and fostering closer relations between modellers and practitioners, in order to maximise the benefits arising from research.
Arthur RF, Gurley ES, Salje H, Bloomfield LSP, Jones JH (2017) Contact structure, mobility, environmental impact and behaviour: the importance of social forces to infectious disease dynamics and disease ecology. Philos Trans R Soc B 372(1719):20160454
Ball F, Mollison D, Scalia-Tomba G (1997) Epidemics with two levels of mixing. Ann Appl Probab 7:46–89
Barthélemy M (2011) Spatial networks. Phys Rep 499(1):1–101
Brummitt CD, Lee K-M, Goh K-I (2012) Multiplexity-facilitated cascades in networks. Phys Rev E 85(4):045102
Cozzo E, Banos RA, Meloni S, Moreno Y (2013) Contact-based social contagion in multiplex networks. Phys Rev E 88(5):050801
Diakonova M, Nicosia V, Latora V, Miguel MS (2016) Irreducibility of multilayer network dynamics: the case of the voter model. New J Phys 18(2):023010
Diekmann O, Heesterbeek JAP, Roberts MG (2009) The construction of next-generation matrices for compartmental epidemic models. J R Soc Interface. https://doi.org/10.1098/rsif.2009.0386
Funk S, Gilad E, Watkins C, Jansen VAA (2009) The spread of awareness and its impact on epidemic outbreaks. Proc Natl Acad Sci 106(16):6872–6877
Gillespie DT (1976) A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J Comput Phys 22(4):403–434
Gomez S, Diaz-Guilera A, Gomez-Gardenes J, Perez-Vicente CJ, Moreno Y, Arenas A (2013) Diffusion dynamics on multiplex networks. Phys Rev Lett 110(2):028701
Hidalgo CA, Rodriguez-Sickert C (2008) The dynamics of a mobile phone network. Physica A 387(12):3017–3024
Jacobsen KA, Burch MG, Tien JH, Rempała GA (2016) The large graph limit of a stochastic epidemic model on a dynamic multilayer network. arXiv preprint arXiv:1605.02809
Keeling MJ, Eames KTD (2005) Networks and epidemic models. J R Soc Interface 2(4):295–307
Kiss IZ, Berthouze L, Taylor TJ, Simon PL (2012) Modelling approaches for simple dynamic networks and applications to disease transmission models. Proc R Soc A 468:1332–1355
Komurov K, White M (2007) Revealing static and dynamic modular architecture of the eukaryotic protein interaction network. Mol Syst Biol 3(1):110
Li W, Tang S, Fang W, Guo Q, Zhang X, Zheng Z (2015) How multiple social networks affect user awareness: the information diffusion process in multiplex networks. Phys Rev E 92(4):042810
Meyers LA, Newman MEJ, Pourbohloul B (2006) Predicting epidemics on directed contact networks. J Theor Biol 240(3):400–418
Miller JC (2009) Percolation and epidemics in random clustered networks. Phys Rev E 80(2):020901
Miller JC (2011) A note on a paper by Erik Volz: SIR dynamics in random networks. J Math Biol 62(3):349–358
Miller JC (2014) Epidemics on networks with large initial conditions or changing structure. PLoS ONE 9(7):e101421
Miller JC (2017) Mathematical models of SIR disease spread with combined non-sexual and sexual transmission routes. Infect Dis Model. https://doi.org/10.1016/j.idm.2016.12.003
Miller JC, Kiss IZ (2014) Epidemic spread in networks: existing methods and current challenges. Math Model Nat Phenom 9(2):4–42
Miller JC, Volz EM (2013) Model hierarchies in edge-based compartmental modeling for infectious disease spread. J Math Biol 67(4):869–899
Miller JC, Slim AC, Volz EM (2012) Edge-based compartmental modelling for infectious disease spread. J R Soc Interface 9(70):890–906
Mucha PJ, Richardson T, Macon K, Porter MA, Onnela J-P (2010) Community structure in time-dependent, multiscale, and multiplex networks. Science 328(5980):876–878
Newman MEJ (2003) The structure and function of complex networks. SIAM Rev 45(2):167–256
Perry-Smith JE, Shalley CE (2003) The social side of creativity: a static and dynamic social network perspective. Acad Manag Rev 28(1):89–106
Rattana P, Berthouze L, Kiss IZ (2014) Impact of constrained rewiring on network structure and node dynamics. Phys Rev E 90(5):052806
Salathé M, Jones JH (2010) Dynamics and control of diseases in networks with community structure. PLoS Comput Biol 6(4):e1000736
Sélley F, Besenyei Á, Kiss IZ, Simon PL (2015) Dynamic control of modern, network-based epidemic models. SIAM J Appl Dyn Syst 14(1):168–187
Shkarayev MS, Tunc I, Shaw LB (2014) Epidemics with temporary link deactivation in scale-free networks. J Phys A Math Theor 47(45):455006
Taylor M, Taylor TJ, Kiss IZ (2012) Epidemic threshold and control in a dynamic network. Phys Rev E 85(1):016103
Tunc I, Shkarayev MS, Shaw LB (2013) Epidemics in adaptive social networks with temporary link deactivation. J Stat Phys 151(1–2):355–366
Valdez LD, Macri PA, Braunstein LA (2013) Temporal percolation of a susceptible adaptive network. Physica A 392(18):4172–4180
Vernon MC, Keeling MJ (2009) Representing the UK's cattle herd as static and dynamic networks. Proc R Soc Lond B Biol Sci 276(1656):469–476
Volz E (2008) SIR dynamics in random networks with heterogeneous connectivity. J Math Biol 56(3):293–310
Volz EM, Miller JC, Galvani A, Meyers LA (2011) Effects of heterogeneous and clustered contact patterns on infectious disease dynamics. PLoS Comput Biol 7(6):e1002042
Zhang C, Zhou S, Miller JC, Cox IJ, Chain BM (2015) Optimizing hybrid spreading in metapopulations. Sci Rep. https://doi.org/10.1038/srep09924
Zhao D, Li L, Peng H, Luo Q, Yang Y (2014) Multiple routes transmitted epidemics on multiplex networks. Phys Lett A 378(10):770–776
Zhuang Y, Yağan O (2016) Information propagation in clustered multilayer networks. IEEE Trans Netw Sci Eng 3(4):211–224
Zhuang Y, Arenas A, Yağan O (2017) Clustering determines the dynamics of complex contagions in multiplex networks. Phys Rev E 95(1):012312
Rosanna C. Barnard acknowledges funding from the Engineering and Physical Sciences Research Council, EP/M506667/1. Joel C. Miller was funded by the Global Good Fund through the Institute for Disease Modeling. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Department of Mathematics, Pevensey III, University of Sussex, Falmer, BN1 9QH, UK
Rosanna C. Barnard & Istvan Z. Kiss
Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, BN1 9QH, UK
Luc Berthouze
Institute for Disease Modeling, Bellevue, WA, USA
Joel C. Miller
Correspondence to Rosanna C. Barnard.
Barnard, R.C., Kiss, I.Z., Berthouze, L. et al. Edge-Based Compartmental Modelling of an SIR Epidemic on a Dual-Layer Static–Dynamic Multiplex Network with Tunable Clustering. Bull Math Biol 80, 2698–2733 (2018). https://doi.org/10.1007/s11538-018-0484-5
Issue Date: October 2018
Keywords: Edge-based compartmental modelling, Multiplexity
Concavity of Parametric Curves
Recall that for a function $f$, we can determine the intervals where $f$ is concave up and concave down by looking at the second derivative of $f$. The same sort of intuition can be applied to a parametric curve $C$ defined by the equations $x = x(t)$ and $y = y(t)$. Recall that the first derivative of the curve $C$ can be calculated by $\frac{dy}{dx} = \frac{dy/dt}{dx/dt}$. If we take the second derivative of $C$, then we can calculate the intervals where $C$ is concave up or concave down.
\begin{align} \frac{d^2y}{dx^2} = \frac{d}{dx} \left ( \frac{dy}{dx} \right) = \frac{\frac{d}{dt} \left (\frac{dy}{dx} \right)}{\frac{dx}{dt}} \end{align}
Now let's look at some examples of calculating the second derivative of parametric curves.
Determine the second derivative of the parametric curve defined by $x = 3t^2 - 2t$ and $y = 4t^3 - 2t$.
Let's first find the first derivative $\frac{dy}{dx}$:
\begin{align} \frac{dy}{dx} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}} = \frac{12t^2 - 2}{6t - 2} \end{align}
And so, we will now calculate the second derivative:
\begin{align} \frac{\frac{d}{dt} \left (\frac{dy}{dx} \right)}{\frac{dx}{dt}} = \frac{\frac{d}{dt} \left (\frac{12t^2 - 2}{6t - 2} \right)}{\frac{dx}{dt}} = \frac{\frac{d}{dt} \left (\frac{12t^2 - 2}{6t - 2} \right)}{6t - 2} \end{align}
We will have to use the quotient rule this time:
\begin{align} \frac{d^2y}{dx^2} = \frac{\frac{(6t - 2)(24t) - (12t^2 - 2)(6)}{(6t - 2)^2}}{6t - 2} = \frac{(6t - 2)(24t) - (12t^2 - 2)(6)}{(6t - 2)^3} \\ \frac{d^2y}{dx^2} = \frac{144t^2 - 48t - 72t^2 + 12}{(6t - 2)^3} \\ \frac{d^2y}{dx^2} = \frac{72t^2 - 48t + 12}{(6t - 2)^3} \end{align}
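As a quick sanity check of this result, one can verify the computation symbolically, for example with sympy:

import sympy as sp

t = sp.symbols('t')
x, y = 3*t**2 - 2*t, 4*t**3 - 2*t
dydx = sp.diff(y, t) / sp.diff(x, t)
d2ydx2 = sp.simplify(sp.diff(dydx, t) / sp.diff(x, t))
print(d2ydx2)  # equivalent to (72*t**2 - 48*t + 12)/(6*t - 2)**3 after simplification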
Let $C$ be a parametric curve defined by the equations $x = t^2 + t$ and $y = 2t^3 - 1$. Find intervals when $C$ is concave up and concave down.
We will first calculate the first derivative, $\frac{dy}{dx}$ as follows:
\begin{align} \frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{6t^2}{2t + 1} \end{align}
Now let's calculate the second derivative by first differentiating $\frac{dy}{dx}$ with respect to $t$:
\begin{align} \frac{d}{dt} \left( \frac{dy}{dx} \right) = \frac{(2t + 1)(12t) - 2(6t^2)}{(2t + 1)^2} \end{align}
Therefore, since $\frac{d^2y}{dx^2} = \frac{\frac{d}{dt}\frac{dy}{dx}}{\frac{dx}{dt}}$, it follows that:
\begin{align} \frac{d^2y}{dx^2} = \frac{\frac{(2t + 1)(12t) - 2(6t^2)}{(2t + 1)^2}}{2t + 1} \\ \frac{d^2y}{dx^2} = \frac{(2t + 1)(12t) - 2(6t^2)}{(2t + 1)^3} \\ \frac{d^2y}{dx^2} = \frac{24t^2 + 12t - 12t^2}{(2t + 1)^3} \\ \frac{d^2y}{dx^2} = \frac{12t(t + 1)}{(2t + 1)^3} \\ \end{align}
We now want to know when the second derivative is positive and when the second derivative is negative. Note that the numerator $12t(t+1)$ is positive when $t < -1$ or $t > 0$ and negative when $-1 < t < 0$. Meanwhile, the denominator $(2t+1)^3$ is positive when $t > -0.5$ and negative when $t < -0.5$. Combining these signs gives the following intervals:
When $-1 < t < -0.5$ or $0 < t$, then $C$ is concave up.
When $t < -1$ or $-0.5 < t < 0$, then $C$ is concave down.
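These intervals are easy to confirm numerically by sampling one test point in each interval and checking the sign of the second derivative:

for t in (-2.0, -0.75, -0.25, 1.0):  # one test point per interval
    val = 12*t*(t + 1) / (2*t + 1)**3
    print(f"t={t}: {'concave up' if val > 0 else 'concave down'}")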
Let $C$ be the parametric curve defined by the equations $x = t^2 + t$ and $y = 2t - 1$. Determine intervals where $C$ is concave up or concave down.
We first calculate the first derivative for $C$, that is:
\begin{align} \frac{dy}{dx} = \frac{2}{2t + 1} \end{align}
Now we must compute the second derivative for $C$. Note that $\frac{d}{dt} \frac{dy}{dx} = \frac{(2t + 1)(0) - 2(2)}{(2t + 1)^2}$ by the quotient rule. Therefore:
\begin{align} \frac{d^2y}{dx^2} = \frac{\frac{d}{dt} \frac{dy}{dx}}{\frac{dx}{dt}} = \frac{-4}{(2t + 1)^2} \cdot \frac{1}{2t + 1} \\ \frac{d^2y}{dx^2} = -\frac{4}{(2t + 1)^3} \end{align}
We note that when $t < -0.5$, the second derivative is positive, while when $t > -0.5$, the second derivative is negative. Hence, $C$ is concave up for $t < -0.5$ and concave down for $t > -0.5$.
A couple of weeks ago, I was reading some lecture notes on game theory and I came across a really neat game.
After discussing the very basics of game theory and decision theory, the author of the lectures gives an exercise which I found really interesting and enjoyable, so much so that I went ahead and gave it as a quiz to my business calculus class.
To my surprise, most of my class got the right answer, which was truly gratifying. The game is really simple, so anybody can understand it, but in my opinion it captures many aspects of real life.
Every student is to write down a real number $x_i$ between 0 and 10 inclusive. After doing so, one computes the mean $\bar{x}$ of all the students' bets, and each student's grade is given by
$10-\left|x_i-\frac{2}{3}\bar{x}\right|$
This might look like a really simple task, and by no means a game at all, but it is a game of strategy and common sense.
As students, we want to maximize our grade, but the grade depends on the average choice of the class, which complicates the analysis of the best strategy for picking $x_i$.
It is not hard to see that the globally best strategy is to pick $x_i=0$: if everybody is a good and logical player, having all bets equal to $0$ gives each student a grade of $10$, the best possible.
So our personal best strategy should be to pick $0$, but in real life not all players are perfectly logical, so at the end of the day the 'best' strategy won't necessarily give us the best outcome possible.
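Just for fun, here is a quick Python sketch (my own, not from the lecture notes) of what happens if every student best-responds to the previous round's mean: the bets collapse to $0$ geometrically.

import numpy as np

rng = np.random.default_rng(0)
bets = rng.uniform(0, 10, size=30)      # a class of 30 students
for round_ in range(8):
    target = (2 / 3) * bets.mean()      # the bet that earns full credit
    grades = 10 - np.abs(bets - target)
    print(f"round {round_}: mean bet {bets.mean():.3f}, mean grade {grades.mean():.2f}")
    bets = np.full_like(bets, target)   # everyone best-responds next round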
In a sense, we can think of this game as rewarding you if you can anticipate how the class thinks on average, and most of the time the average thinking is not precisely the most wise and logical.
By the way the game is set up, it does not reward average thinking itself, but thinking that lands at 2/3 of the average: if your bet equals 2/3 of the class average, you get full credit. This is somehow what happens in real life: usually it is neither the average people nor the ones who play 'perfectly' who get the best outcome, but the people in between.
This gives a really good example that, on most occasions, your outcome depends not only on your own strategy but also on everyone else's, and that making the best decisions does not guarantee your success.
The name 'the game of life' has already been claimed for another game Pedro. Sorry ;-)
September 2014, 19(7): 2013-2026. doi: 10.3934/dcdsb.2014.19.2013
On a generalized Cahn-Hilliard equation with biological applications
Laurence Cherfils 1, Alain Miranville 2 and Sergey Zelik 3
Université de La Rochelle, Laboratoire Mathématiques, Image et Applications, Avenue Michel Crépeau, F-17042 La Rochelle Cedex, France
Université de Poitiers, Laboratoire de Mathématiques et Applications, UMR CNRS 6086 - SP2MI, Boulevard Marie et Pierre Curie - Téléport 2, F-86962 Chasseneuil Futuroscope Cedex
Department of Mathematics, University of Surrey, Guildford, GU2 7XH
Received: March 2013. Revised: May 2013. Published: August 2014.
In this paper, we are interested in the study of the asymptotic behavior of a generalization of the Cahn-Hilliard equation with a proliferation term and endowed with Neumann boundary conditions. Such a model has, in particular, applications in biology. We show that either the average of the local density of cells is bounded, in which case we have a global in time solution, or the solution blows up in finite time. We further prove that the relevant, from a biological point of view, solutions converge to $1$ as time goes to infinity. We finally give some numerical simulations which confirm the theoretical results.
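For readers who want a feel for such simulations, the sketch below solves a one-dimensional caricature of the problem, u_t = (u^3 - u)_xx - eps^2 u_xxxx + lam u(1 - u), with a semi-implicit Fourier spectral scheme on a periodic domain. The equation, potential, parameter values and boundary conditions here are assumptions that differ from those in the paper (which uses Neumann conditions), so this only illustrates the qualitative behaviour: the mean density drifts towards 1, consistent with the convergence result stated above.

import numpy as np

n, L, eps, lam, dt, steps = 256, 2 * np.pi, 0.1, 0.5, 1e-4, 20000
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)       # angular wave numbers
rng = np.random.default_rng(0)
u = 0.7 + 0.05 * rng.standard_normal(n)          # noisy initial cell density

for _ in range(steps):
    nonlin = np.fft.fft(u**3 - u)
    growth = np.fft.fft(lam * u * (1 - u))       # proliferation term
    u_hat = (np.fft.fft(u) + dt * (-(k**2) * nonlin + growth)) \
            / (1 + dt * eps**2 * k**4)           # stiff 4th-order term implicit
    u = np.real(np.fft.ifft(u_hat))

print(f"mean density after t = {steps * dt:.1f}: {u.mean():.3f}")  # drifts towards 1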
Keywords: Cahn-Hilliard equation, convergence to a steady state, blow up, simulations, proliferation term, global existence.
Mathematics Subject Classification: 35B40, 35B44, 35K5.
Citation: Laurence Cherfils, Alain Miranville, Sergey Zelik. On a generalized Cahn-Hilliard equation with biological applications. Discrete & Continuous Dynamical Systems - B, 2014, 19 (7) : 2013-2026. doi: 10.3934/dcdsb.2014.19.2013
Exploration of predictive and prognostic alternative splicing signatures in lung adenocarcinoma using machine learning methods
Qidong Cai1,2,
Boxue He1,2,
Pengfei Zhang1,2,
Zhenyu Zhao1,2,
Xiong Peng1,2,
Yuqian Zhang1,2,
Hui Xie1,2 &
Xiang Wang1,2 (ORCID: orcid.org/0000-0002-5211-7206)
Alternative splicing (AS) plays critical roles in generating protein diversity and complexity. Dysregulation of AS underlies the initiation and progression of tumors. Machine learning approaches have emerged as efficient tools to identify promising biomarkers. It is meaningful to explore pivotal AS events (ASEs) to deepen understanding and improve prognostic assessments of lung adenocarcinoma (LUAD) via machine learning algorithms.
RNA sequencing data and AS data were extracted from The Cancer Genome Atlas (TCGA) database and the TCGA SpliceSeq database. Using several machine learning methods, we identified 24 pairs of LUAD-related ASEs implicated in splicing switches and a random-forest-based classifier for identifying lymph node metastasis (LNM), consisting of 12 ASEs. Furthermore, we identified key prognosis-related ASEs and established a 16-ASE-based prognostic model to predict overall survival for LUAD patients using a Cox regression model, random survival forest analysis, and a forward selection model. Bioinformatics analyses were also applied to identify underlying mechanisms and associated upstream splicing factors (SFs).
Each pair of ASEs was spliced from the same parent gene and exhibited a perfect inverse intrapair correlation (correlation coefficient = − 1). The 12-ASE-based classifier showed a robust ability to evaluate the LNM status of LUAD patients, with an area under the receiver operating characteristic (ROC) curve (AUC) greater than 0.7 in fivefold cross-validation. The prognostic model performed well at 1, 3, 5, and 10 years in both the training cohort and the internal test cohort. Univariate and multivariate Cox regression indicated that the prognostic model could be used as an independent prognostic factor for patients with LUAD. Further analysis revealed correlations between the prognostic model and American Joint Committee on Cancer stage, T stage, N stage, and living status. The splicing network constructed from survival-related SFs and ASEs depicts the regulatory relationships between them.
In summary, our study provides insights into LUAD research and management based on these AS biomarkers.
Lung cancer is the most common and deadliest cancer worldwide, in which non-small cell lung cancer (NSCLC) accounts for 85% of all cases [1, 2]. NSCLC can be mainly classified into lung adenocarcinoma (LUAD), squamous cell carcinoma, and large cell carcinoma, among which LUAD is the major histological subtype. Although scientists and clinicians around the world have been making great efforts in the fight against LUAD, the survival outcome of LUAD is still poor because of the complexity of tumor initiation and progression, with an average 5-year survival rate of 15% [3]. Therefore, intensive study to provide more effective diagnostic and treatment strategies for patients with LUAD is of particular importance.
Findings of the Human Genome Project revealed that the number of human protein-coding genes (fewer than 25,000) is far smaller than earlier estimates derived from the diversity of the human proteome (approximately 100,000 proteins). Further studies revealed that this proteomic diversity may be attributed to post-transcriptional processing at the RNA level. Recent estimates indicate that nearly 95% of human genes undergo alternative splicing (AS), whereby a pre-mRNA can be spliced into several mRNA isoforms with different functions [4]. Apart from increasing protein complexity, AS can also inhibit the translation of mRNA isoforms by introducing a premature stop codon that triggers degradation [5]. The dysregulation of AS is implicated in multiple diseases. A growing body of evidence shows that cancer cells exhibit widespread aberrant splicing [6,7,8]. Many studies have also demonstrated that switching between oncogenic and protective splicing isoforms of certain genes represents a crucial event in cancer [9]. These abnormal AS events (ASEs) contribute to various tumorigenic processes, including cell proliferation, inhibition of cell death, immune escape, and induction of angiogenesis [10, 11]. In addition, uncontrolled expression of splicing factors (SFs) promotes the emergence of numerous AS variants that drive carcinogenesis [12]. Recent studies targeting transcriptomic and epigenetic alterations have identified many molecules as promising diagnostic and therapeutic tumor biomarkers. Likewise, it is meaningful to integratively investigate the expression alterations of ASEs and identify tumor-specific ASEs for LUAD.
Machine learning is a discipline in computer science based on algorithms that parse data, learn from them, and make predictions or decisions on a wide variety of complex issues. The development of machine learning technology and its wide application in biomedical studies provide researchers with powerful tools to extract the most informative markers from large, highly complex datasets [13]. In this study, by applying machine learning algorithms to genome-wide AS data, we explored LUAD-related ASEs implicated in splicing switches, identified optimal AS signatures for determining the lymph node metastasis (LNM) status of patients with LUAD, and built a model to predict the overall survival (OS) of patients with LUAD. The identified oncogenic and protective isoforms derived from the same genes exhibited perfect inverse correlations (correlation coefficients = − 1). The results also indicated that the two signatures have robust predictive capacities. The random forest-based algorithm Boruta was used to evaluate the importance of ASEs for LUAD. Spearman correlation analysis was used to evaluate correlations among important ASEs originating from the same gene. A nested fivefold cross-validation algorithm was then applied to determine the proper number of predictors in the random forest classifiers. A Cox regression model and the random survival forest (RSF) algorithm were used to identify survival-related seed genes, and a forward selection model was developed to identify prognosis-related key genes for model construction. Bioinformatic analyses were also performed to explore correlated pathways, identify upstream SFs, and analyze the correlations between the prognostic model and clinical variables.
Data collection and preprocessing
The mRNA data in fragments per kilobase per million mapped reads format and patients' clinical information of LUAD were retrieved from The Cancer Genome Atlas (TCGA) database (https://portal.gdc.cancer.gov/). AS data of LUAD were downloaded from TCGA SpliceSeq (https://bioinformatics.mdanderson.org/TCGASpliceSeq) [14]. The percent-spliced-in (PSI) value, representing the ratio between reads including or excluding exons, was calculated to describe detected ASEs. According to splicing patterns, all of these ASEs were classified into seven types: exon skip (ES), retained intron (RI), alternate donor site (AD), alternate acceptor site (AA), alternate promoter (AP), alternate terminator (AT), and mutually exclusive exons (ME) (Fig. 1a). Only ASEs available in more than 70% of samples were included in this study. Missing values were imputed using R package impute.
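As a minimal, toy-data sketch of this preprocessing step (the authors' actual code is available in the GitHub repository cited below; all object names here are illustrative), the availability filter and imputation with the impute package might look as follows:

```r
library(impute)  # Bioconductor package providing KNN imputation

set.seed(123)
# Toy PSI matrix: 200 ASEs (rows) x 40 samples, with 5% missing values
psi_raw <- matrix(rbeta(200 * 40, 2, 2), nrow = 200)
psi_raw[sample(length(psi_raw), 0.05 * length(psi_raw))] <- NA
keep <- rowMeans(!is.na(psi_raw)) > 0.7     # keep ASEs available in >70% of samples
psi  <- impute.knn(psi_raw[keep, ])$data    # KNN imputation of the remaining missing PSIs
```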
Illustrations of the seven AS types and UpSet plots. a Illustration of the seven types of AS. b Upset plot displaying the number of ASEs included in the current study in different types of splicing patterns. c Upset plot displaying selected ASEs after preliminary screening. AS alternative splicing, SD standard deviation, ASE alternative splicing event
The name of each ASE consists of three parts: gene symbol, ID number designated in the TCGA SpliceSeq database, and splicing type. For example, the name "CHEK1-19309-AP" indicates that the parent gene of this event is CHEK1, its ID number in the TCGA SpliceSeq database is 19309, and its splicing type is AP.
Machine learning algorithms
Random forests
Random forests is an ensemble learning technique that makes predictions by constructing multiple unpruned decision trees, each of which is built on bootstrap samples of the training data using a subset of randomly selected variables [15]. The tree structure of random forests can be denoted as:
$$\{ h(X,\psi (t));t = 1, \ldots ,T\} .$$
In the formula, X is an input vector and ψ(t) represents the independent trees in the random forest; each tree elects the most popular class for X via a unit vote. The decisions made by all the trees are then aggregated, and the class of X is determined based on the principle of majority voting. This supervised non-parametric machine learning method can help researchers acquire key information from massive, complicated data and resist both overfitting and underfitting [16].
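For illustration, a minimal random forest classifier can be fitted in R (the language used throughout this study) on toy data; all names here are hypothetical:

```r
library(randomForest)

set.seed(123)
# Toy data: 100 samples, 10 features, binary outcome driven by the first feature
x <- as.data.frame(matrix(rnorm(100 * 10), nrow = 100))
y <- factor(ifelse(x[, 1] + rnorm(100) > 0, "pos", "neg"))
rf <- randomForest(x = x, y = y, ntree = 500, importance = TRUE)
rf$err.rate[500, "OOB"]   # out-of-bag error of the full ensemble
head(importance(rf))      # per-feature mean decrease in accuracy/Gini
```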
Boruta
The Boruta algorithm is a random forest-based feature selection method. This algorithm estimates the importance of features and captures the important features in a dataset [17]. Through the following workflow, the algorithm finds all features that have either strong or weak correlations with the outcome variable: (1) Boruta duplicates the given dataset and shuffles the added attributes to increase randomness. The new features are called shadow features. (2) It develops a random forest classifier on the extended dataset and gathers the importance of each feature, measured by Z-scores. The Z-score is computed by dividing the average accuracy loss (mean decrease accuracy) by its standard deviation (SD). The higher the Z-score, the more important the feature. (3) The algorithm then checks whether a real feature has a higher Z-score than the maximum Z-score among the shadow attributes. If not, the real feature is deemed unimportant and removed. Afterward, another iteration begins. (4) These procedures repeat until the importance of all features is assigned or the algorithm reaches the preset limit of runs.
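A minimal sketch of this selection step with the Boruta R package follows; the toy data and the maxRuns setting are illustrative assumptions, not the study's actual inputs:

```r
library(Boruta)

set.seed(123)
# Toy stand-in for the PSI matrix: 100 samples x 20 ASEs, two informative features
x <- as.data.frame(matrix(rnorm(100 * 20), nrow = 100))
y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(100) > 0, "LUAD", "Normal"))
bor <- Boruta(x = x, y = y, maxRuns = 200)               # iterate until features are decided
confirmed <- getSelectedAttributes(bor, withTentative = FALSE)
attStats(bor)[confirmed, c("meanImp", "decision")]       # Z-score summary per confirmed feature
```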
Random survival forests (RSF)
RSF is an extension of the original random forest technique which can be used for survival data [18]. Based on random forests, RSF splits decision trees on a predictor using the splitting criterion. A node of the decision tree is split on the predictor which makes differences across daughter nodes reaching the maximal. In this study, the differences were determined by a log-rank splitting rule. When the tree grows to its terminal node, the cumulative hazard function (CHF) for each node was calculated, which is calculated by the Nelson–Aalen estimator:
$$N_{b,h}(t) = \sum_{t_{l,h} \le t} \frac{d_{l,h}}{I_{l,h}}.$$
In this formula, h, b, and t refer to the terminal node, survival tree, and time, respectively. \(d_{l,h}\) represents the number of deaths, \(I_{l,h}\) represents the patients at risk, and \(t_{l,h}\) represents the distinct event times. The same CHF is assigned to all cases in h. An ensemble CHF is then computed for the survival forest with B trees for a given d-dimensional case \(x_{i}\):
$$H_{e}^{s}(t,x_{i}) = \frac{1}{B} \sum_{b=1}^{B} \sum_{h \in T(b)} H_{b,h}^{s}(t,x_{i})$$
\(H_{b,h}^{s} (t,x_{i} )\) in the above formula is calculated as [19]:
$$H_{b,h}^{s}(t,x_{i}) = \begin{cases} N_{b,h}(t) & x_{i} \in h \\ 0 & \text{otherwise.} \end{cases}$$
Through the above methods, RSF adapts the traditional random forest algorithm to handle survival problems. In the present study, these procedures were carried out with the randomForestSRC R package. Using the Surv and var.select functions of this package, we preliminarily screened out survival-related ASEs.
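A minimal sketch of this screening step with randomForestSRC follows; the toy survival data, the tree number, and the use of minimal-depth selection in var.select are assumptions for illustration:

```r
library(randomForestSRC)
library(survival)

set.seed(123)
# Toy survival data: 100 samples, 10 PSI-like covariates
df <- data.frame(matrix(runif(100 * 10), nrow = 100))
df$time   <- rexp(100, rate = 0.1)
df$status <- rbinom(100, 1, 0.7)
rsf <- rfsrc(Surv(time, status) ~ ., data = df, ntree = 500,
             splitrule = "logrank")       # log-rank splitting rule, as described above
sel <- var.select(object = rsf)           # minimal-depth variable selection
sel$topvars                               # covariates retained as survival-related
```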
Selection models
Cox regression model
The Cox regression model simultaneously analyzes the effects of several variables on survival. Under the proportional hazards condition, this model assumes that the hazard functions of different individuals are proportional and that covariates' effects on individuals are constant. The Cox regression model can be formulated as:
$$h(t,X) = h_{0}(t) \exp\left( \sum_{i=1}^{m} \beta_{i} X_{i} \right).$$
In this formula, \(h_{0}(t)\) is the baseline hazard function, t is a time variable, and \(\beta_{i}\) is a coefficient vector weighing the contribution of feature \(X_{i}\).
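For example, a univariate Cox model for a single ASE can be fitted with the survival package; the toy data and the column name (a PSI value standing in for one ASE) are illustrative:

```r
library(survival)

set.seed(123)
# Toy data: PSI of one ASE plus survival outcome (names illustrative)
df <- data.frame(psi    = runif(200),
                 time   = rexp(200, 0.05),
                 status = rbinom(200, 1, 0.6))
fit <- coxph(Surv(time, status) ~ psi, data = df)
summary(fit)$coefficients   # coefficient, hazard ratio, and p-value for this ASE
```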
Forward selection model
We used a forward selection model to select prognosis-related genes from the survival-related genes. This selection was achieved with the rbsurv R package via the following procedures [20]: (1) The dataset was randomly divided into a training set (3/4 of all samples) and a validation set (1/4 of all samples). A gene was then fitted to the training set and the parameter estimate \(\hat{\beta }_{i}^{0}\) for this gene was obtained. Next, \(\hat{\beta }_{i}^{0}\) and the validation set were used to evaluate the log-likelihood. This process was repeated for each ASE. (2) The above procedures were repeated 100 times, yielding 100 log-likelihoods for each ASE. The ASE with the largest mean log-likelihood was then selected as the best, most survival-associated ASE. We simultaneously selected the next best ASE by repeating the previous procedures and found the optimal two-ASE model with the largest mean log-likelihood. (3) This forward selection continued until further fitting was impossible, resulting in a series of models. The Akaike Information Criterion (AIC) was then calculated to evaluate these models and avoid overfitting. Finally, the model with the minimal AIC was selected as the final model.
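A minimal sketch of this forward selection with rbsurv follows; the toy matrix orientation (ASEs in rows), iteration count, and fold number are assumptions based on the description above:

```r
library(rbsurv)  # Bioconductor package for robust likelihood-based survival modeling

set.seed(123)
# Toy expression matrix: 30 candidate ASEs (rows) x 60 samples (columns)
x <- matrix(rnorm(30 * 60), nrow = 30,
            dimnames = list(paste0("ASE", 1:30), NULL))
time   <- rexp(60, 0.05)
status <- rbinom(60, 1, 0.7)
fit <- rbsurv(time = time, status = status, x = x,
              method = "efron", n.iter = 10, n.fold = 4)  # 3/4 train, 1/4 validate
fit$model   # sequence of nested models with mean log-likelihoods and AICs
```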
Workflow of the current study
Preliminary filtering
The SD reflects the information content of a feature: the greater the SD, the more informative the feature. To filter out less informative ASEs and to reduce the computational burden of subsequent analyses, we analyzed the SDs of all ASEs in the dataset and excluded ASEs with SD < 0.1. ASEs whose mean PSI was ≤ 0.05 were also excluded.
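In R, this filter is a short operation over the imputed PSI matrix; the toy data below only illustrate the thresholds:

```r
set.seed(123)
# Toy PSI matrix: 1000 ASEs (rows) x 50 samples
psi <- matrix(rbeta(1000 * 50, 2, 2), nrow = 1000)
sds   <- apply(psi, 1, sd)
means <- rowMeans(psi)
psi_kept <- psi[sds >= 0.1 & means > 0.05, ]   # drop low-information ASEs
nrow(psi_kept)                                  # number of ASEs retained
```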
Identification of LUAD-related ASEs implicated in splicing switch
First, to circumvent the problem caused by severely imbalanced data (10.3% normal, 89.7% LUAD) in the learning process, we balanced the proportions of normal and LUAD samples by oversampling the normal samples using the ovun.sample function of the ROSE R package. We thus generated augmented data with a balanced class distribution. Second, we applied the Boruta algorithm to select ASEs important for distinguishing between normal and LUAD samples. Third, to further explore ASEs implicated in splicing switches, we separately analyzed the correlations of LUAD-related ASEs derived from the same gene.
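A minimal sketch of the oversampling call follows; the class labels, feature columns, and target size N are assumptions chosen to mirror the 513/513 balance reported in the Results:

```r
library(ROSE)

set.seed(123)
# Toy imbalanced data: 59 "Normal" vs 513 "LUAD", mirroring the proportions above
dat <- data.frame(class = factor(rep(c("Normal", "LUAD"), c(59, 513))),
                  f1 = rnorm(572), f2 = rnorm(572))
balanced <- ovun.sample(class ~ ., data = dat, method = "over",
                        N = 2 * sum(dat$class == "LUAD"))$data
table(balanced$class)   # 513 vs 513 after oversampling the minority class
```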
Construction of classifier for recognizing LNM
Applying the Boruta algorithm, we selected ASEs correlated with the outcome variable (LNM or not). Using these ASEs, we performed nested fivefold cross-validation based on the random forest model. The cross-validation sequentially reduced the number of ASEs (ranked by variable importance from the Boruta analysis), and this process was repeated for five rounds. The mean cross-validation error was calculated, and the classifier with the minimum error rate was chosen. The classifier's classification capacity was evaluated by cross-validation, as sketched below.
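One round of this procedure can be sketched as follows; evaluating top-n feature sets is equivalent to removing the lowest-ranked ASE one at a time, and the toy data and names are hypothetical:

```r
library(randomForest)
library(caret)

set.seed(123)
# Toy data standing in for the 19 Boruta-ranked ASEs
x <- as.data.frame(matrix(rnorm(200 * 19), nrow = 200))
y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(200) > 0, "LNM", "noLNM"))
ranked_ases <- names(x)                    # assume columns already ranked by Z-score
folds <- createFolds(y, k = 5)
errs <- sapply(seq_along(ranked_ases), function(n) {
  keep <- ranked_ases[1:n]                 # top-n ASEs
  mean(sapply(folds, function(idx) {
    rf <- randomForest(x = x[-idx, keep, drop = FALSE], y = y[-idx])
    mean(predict(rf, x[idx, keep, drop = FALSE]) != y[idx])   # fold error rate
  }))
})
which.min(errs)   # number of ASEs with the lowest mean CV error (12 in the paper)
```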
Model construction and functional enrichment analysis
Cox regression is the traditional method for survival analyses with an understandable output, while RSF provides more insight into the relative importance of model covariates [21]. Based on previous results, the combination of these two methods could produce results with higher confidence than either one alone [21]. Therefore, RSF and Cox regression were performed for each ASE, and only ASEs that were survival-related in both methods were selected. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis and Reactome pathway analysis were performed to analyze the functional categories of the parent genes of survival-related ASEs using the Cytoscape (version 3.7.2) plug-in ClueGO (version 2.5.5) [22, 23]. The dataset was further divided into the training set (3/4 of all samples) and the test set (1/4 of all samples). We used the forward selection model to identify prognosis-related ASEs and multivariate Cox regression to construct the prognostic model. The final model was tested in the internal test set. Relationships between the prognostic model and clinicopathological variables were analyzed in the entire set using the Wilcoxon test. P < 0.05 was considered statistically significant.
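A minimal, toy-data sketch of the resulting risk-score construction and median split follows (the real model uses the 16 ASEs selected by the forward selection; all names here are illustrative):

```r
library(survival)

set.seed(123)
# Toy train/test split with three "key ASEs" (a1..a3) standing in for the 16-ASE signature
n <- 300
dat <- data.frame(a1 = runif(n), a2 = runif(n), a3 = runif(n),
                  time = rexp(n, 0.05), status = rbinom(n, 1, 0.6))
train <- dat[1:225, ]; test <- dat[226:n, ]
fit <- coxph(Surv(time, status) ~ a1 + a2 + a3, data = train)
test$risk  <- predict(fit, newdata = test, type = "lp")   # linear predictor = risk score
cutoff     <- median(predict(fit, type = "lp"))           # median score of the training set
test$group <- ifelse(test$risk > cutoff, "high", "low")
survdiff(Surv(time, status) ~ group, data = test)         # log-rank test between risk groups
```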
Splicing network analysis
A total of 390 SFs were retrieved from the SpliceAid 2 database (http://193.206.120.249/splicing_tissue.html) [24]. Univariate Cox analysis was used to identify survival-related SFs, with P < 0.01 considered significant. Spearman correlation analysis was conducted to evaluate the correlations between survival-related SFs and ASEs; the criterion for selecting correlated variables was P < 0.01 and |coefficient| > 0.2. Finally, their correlations were visualized via Cytoscape.
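For a single ASE, this SF screening can be sketched as below; the toy matrices only illustrate the thresholds:

```r
set.seed(123)
# Toy data: expression of 18 SFs (rows) across 380 samples, and one ASE's PSI vector
sf_expr <- matrix(rnorm(18 * 380), nrow = 18,
                  dimnames = list(paste0("SF", 1:18), NULL))
ase_psi <- rbeta(380, 2, 2)
res <- t(apply(sf_expr, 1, function(sf) {
  ct <- cor.test(sf, ase_psi, method = "spearman")
  c(rho = unname(ct$estimate), p = ct$p.value)
}))
res[abs(res[, "rho"]) > 0.2 & res[, "p"] < 0.01, , drop = FALSE]  # retained network edges
```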
All statistical analyses were conducted using R software (version 3.6.3). The R packages Boruta, randomForest, randomForestSRC, caret, survival, rbsurv, pROC, timeROC, pheatmap, and ggplot2 were used in this study for data analysis or plotting. All codes are available on GitHub (GitHub, Inc., San Francisco, California) at https://github.com/cqd1308/JTM-Rscript-ASsignatures. Cytoscape software was applied to conduct functional and pathway enrichment analysis and to plot the network graph.
Preparation of datasets
The flow chart presenting the overall analysis process of the current study is shown in Fig. 2. After data preprocessing, 572 samples (59 normal, 513 LUAD) of the LUAD-AS dataset were included in the analyses for distinguishing between normal and LUAD samples. 502 samples (330 LNM negative, 172 LNM positive) of the LUAD-AS dataset had available N stage data and were included for the recognition of LNM. We removed samples with unavailable survival data or follow-up shorter than 30 days, after which 408 samples remained for the prediction of OS. These 408 samples were randomly split into a training set and a test set consisting of 306 and 102 samples for model construction and evaluation, respectively. Besides, 380 overlapping samples in the LUAD-mRNA dataset and the LUAD-AS dataset were used in the subsequent regulatory SF analysis. Detailed clinical information is shown in Additional file 1: Table S1 to Additional file 4: Table S4.
Flow chart of this study
Preliminary screening
A total of 43,948 ASEs corresponding to 10,367 genes were included in this study, among which ES accounted for the highest proportion (27.2%) and ME the lowest (only 0.9%). A single gene could have at most seven splicing types (Fig. 1b). To exclude less discriminative features, we filtered ASEs according to the criteria described in the Methods section, retaining 10,951 ASEs (Fig. 1c) spliced from 4915 parent genes.
LUAD-related ASEs implicated in splicing switch
Using the oversampled dataset consisting of 513 normal and 513 LUAD samples, 506 ASEs from 360 parent genes were confirmed as important features for differentiating between LUAD and normal samples by Boruta feature selection (Fig. 3a); detailed data are summarized in Additional file 5: Table S5. Subsequently, we picked out and analyzed the correlations of ASEs derived from the same genes. Interestingly, as shown in Fig. 3b, the intrapair correlation coefficients of 24 pairs of ASEs spliced from the same parent genes were − 1, indicating perfect negative correlations. This also suggests that the splicing pattern shifts of these genes contributed significantly to the pro-oncogenic or anti-oncogenic transition.
Identification of ASEs associated with splicing switches between normal and LUAD samples. a Heat map showing the PSI levels of important ASEs for differentiating between normal and LUAD samples after Boruta selection. b Heat map demonstrating PSI levels of the 24 pairs of ASEs implicated in splicing switches of LUAD development. Each pair of ASEs was perfectly inversely expressed, and the expression of these ASEs was distinct in normal and LUAD tissues
Classifier for LNM
Using Boruta feature selection, 19 ASEs spliced from 19 different genes were confirmed as important features for LNM (Additional file 6: Table S6). The Z-scores of the top 30 ASEs and of 20 other randomly selected ASEs are shown in Fig. 4a. The 502 samples were randomly assigned to the training set or the test set by fivefold cross-validation. The 19 ASEs were removed from the random forest classifier one by one, from lower to higher Z-score, and each time an ASE was removed, the classification performance was re-evaluated with fivefold cross-validation. The cross-validation procedure was repeated for five rounds, and the average cross-validation error was calculated. The average error rate for each number of retained ASEs is shown in Fig. 4b; the best classification performance was achieved with 12 ASEs. The 12 ASEs were: THUMPD2-53337-ES, LMBR1L-21525-ES, BEAN1-36708-AT, TRMT10B-86427-AA, VWA5A-19215-RI, ELMO2-59676-AP, SH3BP2-68592-AP, FAM222B-39979-AP, AIFM1-90068-AP, DNASE1L1-90573-AP, ZNF695-10502-AT, and OBFC1-13029-AP. The PSI values of the retained features are shown in Fig. 4c. ROC analysis (Fig. 4d) revealed that the AUC values of the 12-ASE-based classifier were all greater than 0.7 in fivefold cross-validation, indicating the robust sensitivity and specificity of this classifier for the recognition of LNM.
LNM classifier construction and the efficiency of the 12-ASE-based classifier. a Z-scores of the top 30 important ASEs and 20 randomly picked ASEs using the Boruta algorithm. b The mean cross-validation error of the five-round fivefold cross-validation for different numbers of ASEs. c The heat map showing PSI levels of ASEs in the LNM classifier. The data were normalized using the R function scale. d ROC curves for the fivefold cross-validation of the classifier to identify LNM statuses of LUAD patients. LNM lymph node metastasis. Important ASEs, the ASEs confirmed as important features for the identification of LNM for LUAD patients by the Boruta algorithm. Top 30 ASEs (rejected), the ASEs that had the top 30 Z-scores but were rejected as unimportant features by the Boruta algorithm for the identification of LNM for LUAD patients
Identification of survival-related ASEs and functional annotation
The Cox regression and RSF methods identified 1439 and 544 survival-related ASEs, respectively (Additional file 7: Table S7 and Additional file 8: Table S8); 99 ASEs were survival-related in both analyses (Fig. 5a). To explore the underlying mechanisms of these survival-related ASEs, their 85 parent genes (Fig. 5b) were used for KEGG and Reactome pathway analyses. The enrichment results indicated that pathways including "Transcriptional Regulation by TP53", "Cell Cycle Checkpoints", "Generic Transcription Pathway", "Degradation of the extracellular matrix", and "Extracellular matrix organization" were significantly enriched (Fig. 5c).
Identification of survival-related ASEs and pathway enrichment analyses. a Venn diagram summarizing survival-related ASEs identified by Cox regression and random survival forests. b Upset plot displaying overlapping ASEs between two methods. c Pathway analyses of genes associated with OS-related splicing events
Prognostic model for LUAD
Baseline characteristics of the training set and the internal test set are shown in Table 1; no statistically significant differences in clinical features existed between the two sets. The 99 survival-associated ASEs were then introduced into the forward selection model using the R package rbsurv. Afterward, 16 key prognosis-related ASEs were selected, and a prognostic risk score model for LUAD was established using multivariate Cox regression (Additional file 9: Table S9). Choosing the median risk score of the training set as the cut-off, samples were divided into a high-risk group and a low-risk group (Fig. 6a). As shown in Fig. 6b, patients in the high-risk group had higher mortality than those in the low-risk group. The heat map shows the PSI levels of the 16 ASEs involved in the prognostic model (Fig. 6c), and the Kaplan–Meier curves show a clear distinction between the two risk groups (P < 0.001) (Fig. 6d). The AUC of this model at 1, 3, 5, and 10 years in the training set was 0.753, 0.775, 0.832, and 0.867, respectively (Fig. 6e). Samples in the internal test set were also divided into high-risk and low-risk groups according to the median risk score of the training set. The risk plot, the scatter diagram showing OS, and the heat map reflecting the PSIs of key ASEs in the test set are shown in Fig. 6f–h. The Kaplan–Meier plot also showed a highly significant difference (P < 0.001) between the high-risk and low-risk groups in the test set (Fig. 6i). ROC analysis revealed the robust predictive capacity of the 16-ASE-based model, with AUCs at 1, 3, 5, and 10 years of 0.766, 0.812, 0.800, and 0.800, respectively (Fig. 6j).
Table 1 Baseline characteristics of the training set and the internal test set
Prognostic model construction and efficiency assessment. a, b Visualization of the risk score and survival for each patient in the training set. c The heat map comparing the PSI levels of the 16-ASE signature in the high-risk and the low-risk group of the training set. d Kaplan–Meier survival curve for patients in the high-risk and the low-risk group of the training set. e Time-dependent ROC curves for LUAD patients in the training set. f, g Visualization of the risk score and survival for each patient in the test set. h The heat map comparing the PSI levels of the 16-ASE signature in the high-risk and the low-risk group of the test set. i Kaplan–Meier survival curve for patients in the high-risk and the low-risk group of the test set. j Time-dependent ROC curves for LUAD patients in the test set
Further analysis of the risk score model indicated correlations between the 16-ASE prognostic model and clinical variables, including American Joint Committee on Cancer (AJCC) stage (P < 0.01), T stage (P < 0.05), N stage (P < 0.05), and vital status (P < 0.001). The correlations between the risk score and AJCC stage, T stage, N stage, M stage, vital status, smoking history, gender, and age are shown in Fig. 7a–h.
Relationships between clinical features and the risk model. The distribution of risk scores of LUAD patients in different clinical groups. LUAD patients were assigned to different groups according to clinical risk factors. a AJCC stage, b T stage, c N stage, d M stage, e vital status, f smoking history, g gender, h age
The prognostic model and clinical variables including age, gender, smoking history, AJCC stage, T stage, N stage, and M stage were entered into univariate and multivariate Cox regression analyses. In the univariate analysis, AJCC stage, T stage, N stage, and the risk score model were associated with adverse clinical outcomes (Fig. 8a). Distant metastasis, a widely recognized predictor of poor OS, was not correlated with OS in this analysis, which may be due to the small number of M1-stage samples (N = 20) in this dataset. In the multivariate analysis, only AJCC stage and the risk score model were associated with adverse clinical outcomes for LUAD patients, indicating their roles as independent prognostic factors (Fig. 8b). A nomogram was then plotted for clinical application (Fig. 8c).
Forest plots and the nomogram for the prognosis of LUAD patients. a The forest plot of univariate Cox regression analysis evaluating prognostic effects of clinical features and the risk model for LUAD patients. b The forest plot of multivariate Cox regression analysis evaluating prognostic effects of clinical features and the risk model for LUAD patients. c The nomogram predicting the overall survival probability of patients with LUAD
Construction of splicing network
We conducted univariate Cox analysis of the 390 SFs and found that 18 SFs had significant effects (P < 0.01) on the OS of LUAD patients. The Spearman test was used to identify correlations between survival-related SFs and ASEs, and a correlation network was established using Cytoscape software (Fig. 9a). The network contains 18 SFs (triangles), 35 protective ASEs (green circles), and 25 risk ASEs (red circles). The proportions of positive regulation (red lines) and negative regulation (green lines) effects were similar in the splicing network. Among these SFs, CIRBP and LUC7L regulated the most ASEs (38 and 29 ASEs, respectively), and CHEK1-19309-AP was correlated with the most SFs (15 SFs). Figure 9b, c shows the correlations between these most representative ASEs and SFs.
Correlation analysis between splicing factors and ASEs in the LUAD cohort. a The splicing network for splicing factors and ASEs. Yellow nodes indicate splicing factors, red nodes indicate poor survival associated ASEs, and green nodes represent good survival associated ASEs; Red lines represent positive correlations, and green lines represent negative correlations. b The correlation between PSI values of CHEK1-19309-AP and the expression of CIRBP. c The correlation between PSI values of CHEK1-19309-AP and the expression of LUC7L
Due to the heterogeneity and complexity of cancers, detecting, monitoring, and managing cancers are difficult for clinicians. With the deepening of scientific research, scientists have unraveled more and more molecular characteristics of cancer initiation and progression. In recent years, many promising biomarkers for the diagnosis and prognosis of LUAD have been identified. For example, CAV1 and DCN play critical roles in LUAD cell proliferation inhibition and progression regulation [25], and the long noncoding RNA DGCR5 is an anti-apoptosis marker for LUAD that can promote LUAD progression [26]. Because of the sophisticated mechanisms behind LUAD, a single biomarker may be effective in only a proportion of patients. Therefore, many diagnostic or prognostic panels based on various types of biomarkers have been proposed to make prediction more applicable and more effective [27,28,29]. However, most of these studies were restricted to the transcriptome level, utilizing mRNAs, long non-coding RNAs, or microRNAs for the construction of predictive models.
In the last decades, abnormal AS and the presence of specific ASEs have been identified as driver factors of cancers by many studies [30, 31]. For example, the splicing of BCL2L1 pre-mRNA generates two isoforms: the anti-apoptotic isoform Bcl-XL and the pro-apoptotic isoform Bcl-XS. Shifts of BCL2L1's splicing patterns between these two isoforms can influence the apoptosis of LUAD cells, resulting in the progression or suppression of LUAD [32]. Based on this mechanism, researchers used antisense oligonucleotides to push the splicing of BCL2L1 pre-mRNA towards its pro-apoptotic isoform Bcl-XS, which promoted the apoptosis of LUAD A549 cells in vitro [32]. Besides, a previous study reported that the mRNA ratio of Lamin C to Lamin A was increased in all clinical stages of breast cancer and that the splicing switch of Lamin A/C alternative splice variants may be of diagnostic use [33]. Apart from participating in chemoresistance pathways, AS can also influence the efficacy of chemotherapeutic agents through the aberrant splicing of molecular targets [34]. In addition to single AS biomarkers, Li et al. and Zhao et al. identified prognostic models for NSCLC using data mining techniques [12, 35]. These studies indicated that specific ASEs could be useful tools for the prediction and treatment of LUAD.
However, ASEs implicated in the splicing pattern shifts of LUAD and splicing models determining LNM status have rarely been explored. The main methods we used in this study for feature selection and classifier construction are based on random forests. Random forests take advantage of two machine learning methods: bagging and random feature selection. One of the most significant advantages of random forest approaches is the accuracy gained from the random split of the whole dataset into a training set and a validation set, which contributes to the removal of outliers and noise and results in superior performance over other methods [36]. Based on random forests, Boruta can reduce the influence of random fluctuations and correlations by adding randomness to the dataset and can identify features that are genuinely important to the outcome [17]. Besides, another adaptation of random forests, the RSF model, provides researchers with a method to handle right-censored survival data using decision trees [37]. For these reasons, there is growing interest in the application of random forest algorithms in bioinformatics. To our knowledge, no previous study has used random forest or other machine learning methods to identify AS signatures in LUAD. Here, by applying several machine learning methods, we integratively analyzed the AS data of LUAD patients and identified a series of AS biomarkers.
In this study, we identified 24 pairs of contrarily expressed ASEs participating in the transitions between risky and protective isoforms for LUAD. Analogous to the splicing of BCL2L1 and Lamin A/C mentioned above, these biomarkers may have similar therapeutic or diagnostic value for LUAD. Besides, our results indicate that shifts in the splicing patterns of QKI, a well-known AS regulator, are also strongly correlated with the development of LUAD [38]. A skewed class distribution may compromise the results of data mining, so we utilized a data resampling technique to obtain data with a balanced class distribution [39]. Because the further analysis focused on the correlations between ASEs, which do not depend on the distribution of normal and LUAD samples, we used the original imbalanced data for the correlation analysis.
The prognosis of LUAD is significantly correlated with LNM status. Previous data showed that the 5-year OS of LUAD patients with LNM was 26–35%, whereas that of LUAD patients without LNM was more than 95% [40]. The 12-ASE-based classifier for LNM showed high sensitivity and specificity in fivefold cross-validation, with AUC values over 0.7 in all folds.
We also constructed a prognostic model using 16 ASEs. We first selected survival-related ASEs by combining Cox regression and RSF, whose joint results could be more reliable than those of a single method [21]. The final list of genes for the prognostic model was then selected by the forward selection model using the R package rbsurv. Based on robust likelihood, this algorithm is widely used for survival model construction [41,42,43], utilizing the classical forward selection method to generate a series of models and select an optimal one. Compared with other survival analyses such as artificial neural network-based or deep learning-based survival models, this algorithm is straightforward and user-friendly in the R programming environment. Although the least absolute shrinkage and selection operator (LASSO) Cox regression model is also a popular and automated method for constructing survival models, the robust partial likelihood-based Cox regression model employed in this study could not only help establish a robust predictive model but also intuitively provide the relative importance of each ASE for survival by calculating the mean log-likelihood. This prognostic model was further validated in the internal test set, and the AUC at 1, 3, 5, and 10 years was 0.766, 0.812, 0.800, and 0.800, respectively, showing robust predictive capacity. Further study revealed correlations between the risk score model and AJCC stage, T stage, N stage, and vital status. These clinical parameters are all OS-relevant, and the prognostic model was an independent risk factor. The TNM stage is widely used to evaluate the prognosis of LUAD patients; however, the limited set of risk factors in this system makes it impossible to precisely predict the OS of LUAD patients. Therefore, we built the nomogram shown in Fig. 8c to aid clinical prediction.
The splicing network built in this study showed the importance of CIRBP and LUC7L as AS regulators. As a stabilizing RNA-binding protein, CIRBP regulates multiple cancers by stabilizing specific mRNAs that are translated into cancer-associated proteins and by modulating inflammation [44]. A recent study also proved its anticancer role in NSCLC [45]. LUC7L is rarely studied; it encodes a putative RNA-binding protein and contributes to the metastasis of breast cancer [46, 47]. The ASE of CHEK1 showed the most correlations with SFs. CHEK1 encodes cell cycle checkpoint kinase 1, a key kinase in the DNA damage response that participates in cell cycle regulation [48]. Evidence indicates that CHEK1 may be implicated in multiple cancers, including NSCLC, breast cancer, and ovarian cancer [49,50,51]. Our findings suggest that its function in cancer progression could be strongly influenced by AS. In addition, the splicing patterns of most of the biomarkers in the current study are AP, AT, and ES, suggesting these are the main splicing patterns in LUAD initiation and development.
The limitations of our study should also be mentioned. First, another AS dataset for external validation was lacking. Second, the concrete molecular mechanisms of these biomarkers remain unknown owing to the lack of in vitro and in vivo experiments. In future studies, we will perform in-depth work to validate our current findings.
In conclusion, we identified 24 pairs of splicing isoforms strongly correlated with the splicing shifts of LUAD and established two useful AS models to identify LNM and predict OS for LUAD patients. Our findings highlight the importance of AS for LUAD. Biomarkers identified in the present study may provide a new strategy for the diagnosis and treatment of LUAD.
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
TCGA: The Cancer Genome Atlas
ASEs: Alternative splicing events
LNM: Lymph node metastasis
SFs: Splicing factors
NSCLC: Non-small cell lung cancer
RSF: Random survival forest
ES: Exon skip
RI: Retained intron
AD: Alternate donor
AA: Alternate acceptor
AP: Alternate promoter
AT: Alternate terminator
ME: Mutually exclusive exon
CHF: Cumulative hazard function
AIC: Akaike Information Criterion
Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68(6):394–424. https://doi.org/10.3322/caac.21492.
Duma N, Santana-Davila R, Molina JR. Non-small cell lung cancer: epidemiology, screening, diagnosis, and treatment. Mayo Clin Proc. 2019;94(8):1623–40. https://doi.org/10.1016/j.mayocp.2019.01.013.
Chen H, Carrot-Zhang J, Zhao Y, et al. Genomic and immune profiling of pre-invasive lung adenocarcinoma. Nat Commun. 2019;10(1):1–6. https://doi.org/10.1038/s41467-019-13460-3.
Song AX, Zeng Z, Wei H. Alternative splicing in cancers: from aberrant regulation to new therapeutics. Semin Cell Dev Biol. 2017. https://doi.org/10.1016/j.semcdb.2017.09.018.
Ge Y, Porse BT. The functional consequences of intron retention: alternative splicing coupled to NMD as a regulator of gene expression. BioEssays. 2014;36(3):236–43. https://doi.org/10.1002/bies.201300156.
Zhou LT, Ye SH, Yang HX, et al. A novel role of fragile X mental retardation protein in pre-mRNA alternative splicing through RNA-binding protein 14. Neuroscience. 2017;349:64–75. https://doi.org/10.1016/j.neuroscience.2017.02.044.
Yin J, Luo W, Zeng X, et al. UXT-AS1-induced alternative splicing of UXT is associated with tumor progression in colorectal cancer. Am J Cancer Res. 2017;7(3):462–72.
Kozlovski I, Siegfried Z, Amar-Schwartz A, Karni R. The role of RNA alternative splicing in regulating cancer metabolism. Hum Genet. 2017;136(9):1113–27. https://doi.org/10.1007/s00439-017-1803-x.
Wang BD, Lee NH. Aberrant RNA splicing in cancer and drug resistance. Cancers (Basel). 2018. https://doi.org/10.3390/cancers10110458.
Siegfried Z, Karni R. The role of alternative splicing in cancer drug resistance. Curr Opin Genet Dev. 2018;48:16–21. https://doi.org/10.1016/j.gde.2017.10.001.
Climente-González H, Porta-Pardo E, Godzik A, Eyras E. The functional impact of alternative splicing in cancer. Cell Rep. 2017;20(9):2215–26. https://doi.org/10.1016/j.celrep.2017.08.012.
Li Y, Sun N, Lu Z, et al. Prognostic alternative mRNA splicing signature in non-small cell lung cancer. Cancer Lett. 2017;393(February):40–51. https://doi.org/10.1016/j.canlet.2017.02.016.
Camacho DM, Collins KM, Powers RK, Costello JC, Collins JJ. Next-generation machine learning for biological networks. Cell. 2018;173(7):1581–92. https://doi.org/10.1016/j.cell.2018.05.015.
Ryan M, Wong WC, Brown R, et al. TCGASpliceSeq a compendium of alternative mRNA splicing in cancer. Nucleic Acids Res. 2016;44(D1):D1018–D1022. https://doi.org/10.1093/nar/gkv1288
Breiman L. Random forests. Mach Learn. 2001;45:5–32. https://doi.org/10.1201/9780367816377-11.
Deng M, Yu R, Wang L, Shi F, Yap PT, Shen D. Learning-based 3T brain MRI segmentation with guidance from 7T MRI labeling. Med Phys. 2016;43(12):6588–97. https://doi.org/10.1118/1.4967487.
Kursa MB, Rudnicki WR. Feature selection with the Boruta package. J Stat Softw. 2010;36(11):1–13. https://doi.org/10.18637/jss.v036.i11.
Ishwaran H, Kogalur UB, Blackstone EH, Lauer MS. Random survival forests. Ann Appl Stat. 2008;2(3):841–60. https://doi.org/10.1214/08-AOAS169.
Ruyssinck J, Van Der Herten J, Houthooft R, et al. Random survival forests for predicting the bed occupancy in the intensive care unit. Comput Math Methods Med. 2016. https://doi.org/10.1155/2016/7087053.
HyungJun C, Ami Y, Sukwoo K, Jaewoo K, Seung-Mo H. Robust likelihood-based survival modeling with microarray data. J Stat Softw. 2009;29(1):1–16. https://doi.org/10.1002/wics.10.
Datema FR, Moya A, Krause P, et al. Novel head and neck cancer survival analysis approach: random survival forests versus cox proportional hazards regression. Head Neck. 2012;34(1):50–8. https://doi.org/10.1002/HED.
Shannon P, Markiel A, Ozier O, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13(11):2498–504.
Bindea G, Mlecnik B, Hackl H, et al. ClueGO: a Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks. Bioinformatics. 2009;25(8):1091–3. https://doi.org/10.1093/bioinformatics/btp101.
Piva F, Giulietti M, Burini AB, Principato G. SpliceAid 2: a database of human splicing factors expression data and RNA target motifs. Hum Mutat. 2012;33(1):81–5.
Yan Y, Xu Z, Qian L, et al. Identification of CAV1 and DCN as potential predictive biomarkers for lung adenocarcinoma. Am J Physiol Lung Cell Mol Physiol. 2019;316(4):L630–43. https://doi.org/10.1152/ajplung.00364.2018.
Dong HX, Wang R, Jin XY, Zeng J, Pan J. LncRNA DGCR5 promotes lung adenocarcinoma (LUAD) progression via inhibiting hsa-mir-22-3p. J Cell Physiol. 2018;233(5):4126–36. https://doi.org/10.1002/jcp.26215.
Haghjoo N, Moeini A, Masoudi-Nejad A. Introducing a panel for early detection of lung adenocarcinoma by using data integration of genomics, epigenomics, transcriptomics and proteomics. Exp Mol Pathol. 2020;112:104360. https://doi.org/10.1016/j.yexmp.2019.104360.
Ma B, Geng Y, Meng F, Yan G, Song F. Identification of a sixteen-gene prognostic biomarker for lung adenocarcinoma using a machine learning method. J Cancer. 2020;11(5):1288–98. https://doi.org/10.7150/jca.34585.
Wang Y, Deng H, Xin S, Zhang K, Shi R, Bao X. Prognostic and predictive value of three DNA methylation signatures in lung adenocarcinoma. Front Genet. 2019;10(APR):1–13. https://doi.org/10.3389/fgene.2019.00349.
Kim E, Goren A, Ast G. Insights into the connection between cancer and alternative splicing. Trends Genet. 2008;24(1):7–10. https://doi.org/10.1016/j.tig.2007.10.004.
Oltean S, Bates DO. Hallmarks of alternative splicing in cancer. Oncogene. 2014;33(46):5311–8. https://doi.org/10.1038/onc.2013.533.
Taylor JK, Zhang QQ, Wyatt JR, Dean NM. Induction of endogenous Bcl-xS through the control of Bcl-x pre-mRNA splicing by antisense oligonucleotides. Nat Biotechnol. 1999;17(11):1097–100. https://doi.org/10.1038/15079.
Aljada A, Doria J, Saleh AM, et al. Altered Lamin A/C splice variant expression as a possible diagnostic marker in breast cancer. Cell Oncol. 2016;39(2):161–74. https://doi.org/10.1007/s13402-015-0265-1.
de Figueiredo-Pontes LL, Wong DW, et al. Identification and characterization of ALK kinase splicing isoforms in non-small-cell lung cancer. J Thorac Oncol. 2014;9(2):248–53.
Zhao D, Zhang C, Jiang M, et al. Survival-associated alternative splicing signatures in non-small cell lung cancer. Aging (Albany NY). 2020;12(7):5878–93. https://doi.org/10.18632/aging.102983.
Yang WJ, Wang HB, Da Wang W, et al. A network-based predictive gene expression signature for recurrence risks in stage II colorectal cancer. Cancer Med. 2020;9(1):179–93. https://doi.org/10.1002/cam4.2642.
Ishwaran H, Kogalur UB. Random survival forests for R. R News. 2007;7(2):25–31.
de Miguel FJ, Pajares MJ, Martínez-Terroba E, et al. A large-scale analysis of alternative splicing reveals a key role of QKI in lung cancer. Mol Oncol. 2016;10(9):1437–49. https://doi.org/10.1016/j.molonc.2016.08.001.
Menardi G, Torelli N. Training and assessing classification rules with imbalanced data. Data Min Knowl Disc. 2014;28(1):92–122. https://doi.org/10.1007/s10618-012-0295-5.
Cen S, Fu K, Shi Y, et al. A microRNA disease signature associated with lymph node metastasis of lung adenocarcinoma. Math Biosci Eng. 2020;17(3):2557–68. https://doi.org/10.3934/mbe.2020140.
Wang H, Wu X, Chen Y. Stromal-immune score-based gene signature: a prognosis stratification tool in gastric cancer. Front Oncol. 2019;9(November):1–14. https://doi.org/10.3389/fonc.2019.01212.
Jin P, Tan Y, Zhang W, Li J, Wang K. Prognostic alternative mRNA splicing signatures and associated splicing factors in acute myeloid leukemia. Neoplasia (US). 2020;22(9):447–57. https://doi.org/10.1016/j.neo.2020.06.004.
Mao S, Li Y, Lu Z, et al. Systematic profiling of immune signatures identifies prognostic predictors in lung adenocarcinoma. Cell Oncol. 2020;43(4):681–94. https://doi.org/10.1007/s13402-020-00515-7.
Lujan DA, Ochoa JL, Hartley RS. Cold-inducible RNA binding protein in cancer and inflammation. Wiley Interdiscip Rev RNA. 2018;9(2):1–10. https://doi.org/10.1002/wrna.1462.
He R, Zuo S. A robust 8-gene prognostic signature for early-stage non-small cell lung cancer. Front Oncol. 2019;9(July):1–14. https://doi.org/10.3389/fonc.2019.00693.
Crawford NPS, Walker RC, Lukes L, Officewala JS, Williams RW, Hunter KW. The Diasporin Pathway: a tumor progression-related transcriptional network that predicts breast cancer survival. Clin Exp Metastasis. 2008;25(4):357–69. https://doi.org/10.1007/s10585-008-9146-6.
Tufarelli C, Hardison R, Miller W, et al. Comparative analysis of the α-like globin clusters in mouse, rat, and human chromosomes indicates a mechanism underlying breaks in conserved synteny. Genome Res. 2004;14(4):623–30. https://doi.org/10.1101/gr.2143604.
Cole KA, Huggins J, Laquaglia M, et al. RNAi screen of the protein kinome identifies checkpoint kinase 1 (CHK1) as a therapeutic target in neuroblastoma. Proc Natl Acad Sci USA. 2011;108(8):3336–3341. https://doi.org/10.1073/pnas.1012351108.
Ebili HO, Iyawe VO, Adeleke KR, et al. Checkpoint kinase 1 expression predicts poor prognosis in Nigerian breast cancer patients. Mol Diagn Ther. 2018;22(1):79–90. https://doi.org/10.1007/s40291-017-0302-z.
Alcaraz-Sanabria A, Nieto-Jiménez C, Corrales-Sánchez V, et al. Synthetic lethality interaction between aurora kinases and CHEK1 inhibitors in ovarian cancer. Mol Cancer Ther. 2017;16(11):2552–2562. https://doi.org/10.1158/1535-7163.MCT-17-0223.
Wang L, Qu J, Liang Y, et al. Identification and validation of key genes with prognostic value in non-small-cell lung cancer via integrated bioinformatics analysis. Thorac Cancer. 2020;11(4):851–66. https://doi.org/10.1111/1759-7714.13298.
The authors thank Professor Yongguang Tao for helpful comments and suggestions.
This work was supported by the National Natural Science Foundation of China (81672308, X. Wang) and the Hunan Provincial Key Area R&D Programmes (2019SK2253, X. Wang).
Department of Thoracic Surgery, The Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, China
Qidong Cai, Boxue He, Pengfei Zhang, Zhenyu Zhao, Xiong Peng, Yuqian Zhang, Hui Xie & Xiang Wang
Hunan Key Laboratory of Early Diagnosis and Precision Therapy, Department of Thoracic Surgery, The Second Xiangya Hospital, Central South University, Changsha, 410011, China
XW conceived and designed the work. QDC and PFZ carried out software coding and data analysis. XP and YQZ formatted the tables and figures. QDC, BXH, and PFZ wrote the manuscript. HX and ZYZ critically reviewed the codes and the manuscript. All authors read and approved the final manuscript.
Correspondence to Xiang Wang.
The research did not involve animal experiments or human specimens, so no ethics-related issues were involved.
Additional file 1: Table S1. All samples included in this study.
Additional file 2: Table S2. Samples used for differentiating normal and tumor tissues.
Additional file 3: Table S3. Samples for recognizing lymph node metastasis (LNM).
Additional file 4: Table S4. Information of samples included in the construction of the prognostic model.
Additional file 5: Table S5. Feature importances and selection results from the Boruta algorithm for the splicing events differentiating normal and tumor tissues.
Additional file 6: Table S6. Feature importances and selection results from the Boruta algorithm for the classifier recognizing lymph node metastasis (LNM).
Additional file 7: Table S7. Univariate Cox regression results.
Additional file 8: Table S8. Survival-related alternative splicing events selected by the random survival forest model.
Additional file 9: Table S9. Alternative splicing events and their coefficients in the prognostic model.
Cai, Q., He, B., Zhang, P. et al. Exploration of predictive and prognostic alternative splicing signatures in lung adenocarcinoma using machine learning methods. J Transl Med 18, 463 (2020). https://doi.org/10.1186/s12967-020-02635-y
Splicing switch
Medical bioinformatics
Discussion on machine learning technology to predict tacrolimus blood concentration in patients with nephrotic syndrome and membranous nephropathy in real-world settings
Weijia Yuan1 na1,
Lin Sui2 na1,
Haili Xin2,
Minchao Liu1 &
Huayu Shi1
Given its narrow therapeutic window, high toxicity, adverse effects, and individual differences in its use, we collected and sorted data on tacrolimus use by real-world patients with kidney diseases. We then used machine learning technology to predict tacrolimus blood concentration in order to provide a basis for tacrolimus dose adjustment and ensure patient safety.
This study involved 913 hospitalized patients with nephrotic syndrome and membranous nephropathy treated with tacrolimus. We evaluated data related to patient demographics, laboratory tests, and combined medication. After data cleaning and feature engineering, six machine learning models were constructed, and the predictive performance of each model was evaluated via external verification.
The XGBoost model outperformed other investigated models, with a prediction accuracy of 73.33%, F-beta of 91.24%, and AUC of 0.5531.
Through this exploratory study, we demonstrated the ability of machine learning to predict TAC blood concentration. Although the results prove the predictive potential of machine learning to some extent, in-depth research is still needed to resolve the XGBoost model's bias towards the positive class and thereby facilitate its use in real-world settings.
Tacrolimus (TAC, FK506) is a new immunosuppressant that inhibits the activity of calcineurin and interferes with T cell activation and cytokine transcription after binding to intracellular FK-binding protein. Recent studies have shown that TAC is effective in the treatment of a variety of chronic kidney diseases [1, 2]. However, its narrow therapeutic window, high toxicity, adverse effects, and individual differences in pharmacokinetics and pharmacodynamics have hindered its application in clinical treatment. Therefore, in clinical use, monitoring the blood concentration, adjusting the treatment plan, and administering individualized dosages of TAC are necessary to achieve the best treatment effect [3].
Real-world medical data are widely stored in hospital information systems, which include comprehensive diagnostic and treatment information. The optimization, upgrading, and popularization of hospital information systems not only provide a basis for the medical treatment of patients but also supply real-world data for retrospective research. Machine learning (ML) is a set of data-driven computer algorithms [4]. Its algorithms include artificial neural networks, decision trees, random forests, and support vector machines. ML is suitable for analyzing and mining real-world data of enormous quantity, high dimensionality, complex relationships, and diverse forms. The rapid speed and strong generalizability of ML support its wide use in clinical decision-making. The application of ML algorithms to individualized medicine will aid the realization of precision medicine in clinical practice [5, 6]. The purpose of this study was to explore the factors influencing TAC blood concentration in real-world settings, use ML technology to predict TAC blood concentration, and assist clinicians in adjusting TAC dosage, ensuring patient safety, and reducing adverse drug reactions.
Study population
The data of patients with nephrotic syndrome and/or membranous nephropathy treated with TAC in PLA General Hospital from January 1, 2013, to December 31, 2020, were collected retrospectively. The inclusion criteria were as follows: (1) diagnosis of nephrotic syndrome or membranous nephropathy; and (2) administration of TAC during hospitalization. The exclusion criteria were as follows: (1) use of TAC only during surgery; (2) TAC administration by skin test; and (3) patients with any missing data. This study was approved by the Ethics Committee of the Chinese People's Liberation Army General Hospital [S2022-278–01].
The data mining and modeling processes are shown in Fig. 1. Following the cleaning step, the final data set comprised 913 patients and the blood TAC concentrations from 1829 blood tests. Data from January 1, 2013, to December 31, 2019, including 821 patients and 1649 blood tests, were randomly divided into a training set and a test set at an 8:2 ratio. The data from January 1, 2020, to December 31, 2020, including 115 patients and 180 blood tests, were used as the external validation set (Fig. 2).
Flow chart of data mining and modeling
Division of datasets
The relevant patient information was extracted from the database, including demographic, laboratory, and medical order information. Demographic information included data on age, sex, height, and weight. The laboratory information included the blood TAC concentrations, serum creatinine levels, sample receiving times, and result indicators. The medical order information included the name of the medication, dose, frequency of administration, and start and end times of the treatment. Because the medical orders contained long-term information, they were split by frequency and processed into time-series data. To facilitate data processing, we stored patient hospitalization information in a tree structure rather than a two-dimensional table to build the data set (Fig. 3).
Patient information in a tree structure
First, the data distribution was examined according to the demographic information, and samples with outliers were deleted. Second, the medication and laboratory information were linked according to time. When there were multiple administrations of TAC before the collection of blood samples, we selected the data from the last TAC administration before sample collection to ensure that the test results matched the corresponding TAC administration. Additionally, a box plot was drawn for the time interval between the last administration and the sample-receiving time, and only the samples between quartile 1 (Q1) and quartile 3 (Q3) were retained, eliminating samples whose medication information was not related to the laboratory information. Seven doses of TAC were administered; ordered by frequency of use, they were 2.0, 1.0, 1.5, 3.0, 0.5, 2.5, and 4.0 mg.
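A minimal sketch of this interquartile filter follows (the study's code is not public; the toy vector and names only illustrate the rule):

```r
set.seed(123)
# Toy vector: hours between the last TAC dose and sample receipt
dat <- data.frame(interval = rlnorm(500, meanlog = 2.5, sdlog = 0.6))
q <- quantile(dat$interval, probs = c(0.25, 0.75))
dat_kept <- dat[dat$interval >= q[1] & dat$interval <= q[2], , drop = FALSE]  # keep Q1..Q3
nrow(dat_kept) / nrow(dat)   # roughly half the samples are retained
```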
In terms of combined medication, we extracted information on some of the most commonly prescribed medications by clinicians, including compound α-ketoacid, Shenyankangfu, ShenYanShu, Shenshuaining, Huangkui, Bailing, pidotimod, methylprednisolone, prednisone acetate, mycophenolate mofetil, and tripterygium glycoside. The variables of combined medication were dummy variables. Patients who had used one of these drugs between two blood tests were recorded as 1, and those who did not were recorded as 0.
Although blood concentration is a continuous variable, it was treated as a dummy variable in this study and classified according to the safe range of blood drug concentration [7, 8]. Concentrations within the safety range were defined as 0, and those outside the safety range were defined as 1.
In this study, the ratio of TAC blood-concentration class 0 to class 1 was unbalanced at 3:7. Therefore, we used an over-sampling method, SMOTE (Synthetic Minority Oversampling Technique), to balance the data. The core of SMOTE is to insert randomly generated new samples between minority-class samples and their neighbors, increasing the number of minority-class samples and improving the unbalanced distribution of the data set [9]. As the XGBoost and LightGBM (LGBM) algorithms have hyperparameters for processing unbalanced data, we directly adjusted those hyperparameters instead of applying additional SMOTE processing for these algorithms.
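As an illustration of the idea (shown in R for consistency with the code elsewhere in this document, whereas the study itself used a Python toolchain), one SMOTE implementation is:

```r
library(smotefamily)  # one R implementation of SMOTE

set.seed(123)
# Toy imbalanced data at roughly the 3:7 class ratio described above
X <- data.frame(f1 = rnorm(100), f2 = rnorm(100))
y <- rep(c(0, 1), c(30, 70))
sm <- SMOTE(X = X, target = y, K = 5)   # synthesize minority samples from 5 neighbours
table(sm$data$class)                    # class counts after SMOTE oversampling
```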
Feature selection
The extracted variables included demographic information (age, sex, height, and weight), laboratory information (numerical results and collection time of blood TAC concentration and serum creatinine levels), medical order information (drug name, medication time, and dose), and medication combinations.
Various tools from different models were used to calculate the importance value of each factor. For example, logistic regression (LR), random forest, and AdaBoost (adaptive boosting) used the eli5 library together with scikit-learn to visually display the value of each feature, whereas XGBoost and LGBM used their built-in algorithms. We removed the features with relatively low importance to reduce the feature dimension, simplify the model, and improve its generalization ability.
Classification algorithms in supervised learning include LR, artificial neural networks, Naïve Bayes, and ensemble algorithms. In this study, six ML models, LR, random forest, AdaBoost, gradient boosting decision tree, XGBoost, and LGBM, were established to classify and predict the blood concentration of TAC. All models except LR belong to the ensemble algorithms, which integrate several weak classifiers into one strong classifier. Ensemble algorithms are fast and generalize well, making them suitable for application in many fields, including medical diagnosis [10].
In the process of model establishment, grid search was used to choose the hyperparameters of each model. Grid search exhaustively trains the learner with hyperparameter values in a user-defined range and then selects the optimal values within that range. Table 1 lists the core hyperparameters of the six models. In addition, the classification threshold was adjusted continuously to achieve the best performance of each model.
Table 1 Hyperparameters for models
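A minimal sketch of the grid search described above for the XGBoost model; this grid and data are illustrative only, the actual search ranges are the paper's own (Table 1):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, weights=[0.3, 0.7], random_state=0)

param_grid = {  # illustrative ranges, not Table 1's
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1],
    "n_estimators": [100, 300],
}
search = GridSearchCV(
    XGBClassifier(scale_pos_weight=3 / 7, eval_metric="logloss"),  # neg/pos for a 3:7 split
    param_grid, scoring="recall", cv=5,  # recall, because type II errors matter here
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```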
Model assessment
The evaluation criteria for binary classifiers generally include accuracy, precision, recall, F-1 score, and area under the curve (AUC), all derived from the confusion matrix (Table 2). Accuracy refers to the proportion of all samples predicted correctly and was calculated as follows:
$$\mathrm{Accuracy}=\frac{TP + TN}{TP + TN + FP + FN}$$
where TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
Table 2 Confusion matrix
Recall refers to how many positive samples in the data set are identified and can be calculated as follows:
$$\mathrm{Recall} =\frac{TP}{TP + FN}$$
Ideally, precision and recall are both as high as possible; however, the two measures trade off against each other, and a balance must be achieved. Therefore, the F-beta score was used to reflect the comprehensive performance of the model. The F-beta score was calculated using the following formula:
$$F\text{-}beta = \left(1 + \beta^{2}\right) \times \frac{Precision \times Recall }{{\beta^{2} \times Precision + Recall}}$$
where precision is calculated as shown below; when \(\beta\) equals 1, the F-beta score reduces to the F-1 score given below.
$$\mathrm{Precision} =\frac{TP}{TP + FP}$$
$$\mathrm{F}1=2\times \frac{Precision \times Recall}{Precision +Recall}$$
When precision and recall are equally important, they are given the same weight, that is, beta = 1 (F-1 score). However, in this study, type II errors were particularly important: a patient with an abnormal blood concentration who goes undetected has a negative effect on treatment outcomes. Type II errors are generally measured by recall. Therefore, in this study, greater weight was given to recall, with beta = 2 (F-2 score). The F-beta score lies between 0 and 1, and the larger the value, the better the performance of the model. Finally, the model is meaningful when the AUC is > 0.5. AUC can be calculated as follows:
$$\mathrm{AUC} =\frac{1 + TPR - FPR}{2}$$
where true positive rate (TPR) and false positive rate (FPR) are calculated using Eqs. 7 and 8, respectively.
$$\mathrm{TPR} =\frac{TP}{TP + FN}$$
$$\mathrm{FPR} =\frac{FP}{FP + TN}$$
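The metrics above map directly onto scikit-learn calls; a small check on toy labels (assumed values, not study data):

```python
from sklearn.metrics import confusion_matrix, fbeta_score

y_true = [0, 1, 1, 0, 1, 1, 0, 1]   # toy labels for illustration
y_pred = [0, 1, 1, 1, 1, 0, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
recall = tp / (tp + fn)
f2 = fbeta_score(y_true, y_pred, beta=2)  # beta = 2 weights recall, as above
auc_point = (1 + tp / (tp + fn) - fp / (fp + tn)) / 2  # single-threshold AUC (Eq. above)
print(accuracy, recall, f2, auc_point)
```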
Baseline information
Data from 913 patients and 1829 blood tests were included in this study. The baseline information of the study population is shown in Table 3. Continuous variables are presented as median (interquartile range [IQR]) and categorical variables as frequency (percentage). The median age of the patients in this study was 53 (39–64) years, median weight was 72 (64–80) kg, median height was 170 (162–174) cm, median serum creatinine level was 80.9 (65.8–103.1) μmol/L, and proportion of male patients was 66%. Additionally, the proportion of combined medication was as follows: 8.64% for compound α-ketoacid, 6.01% for Shenyankangfu, 12.30% for ShenYanShu, 6.51% for Shenshuaining, 45.05% for Huangkui, 39.58% for Bailing, 46.53% for pidotimod, 29.36% for methylprednisolone, 16.07% for prednisone acetate, 1.04% for mycophenolate mofetil, and 2.35% for tripterygium glycoside.
Table 3 Baseline information
Model performance
The prediction performance of the six models is shown in Table 4. In terms of accuracy, only XGBoost and LGBM displayed an accuracy of > 70%; the accuracy of XGBoost was higher than that of LGBM at 73.33%. The accuracy of the other models was low, and the effect was poor. We evaluated type II errors through the recall rate. A higher recall rate means that more patients with abnormal blood drug concentrations were correctly predicted, and clinicians can therefore adjust the dosage to reach effective and safe blood drug concentrations. However, when the probability of type II errors was low, the probability of type I errors increased. Therefore, XGBoost performed the best in balancing type I and II errors (F-beta score = 0.9124). In addition, the AUC value of XGBoost was the highest among all models. Therefore, considering the generalization ability and accuracy of the model, we believe that the XGBoost model is ideal for predicting the blood concentration of TAC.
Table 4 Performance of the models
Table 5 shows the performance of the XGBoost model for different numbers of features. The features were selected from top to bottom according to the feature importance ranking of the XGBoost model. Although the recall rate of the model was 1 when the number of features was three or fewer, the AUC was only 0.5, and the model was extremely poor, with no effective discriminative ability. Thus, too few features lead to underfitting of the model. As the number of features used for modeling increased, the evaluation indexes in Table 5 increased overall, with slight fluctuations. When the number of features was eight, all evaluation indexes were maximized (accuracy = 0.7333, F-beta = 0.9124, and AUC = 0.5531), and the performance of the model was the best. When the number of features increased beyond eight, the evaluation indexes decreased overall. Thus, too many features weakened the generalization ability of the model, causing overfitting. Therefore, the performance of the model was optimized when using the top eight features for modeling.
Table 5 Performance comparison of models according to number of features
As shown in Fig. 4, the top eight features in the XGBoost model in descending order were serum creatinine level, weight, age, height, TAC dosage, pidotimod, Bailing, and Huangkui usage. Among them, serum creatinine level was nearly twice as important as any other feature, indicating that serum creatinine has a significant effect on the blood concentration of TAC. Weight, age, and height were also more important than many other characteristics, whereas sex and some combined medications had relatively little influence on the model.
Fig. 4 Feature importance of the XGBoost model
This study revealed that the XGBoost model—with an accuracy of 0.7333 and an F-beta score of 0.9124—performed best and can be used to monitor the blood concentration of TAC. Zheng et al. [11] also achieved the best results in regression prediction of TAC blood concentration from real-world data using the XGBoost model. Thus, the XGBoost model has certain advantages for clinical data prediction in real-world settings.
The feature importance ranking of XGBoost revealed that the serum creatinine level of patients with kidney diseases, particularly nephrotic syndrome and membranous nephropathy, had a significant effect on their blood TAC concentration, confirming that the blood TAC concentration is positively correlated with the serum creatinine level [12]. Weight and height also ranked high in this study as factors that affect the blood TAC concentration, which is consistent with the results of Zheng et al. [11, 13]. Patient age is routinely evaluated by researchers [14, 15]; in this study, it ranked third among all features. Finally, the importance of sex in the prediction model was relatively low, and sex was not included in the establishment of the final model.
Previous studies have focused on the effect of TAC combined with other drugs [16, 17] but did not evaluate the effect of the combination on blood TAC concentration. Our study showed that the combination of Bailing and Huangkui with TAC affects blood TAC concentration. Although pidotimod also had a high importance value in our study, there are no reports supporting this result; it may be related to the medication habits of physicians. These conclusions warrant future research.
In the last decade, a few studies have described the prediction of TAC blood concentration using ML technology. However, the models used in previous research were mostly artificial neural networks and regression models [18, 19], the amount of data was smaller, the models were not externally validated, and the research remains exploratory. In this study, using patients with nephrotic syndrome and membranous nephropathy as examples, blood TAC concentration was classified according to the safe blood concentration range and predicted using a variety of ML models. The number of real-world samples included in this study was considerably larger than in previous research, and an external validation set was used to verify the model. Thus, the model results are more authentic and have greater clinical significance than previous models.
This study had several limitations. First, owing to the lack of information about blood sample collection time, we had to use the sample-receiving time. Ideally, the laboratory department can obtain the sample collection time in the future to further strengthen the integrity and analyzability of medical data. Second, more laboratory and genetic data should be analyzed.
In this study, an ML model was established to classify the blood TAC concentration in patients with nephrotic syndrome and membranous nephropathy. The over-sampling method was used to manage unbalanced data, the variables were screened according to their importance values, and the performance of six models was compared. Finally, XGBoost was selected as the best prediction model, considering its accuracy of 0.7333, F-beta score of 0.9124, and AUC of 0.5531, which were higher than those of the other models, demonstrating better prediction ability. In the XGBoost model, serum creatinine, weight, age, height, TAC dose, and the use of pidotimod, Bailing, and Huangkui were the main factors influencing blood TAC concentration. The low AUC and high sensitivity of the model also imply that it is biased towards the positive class, which may have a negative impact on the prediction of the clinical dose of TAC for patients in the negative class. In this exploratory study, the ability of machine learning to predict TAC blood concentration was investigated. The findings demonstrate the predictive potential of machine learning to a certain extent; however, further in-depth research is needed to resolve the model's bias towards the positive class.
The data that support the findings of this study are available from the corresponding author upon reasonable request.
SC: Serum creatinine
CαK: Compound α-ketoacid
SYKF: Shenyankangfu
SYS: ShenYanShu
SSN: Shenshuaining
HK: Huangkui
BL: Bailing
MPS: Methylprednisolone
PA: Prednisone acetate
MM: Mycophenolate mofetil
TG: Tripterygium glycoside
ML: Machine learning
RF: Random forest
GBDT: Gradient boost decision tree
LGBM: LightGBM
Kohli HS, Rajachandran R, Rathi M, Jha V, Sakhuja V. Tacrolimus in nephrotic syndrome resistant to first line therapy in adults: a prospective study. Nephrol Dial Transplant. 2013;28:401–2.
Zhang J, Zhang Y, Yang H. Effect of tacrolimus on renal function, blood lipids, cytokines and peripheral HMGB-1 and NF-κB in nephrotic syndrome patients. Chin J Biochem Pharm. 2015;3:115–8.
Gérard C, Stocco J, Hulin A, Blanchet B, Verstuyft C, Durand F, et al. Determination of the most influential sources of variability in tacrolimus trough blood concentrations in adult liver transplant recipients: a bottom-up approach. AAPS J. 2014;16:379–91. https://doi.org/10.1208/s12248-014-9577-8.
Koelzer VH, Sirinukunwattana K, Rittscher J, Mertz KD. Precision immunoprofiling by image analysis and artificial intelligence. Virchows Arch. 2019;474:511–22. https://doi.org/10.1007/s00428-018-2485-z.
Schork NJ. Artificial intelligence and personalized medicine. Cancer Treat Res. 2019;178:265–83. https://doi.org/10.1007/978-3-030-16391-4_11.
Shamout F, Zhu T, Clifton DA. Machine learning for clinical outcome prediction. IEEE Rev Biomed Eng. 2021;14:116–26. https://doi.org/10.1109/RBME.2020.3007816.
Santosh T, Liu H, Liu B. Effect of tacrolimus in idiopathic membranous nephropathy: a meta-analysis. Chin Med J. 2014;127:2693–9.
Liang Q, Li H, Xie X, Qu F, Li X, Chen J. The efficacy and safety of tacrolimus monotherapy in adult-onset nephrotic syndrome caused by idiopathic membranous nephropathy. Ren Fail. 2017;39:512–8. https://doi.org/10.1080/0886022X.2017.1325371.
Dong H, He D, Wang F. SMOTE-XGBoost using Tree Parzen Estimator optimization for copper flotation method classification. Powder Technol. 2020;375:174–81. https://doi.org/10.1016/j.powtec.2020.07.065.
Dong X, Yu Z, Cao W, Shi Y, Ma Q. A survey on ensemble learning. Front Comput Sci. 2020;14:241–58. https://doi.org/10.1007/s11704-019-8208-z.
Zheng P, Yu Z, Li L, Liu S, Lou Y, Hao X, et al. Predicting blood concentration of tacrolimus in patients with autoimmune diseases using machine learning techniques based on real-world evidence. Front Pharmacol. 2021;12:727245. https://doi.org/10.3389/fphar.2021.727245.
Guo J-g 郭景鸽. [Correlation analysis between blood concentration, clinical effect, and adverse reactions of tacrolimus in the treatment of adult hormone-resistant nephrotic syndrome]. Chin Rem Clin. 2019;19:773–5. Chinese.
Sam WJ, Tham LS, Holmes MJ, Aw M, Quak SH, Lee KH, et al. Population pharmacokinetics of tacrolimus in whole blood and plasma in Asian liver transplant patients. Clin Pharmacokinet. 2006;45:59–75. https://doi.org/10.2165/00003088-200645010-00004.
Przepiorka D, Blamble D, Hilsenbeck S, Danielson M, Krance R, Chan KW. Tacrolimus clearance is age-dependent within the pediatric population. Bone Marrow Transplant. 2000;26:601–5. https://doi.org/10.1038/sj.bmt.1702588.
Staatz CE, Tett SE. Pharmacokinetic considerations relating to tacrolimus dosing in the elderly. Drugs Aging. 2005;22:541–57. https://doi.org/10.2165/00002512-200522070-00001.
Yan Xiao-hui LY, Feng Ting JG, Xiao-Ming W. Clinical effects of tacrolimus combined with okra capsule in treatment of refractory membranous nephropathy. Prog Mod Biomed. 2017;17:wpr-615041.
Li Y, Xu T, Qiu X, Tian B, Bi C, Yao L. Effectiveness of Bailing capsules in the treatment of lupus nephritis: a meta-analysis. Mol Med Rep. 2020;22:2132–40. https://doi.org/10.3892/mmr.2020.11293.
Venkataramanan R, Shaw LM, Sarkozi L, Mullins R, Pirsch J, MacFarlane G, et al. Clinical utility of monitoring tacrolimus blood concentrations in liver transplant patients. J Clin Pharmacol. 2001;41:542–51. https://doi.org/10.1177/00912700122010429.
Tang J, Liu R, Zhang YL, Liu MZ, Hu YF, Shao MJ, et al. Application of machine-learning models to predict tacrolimus stable dose in renal transplant recipients. Sci Rep. 2017;7:42192. https://doi.org/10.1038/srep42192.
The authors would like to thank the PLA General Hospital and the patients for their support and involvement in our study.
This research was supported by a grant from the Big Data and Artificial Intelligence Research and Development Project of the PLA General Hospital [reference: 2019MBD-002].
Weijia Yuan and Lin Sui have contributed equally to the study as co-first authors.
Department of Information, Medical Supplies Center of PLA General Hospital, Beijing, China
Weijia Yuan, Minchao Liu & Huayu Shi
Department of Pharmacy, Medical Supplies Center of PLA General Hospital, Beijing, China
Lin Sui & Haili Xin
Weijia Yuan
Lin Sui
Haili Xin
Minchao Liu
Huayu Shi
WY, HX, and LS designed the study. HS collected the data and provided guidance on data processing. HX and LS provided guidance on medication information. WY analyzed the data, performed the study, and wrote the manuscript. HX and ML revised the manuscript. All authors agreed to be accountable for the content of the work, and read and approved the final version of the manuscript.
Correspondence to Haili Xin or Minchao Liu.
The risk posed by the study to subjects does not exceed the minimum risk. This study was approved by the Ethics Committee of the Chinese People's Liberation Army General Hospital [S2022-278–01]. The Ethics Committee of the Chinese People's Liberation Army General Hospital provided an exempt determination and waived informed consent. All methods were carried out in accordance with relevant guidelines and regulations.
Yuan, W., Sui, L., Xin, H. et al. Discussion on machine learning technology to predict tacrolimus blood concentration in patients with nephrotic syndrome and membranous nephropathy in real-world settings. BMC Med Inform Decis Mak 22, 336 (2022). https://doi.org/10.1186/s12911-022-02089-w
Blood concentration prediction
Membranous nephropathy | CommonCrawl |
Upper and Lower Bounds
Every subset of $$\mathbb{R}$$ is a set of real numbers. We shall define the upper and lower bounds for a non-empty set $$S$$ of real numbers.
Upper bound: If for a set $$S$$ of reals $$\exists {\text{ }}K \in \mathbb{R}$$ such that $$\forall x \in S \Rightarrow x \leqslant K$$, then $$K$$ is said to be an upper bound of $$S$$. In such a case, $$S$$ is said to be bounded above. If there is a least member amongst the upper bounds of the set $$S$$, then this member is called the least upper bound (l.u.b) or supremum of the set $$S$$, and it is usually denoted by $$\sup S$$.
It easily follows that if a set $$S$$ has at least one upper bound then there are infinitely many upper bounds greater than it. In case $$S$$ has no upper bound, $$S$$ is said to be unbounded above.
Lower bound: If, for a set $$S$$ of reals $$\exists {\text{ }}k \in \mathbb{R}$$ such that $$\forall x \in S \Rightarrow x \geqslant k$$, then $$k$$ is said to be a lower bound of $$S$$. In such a case, $$S$$ is said to be bounded below. If there is a greatest member amongst the lower bounds of the set $$S$$, then this member is called the greatest lower bound (g.l.b.) or infimum of the set $$S$$, and it is usually denoted by $$\inf S$$.
It follows that if $$S$$ has at least one lower bound then there are infinitely many lower bounds of $$S$$ less than it. In case $$S$$ has no lower bound, $$S$$ is said to be unbounded below.
From the definitions it evidently follows that the supremum and infimum of sets, if they exist, are unique. The existence of the supremum and infimum of non-empty sets bounded above and below, respectively, is ensured by the completeness axiom in $$\mathbb{R}$$. It should be noted, from the definition, that if $$u$$ is the supremum of a set $$S$$ then for every $$\varepsilon > 0{\text{ }}\exists $$ at least one member $$y \in S$$ such that $$u \geqslant y > u - \varepsilon $$. Similarly, if $$l$$ is the infimum of $$S$$ then for every $$\varepsilon > 0{\text{ }}\exists $$ at least one member $$x \in S$$ such that $$l \leqslant x < l + \varepsilon $$.
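For instance, take $$S = \left\{ {1 - \frac{1}{n}:n \in \mathbb{N}} \right\}$$. Then $$\sup S = 1 \notin S$$: no member exceeds $$1$$, and for every $$\varepsilon > 0$$, choosing $$n > \frac{1}{\varepsilon }$$ gives a member $$y = 1 - \frac{1}{n} > 1 - \varepsilon $$, exactly as the characterization above requires. Similarly, $$\inf S = 0$$, and here the infimum belongs to $$S$$ (take $$n = 1$$).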
Bounded and Unbounded Sets of Reals: If a set $$S$$ of reals is bounded both above and below, then it is said to be bounded. In case $$S$$ is either unbounded above or below, then it is said to be unbounded. For example, the set $$\left\{ {1,3,11,2059} \right\}$$ is a bounded set and the set $$\mathbb{R}$$ is an unbounded set.
For every bounded set $$S{\text{ }}\exists {\text{ }}k \in {\mathbb{R}^ + }$$ such that $$\left| x \right| \leqslant k{\text{ }}\forall x \in S$$. If $$S$$ is unbounded then there exists no such $$k$$.
Greatest and Least Members of Sets of Reals: A number $$b$$ is said to be the greatest (or largest) member of a set $$S$$ if $$b \in S \wedge x \in S \Rightarrow x \leqslant b$$. If such a number $$b$$ exists, then it is unique and is also the supremum of the set $$S$$. A set may or may not have a greatest member such as $$\left\{ {x:1 < x \leqslant 2} \right\}$$ has $$2$$ as the greatest member, but $$\left\{ {x:1 \leqslant x \leqslant 2} \right\}$$ has no greatest member.
Similarly, a number $$a$$ is said to be the least (or smallest) member of a set $$S$$ if $$a \in S \wedge x \in S \Rightarrow x \geqslant a$$. If such an $$a$$ exists, then it is unique and is also the infimum of the set $$S$$. A set may or may not have a least member. For example, $$\left\{ {x:1 \leqslant x < 2} \right\}$$ has $$1$$ as the least member, but $$\left\{ {x:1 < x \leqslant 2} \right\}$$ has no least member. It should be noted that a set cannot have a greatest or a least member according as it is unbounded above or below.
The set $${\mathbb{R}^ + }$$ is bounded below and unbounded above.
The set $$\mathbb{R}$$ is an unbounded set.
The supremum and infimum of a set, if they exist, are unique.
The null set is vacuously bounded: every real number is both an upper and a lower bound of it, yet it has no supremum or infimum in $$\mathbb{R}$$.
If $$S = \left\{ { - 1,\frac{1}{2}, - \frac{1}{3},\frac{1}{4}, \ldots } \right\}$$, then $$\sup S = \frac{1}{2}$$ and $$\inf S = - 1$$.
Average of two random variables - CDF comparison
Given are two independent random variables $X$ and $Y$ with different probability density functions $f_X(t)$ and $f_Y(t)$. It is furthermore given that the cumulative distribution function $F_X(x) = \int_{-\infty}^x f_X(t)\,dt$ of $X$ is always larger than or equal to $F_Y(x)=\int_{-\infty}^x f_Y(t)\,dt$ of $Y$. That is $\forall x \in \mathbb{R}: F_X(x) \geq F_Y(x).$
A third random variable $Z$ with $f_Z(t)$ is defined as $Z = \frac{X+Y}{2}$. My assumption is that $F_X(x)$ is always larger than or equal to $F_Z(x) = \int_{-\infty}^x f_Z(t)\,dt$, too. That is $\forall x \in \mathbb{R}: F_X(x) \geq F_Z(x).$
Is this a valid assumption, and if so, how can it be proven? Under which conditions does it hold if it is not universally valid? The plots below depict the cumulative distribution functions as well as the probability density functions of example X, Y, and Z.
Edit: I was able to show it for two given uniform distributions as follows. However, I assume it should also be possible to prove it for two arbitrary distributions.
PDFs:
$f_X(t) = \begin{cases} \frac{1}{2} & \text{for } 0 \leq t \leq 2,\\ 0 & \text{otherwise} \end{cases}$, $f_Y(t) = \begin{cases} \frac{1}{2} & \text{for } 4 \leq t \leq 6,\\ 0 & \text{otherwise} \end{cases}$
CDFs:
$F_X(x) = \begin{cases} 0 & \text{for } x < 0,\\ \frac{x}{2} & \text{for } 0 \leq x \leq 2,\\ 1 & \text{for } x > 2 \end{cases}$, $F_Y(x) = \begin{cases} 0 & \text{for } x < 4,\\ \frac{x-4}{2} & \text{for } 4 \leq x \leq 6,\\ 1 & \text{for } x > 6 \end{cases}$,
$\forall x \in \mathbb{R}: F_X(x) \geq F_Y(x)$ obviously holds, because $F_X(x) = 1$ for all $x$ where $F_Y(x) > 0$.
The addition of X and Y is described by the convolution of $f_X(t)$ and $f_Y(t)$ as
$f_{X+Y}(t) = (f_X * f_Y)(t) = \int_{-\infty}^\infty f_X(\tau)\cdot f_Y(t-\tau)\,d\tau = \begin{cases} -1+\frac{t}{4} & \text{for } 4 \leq t \leq 6\\ 2 - \frac{t}{4} & \text{for } 6 < t \leq 8\\ 0 & \text{otherwise} \end{cases}$
and (as far as I know) dividing by 2 corresponds to scaling on both axes, so
$f_Z(t) = 2\cdot f_{X+Y}(2t) = \begin{cases} -2+t & \text{for } 2 \leq t \leq 3\\ 4 - t & \text{for } 3 < t \leq 4\\ 0 & \text{otherwise.} \end{cases}$
From this we get the CDF of Z:
$F_Z(x) = \int_{-\infty}^x f_Z(t)\,dt = \begin{cases} 0 & \text{for } x < 2,\\ \frac{1}{2} x^2 -2x+2 & \text{for } 2 \leq x \leq 3,\\ -\frac{1}{2} x^2 + 4x -7& \text{for } 3 < x \leq 4,\\ 1 & \text{for } x > 4 \end{cases}$
$\forall x \in \mathbb{R}: F_X(x) \geq F_Z(x)$ holds, too, because $F_X(x) = 1$ for all $x$ where $F_Z(x) > 0$.
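A quick numerical sanity check of the derived $F_Z$ (a Monte Carlo sketch, assuming numpy; not part of the original question):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 200_000)  # X ~ U(0, 2)
y = rng.uniform(4, 6, 200_000)  # Y ~ U(4, 6)
z = (x + y) / 2

for t in (2.5, 3.0, 3.5):
    empirical = (z <= t).mean()
    exact = 0.5*t**2 - 2*t + 2 if t <= 3 else -0.5*t**2 + 4*t - 7
    print(t, round(empirical, 4), exact)  # F_X(t) = 1 here, so F_X >= F_Z holds
```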
probability probability-distributions random-variables
koalo
$\begingroup$ I think given the uniform distribution result you should be able to use the probability integral transformation to connect to general continuous distributions. $\endgroup$ – Ian Oct 27 '17 at 15:40
$\begingroup$ @Ian Currently I have no proof for two generic uniform distributions, but I will work on it as a next step and then your suggestion might work, thanks a lot! $\endgroup$ – koalo Oct 27 '17 at 15:53
Related questions
Find the PDF of $X+2Y$ given the PDFs of $X$ and $Y$
What is the density of the quotient of two independent standard uniform random variables?
Finding the density function of the sum of two random variables
PDF and CDF of the division of two Random variables
Product distribution function of two independent random Variables
Sum of two independent, continuous random variables
Finding PDF of sum of 2 uniform random variables
Sum of Random Variables and Convolution
Find density function $Z=2X+Y$
Given two independent random variables, calculate $P(X^2 > Y)$ | CommonCrawl |
The initial volume of a gas cylinder is 750.0 mL. If the pressure of the gas inside the cylinder changes from 840.00 mmHg to 360.00 mmHg, the final volume of the gas will be:
a.) 3.60 L
b.) 4.032 L
c.) 1.750 L
d.) 7.50 L
Hint: According to Boyle's law, at constant temperature, pressure is inversely proportional to volume. Using the equation ${ V }_{ 2 }=\dfrac { { P }_{ 1 }{ V }_{ 1 } }{ { P }_{ 2 } } $, we will get the answer by substituting the values into it.
We have been provided with the initial volume of the gas which is equal to 750 mL.
We need to find the final volume of the gas when its pressure changes from 840.00 mmHg to 360.00 mmHg.
Consider, 1 as the initial condition and 2 as the final condition,
Therefore,${ P }_{ 1 }$ = 840 mmHg
${ P }_{ 2 }$= 360 mmHg
${ V }_{ 1 }$ = 750 mL
${ V }_{ 2 }$ = ?
We need to find the value of ${ V }_{ 2 }$,
Let us assume the temperature remains constant during the process, then we can apply Boyle's law over here.
According to Boyle's law, at constant temperature, volume of a gas is inversely proportional to the pressure of that gas.
So, we can write ${ P }_{ 1 }{ V }_{ 1 } = { P }_{ 2 }{ V }_{ 2 }$
Rearrange the equation in order to find ${ V }_{ 2 }$,
${ V }_{ 2 }=\dfrac { { P }_{ 1 }{ V }_{ 1 } }{ { P }_{ 2 } } $ -----(i)
Substitute the values in equation (i),
${ V }_{ 2 }=\dfrac { 840\times 750 }{ 360 } $ mL = 1750 mL = 1.750 L.
So, the correct answer is "Option C".
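The same arithmetic as a two-line check (illustrative only):

```python
P1, V1, P2 = 840.0, 750.0, 360.0   # mmHg, mL, mmHg
V2 = P1 * V1 / P2                  # Boyle's law: P1 * V1 = P2 * V2
print(V2, "mL =", V2 / 1000, "L")  # 1750.0 mL = 1.75 L
```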
Additional Information: We can now look at some real life application of Boyle's Law,
(i)While filling the bike tires with air. When we pump air into a tire, the gas molecules inside the tire get compressed and packed closer together. This causes an increase in the pressure of the gas, and it starts to push against the walls of the tire. We can feel how the tire becomes pressurized and tighter.
(ii) Another example we can easily observe is carbonated drinks. To get carbon dioxide gas into the liquid, the whole bottle is usually pressurized with carbon dioxide gas. As long as the bottle is closed, it is very hard to squeeze, as the gas is confined to a small space and pushes against the bottle walls. When we remove the cap, however, the available volume increases and some of the gas escapes. At the same time the pressure decreases.
One of the most important applications of Boyle's law one can find is in our breathing. Inhaling and exhaling basically means increasing and decreasing the volume of our chest cavity (thorax region) . This creates low pressure and high pressure in our lungs, resulting in air getting sucked into our lungs and leaving our lungs which is the inhalation and exhalation.
Note: Boyle's law is only valid under constant temperature conditions. Also, one should take care with unit conversions. Here, the answer is given in litres: 1 L = 1000 mL.
Experimental method
Measurement of third-order elastic constants and stress dependent coefficients for steels
Sennosuke Takahashi1Email authorView ORCID ID profile
Mechanics of Advanced Materials and Modern Processes20184:2
Received: 6 August 2017
Accepted: 19 January 2018
Published: 9 February 2018
There has been little discussion of the third-order elastic constants of steels in the literature until now. In this study, the precise second- and third-order elastic constants of polycrystalline steels were measured under adiabatic and isothermal conditions.
To measure the minute change in the propagation time of the elastic wave corresponding to the tensile stress, the uniform and isotropic specimens were processed with high precision, the measuring instruments were strictly calibrated, and the temperature of the measurement chamber was kept constant. The author proposes an experimental formula to obtain the third-order elastic constants of steels. The stress dependent coefficients α ij in this formula are absolutely necessary to obtain the third-order elastic constants.
The obtained stress dependent coefficients clearly indicated that there is a special relationship between the direction of the stress and that of the oscillation of the elastic wave. When the oscillation direction of the elastic wave matched the direction of the applied stress, α ij took a larger negative value. Lamè constants and Murnaghan's third-order elastic constants ℓ,m,n were obtained for four types of steels.
The second- and third-order elastic constants under adiabatic conditions were smaller than those under isothermal conditions. The oscillation of the crystal lattice is nonlinear, and this nonlinearity is observed as the third-order elastic constants. Therefore, it is possible to obtain knowledge of the internal stress and thermal properties of the materials. This is also the basis for theoretical discussion of thermal expansion coefficients.
Second- and third-order elastic modulus
Elastic wave
Stress dependent coefficient
Polycrystalline material
The first study of the theory and measurement of the third-order elastic constants of practical materials was published by Hughes and Kelly (1953). D. Lazarus reported the third-order elastic constants of single crystals such as KCl and Cu by measuring the propagation velocity of elastic waves under hydrostatic pressure (Lazarus 1949), which is useful for comparing the theories of finite strain proposed by Murnaghan (1951). R. N. Thurston published a paper on the theoretical analysis of the propagation of elastic waves (Thurston and Brugger 1964). D. M. Egle et al. carried out the measurement of the third-order elastic constants for rail steel using Hughes's result (Egle and Bray 1976). S. Takahashi was granted a U.S. patent on stress measurement and its equipment according to the method of the present paper (Takahashi 2007). T. Bateman et al. reported that the third-order elastic constants of semiconductors are related to the thermal expansion coefficient and Grüneisen constants (Bateman et al. 1961). As described above, knowledge of the third-order elastic constants contributes to the study of the physical properties of various materials.
This paper describes the measurement of the third-order elastic constants of four common steels. The stress applied to the specimen was increased stepwise, and the velocity of the elastic wave was measured at every step using a high accuracy measuring technique. The shape and dimensions of the specimen were controlled as precisely as possible and the measuring equipment was also accurately calibrated. The change in room temperature was kept to 1°C or less during the measurement. Second- and third-order elastic constants contribute to the change in the propagation velocity of the elastic wave caused by stress (Hughes and Kelly 1953; Takahashi and Motegi 2015).
The author proposed a simple equation for the propagation velocity under stress by introducing the coefficient α ij, which contains the second- and third-order elastic constants. The value of α ij can be obtained from the measured stress and the change ratio of the propagation velocity. The coefficients α ij are absolutely necessary to obtain the third-order elastic constants. When the oscillation direction of the elastic wave matched the direction of the applied stress, the value of α ij became more negative. This means that the coincidence of the oscillation direction with the applied stress greatly affected the propagation velocity of the wave. Because the value of α ij is based on the stress-strain relation, its values must be obtained from this relationship.
Test specimen
Figure 1 shows the dimensions and coordinates of the test specimens. The specimens were designed to be attached to the tensile testing machine and to make it easy to measure their elastic waves. Coordinates numbered 1, 2, and 3 were used instead of x, y, and z. The long-axis direction of the specimen was denoted 1, and the directions perpendicular to it were denoted 2 and 3. The propagation velocity of the longitudinal wave in direction 1 was expressed as V11, while that of the transverse wave propagating in direction 1 with vibration in direction 2 was expressed as V12. For the propagation velocity V, identical subscripts denote a longitudinal wave, while different subscripts denote a transverse wave. T11 represents the tensile stress in the direction of the long axis of the specimen. Table 1 lists the chemical compositions of the S20C (AISI 1020), S30C (AISI 1030), S40C (AISI 1039) and S50C (AISI 1049) test specimens.
Fig. 1 Schematic diagram of the specimen. L and T: longitudinal and transverse transducers
Table 1 Chemical composition of S20C to S50C specimens (wt%); columns include Ni+Cr [table data not reproduced]
Stress dependent coefficients of elastic wave α ij
The propagation velocity V ij of the elastic wave in the specimen under an applied stress of T11 is expressed as
$$ V_{ij} =V_{0}\left(1+\alpha_{ij}\frac{T_{11}}{E}\right) $$
where V0 is the propagation velocity of the elastic wave under non-loaded state, α ij is the stress dependent coefficient of the elastic wave, and E is Young's modulus. The expression of V ij by Hughes and Kelly (1953) and the authors Takahashi and Motegi (2015) is
$$ {} \rho_{0}V_{11}^{2}=\lambda+2\mu+\frac{T_{11}}{E}\left[5\lambda+10\mu+2\ell+4m-2\nu(\lambda+2\ell)\right] $$
where λ, μ are Lamè constants, ℓ,m,n are the Murnaghan's third order elastic constants, ν is Poisson's ratio and ρ0 is the density in the non-deformed state. The formula (1) is based on an equation previously introduced by the authors Takahashi and Motegi (2015) and it can be rewritten as
$$ \begin{aligned} V_{11}^{2} &=V_{0}^{2}\left(1+\alpha_{11}\frac{T_{11}}{E}\right)^{2}\approx V_{0}^{2}\left(1+2\alpha_{11}\frac{T_{11}}{E}\right) \\ V_{0}^{2} &=\frac{\lambda+2\mu}{\rho_{0}} \\ \end{aligned} $$
From the relations described above, α11 is given as follows,
$$\alpha_{11}=\frac{1}{2(\lambda+2\mu)} [5\lambda+10\mu+2\ell+4m-2\nu(\lambda+2\ell)] $$
In the same way, the formulae of other α22, α21, α12 and α23 can be obtained.
$$ \alpha_{22}=\frac{1}{2(\lambda+2\mu)}[\lambda+2\ell-\nu(6\lambda+10\mu+4\ell+4m)] $$
The coefficient α ij is used to obtain Murnaghan's third-order elastic constants ℓ,m,n as follows.
$$\begin{array}{@{}rcl@{}} \begin{aligned} \ell &=\frac{(2\alpha_{11}-5)(\lambda+2\mu)}{2(1-2\nu)}-\frac{2m-\nu\lambda}{1-2\nu} \\ m &=\left[\frac{\alpha_{11}-\alpha_{22}}{2(1+\nu)}-1\right](\lambda+2\mu)-\frac{\mu}{2}\\ n_{12} &=\frac{2}{\nu}[-(a+4\mu)+2\nu(a+\mu)+2\mu\alpha_{12}]\\ n_{21} &=\frac{2}{\nu}[-(a+2\mu)+2\nu(a+2\mu)+2\mu\alpha_{21}]\\ n_{23}&=2[a-2\nu(a+3\mu)-2\mu\alpha_{23}] \end{aligned} \end{array} $$
where a = λ + m.
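As a minimal numerical sketch of these relations, the Lamè constants, Poisson's ratio, and α values below are placeholders, not the measured Table 2 data:

```python
# Placeholder inputs (MPa and dimensionless); substitute measured values.
lam, mu, nu = 111e3, 82e3, 0.29
a11, a22, a23 = -2.4, -0.4, -0.3

m = ((a11 - a22) / (2 * (1 + nu)) - 1) * (lam + 2 * mu) - mu / 2
ell = (2 * a11 - 5) * (lam + 2 * mu) / (2 * (1 - 2 * nu)) - (2 * m - nu * lam) / (1 - 2 * nu)
a = lam + m
n23 = 2 * (a - 2 * nu * (a + 3 * mu) - 2 * mu * a23)
print(ell, m, n23)  # third-order constants from Eq. (5)
```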
Measurement of the propagation velocity of elastic wave in the applied stress direction
The stress in the gripping regions is complicated and should be eliminated, so two kinds of specimens with identical grip sizes but different gauge lengths were prepared. Hereafter, the symbol a denotes the long specimen and b the short specimen. The propagation times under stress-free conditions are written as
$$t_{0a}={L_{a}}/{V_{0}}, t_{0b}={L_{b}}/{V_{0}}, $$
where L_a and L_b are the total lengths of the specimens under stress-free conditions, and t_a and t_b are the propagation times under applied stress.
The differences in propagation time are written as Δt_a = t_a − t_0a and Δt_b = t_b − t_0b.
The propagation time in the applied stress direction is obtained from formula (1) as follows,
$$ t=\frac{L_{g}}{V_{g}}+\frac{L_{m}\left(1+\frac{T_{11}}{E}\right)}{V_{0}\left(1+\alpha_{11}\frac{T_{11}}{E}\right)} $$
where L_m is the length of the gauge part in the non-loaded state, L_g is the grip length under the applied stress, and V_g is the average velocity of the wave passing through the grip part. In the case of a longitudinal wave propagating in the applied stress direction of the long specimen, Δt_a/t_0a can be written using an approximate calculation as
$$ \frac{\Delta t_{a}}{t_{0a}}=\frac{\left(\frac{L_{g}}{V_{g}}-\frac{L_{m}}{V_{0}}\right)}{\frac{L_{a}}{V_{0}}}+ \frac{L_{ma}}{L_{a}}(1-\alpha_{11})\frac{T_{11}}{E} $$
Δt_a/t_0a and Δt_b/t_0b can be obtained by measuring the propagation time of the elastic wave.
In a similar manner, for the short specimen,
$$ \frac{\Delta t_{b}}{t_{0b}}=\frac{\left(\frac{L_{g}}{V_{g}}-\frac{L_{m}}{V_{0}}\right)}{\frac{L_{b}}{V_{0}}}+ \frac{L_{mb}}{L_{b}}(1-\alpha_{11})\frac{T_{11}}{E} $$
The grip and gauge parts are expressed separately in the above formulae. Using the above two formulae, α11 is given as
$$ \begin{aligned} \alpha_{11}&=1-\frac{E}{T_{11}}\left[\frac{L_{a}}{\Delta L} \cdot\left(\frac{\Delta t_{a}}{t_{0a}}\right)- \frac{L_{b}}{\Delta L}\cdot\left(\frac{\Delta t_{b}}{t_{0b}}\right)\right]\\ \Delta L&=L_{a}-L_{b}=50 \end{aligned} $$
The transverse-wave coefficient α12 is obtained in the same way, by applying the measured Δt_a/t_0a and Δt_b/t_0b data to formula (9).
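A sketch of formula (9) in code; the lengths and slopes below are assumed, with the slope s defined as the measured gradient of Δt/t_0 against T11/E:

```python
# Assumed values for illustration; s_a, s_b are the slopes of (Δt/t0) vs (T11/E)
# read off curves like Fig. 4 for the long (a) and short (b) specimens.
L_a, L_b = 250.0, 200.0            # total specimen lengths (mm), ΔL = 50 as above
s_a, s_b = 3.3, 3.2                # measured slopes (dimensionless), placeholders

dL = L_a - L_b
alpha_11 = 1 - (L_a * s_a - L_b * s_b) / dL   # formula (9)
print(alpha_11)
```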
Measurement of propagation velocity of elastic wave in direction orthogonal to the tensile stress axis
The propagation time of the elastic wave t measured at the side of the specimen is defined by the following,
$$ t=\frac{W\left(1-\frac{\nu T_{11}}{E}\right)}{V_{0}\left(1+\frac{\alpha_{ij}T_{11}}{E}\right)} $$
where W is width of the non-loaded specimen, and ν is Poisson's ratio. From the approximate calculation of formula(10), the coefficient related to the elastic wave in the direction of 2j is given as
$$ \alpha_{2j}=-\nu-\frac{E}{T_{11}}\left(\frac{\Delta t}{t_{0}}\right) $$
The coefficient α22 for longitudinal waves and α21, α23 for transverse waves can be obtained from formula (11) using the respective value of \(\left (\frac {\Delta t}{t_{0}}\right)\) from the measurements of S20C to S50C specimens.
An Instron-type tensile testing machine and a computerized strain measurement apparatus were used in this work. The load cell was calibrated using a standard gauge. The strain of the specimen was measured by strain gauges adhered to both sides of the gauge region, as shown in Fig. 1. Measurement of the propagation time of the elastic wave was performed with a device having a time resolution of 10 ns, as shown in Fig. 2 (Takahashi and Motegi 1987). The grip holder of the tensile testing machine was designed and manufactured to pull the lead wires of the transducers from both ends of the specimen (Takahashi and Takahashi 2007). PZT-type (2–5 MHz) piezoelectric resonators, plates of 10×10 mm² in size, were used as transducers for the longitudinal and transverse waves.
Fig. 2 Schematic diagram of the measurement system
The stress applied to the specimen was increased in steps of 5.4 MPa. The stress, strain, Poisson's ratio, and propagation times of the longitudinal and transverse waves were measured after each increase. Figure 3 shows the stress-strain curve for the S30C sample. Figure 4 shows the relationship between the ratio of change in the propagation time and the stress for S30C, obtained by measuring the longitudinal and transverse waves propagating parallel and perpendicular to the stress axis. The coefficient α ij was obtained by applying the gradient of the curves of stress versus change ratio of the propagation time shown in Fig. 4 to formulae (9) and (11) for the long and short specimens.
Fig. 3 Stress vs strain for the S30C specimen
Fig. 4 Ratio of change in propagation velocity of the elastic wave vs stress for S30C (L: long specimen, S: short specimen)
The values of the Lamè constants, Young's modulus, Poisson's ratio, third-order elastic constants, and α ij obtained from the stress-strain curve of the tensile testing machine were taken as values under isothermal conditions. On the other hand, those obtained from the measured propagation times of the elastic wave were taken as values under adiabatic conditions. The measured values of α11, α12, α21, α22 and α23 in the isothermal and adiabatic states are shown in Table 2. These differed owing to the different elastic constants measured by the tensile test or by the adiabatic elastic wave. The second-order elastic constants of the isothermal and adiabatic measurements are shown in Table 2 alongside the third-order elastic constants.
Table 2 Coefficient α ij and second- and third-order elastic constants in the isothermal (iso) and adiabatic (adi) states: (A) stress dependent coefficients α11 to α23; (B) Lamè constants and Young's modulus (×10³ MPa); (C) Murnaghan's third-order elastic constants (×10³ MPa). ρ: density, ν: Poisson's ratio [table data not reproduced]
Figure 5 shows the stress dependent coefficients α11, α12, α21, α22 and α23 of each specimen. It is clear that the coefficients with subscripts 11 or 21 showed larger negative values. It is considered that the agreement between the stress direction and the oscillation direction of the elastic wave shifted α ij towards a more negative value. The measured values of the third-order elastic constants of each specimen in the adiabatic state are shown in Fig. 6.
Fig. 5 Stress dependent coefficients α11 to α23 for S20C to S50C specimens in the adiabatic state
Fig. 6 Third-order elastic constants in the adiabatic state
The main goal of this study was to accurately measure the change in the propagation time of elastic waves in the material with stress and obtain a mathematical formula connecting theory and experiments to derive the third-order elastic constants. An empirical formula consisting of the stress dependent coefficient α ij related with the third-order elastic constants was obtained based on the mathematical formula derived by Hughes and Kelly (1953) and the present author Takahashi and Motegi (2015). Care was taken in preparing the specimens to precise dimensions, using well-calibrated measument equipment, and maintaining good temperature control during the measurement to obtain precise α ij data.
The basis of this experiment is the measurement of the value of α ij. The value of α ij can be obtained from the gradient of the change ratio of the elastic wave propagation time with respect to the applied stress, and it should be determined from this viewpoint. When the direction of the stress matches the direction of the vibration of the wave, α ij has a larger negative value than in the other cases, which means a decrease in the propagation velocity of the elastic wave. Thus, the coefficient α ij is also related to the propagation velocity of the elastic waves. Murnaghan's third-order elastic constants could be calculated using the obtained coefficients α ij. As described above, the obtained third-order elastic constants ℓ, m and n were negative for all specimens.
Table 2(B)(C) shows the differences between the isothermal and adiabatic elastic constants. The differences in the third-order elastic constants are larger than those in the second-order ones. O. M. Krasilnikov reported no significant difference between isothermal and adiabatic elastic constants (Krasilnikov 1977); however, in this study there was a relatively large difference between the isothermal and adiabatic values of m and n.
An experimental method for measuring the third-order elastic constants accurately and relatively easily was described.
Measurements of the change ratio of the propagation time of the elastic wave with respect to the applied stress, together with stress-strain curves, were performed for four types of practical steel specimens.
A formula relating the experimentally measured values to theory was proposed. This formula consisted of stress, Young's modulus, the propagation velocity of the elastic wave, stress dependent coefficient α ij , the values of all of which were measurable.
Two types of specimens with identical grip sizes but different gauge lengths were prepared to eliminate the influence of the grip parts and to apply uniform stress.
A formula for α ij was proposed using data obtained from long and short specimens.
α ij was obtained from the gradient of the relationship between the change ratio of the propagation time and stress.
α ij is not only a coefficient necessary to obtain the third-order elastic constants but also provides other information on the behavior of elastic wave propagation. The values of α11 and α21, for which the oscillation direction of the elastic wave matched the stress axis, were negative and larger in magnitude than those of the other coefficients. This means a deceleration of the propagation of the elastic wave.
The Murnaghan's third-order elastic constants ℓ,m,n were obtained for the four types of practical steels under isothermal and adiabatic conditions and those were negative values.
The differences between the values of Lamè constants λ, μ and the Young's modulus E measured under isothermal and adiabatic conditions were not very large, but a relatively large difference was found in the case of m and n in third-order modulus.
The present study of the third-order elastic constants of materials will greatly contribute to understanding the internal stress, thermal properties, Grüneisen constants, and so on of steels, semiconductors and non-ferrous metals. The author was granted a U.S. patent on the stress measurement method and its equipment (Takahashi 2007).
AISI: American Iron and Steel Institute
Data and materials are available.
The author read and approved the final manuscript.
Author agrees to publication.
Authors' information
ST is PhD.
National Research Institute for Metals, 6-13-12 Kanamori, Machida, Tokyo 194-0012, Japan
Bateman T, Mason WP, McSkimin HJ (1961) Third-order elastic moduli of germanium. J Appl Phys 32(5):928–936.
Egle DM, Bray DE (1976) Measurement of acoustoelastic and third-order elastic constants for rail steel. J Acoust Soc Am 60(3):741–744.
Hughes DS, Kelly JL (1953) Second order elastic deformation of solid. Phys Rev 92(5):1145–1149.
Krasilnikov OM (1977) Temperature dependence of third-order elastic constants. Sov Phys Solid State 19(5):764–768.
Lazarus D (1949) The variation of the adiabatic elastic constants of KCl, NaCl, CuZn, Cu and Al with pressure to 10,000 bars. Phys Rev 76(4):545–553.
Murnaghan FD (1951) Finite deformation of an elastic solid. Wiley, New York.
Takahashi S (2007) Stress measurement method and its apparatus. U.S. Patent No. 7299138, December 10.
Takahashi S, Motegi R (1987) Stress dependency on ultrasonic wave propagation velocity. J Mater Sci 22:1857–1863.
Takahashi S, Motegi R (2015) Measurement of third-order elastic constants and applications to loaded structural materials. SpringerPlus 4:325.
Takahashi S, Takahashi K (2007) Third order elastic constants of semi-continuous casting ingot A3004 aluminium alloy and measurement of stress. J Mater Sci 42:2070–2075.
Thurston RN, Brugger K (1964) Third-order elastic constants and the velocity of small amplitude elastic waves in homogeneously stressed media. Phys Rev 133(6A):A1604–A1612.
euclidean distance similarity
The Euclidean distance between two points is the length of the line segment between them: the shortest distance between two points in an N-dimensional space, also known as Euclidean space. It follows from the Pythagorean theorem learnt in secondary school, and for two vectors X and Y it is the square root of the sum of squared differences between corresponding elements, $d(X, Y) = \sqrt{\sum_i (x_i - y_i)^2}$. Euclidean distance is only calculated over non-NULL dimensions. It is positive definite, symmetric, and satisfies the triangle inequality; a distance that satisfies these properties is called a metric. For efficiency reasons, libraries often compute the distance between a pair of row vectors x and y as dist(x, y) = sqrt(dot(x, x) - 2*dot(x, y) + dot(y, y)), a formulation with advantages over other ways of computing distances. Euclidean distance is a common metric for measuring the similarity between two data points in fields such as geometry, data mining, and deep learning, but it is not a panacea for all types of data or pattern to be compared: the Hamming distance is used for categorical variables, Manhattan distance is often preferred over Euclidean for high-dimensional data, and for text a measure such as Jaccard similarity is common (before any distance measurement, the text has to be tokenized). Distances also add along a path: given points a, b, and c, the Euclidean distances d1 from a to b and d2 from b to c can be summed to give the total distance travelled along the route a to b to c, although that sum generally exceeds the direct distance from a to c.

A distance runs opposite to a similarity: smaller distances mean the items are more alike, whereas a similarity measure is usually non-negative and often lies between 0 and 1, where 0 means no similarity and 1 means complete similarity. This is the source of a common question about collaborative-filtering code that models preferences as a mapping $\textrm{person} \times \textrm{movie} \mapsto \textrm{score}$: the author calculates the Euclidean distance for two persons $p_1$ and $p_2$ by $d(p_1, p_2) = \sqrt{\sum_{i\,\in\,\textrm{item}} (s_{p_1} - s_{p_2})^2}$ and then returns $1/(1 + d(p_1, p_2))$ as a "distance based similarity". This is indeed the conversion from a distance to a similarity: the writer wants a similarity-based measure but wants to base it on Euclidean distance, so he inverts it. Adding 1 to the denominator avoids a division-by-zero error when the distance is 0 and caps the maximum value at 1. For example, if the distance between item 1 and item 2 is 4 and the distance between item 1 and item 3 is 0 (meaning they are 100% similar), the similarity scores are 1/(1 + 4) = 0.2 and 1/(1 + 0) = 1. This is only one choice; it differs from, say, computing all the distances and converting them to similarities by interpolating between the smallest and the largest distance, and the specific formula depends on what makes sense for the later analysis. A related subtlety is that taking a root rescales the weighting: if distances are usually larger than 1, the root makes large distances less important, and if they are less than 1, it makes large distances more important.

The Neo4j Graph Data Science library applies this measure in its Euclidean Distance algorithm, which we can use to work out the similarity between two things, for example to get movie recommendations based on the preferences of users who have given similar ratings to other movies. We can use the euclidean function to compute the similarity of two hardcoded lists (the function is best used when calculating the similarity between small numbers of sets), or compute the similarity of nodes based on lists computed by a Cypher query. The Euclidean Distance procedure computes similarity between all pairs of items. It is a symmetrical algorithm: the result of computing the similarity of item A to item B is the same as computing the similarity of item B to item A, and we don't compute the similarity of items to themselves, so the score for each pair of nodes is computed once. The number of computations is ((# items)^2 / 2) - # items, which can be very computationally expensive if we have a lot of items; we could instead use the technique to compute the similarity of a subset of items to all other items.

The procedures accept the ids of items from which and to which we need to compute similarities (defaulting to all the items provided in the data parameter), the number of concurrent threads used for running the algorithm, a topK value giving the number of similar values to return per node (if 0, it will return as many as it finds), and a skipValue (gds.util.NaN() by default; a value of null means that skipping is disabled) for lists that contain values which should be skipped. Input is supplied as a list of maps of the structure {item: nodeId, weights: [double, double, double]} or as a Cypher query; the Cypher loader expects to receive 3 fields, and for similarity lists so large that they take up a lot of memory, projecting the graph with Cypher statements is the less memory-intensive approach. When writing results back, the property to use when storing results and the batch size to use when storing them can be configured. Streaming variants return node pairs along with their intersection and euclidean similarity, and the statistical variant reports the mean, standard deviation, and the 50, 75, 95, 99, and 99.9 percentiles of the similarity scores computed.

In a worked example over users and the cuisines they like, two hardcoded lists of numbers have a euclidean distance of 8.42, and Praveena and Karin have the most similar food preferences, with a euclidean distance of 3.0. Lower scores are better here; a score of 0 would indicate that users have exactly the same preferences. Pairs whose similarity cannot be computed (Zhen and Arya, Zhen and Karin) come back as NaN and can be filtered out with the gds.util.isFinite function, and a similarityCutoff parameter removes pairs beyond a chosen distance (for example, returning only pairs with a similarity of at most 4) if we don't want dissimilar users returned. With topK = 1 the results are not necessarily symmetrical: the person most similar to Arya is Karin, but the person most similar to Karin is Praveena. The same query shape can find the most similar user to Praveena and return that user's favorite cuisines that Praveena doesn't (yet!) like.

Finally, Euclidean distance and cosine similarity capture different notions of closeness, and in machine learning both are used to measure how alike two objects are: Euclidean distance is the most common distance metric, and cosine similarity the most common similarity metric. Cosine similarity measures the cosine of the angle between two vectors projected in a multi-dimensional space: vectors with a small Euclidean distance from one another are located in the same region of a vector space, while vectors with a high cosine similarity are located in the same general direction from the origin, and points with smaller angles are more similar. A helpful visual shows Euclidean distance as the length d of the line segment between two points and cosine similarity as the angle θ between their position vectors. The two measures can disagree: in one example, the angle between vectors x14 and x4 was larger than those of the other vectors even though the points were further away. While harder to wrap your head around, cosine similarity solves some problems with Euclidean distance, and in fact a direct relationship between Euclidean distance and cosine similarity exists. Writing

$$f(x, x') = \frac{x^{T}x'}{||x||\,||x'||} = \cos(\theta),$$

where $\theta$ is the angle between $x$ and $x'$, observe that

$$||x - x'||^{2} = (x - x')^{T}(x - x') = ||x||^{2} + ||x'||^{2} - 2\,x^{T}x'.$$

For unit vectors this reduces to $2 - 2\cos(\theta)$, so ranking pairs by Euclidean distance and by cosine similarity are monotonically related.
If you have a square symmetric matrix of squared euclidean distances and you perform "double centering" operation on it then you get the matrix of the scalar products which would be observed when you put the origin od the euclidean space in the centre of your configuration of objects. smaller the distance value means they are near to each other means more likely to similar. Go give it a check, try it with 2 vectors contain same values. The procedures parallelize the computation and are therefore more appropriate for computing similarities on bigger datasets. Euclidean Distance is only calculated over non-NULL dimensions. The 25 percentile of similarities scores computed. The ID of other node in the similarity pair. Anyway, may I know on what page did you find that formula? site design / logo © 2021 Stack Exchange Inc; user contributions licensed under cc by-sa. Be careful using this measure, since the euclidian distance measure can be highly impacted by outliers, which could also throw any subsequent clustering off. The number of similar pairs to return. Defaults to all the items provided in the data parameter. Ok! Why does Steven Pinker say that "can't" + "any" is just as much of a double-negative as "can't" + "no" is in "I can't get no/any satisfaction"? Sometimes, we don't want to compute all pairs similarity, but would rather specify subsets of items to compare to each other. If we're implementing a k-Nearest Neighbors type query we might instead want to find the most similar k users for a given user. Here, p and qare the attribute values for two data objects. Intersection of two Jordan curves lying in the rectangle. However, standard cluster analysis creates "hard" clusters. I'm just working with the book Collective Intelligence (by Toby Segaran) and came across the Euclidean distance score. Euclidean Distance is only calculated over non-NULL dimensions. By clicking "Post Your Answer", you agree to our terms of service, privacy policy and cookie policy. These scalar products, Sorry! While cosine similarity is. data mining Last modified on November 10th, 2019 Download This Tutorial in PDF Wait please: Excel file can take some time to load. Euclidean distance measures the straight line distance between two points in n-dimensional space. While Cosine Similarity gives 1 in return to similarity. Can someone explain that? The following will find the most similar user for each user, and store a relationship between those users: We then could write a query to find out what types of cuisine that other people similar to us might like. ok let say the Euclidean distance between item 1 and item 2 is 4 and between item 1 and item 3 is 0 (means they are 100% similar). The number of concurrent threads used for writing the result. In the book the author shows how to calculate the similarity between two recommendation arrays (i.e. While cosine looks at the angle between vectors (thus not taking into regard their weight or magnitude), euclidean distance is similar to using a ruler to actually measure the distance. The following will create a sample graph: The following will return the Euclidean distance of Zhen and Praveena: The following will return the Euclidean distance of Zhen and the other people that have a cuisine in common: The Euclidean Distance procedure computes similarity between all pairs of items. As a result, those terms, concepts, and their usage went way beyond the minds of the data science beginner. 
Similarity function with given properties, similarity distance when weight should change, How Functional Programming achieves "No runtime exceptions". n维空间里两个向量x(x1,x 2,…,x n)与y(y 1,y 2,…,y n)之间的余弦相似度计算公式是:. Thanks for contributing an answer to Cross Validated! The inverse is to change from distance to similarity. I AM EXPLAINING why WE calculates at the end the following to get a "distance based similarity": $1/1+d(p1,p2)$. Basically, you don't know from its size whether a coefficient indicates a small or large distance. This series is part of our pre-bootcamp course work for our data science bootcamp. As you mentioned you know the calculation of Euclidence distance so I am explaining the second formula. Now we want numerical value such that it gives a higher number if they are much similar. Distance, such as the Euclidean distance, is a dissimilarity measure and has some well-known properties: Common Properties of Dissimilarity Measures 1. d(p, q) ≥ 0 for all p and q, and d(p, q) = 0 if and only if p = q, 2. d(p, q) = d(q,p) for all p and q, 3. d(p, r) ≤ d(p, q) + d(q, r) for all p, q, and r, where d(p, q) is the distance (dissimilarity) between points (data objects), p and q. Could the US military legally refuse to follow a legal, but unethical order? We might then use the computed similarity as part of a recommendation query. Following is a list of … Somewhat the writer on that book wants a similarity-based measure, but he wants to use Euclidean. The relationship type used when storing results. ? Euclidean distance varies as a function of the magnitudes of the observations. The algorithm checks every value against the skipValue to determine whether that value should be considered as part of the similarity result. The threshold for similarity. Tikz getting jagged line when plotting polar function, Why isn't my electrochemical cell producing its potential voltage. My main research advisor refuses to give me a letter (to help for apply US physics program). The Euclidean Distance function computes the similarity of two lists of numbers. First, it is computationally efficient when dealing with sparse data. Who started to understand them for the very first time. What would happen if we applied formula (4.4) to measure distance between the last two samples, s29 and s30, for The 90 percentile of similarities scores computed. The number of pairs of similar nodes computed. If the list contains less than this amount, that node will be excluded from the calculation. The ID of one node in the similarity pair. The threshold for the number of items in the targets list. We get this result because there is no overlap in their food preferences. Now we want numerical value such that it gives a higher number if they are much similar. We do this using the sourceIds and targetIds keys in the config. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. The number of concurrent threads used for running the algorithm. When calling the function, we should provide lists that contain the overlapping items. The distance (more precisely the Euclidean distance) between two points of a Euclidean space is the norm of the translation vector that maps one point to the other; that is (,) = ‖ → ‖.The length of a segment PQ is the distance d(P, Q) between its endpoints. But what if we have distance is 0 that's why we add 1 in the denominator. 
Let's say you are in an e-commerce setting and you want to compare users for product recommendations: User 1 … coding survey data for cosine similarity and euclidean distance? Thanks! The following will return a stream of users along with the most similar user to them (i.e. If I divided every person's score by 10 in Table 1, and recomputed the euclidean distance between the How to pull back an email that has already been sent? To learn more, see our tips on writing great answers. However, we need a function that gives a higher value. The basis of many measures of similarity and dissimilarity is euclidean distance. To subscribe to this RSS feed, copy and paste this URL into your RSS reader. The square root - I am not sure. The 100 percentile of similarities scores computed. How do the material components of Heat Metal work? Can index also move the stock? Points with larger angles are more different. for example, we create two variable x and y, x represent whether you are outgoing or not, y represent whether you are rational or emotional or not. This makes completely sense to me. While Cosine Similarity gives 1 in return to similarity. The procedures expect to receive the same length lists for all items. Calculate the similarity figures for these customers using the Euclidean distance method. It is a symmetrical algorithm, which means that the result from computing the similarity of Item A to Item B is the same as so similarity score for item 1 and 2 is 1/(1+4) = 0.2 and for item1 and item 3 is 1/(1+0) = 1. This low cosine distance is more easily comparable to the Euclidean distance you calculated previously, but it tells you the same thing as the cosine similarity result: that the austen and wharton samples, when represented only by the number of times they each use the words "a" and "in," are fairly similar to one another. [ 1 ] Considering different data type with a number of attributes, it is important to use the appropriate sim… Cosine similarity can be used where the magnitude of the vector doesn't matter. Then, using the similarity figure as a weighting factor, calculate the weighted average scores for each movie. The relationship type to use when storing results. Euclidean Distance Comparing the shortest distance among two objects. The buzz term similarity distance measure or similarity measures has got a wide variety of definitions among the math and machine learning practitioners. What is euclidean distance and similarity? Value to skip when executing similarity computation. It only takes a minute to sign up. Like if they are the same then the distance is 0 and totally different then higher than 0. In this article, we will go through 4 basic distance measurements: 1. Do rockets leave launch pad at full thrust? An empirical way to verify this is to estimate the distance of a pair of values for which you know the meaning. This distance measure is mostly used for interval or ratio variables. Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization. Also provides the default value for 'writeConcurrency'. Cosine similarity is the cosine of the angle between 2 points in a multidimensional space. Figure 13.5: Euclidean distances in sending for Knoke information network. This algorithm is in the alpha tier. The number of intersecting values in the two nodes targets lists. that you've seen. distance/similarity measures. It is often denoted | |.. 
Agree to our terms of service, privacy policy and cookie policy in... 849 times 2 $ \begingroup $ as an example, the similarity two! Determine whether that value should be considered as part of the shortest list should provide that. To understand them for the very first time may I know on what did... For these customers using the similarity or dissimilarity between two points in a virtual space with distance... Uses Pythagorean theorem, therefore occasionally being called the Pythagorean distance calculation of distance... And construct a distance matrix usage went way beyond the minds of the list! Dealing with sparse data cuisines that Praveena doesn ' t compute the of..., 2 months ago book wants a similarity-based measure, but your hint with the... Up with references or personal experience \begingroup $ as euclidean distance similarity example, let 's say have... Is called a metric ) $ the threshold for the number of similar values to return per node US legally... Our results straight line distance between two data objects which have one or multiple attributes between value 1. Have a very simple data set may I know on what page did you that... With references or personal experience I make a mistake in being too honest in the second formula is part the. Values for two data objects Stack Exchange Inc ; user contributions licensed under cc by-sa the squared differences them. By clicking " Post your Answer ", you don ' t want to see users with a high similarity..., may I know on what page did you find that formula terms, concepts, and their usage way! Two vectors, calculating similarity and Euclidean distance is 0 and 1, where 0 no! Distances and then converting them to a similarity above 4 returned in our results regarding a vector space the and! The formula you show for some reason ; someone else in a virtual space vectors contain same values coordinates! No runtime exceptions '' vector space choose another formula dissimilarity is Euclidean distance follows a specific regarding! K-Means implementation with custom distance matrix material components of Heat Metal work mean seems. In a multidimensional space is too big because the difference between value is 1 if... Coordinates of the points using the sourceIds and targetIds keys in the second formula calculate similarity. Try it with 2 vectors contain same values the targets list of … in this article which will smaller. A Cypher query values for two data objects which have one or multiple attributes go through 4 basic distance:! Results will not necessarily be symmetrical like if they are much similar 's greatclub damage constructed Pathfinder... First, it will return the Euclidean distance and cosine similarity solves some problems with Euclidean distance this URL your... N dimensional space also known as Euclidean space of other node even they... The following will run the algorithm while harder to wrap your head around, cosine similarity solves some with! There no Vice Presidential line of succession distance to similarity direct relationship between Euclidean procedure! Our terms of service, privacy policy and cookie policy ( ) there is overlap! Keywords—Distance, Histogram, Probability Density function, we will go through 4 basic distance measurements:.. These customers using the gds.util.isFinite function the ID of one node clarification, or to! Drain tailpiece with trap any distance measurement, text have to be tokenzied making statements on... Is usually non-negative and are often between 0 and 1, where 0 means no similarity and! 
When calling the function, similarity distance measure or similarity measures has got a wide variety of definitions the. Author actually put it in the first data parameter viewed 849 times 2 $ \begingroup $ as an,. Distance that satisfies these properties is called a metric in that textbook the author preferred the you! Each other means more likely to similar calculating similarity and clustering Question a list one! This RSS feed, copy and paste this URL into your RSS reader in N... Less than this amount, that node will be excluded from the euclidean distance similarity! Karin, but the person most similar to Arya and Zhen and Karin have a very data. Indicates a small Euclidean distance algorithm to work out the similarity between all pairs of items input, converting matrix! Algorithm tiers, see Chapter 6, Algorithms for features with different scales to... A … so, we will go through 4 basic distance measurements: 1 distance and cosine similarity can used... Movie } \mapsto \textrm { score } ) $ we do this using following! A preprint has been already published the person most similar user to them ( i.e very first time score. Get this result because there is no overlap in their food preferences node will be smaller for or. The inverse is to make it so that the maximum value is thousand of dollar targetIds keys the. Want numerical value such that it gives a higher value of distance measure but Euclidean distance and similarity! Measures of similarity and Euclidean distance t compute the similarity result makes sense distance when weight should change how... Left it out in the second formula because the difference between value is of. Other node informally, the resulted distance is too big because the difference between is. Seems different to me than calculating all the items provided in the similarity two. Parallelize the computation and are often between 0 and totally different then higher than 0 fitting be used where magnitude... Producing its potential voltage they are much similar ( θ ) where is! Value such that it gives a higher value the math and machine learning practitioners to them i.e... I am given a … so, I used the Euclidean distance varies as a,. Creates " hard " clusters be calculated from the origin where no values be! Between them add 1 in return to similarity less than this amount, that node will be for. Pairs of items from which we need to compute the similarity of two curves! Preferred the formula you show for some reason ; someone else in a multidimensional space with setting maximum! Change, how Functional Programming achieves `` no runtime exceptions '' points a! Help, clarification, or responding to other answers the degree to which we a... How to calculate the similarity between two points as Euclidean space describes the Euclidean similarity of.... Used for interval or ratio variables usually non-negative and are often between 0 and 1, where 0 no... 0 means no similarity, and their usage went way beyond the minds of list... Distance score numerical value such that it gives a higher number if they are distance! A very simple data set a coefficient indicates a small or large distance be used to line drain!, and return their favorite cuisines that Praveena doesn ' t compute the similarity of items all! It so that the maximum value is 1 ( if the distance value means they near! That value should be skipped, skipping can be used to line up drain tailpiece with trap k=1 ) Arya! 
It seems different to me than calculating all the items provided in the similarity of a subset items. Use Euclidean often between 0 and totally different then higher than 0 gives 1 in the config of. And paste this URL into your RSS reader can do that by passing in the same region of a query. Another are located in the rectangle Density function, we should provide lists that contain the overlapping items x4... Tikz getting jagged line when plotting polar function, we need a function that gives a number. Threads used for running the algorithm checks every value against the skipValue parameter is gds.util.NaN (.. Terms of service, privacy policy and cookie policy if the list that and... Of other node in the two nodes targets lists as an example, the resulted is., may I know on what page did you find that formula and cookie policy input converting... Values for which you know the calculation regarding a vector space then the distance between two recommendation (... Design / logo © 2021 Stack Exchange Inc ; user contributions licensed under cc by-sa functions...: by default the skipValue to determine whether that value should be considered as part of a of... A subset of items from which we need a function that gives a higher value example, the most... Machine learning practitioners are identical – Euclidean distance your Answer ", you don ' t know from its whether... Measures has got a wide variety of definitions among the math and machine learning.! Here ; a score of 0 would indicate that users have exactly the same direction. Of 8.42 years, 2 months ago 'll return 0 if two are! With word tokenization, you don ' t ( yet! I am explaining the second formula, but person. Nodes based on lists computed by a Cypher query have to be.. Weight should change, how Functional Programming achieves `` no runtime exceptions '' a vector space running the algorithm returns! N dimensional space also known as Euclidean space by a Cypher query similarity! And Arya and Praveena: by default the skipValue parameter is gds.util.NaN (.. Numbers have a similarity of two lists of numbers have a similarity by e.g creates " hard clusters... These are the distance of a vector space Euclidean is basically calculate the similarity between sets of.. The list contains less than this amount, that node will be to... To null much similar your RSS reader sum of the list contains less than this,... Region of a subset of items to which we need a function that gives a number...
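To make the two conversions concrete, here is a minimal Python sketch. The rating dictionaries are hypothetical, chosen so that the first pair has a distance of exactly 4; this illustrates the formulas above and is not code from any of the libraries mentioned.

```python
import math

def euclidean_distance(p1, p2):
    """Square root of the sum of squared differences over the shared items."""
    shared = [i for i in p1 if i in p2]
    return math.sqrt(sum((p1[i] - p2[i]) ** 2 for i in shared))

def euclidean_similarity(p1, p2):
    """Segaran-style conversion 1/(1+d): 1 for identical vectors,
    approaching 0 as the vectors grow apart."""
    return 1.0 / (1.0 + euclidean_distance(p1, p2))

def cosine_similarity(x, y):
    """Cosine of the angle between two vectors; insensitive to magnitude."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

# Hypothetical ratings (person -> {movie: score}), not data from the text.
ratings_1 = {"movie_a": 1.0, "movie_b": 3.0}
ratings_2 = {"movie_a": 1.0, "movie_b": 7.0}   # distance to ratings_1 is 4.0

print(euclidean_similarity(ratings_1, ratings_2))  # 1/(1+4) = 0.2
print(euclidean_similarity(ratings_1, ratings_1))  # 1/(1+0) = 1.0
print(cosine_similarity([1.0, 3.0], [1.0, 7.0]))   # ~0.98: small angle
```

The last line shows the disagreement discussed above: the two rating vectors are 4.0 apart in the Euclidean sense, yet almost perfectly aligned in the cosine sense.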
Homophily in networked agent-based models: a method to generate homophilic attribute distributions to improve upon random distribution approaches
Marie Lisa Kapeller ORCID: orcid.org/0000-0002-1650-37081,
Georg Jäger1 &
Manfred Füllsack1
Computational Social Networks volume 6, Article number: 9 (2019)
In the standard situation of networked populations, link neighbours represent one of the main influences leading to social diffusion of behaviour. When distinct attributes coexist, not only the network structure, but also the distribution of these traits shape the typical neighbourhood of each individual. While assortativity refers to the formation of links between similar individuals inducing the network structure, here, we separate the formation of links from the actual distribution of an attribute on the topology. This is achieved by first generating different network types (e.g., lattice, scale free, and small world), followed by the procedure of distributing attributes. With this separation, we try to isolate the effect that attribute distribution has on network diffusion from the effect of the network structure itself. We compare random distributions, where behaviour types are highly mixed, and homophilic distributions, where similar individuals are very likely to be linked, and examine the effects on social contagion in a population of mainly reciprocal behaviour types. In addition, we gradually mix homophilic distributions by random rewiring, adding links and relocating individuals. Our main result is that attribute distribution strongly influences collective behaviour, and that the actual effect depends on the network type. Under homophilic distribution, the equilibrium collective behaviour of a population tends to be more diverse, implying that purely random distributions are of limited use for illustrating collective behaviour. We find that our results are robust when we use different gradual mixing methods on the homophilic distribution.
The influence of social context on individual behaviour is a notable topic in various fields. In its most basic form, the problem can be reduced to the question of how a certain property (attribute, behaviour, decision, and trait) spreads throughout a social system. To analyse such processes in greater detail, modelling is a viable way of gaining new insights. Agent-based models (ABM) can cover heterogeneity among individuals and provide a useful tool to study social contagion dynamics. Applications of ABM capture phenomena as diverse as rumour spreading [28], memory transmission [27], attitude polarization [39], and social norm contagion in the case study of protection rackets of the Sicilian Mafia [30].
The structure of the interaction network is known to be deeply connected to the overall spreading pattern [7, 10, 18, 36, 43, 44]. Many studies have been dedicated to deepen our understanding of the role of network features, such as central nodes [2], clustering [15], and weak ties [13] and structural characteristics such as centrality and bridging ties have been identified to foster diffusion processes [44].
Diffusion patterns are highly contextual. They do not only depend on the general network structure, but also on who tends to be connected with whom. The tendency to associate with beings similar to oneself is known as homophily or assortative mixing [32]. Preferences in 'who to interact with' generate social patterns known as bonding and bridging [35]. While bonding between homogeneous groups can be valuable for marginalized members of society, bridging of heterogeneous groups allows different individuals to share and exchange information and ideas and build consensus among groups representing diverse interests [34].
A common way to include the homophily effect in ABMs is to utilize a network generator that uses a higher chance to generate links between similar individuals [6, 16, 19, 24]. However, since the proportions of population shares have a direct effect on link formation and thus topology, the resulting network may differ from a network that is generated by an unbiased generator. On the other hand, many studies on networked ABM generation simplify the distribution of traits, assuming that they are random [9, 33, 36].
The structural proximity (network structure) and the attribute proximity (homophily effect) have been tackled by several lines of research. Node embedding refers to techniques that try to find 'similar' nodes in a graph and is valuable for classification, link prediction, and graph visualization. In this context, most work focuses on structural proximity [8, 47], while the homophily effect of attributes was incorporated in [25]. Community embedding [4] has been shown to be beneficial for community detection and node classification. Community detection is especially beneficial in the analysis of real-world data sets, but, to our knowledge, less applicable to improving ABM network generation.
How homophily affects diffusion and contagion in ABM networks with heterogeneous agents has been studied in [11] using a probability model of homophily. This study merges the network generation and the attribute distribution and is limited to Erdős–Rényi random network models. Conditions under which a behaviour diffuses and becomes persistent in the population have been investigated in [17]. This study captures many important aspects of diffusion, such as the level of homophily, but falls short of investigating the homophily effect on different network types due to its focus on adoption types and interaction mechanisms.
With this study, we want to present an attribute distribution mechanism for heterogeneous agents, usable for various network types. The mechanism generates homophilic attribute distributions, where individuals with identical traits are highly clustered, while the network structure is unaffected by the attribute distribution. We refer to the different distributions of attributes on the network topology as allocations. To decouple the network generation and the allocation of individuals, we first generate a topology with an unbiased network generator and, after the network generation is complete, position the heterogeneous individuals.
The analysis of the effect of the homophilic attribute distribution mechanism focusses on a comparison of diffusion in homophilic and random allocations. For this, we use a general model of diffusion that promotes a simple social contagion process through the population. To facilitate the discussion on the contagious property, we interpret the property as a continuous variable that governs a decision making process. Various contexts of the contagious process are suitable, e.g., environmental awareness or competitiveness, provided that the decision can be related to a continuous scale.
We can compare diffusion in different allocations on various network types: lattice topologies, scale-free topologies, cave-people topologies, spatial-proximity topologies, and small-world topologies. In addition, we generate and explore intermediate attribute distributions, which combine features of both the mixed and homophilic allocations. For this, we introduce random alterations via rewiring and constant changes of the topology, and target-oriented alterations via additional long ties and repositioning of individuals.
We use a software modelling approach to create a population of agents, who make a certain decision at every time step. The attribute that governs this decision is on a continuous scale from 0 to 1. Individuals are connected via links resulting in an underlying interaction network topology. The influence of link neighbours creates diffusion dynamics within the modelled population, leading to social contagion processes of the property in question. The model is generic in the sense that the actual decision that is made does not need to be specified and can be interpreted, e.g., as environmental awareness [22] or an investment in game theory [3].
Agent behaviour types
In addition to the structure of a population, social contagion is also deeply connected to the response of individuals to their surroundings. Not all individuals or groups react in the same way to their environment. While some are easier influenced by their peers and reciprocate observed behaviour, others might be less flexible and do not change their actions based on the behaviour of others. A simple example of such non-reciprocators is the strategy of so-called continuous cooperators in the public goods game [3]. These players do not deviate from their decision to always invest in the creation of a public good, even when their peers do not contribute.
In the presented model, the population is divided into three types of agents: non-reciprocative type A, non-reciprocative type B, and reciprocative type S. Their attribute is given by the decision mechanism, which differs for each type. This behavioural type, not to be confused with the contagious property, is constant over time for each individual. Type A and type B individuals always stick to the same decision and cannot be influenced by their neighbourhood. Their decision attribute is constantly 0 and 1, respectively, and thus not directly affected by social contagion. Type S individuals make their decision using a best-response mechanism, reflecting the mean decision of their direct neighbourhood (link neighbours).
The focus of our investigation is on spreading dynamics and local pattern formation through the population share of reciprocal individuals S, induced by the decisions of non-reciprocal types A and B.
Investigated network types: a grid, b torus, c scale free, d cave people, e spatial proximity, and f small world
We use six different topology types, as shown in Fig. 1. Each node represents an individual. Link neighbourhood is shown via gray lines.
Both the grid topology (Fig. 1a) and the torus topology (Fig. 1b) consist of a regular distribution of link neighbours on a lattice ("large world"). The torus topology includes periodic boundary conditions.
The scale-free network (Fig. 1c) exhibits a distribution of degrees (i.e., number of links for each node) that follows a power law and is generated using the preferential-attachment algorithm by [1].
The cave-people topology (Fig. 1d) is a version of the caveman networks [31], but with less symmetry. The algorithm uses parameters to define the cluster size \(c_1\) and the number of clusters \(c_2\). The probability to have a link between individuals of the same cluster is 50%, and the number of ties between clusters is \(4\, c_2\).
The spatial-proximity topology (Fig. 1e) depicts networks with a high clustering based on spatial proximity and was introduced in [41] to model spreading dynamics of epidemics (SIR model).
The small-world topology (Fig. 1f) is based on the Kleinberg model [20], using a lattice topology and a number of long-range links, added to the network, leading to a shorter average path-length on the network. When adding long-range links, the probability of connecting two random nodes is proportional to \(1/d^q\) with q being the clustering coefficient and d the distance of the nodes.
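To make the long-range link construction concrete, the following Python sketch samples Kleinberg-style links on an \(n \times n\) lattice, using the stated proportionality \(1/d^q\) with the lattice (Manhattan) distance d. It is a simplified illustration, not the exact generator used for our simulations.

```python
import random

def kleinberg_long_range_links(n, q, n_links):
    """Sample long-range links on an n x n lattice: the probability of
    connecting node u to node v is proportional to 1/d(u, v)^q, where
    d is the Manhattan distance and q the clustering exponent."""
    nodes = [(r, c) for r in range(n) for c in range(n)]
    links = set()
    while len(links) < n_links:
        u = random.choice(nodes)
        others = [v for v in nodes if v != u]
        weights = [(abs(u[0] - v[0]) + abs(u[1] - v[1])) ** (-q) for v in others]
        v = random.choices(others, weights=weights, k=1)[0]
        links.add(tuple(sorted((u, v))))   # store undirected, no duplicates
    return links

print(len(kleinberg_long_range_links(10, 2.0, 20)))  # 20 added long ties
```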
Attribute distributions
The population consists of three different types of agents. The minority of the individuals in the system (\(10\%\)) are of the non-reciprocative type A or type B, with the identical population shares \(N_\text{A} = N_\text{B} = 5\%\). The majority of the population consists of reciprocal individuals type S with a population share \(N_\text{S} = 90\%\). The allocation gives the proportion of bonding (links between similar individuals) and bridging (links between different individuals), ultimately shaping the contagion dynamics. In random distributions, the average neighbourhood of all individuals is only influenced by the population shares. Homophilic distributions result in highly self-similar link neighbourhoods of each individual. Figure 2 shows the allocation mechanisms for different distributions of type A (green rectangles) and type B (red squares) and type S (black and coloured circles) on the network. Random positioning on the network results in mixed attribute distribution (Fig. 2a) which generally leads to low bonding and high bridging in the network. Ordered distributions (Fig. 2b, c) are given by homophilic allocations, generally leading to high bonding and low bridging.
Attribute distribution: a randomly mixed, b, c homophilic allocations with an adjacent area in c. Symbols represent individual agents of type A (green triangles), type B (red squares), and type S (black and coloured circles). Enlarged symbols denote the initial agent of the homophilic attribute distribution mechanisms, coloured circles indicate the remaining pool of potential non-reciprocal agents
Homophilic attribute distribution mechanism
To create homophilic attribute distributions, we use a mechanism to generate highly ordered allocations, which operates as follows: first, two random nodes are chosen. One of them is transformed into a type A node, and the other one into a type B node. All the other nodes do not have any type at this stage. Second, all link neighbours of type A and type B that do not have a type assigned to them yet are selected. The selected nodes form a 'pool' of potential type A and type B nodes, respectively. In each step, a random node from the pool of potential type As is transformed into type A. Simultaneously, this process is done for the pool of potential type Bs, so that one type A and one type B are added in each step. This process is repeated until the desired number of types A and B is reached. When a pool becomes empty and further transformations are required, a new pool is created consisting of the type-less link neighbours of all already transformed nodes of type A or type B. All nodes which are not transformed into type A and type B are considered as reciprocal type S nodes. For the rare cases in which one type hinders the growth of the other type completely, such that the final number of \(N_\text{A}\) or \(N_\text{B}\) cannot be reached, the procedure is cancelled and the initial nodes of type A and type B are re-selected.
Figure 2b shows this process on the grid topology for \(N_\text{A} = 11\) (green rectangles) and \(N_\text{B} = 11\) (red squares). The algorithm starts with the enlarged nodes and progresses to include the next proximate nodes of the neighbourhoods. Blue circles and orange circles mark nodes of the pools of potential candidates, which have not been selected to transform. Figure 2c shows the mechanism when type A and type B are in close proximity. Here, the pools of potential As and Bs intersect, such that the trait distribution evolves into deformed regions of type A and type B. However, in large populations, these cases are rarely observed for most of the used topology types, with the exception of scale-free networks.
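The mechanism translates almost directly into code. The following Python sketch reproduces the pool logic described above; the data structures and function names are our own, and the topology is assumed to be given as a mapping from each node to the set of its link neighbours (generated beforehand and never modified by the allocation).

```python
import random

def homophilic_allocation(neighbours, n_a, n_b):
    """Distribute n_a type-A and n_b type-B attributes on an existing
    topology by growing two regions from random seed nodes; all
    remaining nodes become reciprocal type S."""
    seed_a, seed_b = random.sample(list(neighbours), 2)
    types = {seed_a: "A", seed_b: "B"}
    counts = {"A": 1, "B": 1}
    pools = {"A": set(), "B": set()}
    while counts["A"] < n_a or counts["B"] < n_b:
        # one A and one B are added per pass, as long as targets remain
        for label, target in (("A", n_a), ("B", n_b)):
            if counts[label] >= target:
                continue
            # drop pool members that were typed in the meantime
            pools[label] = {m for m in pools[label] if m not in types}
            if not pools[label]:
                # refill: type-less link neighbours of all nodes of this type
                pools[label] = {m for node, t in types.items() if t == label
                                for m in neighbours[node] if m not in types}
            if not pools[label]:
                # one type blocks the other's growth completely:
                # cancel and re-select the initial nodes, as described above
                return homophilic_allocation(neighbours, n_a, n_b)
            pick = random.choice(list(pools[label]))
            pools[label].discard(pick)
            types[pick] = label
            counts[label] += 1
    return {node: types.get(node, "S") for node in neighbours}
```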
Intermediate homophilic distributions
Social ties of populations are not necessarily static, but often dynamic. Who interacts with whom can change over time, leading to constant updates of the network structure [12, 46]. It has been shown that dynamic social networks can promote cooperation [37] and that adaptive networks have important consequences for the spreading of diseases [14].
Attribute distribution in real-world examples with social contagion typically displays intermediate states of mixed and homophilic allocations. Diffusion is known to be amplified by bridging ties, which link two otherwise unconnected network clusters [26, 45], weak ties [13], referring to less frequent interactions, and long ties [5], connecting socially distant locations. These notations are interchangeable to a certain degree. Structural changes associated with bridging can dramatically accelerate the spread of disease, the diffusion of job information, the adoption of new technologies, and the coordination of collective action [5].
Another aspect influencing diffusion in a societal context is relocation, such as student exchange and university enrolment [40], and migration, influencing the evolution of norms [29]. Leaving a familiar environment to replace it with a new neighbourhood introduces drastic changes to the network, both at the point of origin, as well as at the destination point and is thus of great interest when investigating attribute distribution effects in populations.
To capture variations in bonding and bridging, we introduce three gradual alteration mechanisms of the structural proximity between individuals. These mechanisms relate to the phenomena of adjusting social ties, long-range interactions, and exchanging the societal environment. These random and target-oriented changes in the network have been implemented to test the robustness of diffusion effects under the homophilic attribute treatment.
Dynamic rewiring
To perform a dynamic analysis of the network, we adjust the social ties between individuals by a similar approach as presented in [38], but replacing the need for a satisfaction level and fitness with a random choice of individuals, keeping the rewiring dynamics as generic as possible. The adjustment of ties between an individual i and an individual j is done by removing their link, followed by rewiring i to a randomly chosen link neighbour of j. The number of adjusted ties is given by R. An illustration of the rewiring for \(R=50\) is shown in Fig. 3a. Alongside the adjustments, the level of homophily can be measured as the mean of \(k_{i(X)}/k_i\) over all individuals i, where \(k_{i(X)}\) is the number of link neighbours of i with identical type X and \(k_i\) is the degree of node i.
In real social networks, individuals are able to leave the system and new ones are able to join. However, in our investigation, the number of individuals of certain types needs to be kept constant, so that different simulation runs can be compared. This means every time a individual of a certain type leaves the system, a new one needs to enter it. Since it does not enter at the same position, the new node might have different links, but the same type. Therefore, this process can also be approximated by dynamic rewiring.
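Both the tie adjustment and the homophily measure are short functions in the same neighbours/types representation used in the allocation sketch above (again our own illustrative naming):

```python
import random

def rewire_ties(neighbours, R):
    """Adjust R social ties: remove the link between i and j, then
    rewire i to a randomly chosen link neighbour of j."""
    for _ in range(R):
        i = random.choice([n for n in neighbours if neighbours[n]])
        j = random.choice(list(neighbours[i]))
        candidates = [k for k in neighbours[j] if k != i and k not in neighbours[i]]
        if not candidates:
            continue  # no admissible target; skip this adjustment
        k = random.choice(candidates)
        neighbours[i].discard(j); neighbours[j].discard(i)
        neighbours[i].add(k); neighbours[k].add(i)

def homophily_level(neighbours, types):
    """Mean over all individuals i of k_i(X)/k_i, the fraction of
    link neighbours sharing i's own type X."""
    fractions = [sum(types[m] == types[i] for m in neighbours[i]) / len(neighbours[i])
                 for i in neighbours if neighbours[i]]
    return sum(fractions) / len(fractions)
```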
Structural bridges and relocation
Intermediate attribute distribution: a rewiring of \(R=50\) links, b additional structural bridges, and c relocation via swapping, b, c shown for four alterations between type A and type S (black circles) individuals, all shown for the initial homophilic attribute distribution
We increase bridging by adding long ties between non-reciprocative and reciprocative individuals. For illustration, Fig. 3b shows the homophilic attribute distribution with four long-distance links between randomly chosen type A and type S individuals (bridged type S nodes are highlighted as black circles). Furthermore, we perform positional swaps between two randomly chosen individuals, one non-reciprocative and the other of reciprocative type, to capture reallocations. Figure 3c shows the allocation after four swaps of type A with type S individuals, starting from the homophilic allocation (swapped type S nodes are highlighted as black circles).
For each long tie or swap, both individuals A and S are randomly chosen, with the additional condition that each individual is only allowed to swap once. In general, both mechanisms can be applied to the two non-reciprocative types A and B, leading to a smooth transition from homophilic to mixed allocations when increasing the number of alterations. Since we are particularly interested in changes which foster the promotion of a single behaviour in the population, we limit additional long ties and swapping to type A and type S individuals (target-oriented alterations), while leaving type B unaltered.
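Both target-oriented alterations are a few lines each in the same representation as the earlier sketches; the 'swap at most once' condition is enforced here by sampling disjoint pairs (an illustrative sketch, not the NetLogo implementation itself):

```python
import random

def add_long_ties(neighbours, types, x):
    """Add x permanent bridges, each between a randomly chosen type-A
    and a randomly chosen type-S individual."""
    a_nodes = [n for n, t in types.items() if t == "A"]
    s_nodes = [n for n, t in types.items() if t == "S"]
    for _ in range(x):
        a, s = random.choice(a_nodes), random.choice(s_nodes)
        neighbours[a].add(s)
        neighbours[s].add(a)

def relocate_by_swapping(types, x):
    """Relocate by swapping the positions (i.e., the types) of x disjoint
    A-S pairs; every individual is only allowed to swap once."""
    a_nodes = [n for n, t in types.items() if t == "A"]
    s_nodes = [n for n, t in types.items() if t == "S"]
    for a, s in zip(random.sample(a_nodes, x), random.sample(s_nodes, x)):
        types[a], types[s] = "S", "A"
```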
The model consists of N software agents of three population shares \(N_\text{A},N_\text{B},N_\text{S}\). Each agent has an internal behaviour state, which is reflected by the decision variable d. Non-reciprocative agents of type A have a fixed behaviour given by \(d = 0\), and type B's behaviour is given by \(d = 1\). Since we consider repeated decision making, type S individuals i decide on their behaviour \(d_i \in [0,1]\) in each round t. Their decision is based on a best-response mechanism: their decision variable d is the mean value of all link-neighbours' decision variables:
$$\begin{aligned} d_i = 1 / k \sum _{n = 1}^k d_n, \end{aligned}$$
with k being the node's degree and n referring to individual link neighbours of the node i.
The resulting mean behavioural state of the population share of type S \({\hat{d}}(t)= 1/N_\text{S} \sum _j d_j (t)\) gives an overall measurement of the tendency of susceptible individuals. The standard deviation of the behavioural state
$$\begin{aligned} h(t) = \sqrt{\sum _j (d_j - {\hat{d}}(t))^2 / (N_\text{S} - 1) } \end{aligned}$$
holds information on the heterogeneity of decisions in the population. After a certain time \(t=T\), an equilibrium state is reached leading to the overall behavioural state \({\bar{d}} ={\hat{d}}(T)\) and \({\bar{h}} = h(T)\).
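Putting the pieces together, a minimal synchronous Python variant of the decision dynamics and the two population measures might look as follows. The initial decision of type-S agents is an assumption made here for illustration, since only the update rule is specified above; whether updates are synchronous or asynchronous is likewise our choice in this sketch.

```python
import random
import statistics

def run_model(neighbours, types, T):
    """Iterate the best-response dynamics for T rounds and return the
    mean decision d-bar and heterogeneity h-bar over the type-S share.
    Type A is fixed at d = 0, type B at d = 1; type S starts from a
    random value in [0, 1] (an assumption) and adopts the mean decision
    of its link neighbours in every round."""
    d = {n: 0.0 if t == "A" else 1.0 if t == "B" else random.random()
         for n, t in types.items()}
    for _ in range(T):
        d = {n: (sum(d[m] for m in neighbours[n]) / len(neighbours[n])
                 if types[n] == "S" and neighbours[n] else d[n])
             for n in neighbours}
    s_values = [d[n] for n, t in types.items() if t == "S"]
    # statistics.stdev uses the (N_S - 1) denominator, matching h(t) above
    return statistics.mean(s_values), statistics.stdev(s_values)
```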
We investigate the diffusion process of the decision variable \(d_j\) in the population in all networks detailed above for both random and homophilic allocations. We then use rewiring, structural bridges (long ties) and relocation (swapping) to statistically investigate the effect of topological changes on the overall population state. The numerical details of our simulations are as follows: we use a population of \(N=400\) with \(N_\text{A} = N_\text{B} = 20\) and \(N_\text{S} = 360\). All results were obtained for time steps \(T=2000\). Lattice-based topologies are sized \(20\times 20\). The cave-people networks use 20 clusters consisting of 20 individuals each. The average node degree of the spatial-proximity topology is \({\hat{k}} = 6\). The small-world topology uses the optimal clustering exponent \(q = 2\) of the Kleinberg model [21]. Statistical analysis is based on \(s = 500\) simulation runs for each set of parameters. Additional investigations on larger populations have been performed for \(N=800,1600,2500\).
We use NetLogo 6.0.4 and its network extension package for simulations, and Python 3.6 to run and evaluate NetLogo-based data using the pyNetLogo library [23].
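For completeness, driving a NetLogo model from Python with pyNetLogo typically looks like the sketch below; the model file name and the reporter expression are placeholders, not the actual model used in this study.

```python
import pyNetLogo

netlogo = pyNetLogo.NetLogoLink(gui=False)
netlogo.load_model('homophily_model.nlogo')   # placeholder file name
netlogo.command('setup')
netlogo.repeat_command('go', 2000)            # run T = 2000 steps
# 'decision' is a placeholder agent variable name
d_bar = netlogo.report('mean [decision] of turtles')
netlogo.kill_workspace()
```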
Our results focus on the collective state of the population, given by the average decision \({\bar{d}}\) of type S individuals when an equilibrium state is reached. We also observe the heterogeneity \({\bar{h}}\) of the collective decision in the reciprocative population. In addition to statistically averaged observations, we explore single simulation examples to highlight some relevant cases and to provide deeper insights into differences of attribute distribution depending on the topology type. First, we show results on the dependence of diffusion on allocation for different networks ("Effects of attribute distribution on various network types"). This is followed by our investigation of intermediate attribute distributions, presenting the effects of rewiring ("Rewiring"), of additional long ties ("Additional long ties"), and of positional swapping ("Relocation").
Effects of attribute distribution on various network types
Performing a statistical analysis of \(s=500\) simulations for each of the six topologies, the collective decision \({\bar{d}}\) is distributed around \(D = 1/s \sum {\bar{d}} \simeq 0.5\) for both the randomly mixed and the homophilic allocations, as shown in Fig. 4a as violin plots. Since \(N_\text{A} = N_\text{B}\), this result is expected; however, the probability density depends strongly on the network type and the allocation. Figure 4a shows the different topologies on the x-axis, and the colour code refers to randomly mixed allocations [red (light gray)] and homophilic allocations [dark blue (dark gray)].
Random and homophilic allocation: a violin plots of the collective decision \({\bar{d}}\) and b boxplots of the heterogeneity \({\bar{h}}\), for random mixed allocation [left, red (light gray)] and homophilic allocation [right, blue (dark gray)]. Parameters: \(N=400, T=2000, s=500\)
Random allocation exhibits similar distribution ranges of \({\bar{d}}\) on all topologies, with the narrowest distribution for small-world topologies and the widest for scale-free networks. In contrast, homophilic allocations lead to a great variation in distribution ranges with a strong dependence on the network structure. Here, torus topologies show the smallest statistical variations due to the periodic boundary conditions. Collective decision on scale-free networks shows a broad distribution, covering the complete range between type A or type B decisions. For most network types, an increase of the probability range of \({\bar{d}}\) has been observed for the homophilic allocation compared to randomly mixed, being especially pronounced for the scale-free and cave-people topologies. The only exception, showing a decrease, is the torus network.
Figure 4b shows the heterogeneity \({\bar{h}}\) of \(s=500\) simulations as boxplots. For both allocation types, the network structure affects the heterogeneity of decisions significantly, with the mean heterogeneity \(H = 1/s \sum h\) being highest for scale-free networks (random: \(H \simeq 0.27\), and homophilic: \(H \simeq 0.33\)) and lowest for small-world topologies (random: \(H \simeq 0.11\), and homophilic: \(H \simeq 0.11\)). The effects of random and homophilic allocations are diverse: two topologies show a decrease in mean heterogeneity (torus, cave people), three network types show minor changes (grid, spatial proximity, and small world), and one network shows an increase in H (scale free) as well as a strong decrease in the corresponding statistical dispersion under homophilic allocation.
The results of Fig. 4 have been tested for larger populations \(N=800,1600,2500\) with a matching number of non-reciprocative agents \(N_\text{A} = N_\text{B} = 5\%\), prolonging the simulation time to \(T=5000, 10000, 15000\) accordingly. For all topology types, the statistical results on the collective behaviour \({\bar{d}}\) and \({\bar{h}}\) were reproducible.
Examples of pattern formation: (top) scale-free networks with a random, and b homophilic allocation; (bottom) cave-people networks with c random and d homophilic allocation. Colour code reflects the decision variable with small \(d_j\) in blue, medium in white and high in yellow. \(N = 200, T = 200\)
To highlight the different effects of allocation on scale-free and cave-people networks, Fig. 5 provides four example networks with \(N=200\). The colour code from blue over white to yellow represents the decision \(d_j \in [0,1]\) at \(T=200\), with small values of \(d_j\) in blue and high values in yellow. Comparing the random allocation (Fig. 5a, c) with the homophilic allocation (Fig. 5b, d) on both network types, several characteristics can be noted. Branches of a single scale-free network may hold different tendencies in \(d_j\), resulting in a rather heterogeneous behaviour of the overall population. While both allocations show this feature, homophilic allocation amplifies this phenomenon, since the formation of encapsulated branches, influenced by only one non-reciprocal type (A or B), becomes more likely. Figure 5c shows a random allocation on the cave-people topology, leading to a dispersion of type A and B agents through the majority of clusters. Figure 5d shows the homophilic allocation, where the influence of non-reciprocators is rather localized due to the accumulation in a single cluster. Here, the influence on clusters consisting solely of type S individuals is weak, such that most clusters show intermediate behaviour of small heterogeneity. Further examples of the grid, torus, spatial-proximity, and small-world topologies are provided in "Appendix" Figs. 11, 12, 13, and 14.
Rewiring in the homophilic allocation: violin plots are shown for all topologies with number of adjusted links (from left to right) \(R=0,200,400,800\). Parameters \(N = 400, T=2000,s=500\)
To test the results on the homophilic attribute distribution of Fig. 4 [right, blue (dark gray) violin plots], three levels of rewiring have been evaluated, as shown in Fig. 6. Dynamic rewiring of the topologies has been investigated for \(R=0,200,400,800\), with \(R=0\) coinciding with the results of Fig. 4. No significant changes in the statistical dispersion are observed. Moreover, we tracked the level of homophily over the time line of the evolution of the collective behaviour. We observed no significant changes over time in the collective level of homophily. The level of homophily of the reciprocative type S showed a very slight decrease, which appeared to be negligible. We conclude that the effects of homophilic allocation on the statistical properties of collective behaviour are robust towards a reasonable number of adjustments of social ties.
Additional long ties
To test the effect of long-tie interaction and the promotion of the behaviour of type A, alterations of the interaction options are introduced via additional links. First, the basic network topology and the homophilic allocation (as introduced in "Attribute distributions") are generated. Afterwards, additional long ties are added, as depicted in "Structural bridges and relocation", and permanently placed prior to the start of the simulation run. The number of alterations x, corresponding to the number of additional links, is varied for different simulation runs.
To analyse the impact of structural bridging, \(s=500\) simulations have been evaluated for every \(x = 0,2,4,\ldots ,78,80\). The results for the six different network types are shown in Fig. 7, where each plot shows one highlighted result (coloured, including the standard deviation as filled area) corresponding to the topology noted in the legend. The reinforcement of type A strategies (\(d=0\)) via A–S long ties is clearly visible for all topologies as a decrease of \({\bar{d}}\) when raising x. While for most topologies this shift follows a logarithmic trend, small-world networks show a near-linear correlation, and their overall shift is the smallest. In the case of scale-free networks, the standard deviation (filled area) is the largest compared to the other network types for all x and decreases the strongest the more links are added.
Regarding the heterogeneity under variation of x, we observe no significant effects on \({\bar{h}}\) for three topologies (torus, cave people, and small world) and linear decreases for the other three (grid, scale free, and spatial proximity) when using different numbers of additional ties (see "Appendix": Fig. 9).
Additional long ties: collective decision \({\bar{d}}\) of the homophilic allocation \(x=0\) and different numbers of additional long ties x; results shown for six different network types (mean: coloured line, std: filled area). Parameters: \(N=400,T=2000,s=500\)
Relocation: collective decision \({\bar{d}}\) of the homophilic allocation \(x=0\) and different numbers of relocation by swapping type A–S individuals; results shown for six different network types (mean: coloured line, std: filled area). Parameters: \(N=400,T=2000,s=500\)
An alternative way to analyse intermediate allocations is the relocation of single individuals. This method allows for stepwise alterations by varying the number of swaps, although fewer alterations are possible than with additional links: the maximal number of possible alterations is half of the type A population share. The procedure, as introduced in "Structural bridges and relocation", is applied after generating the network and the homophilic allocation, and before the start of the simulation run.
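The swap step might look as follows; again a sketch under the same illustrative conventions, not the authors' code:

```python
import random

def relocate(G, x, rng=random.Random(0)):
    """Swap the network positions of x type A and x type S individuals
    by exchanging their node attributes; applied once, before the run."""
    a_nodes = [n for n, t in G.nodes(data="type") if t == "A"]
    s_nodes = [n for n, t in G.nodes(data="type") if t == "S"]
    x = min(x, len(a_nodes), len(s_nodes))
    for a, s in zip(rng.sample(a_nodes, x), rng.sample(s_nodes, x)):
        G.nodes[a]["type"], G.nodes[s]["type"] = "S", "A"
```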
Using individual swaps to blend the ordering of the homophilic allocation, we vary the number of relocations \(x=0,1,\ldots ,9,10\) and generate \(s=500\) simulations for each network type. Results for the six different topologies are shown in Fig. 8, each plot highlighting one result (mean value as coloured line, standard deviation as filled area) as marked in the legend (top left). The tendency towards type A decisions is clearly enhanced for all topologies, being logarithmic in most cases, except for scale-free and small-world networks, which show a linear correlation. Similar to our observations for long ties, scale-free networks exhibit the largest standard deviation, and no significant effects on the heterogeneity \({\bar{h}}\) have been observed under variation of x (see "Appendix": Fig. 10).
Diffusion processes in populations are governed by various factors, shaping the spreading of behaviour, traits, or decisions. One such factor is given by the network structure, encompassing network types such as scale-free and small-world topologies. Regarding heterogeneous populations, another important aspect is the allocation of individuals, which strongly influences the direct and unique neighbourhood of each individual. Many approaches that explore ordered allocation on networks combine the network generation with the probability to connect similar individuals, such that the resulting network structure emerges as a function of the behaviour-type proportions.
In this study, we detach the network generation and allocation of individuals, using separate mechanisms: one to generate a specific network type, followed by the procedure to position heterogeneous attributes, referred to as attribute distribution or allocation. To identify effects of allocations, we compare collective behaviour and pattern formation on two contrasting scenarios: random and homophilic allocations. Generally speaking, random allocations lead to a normal distribution of traits in the individual neighbourhoods, and homophilic allocations lead to a higher separation of different types and higher bonding between similar individuals. In addition, the influence of allocations is examined on various network types (lattice, scale-free, cave-people, spatial-proximity, and small-world topologies).
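To make the two-step pipeline concrete, here is a minimal sketch in Python/networkx. The BFS-based homophilic variant and the population shares are our own illustration of one plausible realisation; the paper's exact mechanism may differ:

```python
import random
import networkx as nx

def random_allocation(G, shares, rng=random.Random(0)):
    """Random allocation: shuffle nodes, assign types by population shares."""
    nodes = list(G.nodes)
    rng.shuffle(nodes)
    for n in nodes:
        G.nodes[n]["type"] = "S"                  # default: reciprocators
    i = 0
    for t in ("A", "B"):                          # non-reciprocal types
        k = round(shares[t] * len(nodes))
        for n in nodes[i:i + k]:
            G.nodes[n]["type"] = t
        i += k

def homophilic_allocation(G, shares, rng=random.Random(0)):
    """Homophilic allocation: grow each non-reciprocal group around a
    random seed via breadth-first search, so similar types end up adjacent."""
    for n in G.nodes:
        G.nodes[n]["type"] = "S"
    free = set(G.nodes)
    for t in ("A", "B"):
        k = round(shares[t] * len(G))
        seed = rng.choice(sorted(free))
        for n in [m for m in nx.bfs_tree(G, seed) if m in free][:k]:
            G.nodes[n]["type"] = t
            free.discard(n)

G = nx.barabasi_albert_graph(400, 2)              # step 1: generate topology
homophilic_allocation(G, {"A": 0.1, "B": 0.1})    # step 2: allocate attributes
```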
We observe a strong effect of allocation on social contagion. Moreover, the actual impact depends on the network type. A general comparison of random and homophilic allocations shows a tendency towards an increase in the number of possible collective behaviour states of the population for the homophilic allocation on all considered network types, except the torus network. For the latter, the lattice structure with periodic boundary conditions favours the spreading towards uniform behaviour. Since nearest-neighbour interactions are local, homophilic allocations are more likely to foster extreme behaviour, often resulting in two population shares exhibiting opposite behaviour. The most prominent example of this observation is scale-free networks. Homophilic allocation on scale-free networks, which consist of a few highly connected hubs and several branches of lower connected nodes, leads to a strong increase in the distribution range of the collective behaviour as well as in the heterogeneity of behaviour within a population.
Extending our investigation to scenarios that exhibit mixed traits of random and homophilic allocations, stepwise alteration of the population structure is introduced. Here, we compare two options: additional links (long ties) between heterogeneous individuals and mixing of heterogeneous individuals (relocation), both for an initially homophilic population. For both alterations, we find similar responses of the collective behavioural state on all network types: similar progressions of the collective state towards the behaviour that is stepwise spread further throughout the network are clearly visible. Moreover, due to the negligible impact of these alterations on the statistical distribution of possible collective states, we conclude that restructuring via additional ties and mixing via relocation have a weaker influence on pattern formation than the overall allocation.
The presented approach outlines network diffusion in a simple population, categorised into three distinct behaviour types. While this construction suffices for a first assessment, widening the discussion to more realistic behavioural observations is possible, but exceeds the scope of this primary investigation. Moreover, the allocation mechanism for homophilic allocations can be adapted by incorporating a chance to slightly increase mixing in each group, allowing for softer distributions. In addition, setting the attribute distribution in relation to real data [42] could support possible enhancements of the allocation mechanism. We assume that differences in average degrees and betweenness centrality are crucial for the effects observed on different network types, and these specific influences can be further explored.
In summary, we separated network generation and attribute distribution to highlight the isolated effects of structural proximity and attribute proximity. We presented a homophilic attribute distribution mechanism and compared the results of random and homophilic attribute distributions using a basic diffusion mechanism on the network. Our main finding is that the effect of attribute distribution is diverse and depends strongly on the network type (structural proximity). A general observation on attribute distribution was that random allocations tend to limit the possible collective states in the majority of the observed network types. These results indicate that the random distribution of attributes used in networked ABM might be limiting and less accurate for the statistical analysis of collective behaviour than expected. We conclude that homophilic distribution is a substantial feature for improving agent-based modelling and can be easily implemented on various network topologies with the homophilic attribute distribution mechanism presented here.
The data sets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Barabási AL, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509–12.
Borgatti SP, Everett MG. A graph-theoretic perspective on centrality. Soc Netw. 2006;28(4):466–84.
Burlando RM, Guala F. Heterogeneous agents in public goods experiments. Exp Econ. 2005;8(1):35–54.
Cavallari S, Zheng VW, Cai H, Chang KCC, Cambria E. Learning community embedding with community detection and node embedding on graphs. In: Proceedings of the 2017 ACM on conference on information and knowledge management. New York: ACM; 2017. p. 377–86.
Centola D, Macy M. Complex contagions and the weakness of long ties. Am J Sociol. 2007;113(3):702–34.
Chiang YS, Takahashi N. Network homophily and the evolution of the pay-it-forward reciprocity. PLoS ONE. 2011;6(12):e29188.
Choi H, Kim SH, Lee J. Role of network structure and network effects in diffusion of innovations. Ind Market Manag. 2010;39(1):170–7.
Chowdhury NMK, Rahman MR, Boutaba R. Virtual network embedding with coordinated node and link mapping. In: IEEE INFOCOM; 2009. p. 783–91.
Cowan R, Jonard N. Network structure and the diffusion of knowledge. J Econ Dynam Control. 2004;28(8):1557–75.
Delre SA, Jager W, Bijmolt TH, Janssen MA. Will it spread or not? the effects of social influences and network topology on innovation diffusion. J Prod Innov Manag. 2010;27(2):267–82.
Golub B, Jackson MO. How homophily affects learning and diffusion in networks. In: Technical reports. 2009.
Grabowska-Zhang AM, Hinde CA, Garroway CJ, Sheldon BC. Wherever I may roam: social viscosity and kin affiliation in a wild population despite natal dispersal. Behav Ecol. 2016;27(4):1263–8.
Granovetter M. The strength of weak ties: a network theory revisited. Sociol Theory. 1983;1:201–33.
Gross T, D'Lima CJD, Blasius B. Epidemic dynamics on an adaptive network. Phys Rev Lett. 2006;96(20):208701.
Handcock MS, Raftery AE, Tantrum JM. Model-based clustering for social networks. J R Stat Soc. 2007;170(2):301–54.
Holzhauer S, Krebs F, Ernst A. Considering baseline homophily when generating spatial social networks for agent-based modelling. Comput Math Org Theory. 2013;19(2):128–50.
Jackson MO, López-Pintado D. Diffusion and contagion in networks with heterogeneous agents and homophily. Netw Sci. 2013;1(1):49–67.
Karsai M, Kivelä M, Pan RK, Kaski K, Kertész J, Barabási AL, Saramäki J. Small but slow world: how network topology and burstiness slow down spreading. Phys Rev E. 2011;83(2):025102.
Kim M, Leskovec J. Multiplicative attribute graph model of real-world networks. Internet Math. 2012;8(1–2):113–60.
Kleinberg J. The small-world phenomenon: an algorithmic perspective. Technical reports. Ithaca: Cornell University; 1999.
Kleinberg JM. Navigation in a small world. Nature. 2000;406(6798):845.
Krause D. Environmental consciousness: an empirical study. Environ Behav. 1993;25(1):126–42.
Kwakkel JH, Jaxa-Rozen M. pynetlogo documentation. 2017. https://pynetlogo.readthedocs.io/en/latest/. Accessed 4 Apr 2019.
Largeron C, Mougel PN, Rabbany R, Zaïane OR. Generating attributed networks with communities. PLoS ONE. 2015;10(4):e0122777.
Liao L, He X, Zhang H, Chua TS. Attributed social network embedding. IEEE Trans Knowl Data Eng. 2018;30(12):2257–70.
Liu W, Sidhu A, Beacom AM, Valente TW. Social network theory. In: The international encyclopedia of media effects. 2017. p. 1–12.
Luhmann CC, Rajaram S. Memory transmission in small groups and large networks: an agent-based model. Psychol Sci. 2015;26(12):1909–17.
Mazzoli M, Re T, Bertilone R, Maggiora M, Pellegrino J. Agent based rumor spreading in a scale-free network. arXiv preprint arXiv:1805.05999. 2018.
McElreath R, Boyd R, Richerson P. Shared norms and the evolution of ethnic markers. Curr Anthropol. 2003;44(1):122–30.
Nardin LG, Andrighetto G, Conte R, Székely Á, Anzola D, Elsenbroich C, Lotzmann U, Neumann M, Punzo V, Troitzsch KG. Simulating protection rackets: a case study of the Sicilian Mafia. Autonom Agents Multi-Agent Syst. 2016;30(6):1117–47.
Neuman EJ, Mizruchi MS. Structure and bias in the network autocorrelation model. Soc Netw. 2010;32(4):290–300.
Newman M. Networks. Oxford: Oxford University Press; 2018.
Nishikawa T, Motter AE, Lai YC, Hoppensteadt FC. Heterogeneity in oscillator networks: are smaller worlds easier to synchronize? Phys Rev Lett. 2003;91(1):014101.
Norris P. The bridging and bonding role of online communities. 2002.
Putnam RD. Bowling alone: America's declining social capital. Culture and politics. New York: Springer; 2000. p. 223–34.
Rahmandad H, Sterman J. Heterogeneity and network structure in the dynamics of diffusion: comparing agent-based and differential equation models. Manag Sci. 2008;54(5):998–1014.
Rand DG, Arbesman S, Christakis NA. Dynamic social networks promote cooperation in experiments with humans. Proc Natl Acad Sci. 2011;108(48):19193–8.
Santos FC, Pacheco JM, Lenaerts T. Cooperation prevails when individuals adjust their social ties. PLoS Comput Biol. 2006;2(10):e140.
Song H, Boomgaarden HG. Dynamic spirals put to test: an agent-based model of reinforcing spirals between selective exposure, interpersonal networks, and attitude polarization. J Commun. 2017;67(2):256–81.
Stangor C, Jonas K, Stroebe W, Hewstone M. Influence of student exchange on national stereotypes, attitudes and perceived group variability. Eur J Soc Psychol. 1996;26(4):663–75.
Stonedahl F, Wilensky U. Netlogo virus on a network model. 2008. http://ccl.northwestern.edu/netlogo/models/VirusonaNetwork. Accessed 4 Apr 2019.
Thiriot S, Kant JD. Generate country-scale networks of interaction from scattered statistics. In: The fifth conference of the European social simulation association, Brescia, Italy. 2008. p. 240.
Ugander J, Backstrom L, Marlow C, Kleinberg J. Structural diversity in social contagion. Proc Natl Acad Sci. 2012;109(16):5962–6.
Valente TW. Network models of the diffusion of innovations. Comput Math Org Theory. 1996;2(2):163–4.
Valente TW, Fujimoto K. Bridging: locating critical connectors in a network. Soc Netw. 2010;32(3):212–20.
Van Rooy D. A connectionist abm of social categorization processes. Adv Complex Syst. 2012;15(06):1250077.
Wang X, Cui P, Wang J, Pei J, Zhu W, Yang S. Community preserving network embedding. In: Thirty-first AAAI conference on artificial intelligence. 2017.
MLK acknowledges financial support of the University of Graz, Austria, and the Steiermärkischen Sparkassen.
MLK receives a student Grant of the University of Graz, Austria, and the Steiermärkischen Sparkassen.
University of Graz, Merangasse 18/I A, 8010, Graz, Austria
Marie Lisa Kapeller, Georg Jäger & Manfred Füllsack
MLK performed all computational simulations related tasks and is the major contributor in writing the manuscript. MLK and GJ conceptualized the model and interpreted the results. GJ contributed to the introduction, and GJ and MF revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Marie Lisa Kapeller.
See Figs. 9, 10, 11, 12, 13 and 14.
Additional long ties: heterogeneity \({\bar{h}}\) of the homophilic allocation \(x=0\) and different numbers of additional long ties x; results shown for six different network types (mean: coloured line, std: filled area). Parameters: \(N=400,T=200,s=500\)
Relocation: heterogeneity \({\bar{h}}\) of the homophilic allocation \(x=0\) and different numbers of relocation by swapping type A–S individuals; results shown for six different network types (mean: coloured line, std: filled area). Parameters: \(N=400,T=200,s=500\)
Examples of pattern formation on grid networks: a random allocation, b homophilic allocation. Parameters \(N=14\times 14, T = 200\)
Examples of pattern formation on torus networks: a random allocation, b homophilic allocation. Parameters \(N=10\times 20, T = 200\)
Examples of pattern formation on spatial-proximity networks: a random allocation, b homophilic allocation. Parameters \(N=200, T = 200\)
Examples of pattern formation on small-world networks: a random allocation, b homophilic allocation. Parameters \(N=10\times 20, T = 200\)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Kapeller, M.L., Jäger, G. & Füllsack, M. Homophily in networked agent-based models: a method to generate homophilic attribute distributions to improve upon random distribution approaches. Comput Soc Netw 6, 9 (2019) doi:10.1186/s40649-019-0070-5
Received: 08 February 2019
Keywords: Network diffusion · Scale-free networks
High School Geometry
Domain Geometric Measure and Dimension
Cluster Explain volume formulas and use them to solve problems
Standard Use volume formulas for cylinders, pyramids, cones, and spheres to solve problems.
Task Volume Estimation
Volume Estimation
Alignments to Content Standards: G-GMD G-GMD.A.3
Charles and Olivia are trying to estimate the volume of water that could be held by the figure shown below, which is 10 feet high and has a circular top of radius 20 feet. Charles proposes they approximate the volume by using a cylinder of radius 20 feet and height 10 feet. Olivia proposes that they instead use a circular cone connecting the top of the tank to the vertex at the bottom.
What answers would the two methods predict? Which is likely to be most accurate? What is your best estimate for the volume of the tank?
This task has the dual purpose of having students apply geometric volume formulas and of having them reason about modeling with geometric figures. Students are presented with a surface, though not by that name, and asked to consider the process of modeling its volume by that of a cylinder or a cone. The task also provides ample opportunity for open-ended reasoning, as the lack of more explicit information about the surface makes a standard volume formula impossible. (Indeed, such formulas do not typically appear until late in the calculus sequence.)
As such, the task gives students the opportunity to engage with a large range of practice standards. For example, students may be unaccustomed to making volume arguments without explicit formulas, in which case making sense of problems and persevering in solving them (MP 1), modeling (MP 4), and using available tools/formulas (MP 5) all come into play. Indeed, students might reason intuitively about the relationship between the cone, the cylinder, and the surface, or might develop their own more rigorous techniques. For example, a student might compare the areas in a given cross-section, reducing the problem to a comparison of the area under a line and under a quadratic-like curve. A diagram of the comparison is below:
The shaded region in the left diagram shows the difference in area between the cylinder and paraboloid in such a cross-section, and the shaded region in the right diagram shows the analogous difference between the paraboloid and the cone.
Incidentally, we note that the actual surface represented by the picture is a so-called paraboloid, for which calculus provides volume formulas via integration. In fact, the numbers in the problem were chosen so that after intuiting that the cone gave the more accurate approximation, a reasonable rough guess of $2000\pi$ is exactly the correct answer. For the curious calculophiles, the exact answer can be computed as a surface of revolution, or as a double or triple integral, e.g.: $$ V=\int_0^{2\pi}\int_0^{10} \int_0^{\sqrt{40z}} r\, dr\,dz\,d\theta=2000\pi. $$
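Readers with access to a computer algebra system can verify this quickly; a small sympy check (the inner bound $\sqrt{40z}$ comes from the paraboloid $z=r^2/40$):

```python
import sympy as sp

r, z, theta = sp.symbols('r z theta', positive=True)
# volume of the paraboloid tank: integrate r over 0 <= r <= sqrt(40 z),
# 0 <= z <= 10, 0 <= theta <= 2*pi
V = sp.integrate(r, (r, 0, sp.sqrt(40 * z)), (z, 0, 10), (theta, 0, 2 * sp.pi))
print(V)  # 2000*pi
```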
As valid reasonings are varied and far-ranging, this solution provides only the volume calculations of the cone and cylinder, and some brief discussion of the comparison.
Both the volumes of the cone and the cylinder can be computed solely from the known radius of $r=20$ feet and the height of $h=10$ feet. The volume of the cylinder is
$$ V(\text{cylinder}) = \pi r^2h=4000\pi\,\text{ft}^3, $$
and the volume of the cone is $\frac{1}{3}$ of that:
$$V(\text{cone}) = \frac{1}{3}\,\pi r^2h=\frac{4000}{3}\pi\,\text{ft}^3.$$
It is intuitively clear that the volume of the cone is an under-estimate, since the cone would fit completely inside the given tank, and that the volume of the cylinder is an over-estimate, since the tank would fit completely inside the cylinder. We conclude that
$$ \frac{4000}{3}\pi\,\text{ft}^3 < V(\text{tank}) < 4000\pi\,\text{ft}^3. $$
(Note that $\frac{4000}{3}=1333.\overline{3}$.) A reasonable estimate for the volume of the tank might thus be the average of the two: $$ V(\text{tank})\approx 2666\pi\,\text{ft}^3. $$
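A quick numerical check of these bounds and the averaged estimate, in plain Python:

```python
from math import pi

r, h = 20, 10
v_cyl = pi * r**2 * h        # 4000*pi, about 12566 ft^3 (over-estimate)
v_cone = v_cyl / 3           # about 4189 ft^3 (under-estimate)
print((v_cyl + v_cone) / 2)  # about 8378 ft^3, i.e. roughly 2666*pi ft^3
```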
It is true and possibly intuitive, but difficult to show, that in fact the cone does a significantly better job at approximating the tank than does the cylinder. As a consequence, a better approximation might be to choose an estimate closer to the volume of the cone than the cylinder. An estimate closer to $$ V(\text{tank})\approx 2000\pi \text{ft}^3 $$
might therefore be a little more reasonable. As it happens, this estimate turns out to be exactly correct, assuming the walls of the tank are quadratic curves. | CommonCrawl |
Atoms or molecules with spin 1 in the ground state?
Is there any atom or molecule that has spin 1 in its ground state?
Do Hund's rules keep this from happening for an atom?
The reason I'm curious is that it would be nice to have a spin-1 example for use in pedagogical discussions of the Stern-Gerlach experiment.
[EDIT] Clarification: when I say "spin," I mean the total angular momentum, not just the sum of the spin-1/2's. (The total angular momentum is what you are seeing in the Stern-Gerlach experiment.) I deleted the part of the question about ions, because, as pointed out by Orthocresol, they won't be usable in a normal Stern-Gerlach spectrometer.
electronic-configuration spin
Atomic carbon with its $\mathrm{1s^2 2s^2 2p^2}$ configuration has a triplet ground state ($S = 1$), precisely because of Hund's first rule.
However, in the context of the Stern–Gerlach experiment, you might run into a problem with orbital angular momentum, as carbon's ground state also has nonzero orbital angular momentum ($^3\mathrm{P}$ ground state, $L = 1$). The behaviour in a magnetic field will be rather more complex and you probably need to take into account spin-orbit coupling.
At the moment I can't think of any atoms with a ground state term symbol of $\mathrm{^3S}$. I actually suspect that it's impossible, but I'm not really up to proving it right now.
Triplet dioxygen (as Zhe mentioned) has no orbital angular momentum ($^3\Sigma_\mathrm{g}^-$ ground state, $\Lambda = 0$), but I'm not sure if the inhomogeneity of the electron density would have any impact. (as in, I'm genuinely not sure.)
The $\ce{^2H}$ nucleus (which we chemists usually refer to as $\ce{D+}$) is the lightest stable nucleus that has a spin of 1.
The only issue is that you then have a moving charge in a magnetic field. If you could ignore it, or take it out of the equation, somehow...
orthocresol♦
$\begingroup$ "The behaviour in a magnetic field will be rather more complex and you probably need to take into account spin-orbit coupling." I don't think the behavior of a neutral particle can be any more complicated than what it normally is in a Stern-Gerlach spectrometer. The Hamiltonian is $z\mu_z=zgJ_z$, and spin versus orbital angular momentum would just have an effect on $g$. "I'm not sure if the inhomogeneity of the electron density would have any impact." For similar reasons, I don't think this matters. $\endgroup$ – Ben Crowell Jul 11 '17 at 17:21
$\begingroup$ In an S state (L=0) you only have room for 2 electrons, one with spin up, one down. $\endgroup$ – Magicsowon Jul 11 '17 at 17:38
$\begingroup$ @BenCrowell It is slightly complicated (imo, at least!). The coupling of the angular momenta $S$ and $L$ gives rise to the overall angular momentum $J$, which takes values $0, 1, 2$ (Clebsch-Gordan series). Of the three resulting terms, Hund's third rule dictates that $\mathrm{^3P_0}$ is the ground state (which unfortunately isn't magnetic). $\mathrm{^3P_1}$ (the one we're probably interested in) and $\mathrm{^3P_2}$ are then $16$ and $43~\mathrm{cm^{-1}}$ above the ground state. $\endgroup$ – orthocresol♦ Jul 11 '17 at 17:41
$\begingroup$ @orthocresol: You're talking about carbon, right? All I'm saying is that a Stern-Gerlach experiment only measures the component of the total magnetic moment along the field axis. There is no other, independent parameter that is available to measure. The angular momentum isn't independent of the magnetic moment. Since the first two excited states are so close in energy to the ground state, they would presumably be equally populated in an atomic beam. Therefore you would probably see a superposition of results corresponding to the three energy states, each with its own quantized magnetic moment. $\endgroup$ – Ben Crowell Jul 11 '17 at 17:48
$\begingroup$ @BenCrowell Yes, I agree entirely. By "complicated", what I meant was that if you performed a S–G experiment on a beam of carbon atoms, the result wouldn't be a clean separation into three different beams. If you consider carbon to be a good enough example, though, then I guess we're done here? I was hoping to find an example that doesn't have a mixture of multiple states, but I don't think it's forthcoming. $\endgroup$ – orthocresol♦ Jul 11 '17 at 17:53
Triplet oxygen has two unpaired electrons with the same spin, and a total spin value of 1.
In fact, by Hund's rule, the triplet states are preferred over the singlet states which have two electrons with opposite spins.
https://en.wikipedia.org/wiki/Triplet_oxygen
https://en.wikipedia.org/wiki/Singlet_oxygen
Zhe
Rubidium-87 is one candidate: if you take into account hyperfine splitting, the ground state (which is part of the hyperfine manifold of $5S_{1/2}$) has total angular momentum $\vec{F}=\vec{J}+\vec{I}$ with $F=1$.
Eugene
How does approximating gates via universal gates scale with the length of the computation?
I understand that there is a constructive proof that arbitrary gates can be approximated by a finite universal gate set, which is the Solovay–Kitaev Theorem.
However, the approximation introduces an error, which would spread and accumulate in a long computation. This would presumably scale badly with the length of the calculation? Possibly one might apply the approximation algorithm to the complete circuit as a whole, not to a single gate. But how does this scale with the length of the computation (i.e. how does the approximation scale with the dimension of the gates)? How does the gate approximation relate to gate synthesis? Because I could imagine that this affects the final length of the computation?
Even more disturbing to me: What happens if the length of the calculation is not known at the time when the gate sequence is compiled?
gate-synthesis universal-gates fault-tolerance noise solovay-kitaev-algorithm
Throughout this answer, the norm of a matrix $A$, $\left\lVert A\right\rVert$, will be taken to be the spectral norm of $A$ (that is, the largest singular value of $A$). The Solovay–Kitaev theorem states that approximating a gate to within an error $\epsilon$ requires $$\mathcal O\left(\log^c\frac 1\epsilon\right)$$ gates, for $c<4$ in any fixed number of dimensions.
For the first part:
the approximation introduces an error, which would spread and accumulate in a long computation
Well, it can be shown by induction that errors accumulating through using one matrix to approximate another are subadditive (see e.g. Andrew Childs' lecture notes). That is, for unitary matrices $U_i$ and $V_i$, $\left\lVert U_i - V_i\right\rVert < \epsilon\,\forall\, i \in \left\lbrace1, 2, \ldots, t\right \rbrace\implies \left\lVert U_t\ldots U_2U_1 - V_t\ldots V_2V_1\right\rVert \leq t\epsilon$.
What this means in terms of implementation is that, for an overall error no more than $\epsilon$ to be achieved, each gate needs to be approximated to within $\epsilon/t$, or
applying the approximation to the circuit as a whole
is the same as applying the approximation to each individual gate, each with an individual error no more than that of the entire circuit divided by the number of gates that you're approximating.
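A quick numerical sanity check of this subadditivity (numpy/scipy; the perturbation scheme is purely illustrative):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_unitary(d):
    # QR of a complex Gaussian matrix gives a (Haar-)random unitary
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, t, delta = 2, 50, 1e-3
U = [random_unitary(d) for _ in range(t)]
V = []
for Ui in U:  # V_i: a slightly perturbed "approximation" of U_i
    H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    V.append(expm(1j * delta * (H + H.conj().T) / 2) @ Ui)

per_gate = sum(np.linalg.norm(Ui - Vi, 2) for Ui, Vi in zip(U, V))
prodU = prodV = np.eye(d, dtype=complex)
for Ui, Vi in zip(U, V):
    prodU, prodV = Ui @ prodU, Vi @ prodV

# the spectral-norm error of the whole circuit never exceeds the sum
# of the per-gate errors
print(np.linalg.norm(prodU - prodV, 2), "<=", per_gate)
```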
In terms of gate synthesis, the algorithm is performed by taking products of the gate set $\Gamma$ to form a new gate set $\Gamma_0$, which forms an $\epsilon^2$-net for $\operatorname{SU}\left(d\right)$ (for any $A \in \operatorname{SU}\left(d\right),\, \exists U\in\Gamma_0\, s.t. \left\lVert A-U\right\rVert\leq\epsilon^2$). Starting from the identity, a new unitary is recursively found from the new gate set in order to get a tighter net around the target unitary. Oddly enough, the time for a classical algorithm to perform this operation is also $\mathcal O\left(\mathit{poly} \log 1/\epsilon\right)$, which is sub-polynomial time. However, as per Harrow, Recht, Chuang, in $d$ dimensions, as a ball of radius $\epsilon$ in $\operatorname{SU}\left(d\right)$ has a volume $\propto \epsilon^{d^2-1}$, this scales exponentially in $d^2$ for a non-fixed number of dimensions.
This does have an effect on the final computation time. However, as the scaling in both the number of gates and the classical computational complexity is sub-polynomial, this doesn't change the complexity class of any algorithm, at least for the commonly considered classes.
For $t$ gates, the overall (time and gate) complexity is then $$\mathcal O\left(t\, \mathit{poly} \log \frac t\epsilon\right)$$.
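For orientation, the recursive structure of the synthesis step (following Dawson and Nielsen's presentation of the Solovay–Kitaev algorithm) looks roughly like the sketch below; `basic_approximation` (nearest element of the $\epsilon^2$-net $\Gamma_0$) and `gc_decompose` (balanced group-commutator decomposition) are placeholder helpers, not real library calls:

```python
def solovay_kitaev(U, n):
    """Approximate the unitary U (a numpy array) by a product of gates
    from the generating set; accuracy improves with recursion depth n."""
    if n == 0:
        return basic_approximation(U)          # lookup in the net Gamma_0
    U_n1 = solovay_kitaev(U, n - 1)
    # write the residual U @ U_n1^dagger as a group commutator V W V† W†
    V, W = gc_decompose(U @ U_n1.conj().T)
    V_n1 = solovay_kitaev(V, n - 1)
    W_n1 = solovay_kitaev(W, n - 1)
    return V_n1 @ W_n1 @ V_n1.conj().T @ W_n1.conj().T @ U_n1
```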
When using the unitary circuit model without intermediary measurements, the number of gates to be implemented will always be known prior to the computation. However, it is feasible to assume this isn't the case when intermediary measurements are used, so when the number of gates that you want to approximate is unknown, this is saying that $t$ is unknown. And if you don't know what $t$ is, you obviously can't approximate each gate to an error $\epsilon/t$. If you know a bound on the number of gates (say, $t_{\text{max}}$), then you could approximate each gate to within $\epsilon/t_{\text{max}}$ to get an overall error $\leq\epsilon$ and complexity $$\mathcal O\left(t\, \mathit{poly} \log \frac {t_{\text{max}}}{\epsilon}\right),$$ although if no upper bound on the number of gates is known, then each gate would be approximated to some (smaller) $\epsilon'$, giving an overall error $\leq t'\epsilon'$ for the resulting (initially unknown) number of implemented gates $t'$, with an overall complexity of $$\mathcal O\left(t'\, \mathit{poly} \log \frac {1}{\epsilon'}\right).$$
Of course, the total error of this is still unbounded, so one simple1 way of keeping the error bounded would be to reduce the error each time by a factor of, say, $2$, so that the $n^{th}$ gate would be implemented with error $\epsilon/2^n$. The complexity would then be $$\mathcal O\left(\mathit{poly} \log \frac {2^n}{\epsilon'}\right) = \mathcal O\left(\mathit{poly}\, n\log \frac {1}{\epsilon'}\right),$$ giving an overall (now polynomial) complexity $$\mathcal O\left(\mathit{poly}\, t \log \frac {1}{\epsilon}\right),$$ although this does have the advantage of guaranteeing a bounded error.
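As a quick check that halving the error per gate keeps the total bounded:

```python
eps = 0.01
# the n-th gate is approximated to eps / 2**n, so the total error is a
# geometric series bounded by eps, no matter how many gates are applied
print(sum(eps / 2**n for n in range(1, 51)))   # 0.00999... < eps
```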
This isn't too bad, so I would hope that (when the number of gates is unknown) classical computers would be able to keep coming up with the correct gates at least as fast as a quantum processor would need them. If not currently, then hopefully once quantum processors become good enough that this actually becomes a problem!
1 Although, likely not the most efficient
Mithrandir24601♦
May 2018, 12(2): 263-286. doi: 10.3934/amc.2018017
Indiscreet logarithms in finite fields of small characteristic
Robert Granger 1, , Thorsten Kleinjung 1, and Jens Zumbrägel 2,
Laboratory for Cryptologic Algorithms, École polytechnique fédérale de Lausanne, Station 14, 1015 Lausanne, Switzerland
Faculty of Computer Science and Mathematics, University of Passau, Innstraße 33, 94032 Passau, Germany
Received April 2016 Revised December 2017 Published March 2018
Fund Project: The first author is supported by the Swiss National Science Foundation via grant number 200021-156420
Recently, several striking advances have taken place regarding the discrete logarithm problem (DLP) in finite fields of small characteristic, despite progress having remained essentially static for nearly thirty years, with the best known algorithms being of subexponential complexity. In this expository article we describe the key insights and constructions which culminated in two independent quasi-polynomial algorithms. To put these developments into both a historical and a mathematical context, as well as to provide a comparison with the cases of so-called large and medium characteristic fields, we give an overview of the state-of-the-art algorithms for computing discrete logarithms in all finite fields. Our presentation aims to guide the reader through the algorithms and their complexity analyses ab initio.
Keywords: Discrete logarithm problem, finite fields, number field sieve, function field sieve, quasi-polynomial algorithms.
Mathematics Subject Classification: Primary: 11Y16, 11T71.
Citation: Robert Granger, Thorsten Kleinjung, Jens Zumbrägel. Indiscreet logarithms in finite fields of small characteristic. Advances in Mathematics of Communications, 2018, 12 (2) : 263-286. doi: 10.3934/amc.2018017
G. Adj, A. Menezes, T. Oliveira and F. Rodríguez-Henríquez, Weakness of $\mathbb{F}_{3^{6 \cdot 509}}$ for discrete logarithm cryptography, in: Pairing-Based Cryptography—Pairing 2013, Springer, LNCS 8365 (2014), 20–44. Google Scholar
G. Adj, A. Menezes, T. Oliveira and F. Rodríguez-Henríquez, Computing discrete logarithms in $\mathbb{F}_{3^{6 \cdot 137}}$ and $\mathbb{F}_{3^{6 \cdot 163}}$ using Magma, in: Arithmetic of Finite Fields, Springer, LNCS 9061 (2015), 3–22. Google Scholar
G. Adj, I. Canales-Martínez, N. Cruz-Cortés, A. Menezes, T. Oliveira, L. Rivera-Zamarripa and F. Rodríguez-Henríquez, Computing discrete logarithms in cryptographically interesting characteristic-three finite fields, IACR Cryptology ePrint Archive, (2016), 19 pages, eprint.iacr.org/2016/914. Google Scholar
L. M. Adleman, A subexponential algorithm for the discrete logarithm problem with applications to cryptography, in: 20th Annual Symposium on Foundations of Computer Science, (1979), 55–60. Google Scholar
L. M. Adleman, The function field sieve, in: Algorithmic Number Theory, Springer, LNCS 877 (1994), 108–121. Google Scholar
L. M. Adleman and M.-D. A. Huang, Function field sieve method for discrete logarithms over finite fields, Inform. and Comput., 151 (1999), 5-16. doi: 10.1006/inco.1998.2761. Google Scholar
R. Barbulescu, C. Bouvier, J. Detrey, P. Gaudry, H. Jeljeli, E. Thomé, M. Videau and P. Zimmermann (the CARAMEL group), Discrete logarithm in ${\rm GF}(2^{809})$ with FFS, in: Public-Key Cryptography—PKC 2014, Springer, LNCS 8383 (2014), 221–238. Google Scholar
R. Barbulescu, P. Gaudry, A. Guillevic and F. Morain, Improving NFS for the discrete logarithm problem in non-prime finite fields, in: Advances in Cryptology—EUROCRYPT 2015, Springer, LNCS 9056 (2015), 129–155. Google Scholar
R. Barbulescu, P. Gaudry, A. Joux and E. Thomé, A heuristic quasi-polynomial algorithm for discrete logarithm in finite fields of small characteristic, in: Advances in Cryptology—EUROCRYPT 2014, Springer, LNCS 8441 (2014), 1–16. Google Scholar
R. Barbulescu, P. Gaudry and T. Kleinjung, The Tower Number Field Sieve, in: Advances in Cryptology—ASIACRYPT 2015, Springer, LNCS 9453 (2015), 31–55. Google Scholar
R. Barbulescu and T. Kim, Extended tower number field sieve: A new complexity for the medium prime case, in: Advances in Cryptology—CRYPTO 2016, Springer, LNCS 9814 (2016), 543–571. Google Scholar
A. W. Bluher, On $x^{q+1} + a x + b$, Finite Fields Appl., 10 (2004), 285-305. doi: 10.1016/j.ffa.2003.08.004. Google Scholar
D. Boneh and M. Franklin, Identity-based encryption from the Weil pairing, in: Advances in Cryptology—CRYPTO 2001, Springer, LNCS 2139 (2001), 213–229. Google Scholar
E. R. Canfield, P. Erdős and C. Pomerance, On a problem of Oppenheim concerning 'factorisatio numerorum', J. Number Theory, 17 (1983), 1-28. doi: 10.1016/0022-314X(83)90002-1. Google Scholar
A. Commeine and I. Semaev, An algorithm to solve the discrete logarithm problem with the number field sieve, in: Public Key Cryptography—PKC 2006, Springer, LNCS 3958 (2006), 174–190. Google Scholar
D. Coppersmith, Fast evaluation of logarithms in fields of characteristic two, IEEE Trans. Inform. Theory, 30 (1984), 587-594. doi: 10.1109/TIT.1984.1056941. Google Scholar
C. Diem, On the discrete logarithm problem in elliptic curves, Compositio Math., 147 (2011), 75-104. doi: 10.1112/S0010437X10005075. Google Scholar
W. Diffie and M. E. Hellman, New directions in cryptography, IEEE Trans. Inform. Theory, 22 (1976), 644-654. doi: 10.1109/TIT.1976.1055638. Google Scholar
T. ElGamal, A public key cryptosystem and a signature scheme based on discrete logarithms, in: Advances in Cryptology—CRYPTO '84, Springer, LNCS 196 (1985), 10–18. Google Scholar
A. Enge and P. Gaudry, A general framework for subexponential discrete logarithm algorithms, Acta Arithmetica, 102 (2002), 83-103. doi: 10.4064/aa102-1-6. Google Scholar
S. D. Galbraith, Supersingular curves in cryptography, in: Advances in Cryptology—ASIACRYPT 2001, Springer, LNCS 2248 (2001), 495–513. Google Scholar
C. F. Gauß, Disquisitiones Arithmeticae, Translated into English by Arthur A. Clarke, S. J. Yale University Press, New Haven, Conn. -London, 1966. Google Scholar
F. Göloğlu, R. Granger, G. McGuire and J. Zumbrägel, On the function field sieve and the impact of higher splitting probabilities: Application to discrete logarithms in $\mathbb{F}_{2^{1971}}$ and $\mathbb{F}_{2^{3164}}$, in: Advances in Cryptology—CRYPTO 2013, Springer, LNCS 8043 (2013), 109–128. Google Scholar
F. Göloğlu, R. Granger, G. McGuire and J. Zumbrägel, Solving a 6120-bit DLP on a desktop computer, in: Selected Areas in Cryptography—SAC 2013, Springer, LNCS 8282 (2014), 136–152. Google Scholar
D. M. Gordon, Discrete logarithms in ${\rm GF}(p)$ using the number field sieve, SIAM J. Discrete Math., 6 (1993), 124-138. doi: 10.1137/0406010. Google Scholar
D. M. Gordon and K. S. McCurley, Massively parallel computation of discrete logarithms, in: Advances in Cryptology—CRYPTO'92, Springer, LNCS 740 (1993), 312–323. Google Scholar
R. Granger, T. Kleinjung and J. Zumbrägel, Breaking '128-bit secure' supersingular binary curves, in: Advances in Cryptology—CRYPTO 2014, Springer, LNCS 8617 (2014), 126–145. Google Scholar
R. Granger, T. Kleinjung and J. Zumbrägel, On the powers of 2, IACR Cryptology ePrint Archive, (2014), 18 pages, eprint.iacr.org/2014/300. Google Scholar
R. Granger, T. Kleinjung and J. Zumbrägel, On the discrete logarithm problem in finite fields of fixed characteristic, Trans. Amer. Math. Soc., 370 (2018), 3129-3145. doi: 10.1090/tran/7027. Google Scholar
T. Hayashi, T. Shimoyama, N. Shinohara and T. Takagi, Breaking pairing-based cryptosystems using $\eta_T$ pairing over ${\rm GF}(3^{97})$, in: Advances in Cryptology—ASIACRYPT 2012, Springer, LNCS 7658 (2012), 43–60. Google Scholar
T. Hayashi, N. Shinohara, L. Wang, S. I. Matsuo, M. Shirase and T. Takagi, Solving a 676-bit discrete logarithm problem in ${\rm GF}(3^{6n})$, in: Public Key Cryptography—PKC 2010, Springer, LNCS 6056 (2010), 351–367. Google Scholar
T. Helleseth and A. Kholosha, $\smash{x^{2^l+1}}+ x + a$ and related affine polynomials over $GF(2^k)$, Cryptogr. Commun., 2 (2010), 85-109. doi: 10.1007/s12095-009-0018-y. Google Scholar
A. Joux, A one round protocol for tripartite Diffie-Hellman, in: Algorithmic Number Theory, Springer, LNCS 1838 (2000), 385–393. Google Scholar
A. Joux, Faster index calculus for the medium prime case; application to 1175-bit and 1425-bit finite fields, in: Advances in Cryptology—EUROCRYPT 2013, Springer, LNCS 7881 (2013), 177–193. Google Scholar
A. Joux, A new index calculus algorithm with complexity L(1/4 + o(1)) in small characteristic, in: Selected Areas in Cryptography—SAC 2013, Springer, LNCS 8282 (2014), 355–379. Google Scholar
A. Joux and R. Lercier, The function field sieve is quite special, in: Algorithmic Number Theory, Springer, LNCS 2369 (2002), 431–445. Google Scholar
A. Joux and R. Lercier, Improvements to the general number field sieve for discrete logarithms in prime fields, Math. Comp., 72 (2003), 953-967. doi: 10.1090/S0025-5718-02-01482-5. Google Scholar
A. Joux and R. Lercier, The function field sieve in the medium prime case, in: Advances in Cryptology—EUROCRYPT 2006, Springer, LNCS 4004 (2006), 254–270. Google Scholar
A. Joux, R. Lercier, N. Smart and F. Vercauteren, The number field sieve in the medium prime case, in: Advances in Cryptology—CRYPTO 2006, Springer, LNCS 4117 (2006), 326–344. Google Scholar
A. Joux, A. M. Odlyzko and C. Pierrot, The past, evolving present and future of discrete logarithm, in: Open Problems in Mathematical and Computational Science, Springer (2014), 5–36. Google Scholar
A. Joux and C. Pierrot, Improving the polynomial time precomputation of Frobenius representation discrete logarithm algorithms, in: Advances in Cryptology—ASIACRYPT 2014, Springer, LNCS 8873 (2014), 378–397. Google Scholar
M. Kalkbrener, An upper bound on the number of monomials in determinants of sparse matrices with symbolic entries, Mathematica Pannonica, 8 (1997), 73-82. Google Scholar
T. Kim and J. Jeong, Extended tower number field sieve with application to finite fields of arbitrary composite extension degree, Public-Key Cryptography---PKC 2017, 10174 (2017), 388-408. Google Scholar
B. A. LaMacchia and A. M. Odlyzko, Solving large sparse linear systems over finite fields, in: Advances in Cryptology—CRYPTO'90, Springer, LNCS 537 (1991), 109–133. Google Scholar
C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Research Nat. Bur. Standards, 45 (1950), 255-282. doi: 10.6028/jres.045.026. Google Scholar
A. K. Lenstra and H. W. Lenstra, Jr, Algorithms in number theory, in: Handbook of Theoretical Computer Science (A): Algorithms and Complexity, Elsevier, (1990), 673–715. Google Scholar
A. K. Lenstra and H. W. Lenstra, Jr (eds), The Development of the Number Field Sieve, Springer, 1993. Google Scholar
A. K. Lenstra, H. W. Lenstra Jr and L. Lovász, Factoring polynomials with rational coefficients, Math. Ann., 261 (1982), 515-534. doi: 10.1007/BF01457454. Google Scholar
H. W. Lenstra Jr, Finding isomorphisms between finite fields, Math. Comp., 56 (1991), 329-347. doi: 10.1090/S0025-5718-1991-1052099-2. Google Scholar
R. Lovorn, Rigorous Subexponential Algorithms for Discrete Logarithms over Finite Fields, Ph. D. Thesis, University of Georgia, 1992. Google Scholar
V. I. Nechaev, On the complexity of a deterministic algorithm for a discrete logarithm, Mat. Zametki, 55 (1994), 91-101. doi: 10.1007/BF02113297. Google Scholar
J. Neukirch, Algebraic Number Theory, Translated from the 1992 German original, Springer, 1999. Google Scholar
A. M. Odlyzko, Discrete logarithms in finite fields and their cryptographic significance, in: Advances in Cryptology—CRYPTO'84, Springer, LNCS 209 (1985), 224–314. Google Scholar
A. M. Odlyzko, Discrete logarithms: the past and the future, Des. Codes Cryptogr., 19 (2000), 129-145. doi: 10.1023/A:1008350005447. Google Scholar
S. C. Pohlig and M. E. Hellman, An improved algorithm for computing logarithms over ${\rm GF}(p)$ and its cryptographic significance, IEEE Trans. Inform. Theory, 24 (1978), 106-110. doi: 10.1109/TIT.1978.1055817. Google Scholar
J. M. Pollard, Monte Carlo methods for index computation (mod p), Math. Comp., 32 (1978), 918-924. doi: 10.1090/S0025-5718-1978-0491431-9. Google Scholar
C. Pomerance, Analysis and comparison of some integer factoring algorithms, in: Computational Methods in Number Theory, Math. Centre Tracts, Math. Centrum, Amsterdam, 154 (1982), 89–139. Google Scholar
C. Pomerance, Fast, rigorous factorization and discrete logarithm algorithms, in: Discrete Algorithms and Complexity, Perspect. Comput., Academic Press, 15 (1987), 119–143. Google Scholar
R. Sakai, K. Ohgishi and M. Kasahara, Cryptosystems based on pairing, in: Symposium on Cryptography and Information Security, Okinawa, Japan, (2000), 26–28. Google Scholar
P. Sarkar and S. Singh, A general polynomial selection method and new asymptotic complexities for the tower number field sieve algorithm, in: Advances in Cryptology—ASIACRYPT 2016, Springer, LNCS 10031 (2016), 37–62. Google Scholar
O. Schirokauer, Discrete logarithms and local units, Philos. Trans. Roy. Soc. London Ser. A, 345 (1993), 409-423. doi: 10.1098/rsta.1993.0139. Google Scholar
O. Schirokauer, Using number fields to compute logarithms in finite fields, Math. Comp., 69 (2000), 1267-1283. doi: 10.1090/S0025-5718-99-01137-0. Google Scholar
O. Schirokauer, Virtual logarithms, J. Algorithms, 57 (2005), 140-147. doi: 10.1016/j.jalgor.2004.11.004. Google Scholar
C.-P. Schnorr, Efficient signature generation by smart cards, J. Cryptology, 4 (1991), 161-174. doi: 10.1007/BF00196725. Google Scholar
I. A. Semaev, Special prime numbers and discrete logs in finite prime fields, Math. Comp., 71 (2002), 363-377. doi: 10.1090/S0025-5718-00-01308-9. Google Scholar
P. W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM Computing J., 26 (1997), 1484-1509. doi: 10.1137/S0097539795293172. Google Scholar
V. Shoup, Lower bounds for discrete logarithms and related problems, in: Advances in Cryptology—EUROCRYPT'97, Springer, LNCS 1223 (1997), 256–266. Google Scholar
D. Wan, Generators and irreducible polynomials over finite fields, Math. Comp., 66 (1997), 1195-1212. doi: 10.1090/S0025-5718-97-00835-1. Google Scholar
D. H. Wiedemann, Solving sparse linear equations over finite fields, IEEE Trans. Inform. Theory, 32 (1986), 54-62. doi: 10.1109/TIT.1986.1057137. Google Scholar
Table 1. Discrete logarithm record computations in finite fields of small or medium characteristic. Details, as well as further announcements, can be found in the number theory mailing list (https://listserv.nodak.edu/cgi-bin/wa.exe?A0=NMBRTHRY)
bitlength charact. Kummer who/when running time
127 2 no Coppersmith 1984 [16] $L(1/3\, , \, 1.526..1.587)$
401 2 no Gordon, McCurley 1992 [26] $L(1/3\, , \, 1.526..1.587)$
521 2 no Joux, Lercier 2001 [36] $L(1/3\, , \, 1.526)$
607 2 no Thomé 2002 $L(1/3\, , \, 1.526..1.587)$
613 2 no Joux, Lercier 2005 $L(1/3\, , \, 1.526)$
556 medium yes Joux, Lercier 2006 [38] $L(1/3\, , \, 1.442)$
676 3 no Hayashi et al. 2010 [31] $L(1/3\, , \, 1.442)$
1175 medium yes Joux 24 Dec 2012 [34] $L(1/3\, , \, 1.260)$
1425 medium yes Joux 6 Jan 2013 [34] $L(1/3\, , \, 1.260)$
1778 2 yes Joux 11 Feb 2013 [35] $L(1/4 + o(1))$
1971 2 yes GGMZ 19 Feb 2013 [23] $L(1/3\, , \, 0.763)$
4080 2 yes Joux 22 Mar 2013 [35] $L(1/4 + o(1))$
809 2 no CARAMEL 6 Apr 2013 [7] $L(1/3\, , \, 1.526)$
6120 2 yes GGMZ 11 Apr 2013 [24] $L(1/4)$
6168 2 yes Joux 21 May 2013 $L(1/4 + o(1))$
1303 3 no AMOR 27 Jan 2014 [2] $L(1/4 + o(1))$
4404 2 no GKZ 30 Jan 2014 [27] $L(1/4 + o(1))$
9234 2 yes GKZ 31 Jan 2014 $L(1/4 + o(1))$
1551 3 no AMOR 26 Feb 2014 [2] $L(1/4 + o(1))$
3796 3 no Joux, Pierrot 15 Sep 2014 [41] $L(0 + o(1))$
1279 2 no Kleinjung 17 Oct 2014 $L(0 + o(1))$
4841 3 no ACCMORR, 18 Jul 2016 [3] $L(0 + o(1))$
Software | Open | Published: 04 March 2019
Assessing taxonomic metagenome profilers with OPAL
Fernando Meyer, Andreas Bremges, Peter Belmann, Stefan Janssen, Alice C. McHardy & David Koslicki
The explosive growth in taxonomic metagenome profiling methods over the past years has created a need for systematic comparisons using relevant performance criteria. The Open-community Profiling Assessment tooL (OPAL) implements commonly used performance metrics, including those of the first challenge of the initiative for the Critical Assessment of Metagenome Interpretation (CAMI), together with convenient visualizations. In addition, we perform in-depth performance comparisons with seven profilers on datasets of CAMI and the Human Microbiome Project. OPAL is freely available at https://github.com/CAMI-challenge/OPAL.
Taxonomic metagenome profilers predict the taxonomic identities and relative abundances of microorganisms of a microbial community from shotgun sequence samples. In contrast to taxonomic binning, profiling does not result in assignments for individual sequences, but derives a summary of the presence and relative abundance of different taxa in the microbial community. In some use cases, such as pathogen identification for clinical diagnostics, accurate determination of the presence or absence of a particular taxon is important, while for comparative studies, such as quantifying the dynamics of a microbial community over an ecological gradient, accurately determining the relative abundances of taxa is paramount.
Given the variety of use cases, it is important to understand the benefits and drawbacks of the particular taxonomic profiler for different applications. While there has been much effort in developing taxonomic profiling methods [1–12], only recently have community efforts arisen to perform unbiased comparisons of such techniques and assess their strengths and weaknesses [13, 14]. Critical obstacles to such comparisons have been a lack of consensus on performance metrics and output formats by the community, as different taxonomic profilers report their results in a variety of formats and interested parties had to implement their own metrics for comparisons.
Here, we describe the Open-community Profiling Assessment tooL (OPAL), a framework that directly addresses these issues. OPAL aggregates the results of multiple taxonomic profilers for one or more benchmark datasets, computes relevant metrics for different applications on them, and then presents the relative strengths and weaknesses of different tools in intuitive graphics. OPAL leverages the emerging standardized output format for representing a taxonomic profile, recently developed by the CAMI consortium [13, 15], which has been implemented for a variety of popular taxonomic profilers [2, 4–10, 12]. OPAL can also use the popular BIOM (Biological Observation Matrix) format [16]. The metrics that OPAL computes range from simple presence-absence metrics to more sophisticated comparative metrics such as UniFrac [17] and diversity metrics. The resulting metrics are displayed in graphics viewable in a browser and allow a user to dynamically rank taxonomic profilers based on the combination of metrics of their choice.
Similar efforts to provide comparative frameworks have recently been made for genome binners of metagenome samples (AMBER [18]) and metagenomic assemblers (QUAST [19, 20]). OPAL augments these efforts by addressing the issue of comparing and assessing taxonomic profilers. OPAL will assist future systematic benchmarking efforts. It will aid method developers to rapidly assess how their implemented taxonomic profilers perform in comparison to other techniques and facilitate assessing profiler performance characteristics, such as clarifying when and where tool performance degrades (e.g., performance at particular taxonomic ranks). Importantly, OPAL will help to decide which profiler is best suited to analyze particular datasets and biological research questions, which vary widely depending on the nature of the sampled microbial community, experimental setup, and sequencing technology used [21].
OPAL accepts as inputs one or several taxonomic profiles and benchmarks them at different taxonomic ranks against a given taxonomic gold standard profile.
Both the predicted and gold standard taxonomic profiles may contain information for multiple samples, such as for a time series, technical or biological replicates. A gold standard taxonomic profile can, for instance, be created with the CAMISIM metagenome simulator [21, 22]. The taxonomic profiles can be either in the Bioboxes profiling format [15, 23] or in the BIOM format [16]. Examples are provided in the OPAL GitHub repository [24].
Metrics and accompanying visualizations
OPAL calculates a range of relevant metrics commonly used in the field [13] for one or more taxonomic profiles of a given dataset by comparing to a gold standard taxonomic profile. Below, we give formal definitions of all metrics, together with an explanation of their biological meaning.
Preliminaries
For r, a particular taxonomic rank (or simply rank), let $x_{r}$ be the true bacterial relative abundances at rank r given by the gold standard. That is, $x_{r}$ is a vector indexed by all taxa at rank r, where entry $(x_{r})_{i}$ is the relative abundance of taxon i in the sampled microbial community at rank r. With $x_{r}^{*}$, we denote the vector of predicted bacterial relative abundances at rank r. Accordingly, $\left (x_{r}^{*}\right)_{i}$ is the predicted relative abundance of taxon i at rank r.
By default, OPAL normalizes all (predicted) abundances prior to computing metrics, such that the sum of all abundances equals 1 at each rank, i.e., $\sum _{i} (x_{r})_{i} = 1$ and $\sum _{i} \left (x_{r}^{*}\right)_{i} = 1$. This is to avoid any bias towards profiling software that makes fewer predictions, say, for only 50% of the sample.
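As an illustration, this normalization step can be sketched in a few lines of Python (a minimal example under the definitions above, not OPAL's actual implementation):

```python
import numpy as np

def normalize(x):
    """Rescale an abundance vector so that its entries sum to 1.

    All-zero profiles are returned unchanged to avoid division by zero."""
    total = x.sum()
    return x / total if total > 0 else x

# A profiler that reports only 50% of the sample is rescaled to a full profile
x_pred = np.array([0.3, 0.2, 0.0])
print(normalize(x_pred))  # -> [0.6, 0.4, 0.0]
```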
Assessing the presence or absence of taxa
The purity and completeness of taxonomic predictions are common measures for assessing profiling quality [25]. They assess how well a profiler correctly identifies the presence and absence of taxa in a sampled microbial community without considering how well their relative abundances were inferred. This can be relevant, for example, in an emergency situation in clinical diagnostics, when searching for a pathogen in a metagenomic sample taken from patient material. To define these measures, let the support of the vector $x_{r}$ be
$$ supp(x_{r})=\{i | (x_{r})_{i} > 0\}. $$
That is, $supp(x_{r})$ is the set of indices of the taxa at rank r present in the sample. Analogously, $supp\left (x_{r}^{*}\right)$ is the set of indices of the taxa at rank r predicted to be in the sample. For each rank r, we define the true positives $TP_{r}$, false positives $FP_{r}$, and false negatives $FN_{r}$, respectively, as
$$ {TP}_{r}=|supp(x_{r}) \cap supp\left(x_{r}^{*}\right)| $$
$$ {FP}_{r}=|supp(x_{r})^{c} \cap supp \left(x_{r}^{*} \right)| $$
$$ {FN}_{r}=|supp(x_{r}) \cap supp \left(x_{r}^{*} \right)^{c}| $$
where $supp(x_{r})^{c}$ and $supp\left (x_{r}^{*} \right)^{c}$ are the complements of the respective support sets and, thus, give the indices of the taxa at rank r absent or predicted as absent in the sample. Specifically, $TP_{r}$ and $FP_{r}$ are the numbers of taxa correctly and incorrectly predicted as present in the sample, respectively, and $FN_{r}$ is the number of taxa incorrectly predicted as absent from the sample.
The purity $p_{r}$ at rank r, also known as precision or positive predictive value, is the ratio of taxa correctly predicted as present in the sample to all predicted taxa at that rank. For each rank r, the purity is computed as
$$ p_{r}=\frac{TP_{r}}{TP_{r} + {FP}_{r}}. $$
The completeness $s_{r}$ at rank r, also known as recall or sensitivity, is the ratio of taxa correctly predicted as present to all taxa present in the sample at that rank. For each taxonomic rank r, the completeness is computed as
$$ s_{r} = \frac{TP_{r}}{TP_{r} + {FN}_{r}}. $$
Purity and completeness range from 0 (worst) to 1 (best).
We combine purity and completeness into a single metric by computing their harmonic average, also known as the F1 score. It is defined for each rank r as
$$ \mathrm{F1}_{r} = 2* \frac{p_{r}*s_{r}}{p_{r} + s_{r}}. $$
The F1 score ranges from 0 to 1, being closer to 0 if at least one of the metrics purity or completeness has a low value, and closer to 1 if both the purity and completeness are high.
The Jaccard index J is a common metric to determine the percentage of organisms common to two populations or samples. We define it as an indicator of similarity between the sets of true and predicted taxa at each rank by computing the ratio of the number of taxa in the intersection of these sets to the number of taxa in their union. Formally, it is computed for each rank as
$$ J_{r} = \frac{|supp(x_{r}) \cap supp\left(x_{r}^{*}\right)|}{|supp(x_{r}) \cup supp \left(x_{r}^{*}\right)|}. $$
The Jaccard index ranges from 0 (complete dissimilarity) to 1 (complete overlap).
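All four presence-absence metrics follow directly from the supports of the two abundance vectors. The following minimal Python sketch (an illustration of the definitions above, not OPAL's actual code) shows one way to compute them:

```python
import numpy as np

def presence_absence_metrics(x_true, x_pred):
    """Purity, completeness, F1 score, and Jaccard index at one rank.

    x_true and x_pred are relative-abundance vectors indexed by the same taxa."""
    t = set(np.flatnonzero(x_true))          # supp(x_r)
    p = set(np.flatnonzero(x_pred))          # supp(x_r*)
    tp, fp, fn = len(t & p), len(p - t), len(t - p)
    purity = tp / (tp + fp) if tp + fp else 0.0
    completeness = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * purity * completeness / (purity + completeness)
          if purity + completeness else 0.0)
    jaccard = len(t & p) / len(t | p) if t | p else 0.0
    return purity, completeness, f1, jaccard
```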
Abundance estimates
The next category of metrics for assessing profiling quality considers not only whether taxa were predicted as present or absent in the sample, but also their abundances.
The L1 norm measures the accuracy of reconstructing the relative abundance of taxa in a sample at rank r. The L1 norm is given by
$$ \mathrm{L1}_{r}= \sum_{i} |(x_{r})_{i} - \left(x_{r}^{*}\right)_{i}|. $$
The L1 norm thus gives the total error between the true and predicted abundances of the taxa at rank r. It ranges from 0 to 2, where 0 indicates perfect reconstruction of the relative abundances of organisms in a sample and 2 indicates totally incorrect reconstruction of relative abundances.
Another metric, the Bray-Curtis distance $d_{r}$, is derived from the L1 norm by dividing the sum of the absolute pairwise differences of taxa abundances by the sum of all true and predicted abundances at the given rank. This bounds the Bray-Curtis distance between 0 and 1. For each rank r, it is defined as
$$ d_{r} = \frac{\sum_{i}|(x_{r})_{i}-\left(x_{r}^{*} \right)_{i}|}{\sum_{i}(x_{r})_{i} +\sum_{i}\left(x_{r}^{*}\right)_{i}}. $$
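Both abundance-based measures are direct translations of their definitions; a small illustrative sketch:

```python
import numpy as np

def l1_norm(x_true, x_pred):
    """Total error between true and predicted abundances (ranges 0..2)."""
    return np.abs(x_true - x_pred).sum()

def bray_curtis(x_true, x_pred):
    """L1 norm divided by the summed abundances (ranges 0..1)."""
    return l1_norm(x_true, x_pred) / (x_true.sum() + x_pred.sum())
```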
The weighted UniFrac distance is a tree-based measure of taxonomic similarity of microbial communities [17] measuring the similarity between true and predicted abundances. Instead of a phylogenetic tree as in [17], we use a taxonomic tree with nodes restricted to eight major ranks and store the true and predicted abundances on the appropriate nodes. In summary, the UniFrac distance is the total amount of predicted abundances that must be moved (along the edges of the taxonomic tree, with all branch lengths here set to 1) to cause them to overlap with the true relative abundances. We use the EMDUnifrac implementation of the UniFrac distance [26–28]. A low UniFrac distance indicates that a taxonomic profiling algorithm gives a prediction that is taxonomically similar to the actual profile of the sample. The weighted UniFrac distance ranges between 0 and twice the height of the taxonomic tree used. Because each level of the tree represents one of the ranks superkingdom, phylum, class, order, family, genus, species, and strain, the maximum weighted UniFrac distance is 16.
The unweighted UniFrac distance is similar to the weighted UniFrac distance, but instead of storing the relative abundances for the appropriate nodes, a 1 is placed on the node if the profile indicates a non-zero relative abundance at that node and a 0 otherwise. Hence, it can be considered a measure of how well (in terms of taxonomic similarity) a profiler correctly identified the presence and absence of taxa in a sample. The maximum unweighted UniFrac distance is equal to
$$ \left(|R|-1\right)*\sum_{r \in R}|supp(x_{r})|, $$
where R is the set of all taxonomic ranks.
Alpha diversity metrics
Unlike the metrics above, alpha diversity metrics are computed from a single profile of (predicted) abundances at each rank, without a comparison to, e.g., a gold standard profile. Alpha diversity metrics summarize the variety (or richness) and distribution of taxa present in a profile [29] and, among other uses, are commonly used to observe global shifts in community structure as a result of some environmental parameter [30–33].
The simplest alpha diversity metric is the number of taxa present in a given environment. We measure this at each rank individually for a given profiler, allowing a comparison to the underlying gold standard. For a given profile $x_{r}$ (or $x_{r}^{*}$), we denote the number of taxa at rank r as $S_{r}=|supp(x_{r})|$.
As a measure of diversity that also considers the relative taxon abundances, we combine $S_{r}$ and all abundances $(x_{r})_{i}$ (or $(x_{r}^{*})_{i}$) using the Shannon diversity index $H_{r}$ [34]. For each rank r, it is calculated as
$$ H_{r}= -\sum\limits_{i=1}^{S_{r}} (x_{r})_{i} \ln(x_{r})_{i}. $$
$H_{r}$ ranges from 0 to $\ln(S_{r})$, where $\ln(S_{r})$ represents the maximal possible diversity, with all taxa being evenly represented. We note that the Shannon diversity index traditionally assumes that all taxa are represented in the sample. However, because some profilers may not predict abundances for all taxa, we ignore such taxa in the sum (where $\left (x^{*}_{r}\right)_{i}=0$ or $(x_{r})_{i}=0$).
While $H_{r}$ accounts for both diversity and evenness, the Shannon equitability index $E_{r}$ is a measure of evenness alone. It is a normalized form of the Shannon diversity index obtained by dividing $H_{r}$ by its maximum value $\ln(S_{r})$, i.e.,
$$ E_{r} = \frac{H_{r}}{\ln(S_{r})}. $$
Thus, $E_{r}$ ranges from 0 to 1, with 1 indicating complete evenness.
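Both alpha diversity quantities can be computed from a single profile; a minimal sketch (illustrative only, ignoring zero-abundance taxa as described above):

```python
import numpy as np

def shannon_metrics(x):
    """Shannon diversity H_r and equitability E_r for one abundance profile."""
    x = x[x > 0]                         # ignore taxa with zero abundance
    h = -(x * np.log(x)).sum()           # H_r = -sum_i x_i ln x_i
    s = len(x)                           # S_r, the number of observed taxa
    e = h / np.log(s) if s > 1 else 0.0  # E_r = H_r / ln(S_r)
    return h, e
```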
Beta diversity metrics
In contrast to alpha diversity, beta diversity metrics give an indication of the similarity of taxa distributions between a pair of profiles [29]. If the beta diversity is small, not only is the diversity similar between the profiles, but the actual distributions of relative abundances are also similar. To compare the similarity of beta diversity predictions for each profiler versus the gold standard, we display the following information in a scatter plot. Each point corresponds to a pair of input samples, with the x-coordinate being the Bray-Curtis distance between the taxonomic profiler's predictions on the pair of samples. The y-coordinate is the Bray-Curtis distance between the gold standard profiles corresponding to the pair of samples. The closer this scatter plot is to the line y=x, the more closely the taxonomic profiler reproduces taxa distributions similar to the gold standard. These plots are shown at each taxonomic rank.
To indicate a global sense of relative performance, we also rank profilers by their relative performance across each sample, taxonomic rank, and metric. In particular, each profiler is assigned a score for its performance for each metric within a taxonomic rank and sample. The best performing profiler gets score 0, the second best, 1, and so on. These scores are then added over the taxonomic ranks and samples to produce a single score per metric for each profiler. Also, an overall score of each profiler is computed by summing up all its scores per metric. The resulting scores are displayed in an interactive table of an HTML page, with a row per profiler, a column per metric, and an additional column for the overall scores. The columns can be sorted by the user and, therefore, yield a ranking of the profilers over all metrics or for a specific one. Optionally, the overall score of each profiler can be computed by summing up its score per metric in a weighted fashion, i.e., a user can interactively select custom weighting on the HTML page, depending on the combination of metrics that most suits their needs. The default weight of each metric is 1 and can vary between 0 and 10, in steps of 0.1. For example, if a user is interested in profilers that are highly precise and accurately reconstruct the exact relative abundance of predicted taxa, they can emphasize purity and L1 norm (e.g., giving each weight 3) over UniFrac error and completeness (e.g., giving each weight 1). The resulting rankings are dynamically updated in real time and graphically presented to the user.
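The scoring and weighting scheme can be summarized in a few lines. The sketch below is an illustration with assumed data structures, not OPAL's actual code: it assigns per-metric rank scores and combines already-accumulated scores with user-chosen weights.

```python
def rank_scores(values, lower_is_better=True):
    """Score profilers 0 (best), 1 (second best), ... for one metric
    within one sample and taxonomic rank; `values` maps profiler -> value."""
    order = sorted(values, key=values.get, reverse=not lower_is_better)
    return {profiler: score for score, profiler in enumerate(order)}

def overall_scores(summed_scores, weights):
    """Weighted overall score per profiler, where summed_scores[metric][profiler]
    is the rank score already summed over all samples and taxonomic ranks."""
    profilers = next(iter(summed_scores.values()))
    return {p: sum(weights.get(m, 1.0) * s[p] for m, s in summed_scores.items())
            for p in profilers}

# Example: emphasize purity and L1 norm over completeness and UniFrac
# overall_scores(scores, {"purity": 3, "l1_norm": 3,
#                         "completeness": 1, "weighted_unifrac": 1})
```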
Output and visualizations
OPAL outputs the assessment of the predictions of multiple profilers in several formats: flat files, tables (per profiling program, taxonomic rank, and in tidy format [35]), plots, and in an interactive HTML visualization. An example page is available at [36]. The visualizations created include:
Absolute performance plots: To visually compare the relative performance of multiple profilers, spider plots (also known as radar plots) of completeness and purity are created, with the spokes labeled with the corresponding profiler name. At least three profilers are required for these plots. The completeness and purity metrics are shown as colored lines connecting the spokes, with the scale on the spokes indicating the value of the respective metric. One such spider plot is created at each taxonomic rank to give an indication of performance versus rank. For examples, see Fig. 2b and Additional file 1: Figure S5b, d.
Relative performance plots: Similarly, spider plots are created for the completeness, purity, false positives, weighted UniFrac, and L1 norm for three or more profilers. Since the values of these metrics have very different scales, they are each normalized by the maximum value attained by any input profiler. Hence, these plots indicate the relative performance of each profiler with respect to the different metrics. For example, one profiler having the largest value of the purity metric would indicate that, among the compared profilers, it is the most precise (without indicating what the exact value of the purity metric is). These plots are also shown at each taxonomic rank. For examples, see Fig. 2a and Additional file 1: Figure S5a, c.
Shannon equitability: The Shannon equitability index is plotted against taxonomic ranks for each input profile along with the gold standard. This results in a visual indication of how closely a taxonomic profile reflects the actual alpha diversity of the gold standard. For examples, see Fig. 3a and Additional file 1: Figure S12.
Bray-Curtis distances: For each profiler, a scatter plot of Bray-Curtis distances is created to compare the similarity of beta diversity of the profiler predictions versus the gold standard. For details, see the section above on beta diversity metrics. Examples are given in Fig. 3b–h and Additional file 1: Figure S13.
Ranking: In a bar chart shown on the created HTML page, each bar corresponds to the sum of scores obtained by a profiler as a result of its ranking for the metrics completeness, purity, L1 norm, and weighted UniFrac over all major taxonomic ranks. The bar chart is dynamically updated in real time according to the weight assigned to each metric by the user. For details of the computation of the scores, see the above section on rankings. Examples of such bar charts are given in Additional file 1: Figure S11 and on the example HTML page at [36].
Taxa proportions: For each taxonomic rank, a stacked bar chart shows the taxa proportions in each sample of the gold standard, with each bar corresponding to a sample and each color to a taxon. This gives a visual indication of the taxa abundances and variations among the samples. On the HTML page, the user may opt to see a legend of the colors and corresponding taxa. The legend is only optionally displayed since the number of taxa can vary between a few superkingdoms to hundreds or thousands of species or strains, and these cannot all be reasonably displayed on a single image. Examples are given in Additional file 1: Figures S1, S2, and S3.
Rarefaction and accumulation curves: A plot simultaneously shows rarefaction and accumulation curves for all the major taxonomic ranks. To ease the visualization at different ranks, another plot shows the curves in logarithmic scale with base 10. For examples, see Additional file 1: Figure S4.
Comparison of taxonomic profilers: an application example
To demonstrate an application, we evaluated taxonomic profilers on three datasets. First, we evaluated taxonomic profiling submissions to the first CAMI challenge [13] on the dataset with the highest microbial complexity in the challenge. We will call this dataset CAMI I HC for short. This is a simulated time series benchmark dataset with five samples, each with size 15 Gbp, and a total of 596 genomes. It includes bacteria, archaea, and high-copy circular elements (plasmids and viruses) with substantial real and simulated strain-level diversity. We reproduce and extend the results for this dataset from [13] with alpha and beta diversity metrics implemented in OPAL and measure the run time and memory usage of profiling methods.
The second dataset on which we evaluated taxonomic profilers was the short-read data of a new practice dataset of the second CAMI challenge (CAMI II MG, for short). It consists of 64 samples with a total size of 320 Gbp and was simulated from taxonomic profiles of microbial communities from the guts of different mice [21]. This resulted in the inclusion of 791 genomes as meta-community members from public databases. The samples in both CAMI I HC and CAMI II MG are paired-end 150-bp Illumina reads and are available at [37, 38].
Lastly, to demonstrate the application of OPAL on a real (not simulated) dataset, we also benchmarked profilers on the Human Microbiome Project Mock Community dataset [39] (HMP MC, for short), namely on the staggered sample available from NCBI SRA (accession SRR172903). It comprises 7.9 million 75-bp reads, with organismal abundances available in [40].
To visualize the taxonomic composition and properties of these datasets, we produced plots of the taxa proportions at all major taxonomic ranks for all samples with OPAL (Additional file 1: Figures S1, S2, and S3 for CAMI I HC, CAMI II MG, and HMP MC, respectively) and calculated rarefaction curves (Additional file 1: Figure S4). All plots and assessments were computed with OPAL version 1.0.0 [41].
The assessed profilers were CommonKmers (corresponding to MetaPalette 1.0.0) [2, 42], CAMIARKQuikr 1.0.0 [43], abbreviated Quikr (a combination of Quikr [8], ARK [9], and SEK [10]), TIPP 2.0.0 [12], Metaphlan 2.2.0 [5], MetaPhyler 1.25 [6], mOTU 1.1 [7], and FOCUS 0.31 adapted for CAMI [4]. To facilitate the reproduction of the assessments, we ran the profilers as Bioboxes docker containers. The corresponding docker images are available on Docker Hub, and their names and the preconfigured parameters used by the profilers are provided in Additional file 1: Table S1. Instructions for reproducing the results are provided in Additional file 2 and in the OPAL GitHub repository [24]. The reference databases used by each profiler precede the release of the genomes used for generating the first CAMI challenge datasets. Thus, the metagenomic information of the CAMI I HC dataset was completely new for these profilers and at different taxonomic distances to available reference genomes, differently from the metagenome data of the CAMI II MG practice dataset. The Bioboxes were run on a computer with an Intel Xeon E5-4650 v4 CPU (virtualized to 16 CPU cores, 1 thread per core) and 512 GB of main memory. Metaphlan was the fastest method on CAMI II MG with a run time of 12.5 h, whereas on CAMI I HC, Metaphlan and Quikr were the fastest methods, requiring roughly the same execution time of 2.12 h (Fig. 1 and Additional file 1: Table S2). On HMP MC, FOCUS was the fastest method, requiring 0.07 h. mOTU was the most memory efficient method on all three datasets (1.19 GB of maximum main memory usage on CAMI I HC and CAMI II MG, and 1.01 GB on HMP MC), closely followed by Metaphlan (1.44, 1.66, and 1.41 GB maximum main memory usage on CAMI I HC, CAMI II MG, and HMP MC, respectively).
Computing efficiency. Run time in hours and maximum main memory usage in gigabytes required by the profilers to process the CAMI I high complexity (a), the CAMI II mouse gut (b), and the HMP Mock Community (c) datasets
On the CAMI I HC data, Quikr, TIPP, and MetaPhyler, in this order, achieved the overall highest completeness (Additional file 1: Figures S5a, b, e and S6-S8a-g). However, these profilers obtained the lowest purity. In this metric, CommonKmers and Metaphlan performed best. In terms of the F1 score, computed from completeness and purity, Metaphlan was the best method. This indicates that Metaphlan performed particularly well in determining the presence or absence of taxa. However, it could not accurately predict their relative abundances, as indicated by the high L1 norm error. In this metric, MetaPhyler did well, followed by FOCUS and CommonKmers.
When ranking methods over all taxonomic ranks using completeness, purity, L1 norm, and weighted UniFrac with equal weights (Additional file 1: Figures S5e and S11a), TIPP performed best with a total score of 184. TIPP ranked second for completeness and weighted UniFrac (scores 31 and 5, respectively) and third for L1 norm (score 52); only for purity did it perform less well, ranking fifth (score 96). When considering the performance of the profilers at different taxonomic ranks, we found that most profilers performed well down to the family level. For example, TIPP and MetaPhyler achieved a completeness of 0.92 at the family level, but this decreased to 0.43 at the genus level. Similarly, the purity of CommonKmers decreased from 0.96 at the family level to 0.77 and 0.08 at the genus and species levels, respectively.
In terms of alpha diversity, no profiler estimated taxon counts well. Most programs overestimated the diversity at all taxonomic ranks. Quikr, FOCUS, and CommonKmers predicted taxon abundances that better reflect the Shannon equitability of the gold standard (Additional file 1: Figure S12a, b). However, Quikr, mOTU, and TIPP made no predictions at the strain level. The predicted abundance distributions of CommonKmers and mOTU across all samples at the species level best reflect the gold standard, as visualized with the scatter plots of Bray-Curtis distances (Additional file 1: Figure S13). Taken together, the OPAL results fully reproduce the results from [13], where performance was summarized in three categories of profilers: profilers that correctly predicted relative abundances, profilers with high purity, and those with high completeness. OPAL extends the overall performance view by providing analysis of computing efficiency and microbial diversity predictors.
On the CAMI II MG data, Metaphlan obtained the overall best ranking over all taxonomic ranks, using the equally weighted metrics completeness, purity, L1 norm, and weighted UniFrac (Fig. 2d and Additional file 1: Figure S11b). MetaPhyler achieved the highest completeness at most taxonomic ranks, followed by TIPP and Metaphlan (Additional file 1: Figures S6-S8h-n), whereas CommonKmers achieved the highest completeness at the species level (Fig. 2c). Metaphlan was not only among the profilers with the highest completeness, but it also maintained a high purity throughout all taxonomic ranks, with only a small decrease from genus (0.94) to species (0.89). This can be explained by a high coverage of CAMI II MG by the reference genomes used by Metaphlan. It also contrasts with the results in [13], showing that a profiler can be precise while achieving a relatively high completeness, although this depends strongly on the input data. Metaphlan also predicted taxon distributions across the samples well. MetaPhyler and TIPP did not identify differences in taxa abundances across the samples well and tended to predict similar abundances, which is reflected in many points in the plots lying above the line x=y (Fig. 3b–h).
Assessment results on the CAMI II mouse gut dataset. a Relative performance plots with results for the metrics: weighted UniFrac, L1 norm, completeness, purity, and number of false positives at different taxonomic ranks. The values of the metrics in these plots are normalized by the maximum value attained by any profiler at a certain rank. b Absolute performance plots with results for the metrics completeness and purity, ranging between 0 and 1. c Results at the species level for all computed metrics, as output by OPAL in the produced HTML page. The values are averaged over the results for all 64 samples of the dataset, with the standard error being shown in parentheses. The colors indicate the quality of the prediction by a profiler with respect to a metric, from best (dark blue) to worst (dark red). d Rankings of the profilers according to their performance and scores for different metrics computed over all samples and taxonomic ranks
Examples of alpha and beta diversity plots from the results on the CAMI II mouse gut dataset. a Shannon equitability at different taxonomic ranks as a measure of alpha diversity. The closer the Shannon equitability of the predicted profile by a method to the gold standard, the better it reflects the actual alpha diversity in the gold standard in terms of evenness of the taxa abundances. b–h Scatter plots of Bray-Curtis distances visualizing beta diversity at the species level. For each profiling method and plot, a point corresponds to the Bray-Curtis distance between the abundance predictions for a pair of input samples by the method (x-axis) and the Bray-Curtis distance computed for the gold standard for the same pair of samples (y-axis). The closer a point is to the line x=y, the more similar the predicted taxa distributions are to the gold standard
In terms of alpha diversity, Metaphlan, CommonKmers, and mOTU predicted taxon counts similar to the gold standard for most taxonomic ranks, whereas the other profilers mostly overestimated the counts. On the other hand, TIPP, MetaPhyler, and mOTU predicted taxon abundances that more closely reflect their evenness, i.e., Shannon equitability, in the gold standard (Fig. 3a and Additional file 1: Figure S12c, d). As on the CAMI I HC data, Quikr, mOTU, and TIPP made no strain-level predictions on this dataset.
On the HMP MC dataset, the profilers ranked similarly to the CAMI II MG dataset for the sum of scores of completeness, purity, L1 norm, and weighted UniFrac (Additional file 1: Figures S5f and S11c). Metaphlan and MetaPhyler, in this order, again performed best. They were followed by mOTU and CommonKmers (on CAMI II MG, CommonKmers and mOTU) and Quikr and FOCUS (on CAMI II MG, FOCUS and Quikr). Metaphlan ranked best for all these metrics except for completeness, in which it was outperformed by MetaPhyler. At the species level, MetaPhyler and mOTU identified the highest numbers of true positives, with 21 and 18 out of 22, respectively (Additional file 1: Figure S10g). They also achieved the highest completeness, of 95% and 81%, respectively. However, MetaPhyler reported 144 false positives, the highest number after Quikr (with 618), and achieved a relatively low purity. We did not assess TIPP because it could not make predictions. We believe that blastn, which TIPP uses in its pipeline with default parameters, was not able to score part of the reads, consequently stopping the pipeline.
In terms of alpha diversity, Metaphlan's (MetaPhyler's) predicted taxon abundances were among the ones that best (worst) reflected the Shannon equitability of the gold standard throughout the rankings (Additional file 1: Figure S12e, f). At the strain level, CommonKmers performed best with this metric.
OPAL facilitates performance assessment and interpretation for taxonomic profilers using shotgun metagenome datasets as input. It implements commonly used performance metrics, including diversity metrics from microbial ecology, and outputs the assessment results in a convenient HTML page, in tables, and plots. By providing rankings and the possibility to give different weights to the metrics, OPAL enables the selection of the best profiler suitable for a researcher's particular biological interest. In addition, computational efficiency results that OPAL returns can guide users on the choice of a profiler under time and memory constraints. We plan to continually extend the metrics and visualizations of OPAL according to community requirements and suggestions.
We used OPAL to analyze the CAMI I HC data, demonstrating how it enables reproduction of the results of this study [13]. We also used it for the analysis of a new large dataset, CAMI II MG, and of HMP MC. This revealed consistency across many of the metrics and software analyzed, as well as a few striking differences. Specifically, while on the CAMI I HC data Quikr had the highest completeness by a wide margin, on the CAMI II MG and HMP MC data, MetaPhyler performed best with this metric and Quikr was among the least complete profiling tools. Similarly, the Metaphlan results changed from the lowest to the highest weighted UniFrac score. Results such as these indicate the importance of choosing a program suitable for the particular properties of the microbial community analyzed and of considering variables such as the availability of reference genome sequences of organisms closely related to those in the sample. Given the wide variety of environments from which metagenome data are obtained, this further demonstrates the relevance of OPAL.
BIOM: Biological Observation Matrix
CAMI: Critical Assessment of Metagenome Interpretation
CAMI I HC: CAMI I high complexity challenge dataset
CAMI II MG: CAMI II mouse gut practice dataset
HMP MC: Human Microbiome Project Mock Community
OPAL: Open-community Profiling Assessment tooL
Ounit R, Wanamaker S, Close TJ, Lonardi S. CLARK: fast and accurate classification of metagenomic and genomic sequences using discriminative k-mers. BMC Genomics. 2015; 16(1):236. https://doi.org/10.1186/s12864-015-1419-2.
Koslicki D, Falush D. MetaPalette: a k-mer painting approach for metagenomic taxonomic profiling and quantification of novel strain variation. mSystems. 2016; 1(3):00020–16. https://doi.org/10.1128/msystems.00020-16.
Piro VC, Lindner MS, Renard BY. DUDes: a top-down taxonomic profiler for metagenomics. Bioinformatics. 2016; 32(15):2272–80. https://doi.org/10.1093/bioinformatics/btw150.
Silva GG, Cuevas DA, Dutilh BE, Edwards RA. FOCUS: an alignment-free model to identify organisms in metagenomes using non-negative least squares. PeerJ. 2014; 2. https://doi.org/10.7717/peerj.425.
Segata N, Waldron L, Ballarini A, Narasimhan V, Jousson O, Huttenhower C. Metagenomic microbial community profiling using unique clade-specific marker genes. Nat Methods. 2012; 9(8):811–4. https://doi.org/10.1038/nmeth.2066.
Liu B, Gibbons T, Ghodsi M, Treangen T, Pop M. Accurate and fast estimation of taxonomic profiles from metagenomic shotgun sequences. BMC Genomics. 2011; 12(Suppl 2):4. https://doi.org/10.1186/1471-2164-12-s2-s4.
Sunagawa S, Mende DR, Zeller G, Izquierdo-Carrasco F, Berger SA, Kultima JR, Coelho LP, Arumugam M, Tap J, Nielsen HB, Rasmussen S, Brunak S, Pedersen O, Guarner F, de Vos WM, Wang J, Li J, Dore J, Ehrlich SD, Stamatakis A, Bork P. Metagenomic species profiling using universal phylogenetic marker genes. Nat Methods. 2013; 10(12):1196–9. https://doi.org/10.1038/nmeth.2693.
Koslicki D, Foucart S, Rosen G. Quikr: a method for rapid reconstruction of bacterial communities via compressive sensing. Bioinformatics. 2013; 29(17):2096–102. https://doi.org/10.1093/bioinformatics/btt336.
Koslicki D, Chatterjee S, Shahrivar D, Walker AW, Francis SC, Fraser LJ, Vehkaperä M, Lan Y, Corander J. ARK: aggregation of reads by k-means for estimation of bacterial community composition. PLoS ONE. 2015; 10(10):1–6. https://doi.org/10.1371/journal.pone.0140644.
Chatterjee S, Koslicki D, Dong S, Innocenti N, Cheng L, Lan Y, Vehkaperä M, Skoglund M, Rasmussen LK, Aurell E, Corander J. SEK: sparsity exploiting k-mer-based estimation of bacterial community composition. Bioinformatics. 2014; 30(17):2423–31. https://doi.org/10.1093/bioinformatics/btu320.
Klingenberg H, Aßhauer KP, Lingner T, Meinicke P. Protein signature-based estimation of metagenomic abundances including all domains of life and viruses. Bioinformatics. 2013; 29(8):973–80. https://doi.org/10.1093/bioinformatics/btt077.
Nguyen N-p, Mirarab S, Liu B, Pop M, Warnow T. TIPP: taxonomic identification and phylogenetic profiling. Bioinformatics. 2014; 30(24):3548–55. https://doi.org/10.1093/bioinformatics/btu721.
Sczyrba A, Hofmann P, Belmann P, Koslicki D, Janssen S, Dröge J, Gregor I, Majda S, Fiedler J, Dahms E, Bremges A, Fritz A, Garrido-Oter R, Jørgensen TSS, Shapiro N, Blood PD, Gurevich A, Bai Y, Turaev D, DeMaere MZ, Chikhi R, Nagarajan N, Quince C, Meyer F, Balvočiūtė M, Hansen LHH, Sørensen SJ, Chia BKH, Denis B, Froula JL, Wang Z, Egan R, Don Kang D, Cook JJ, Deltel C, Beckstette M, Lemaitre C, Peterlongo P, Rizk G, Lavenier D, Wu Y-WW, Singer SW, Jain C, Strous M, Klingenberg H, Meinicke P, Barton MD, Lingner T, Lin H-HH, Liao Y-CC, Silva GGGZ, Cuevas DA, Edwards RA, Saha S, Piro VC, Renard BY, Pop M, Klenk H-PP, Göker M, Kyrpides NC, Woyke T, Vorholt JA, Schulze-Lefert P, Rubin EM, Darling AE, Rattei T, McHardy AC. Critical assessment of metagenome interpretation-a benchmark of metagenomics software. Nat Methods. 2017; 14(11):1063–71.
Lindgreen S, Adair KL, Gardner PP. An evaluation of the accuracy and speed of metagenome analysis tools. Sci Rep. 2016; 6(1). https://doi.org/10.1038/srep19233.
Belmann P, Dröge J, Bremges A, McHardy AC, Sczyrba A, Barton MD. Bioboxes: standardised containers for interchangeable bioinformatics software. GigaScience. 2015; 4(1). https://doi.org/10.1186/s13742-015-0087-0.
McDonald D, Clemente JC, Kuczynski J, Rideout J, Stombaugh J, Wendel D, Wilke A, Huse S, Hufnagle J, Meyer F, Knight R, Caporaso J. The Biological Observation Matrix (BIOM) format or: how I learned to stop worrying and love the ome-ome. GigaScience. 2012; 1(1):7. https://doi.org/10.1186/2047-217x-1-7.
Lozupone C, Knight R. UniFrac: a new phylogenetic method for comparing microbial communities. Appl Environ Microbiol. 2005; 71(12):8228–35. https://doi.org/10.1128/aem.71.12.8228.
Meyer F, Hofmann P, Belmann P, Garrido-Oter R, Fritz A, Sczyrba A, McHardy AC. AMBER: assessment of metagenome binners. GigaScience. 2018; 7(6). https://doi.org/10.1093/gigascience/giy069.
Mikheenko A, Saveliev V, Gurevich A. MetaQUAST: evaluation of metagenome assemblies. Bioinformatics (Oxford, England). 2016; 32(7):1088–90. https://doi.org/10.1093/bioinformatics/btv697.
Gurevich A, Saveliev V, Vyahhi N, Tesler G. QUAST: quality assessment tool for genome assemblies. Bioinformatics (Oxford, England). 2013; 29(8):1072–5. https://doi.org/10.1093/bioinformatics/btt086.
Fritz A, Hofmann P, Majda S, Dahms E, Droege J, Fiedler J, Lesker TR, Belmann P, DeMaere MZ, Darling AE, Sczyrba A, Bremges A, McHardy AC. CAMISIM: simulating metagenomes and microbial communities. bioRxiv. 2018. https://doi.org/10.1101/300970.
Fritz A, Hofmann P, Majda S, Dahms E, Droege J, Fiedler J, Lesker TR, Belmann P, DeMaere MZ, Darling AE, Sczyrba A, Bremges A, McHardy AC. CAMISIM: simulating metagenomes and microbial communities. 2018. https://github.com/CAMI-challenge/CAMISIM/. Accessed 20 Nov 2018.
Bioboxes profiling format. 2018. https://github.com/bioboxes/rfc/tree/master/data-format. Accessed 20 Nov 2018.
OPAL GitHub repository. 2018. https://github.com/CAMI-challenge/OPAL. Accessed 20 Nov 2018.
Baldi P, Brunak S, Chauvin Y, Andersen CAF, Nielsen H. Assessing the accuracy of prediction algorithms for classification: an overview. Bioinformatics. 2000; 16(5):412–24. https://doi.org/10.1093/bioinformatics/16.5.412.
Evans SN, Matsen FA. The phylogenetic Kantorovich–Rubinstein metric for environmental sequence samples. J R Stat Soc Ser B Stat Methodol. 2012; 74(3):569–92. https://doi.org/10.1111/j.1467-9868.2011.01018.x.
McClelland J, Koslicki D. EMDUniFrac: exact linear time computation of the unifrac metric and identification of differentially abundant organisms. J Math Biol. 2018. https://doi.org/10.1007/s00285-018-1235-9.
EMDUnifrac GitHub repository. 2018. https://github.com/dkoslicki/EMDUnifrac. Accessed 20 Nov 2018.
Whittaker RH. Evolution and measurement of species diversity. Taxon. 1972; 21(2):213–51.
Menni C, Jackson MA, Pallister T, Steves CJ, Spector TD, Valdes AM. Gut microbiome diversity and high-fibre intake are related to lower long-term weight gain. Int J Obes. 2017; 41:1099–105. https://doi.org/10.1038/ijo.2017.66.
Menni C, Zierer J, Pallister T, Jackson MA, Long T, Mohney RP, Steves CJ, Spector TD, Valdes AM. Omega-3 fatty acids correlate with gut microbiome diversity and production of n-carbamylglutamate in middle aged and elderly women. Sci Rep. 2017; 7(1):2045–322. https://doi.org/10.1038/s41598-017-10382-2.
Fierer N, Leff JW, Adams BJ, Nielsen UN, Bates ST, Lauber CL, Owens S, Gilbert JA, Wall DH, Caporaso JG. Cross-biome metagenomic analyses of soil microbial communities and their functional attributes. Proc Natl Acad Sci. 2012; 109(52):21390–5. https://doi.org/10.1073/pnas.1215210110. http://www.pnas.org/content/109/52/21390.full.pdf.
Mendes LW, Tsai SM, Navarrete AA, de Hollander M, van Veen JA, Kuramae EE. Soil-borne microbiome: linking diversity to function. Microb Ecol. 2015; 70(1):255–65. https://doi.org/10.1007/s00248-014-0559-2.
Shannon CE. A mathematical theory of communication. Bell Syst Tech J. 1948; 27:379–423. https://doi.org/10.1145/584091.584093.
Wickham H. Tidy data. J Stat Softw. 2014; 59(10).
OPAL example page. 2018. https://cami-challenge.github.io/OPAL/. Accessed 20 Nov 2018.
CAMI datasets download page. 2018. https://data.cami-challenge.org/participate. Accessed 20 Nov 2018.
Belmann P, Bremges A, Dahms E, Dröge J, Fiedler J, Fritz A, Garrido-Oter R, Gregor I, Hofman P, Janssen S, Jørgensen T, Koslicki D, Majda S, Sczyrba A, Blood P, Shapiro N, Gurevich A, Bai Y, DeMaere M, Turaev D, Chikhi R, Nagarajan N, Quince C, Meyer F, Balvočiūtė M, Hansen L, Sørensen S, Chia BKH, Denis B, Froula JL, Wang Z, Egan R, Kang DD, Cook JJ, Deltel C, Beckstette M, Lemaitre C, Peterlongo P, Rizk G, Lavenier D, Wu Y, Singer S, Jain C, Strous M, Klingenberg H, Meinicke P, Barton M, Lingner T, Lin H, Liao Y, Silva GGZ, Cuevas D, Edwards R, Saha S, Piro V, Renard B, Pop M, Klenk H, Göker M, Kyrpides N, Woyke T, Vorholt J, Schulze-Lefert P, Rubin E, Darling A, Rattei T, McHardy A. Benchmark data sets, software results and reference data for the first CAMI challenge. GigaDB. 2017. https://doi.org/10.5524/100344.
Methé BA, Nelson KE, Pop M, Creasy HH, Giglio MG, et al. A framework for human microbiome research. Nature. 2012; 486:215–21. https://doi.org/10.1038/nature11209.
NIH Human Microbiome Project. Mock community composition - summary table. https://www.hmpdacc.org/HMMC/. Accessed 20 Nov 2018.
OPAL: Open-community Profiling Assessment tooL v1.0.0. https://doi.org/10.5281/zenodo.1885324. Accessed 03 Dec 2018.
MetaPalette: v1.0.0. https://doi.org/10.5281/zenodo.1730624. Accessed 30 Nov 2018.
CAMIARKQuikr v1.0.0. https://doi.org/10.5281/zenodo.1730572. Accessed 30 Nov 2018.
This work was supported by the Helmholtz Society. DK acknowledges that this material is based upon work supported by the National Science Foundation under Grant No. 1664803.
Alice C. McHardy and David Koslicki contributed equally to this work.
Department of Computational Biology of Infection Research, Helmholtz Centre for Infection Research (HZI), Braunschweig, Germany
Fernando Meyer
, Andreas Bremges
, Peter Belmann
, Stefan Janssen
& Alice C. McHardy
Braunschweig Integrated Centre of Systems Biology (BRICS), Braunschweig, Germany
German Center for Infection Research (DZIF), partner site Hannover-Braunschweig, Braunschweig, Germany
Andreas Bremges
& Stefan Janssen
Faculty of Technology and Center for Biotechnology, Bielefeld University, Bielefeld, Germany
Peter Belmann
Department of Pediatrics, University of California San Diego, La Jolla, CA, USA
Stefan Janssen
Department of Pediatric Oncology, Hematology and Clinical Immunology, Heinrich-Heine University Dusseldorf, Dusseldorf, Germany
Mathematics Department, Oregon State University, Corvallis, OR, USA
David Koslicki
FM implemented most of OPAL, performed all profiling comparisons, and wrote the manuscript together with ACM and DK. AB implemented the Jaccard index and integrated DK's implementation of the UniFrac distance into OPAL. PB implemented the purity and completeness measures. AB and PB contributed to OPAL's technical specification and software design. SJ implemented the rankings and built Bioboxes of profilers. ACM and DK supervised the OPAL project and wrote parts of the manuscript. All authors read and approved the final manuscript.
Correspondence to Alice C. McHardy or David Koslicki.
OPAL is implemented in Python 3. Its source code is available under the Apache 2.0 license at https://github.com/CAMI-challenge/OPAL [24]. Version 1.0.0 used in the manuscript is permanently available under https://doi.org/10.5281/zenodo.1885324 [41]. The CAMI benchmark datasets are available at https://data.cami-challenge.org/participate [13, 37, 38]. The HMP MC dataset is available from NCBI SRA (https://www.ncbi.nlm.nih.gov/sra) (accession SRR172903) [39, 40]. The benchmarked profiling programs are available as Bioboxes docker images on Docker Hub (image names are provided in Additional file 1: Table S1).
Additional file 1
Supplementary tables and figures. (PDF 2601 kb)
Additional file 2
Instructions for reproducing the comparisons of taxonomic profilers. (PDF 79 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Taxonomic profiling
Bioboxes
Microbiomes and Metagenomics
Classic Iterated Function Systems
Larry Riddle, Agnes Scott College
Dihedral Group Example
Dihedral Group D4
The eight transformations of a square shown below form the dihedral group D4 with 8 elements. Transformations 2, 3, and 4 are counterclockwise rotations by 90°, 180°, and 270° respectively. Transformations 5 and 6 are vertical and horizontal reflections, while transformations 7 and 8 are reflections across the two diagonals of the square.
The following table shows the result of combining one transformation with another. The one down the rows is done first, followed by the one across the columns. If we call the transformations \(g_n\) for n = 1, 2, 3, 4, 5, 6, 7, 8, then the table shows the result of the composition \(g_c \circ g_r\), where r and c denote row and column.
\[\begin{array}{*{20}{c}} {} & {\begin{array}{*{20}{c}} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \end{array} } \\ {\begin{array}{*{20}{c}} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \\ 7 \\ 8 \\ \end{array} } & {\boxed{\begin{array}{*{20}{c}} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ 2 & 3 & 4 & 1 & 8 & 7 & 5 & 6\\ 3 & 4 & 1 & 2 & 6 & 5 & 8 & 7\\ 4 & 1 & 2 & 3 & 7 & 8 & 6 & 5\\ 5 & 7 & 6 & 8 & 1 & 3 & 2 & 4\\ 6 & 8 & 5 & 7 & 3 & 1 & 4 & 2\\ 7 & 6 & 8 & 5 & 4 & 2 & 1 & 3\\ 8 & 5 & 7 & 6 & 2 & 4 & 3 & 1\\ \end{array} }} \\ \end{array} \]
So, for example, applying a reflection across the lower left to upper right diagonal (#7) followed by a rotation by 270° (#4) is the same as doing a vertical reflection (#5), as illustrated below. Hence \(g_5 = g_4 \circ g_7\).
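One way to check the table is to realize the eight transformations as 2×2 matrices and compose them numerically. The sketch below is illustrative only; the particular matrix assignment is an assumption chosen to be consistent with the numbering used here (it reproduces \(g_5 = g_4 \circ g_7\) and the full table):

```python
import numpy as np

# An assumed matrix realization of the eight D4 transformations,
# numbered as in the table above.
def R(deg):
    """Counterclockwise rotation matrix with entries rounded to integers."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]).round()

G = {1: np.eye(2), 2: R(90), 3: R(180), 4: R(270),
     5: np.diag([1, -1]),             # vertical reflection
     6: np.diag([-1, 1]),             # horizontal reflection
     7: np.array([[0, 1], [1, 0]]),   # reflection across y = x
     8: np.array([[0, -1], [-1, 0]])} # reflection across y = -x

def compose(c, r):
    """Return n such that g_n = g_c o g_r (g_r is applied first)."""
    M = G[c] @ G[r]
    return next(n for n, A in G.items() if np.array_equal(A, M))

assert compose(4, 7) == 5  # the worked example: g_5 = g_4 o g_7
for row in range(1, 9):    # print the full composition table
    print([compose(col, row) for col in range(1, 9)])
```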
Suppose we compose the elements of D4 with the k functions \(\{h_i\}\) in an IFS. The new IFS would consist of the 8k functions \(g_j \circ h_i\), and its unique attractor B would satisfy $$ B = \bigcup_{i=1}^k \Big( g_1 \circ h_i(B) \cup g_2 \circ h_i(B) \cup g_3 \circ h_i(B) \cup g_4 \circ h_i(B) \cup g_5 \circ h_i(B) \\ \qquad \cup\; g_6 \circ h_i(B) \cup g_7 \circ h_i(B) \cup g_8 \circ h_i(B) \Big) $$
Now if we apply the \(g_7\) reflection to the set B, we would get
\( g_7(B) = \displaystyle\bigcup_{i=1}^k \Big( g_7 \circ g_1 \circ h_i(B) \cup g_7 \circ g_2 \circ h_i(B) \cup g_7 \circ g_3 \circ h_i(B) \cup g_7 \circ g_4 \circ h_i(B) \cup g_7 \circ g_5 \circ h_i(B) \\ \qquad\qquad\qquad \cup\; g_7 \circ g_6 \circ h_i(B) \cup g_7 \circ g_7 \circ h_i(B) \cup g_7 \circ g_8 \circ h_i(B) \Big) \\ \qquad= \displaystyle\bigcup_{i=1}^k \Big( g_7 \circ h_i(B) \cup g_5 \circ h_i(B) \cup g_8 \circ h_i(B) \cup g_6 \circ h_i(B) \cup g_2 \circ h_i(B) \\ \qquad\qquad\qquad \cup\; g_4 \circ h_i(B) \cup g_1 \circ h_i(B) \cup g_3 \circ h_i(B) \Big) \\ \qquad = B \)
This shows that B is symmetric with respect to that reflection. A similar calculation would show that \(g_i(B) = B\) for all 8 elements in D4, and thus B would be symmetric with respect to each of those rotations and reflections.
Methodology article
The role of alternative polyadenylation in regulation of rhythmic gene expression
Natalia Ptitsyna1,
Sabri Boughorbel2,
Mohammed El Anbari2 &
Andrey Ptitsyn2,3
Alternative transcription is common in eukaryotic cells and plays an important role in the regulation of cellular processes. Alternative polyadenylation results from ambiguous PolyA signals in the 3′ untranslated region (UTR) of a gene. Such alternative transcripts share the same coding part, but differ by a stretch of UTR that may contain important functional sites.
The methodology of this study is based on mathematical modeling, an analytical solution, and subsequent validation by data mining in multiple independent experimental data sets from previously published studies.
In this study we propose a mathematical model that describes the population dynamics of alternatively polyadenylated transcripts in conjunction with rhythmic expression, such as transcription oscillation driven by circadian or metabolic oscillators. Analysis of the model shows that alternative transcripts acquire a phase shift if their decay rates differ. A difference in decay rate is one of the consequences of alternative polyadenylation. The phase shift can reach values equal to half the period of oscillation, which makes alternative transcripts oscillate in abundance in counter-phase to each other. Since counter-phased transcripts share the coding part, the rate of translation becomes constant. We have analyzed several data sets collected in a circadian timeline for the occurrence of transcript behavior that fits the mathematical model.
Alternative transcripts with different turnover rates create the effect of a rectifier. This "molecular diode" moderates or completely eliminates the oscillation of individual transcripts and stabilizes the overall protein production rate. In our observations this phenomenon is very common in different tissues in plants, mice, and humans. The occurrence of counter-phased alternative transcripts is also tissue-specific and affects the functions of multiple biological pathways. Accounting for this mechanism is important for understanding natural cellular circuits and for engineering synthetic ones.
Circadian oscillation plays an important role in the regulation of gene expression. The number of reported cycling genes differs from study to study. Some publications report hundreds [1,2,3], others thousands [4] of oscillating transcripts, depending on experiment design and analysis of data. Some reports insist on the majority or even the entire transcriptome experiencing diurnal oscillations [5, 6]. In any case, a considerable fraction of rhythmically expressed genes is bound to modulate the activity of multiple biological pathways. Multiple other factors are known to regulate gene expression in the context of biological pathways. Alternative polyadenylation is one such factor, sometimes considered a form of alternative splicing, but rarely mentioned in connection with circadian oscillation. Recent reviews, such as [7, 8], provide a panoramic overview of the prominent role of alternative polyadenylation in various healthy and disease states, but make no connection to the rhythmic alterations in the alternative transcript population. However, some studies point specifically to the importance of such a connection in Arabidopsis thaliana [9] and Drosophila melanogaster [10]. Others point at the role of polyadenylation in the regulation of rhythmic protein expression [11, 12] while observing the length of the PolyA tail of various transcripts in mice. In some estimations, up to 35% of all alternative 3′UTR transcripts may have different turnover rates [8]. Generalizing these observations, we come to the conclusion that transcription factors are not the only mechanism regulating circadian expression. Post-transcriptional mechanisms such as alternative splicing, polyadenylation, nonsense-mediated decay, etc. are also important in the formation of the dynamic pattern of transcripts. In this connection we would like to recall one older study reporting a perplexing pattern of alternative transcripts of suppressor of cytokine signaling 3 (SOCS3) in mice oscillating in opposite phase to each other [13]. That paper described the occurrence of alternative microarray probes traced to alternative transcripts sharing the coding part, but resulting from alternative polyadenylation sites. This effect was first discovered in the JAK-STAT (Janus Kinase - Signal Transducer and Activator of Transcription) signal transduction pathway. Counter-phased alternative transcripts were observed in brown adipose tissue, but not in white adipose or liver samples. Other elements of the same pathway, such as JAK, also showed counter-phased transcripts in one tissue but not the other. The study proposed that such a pattern of alternative transcripts may represent an adaptive mechanism regulating the pathway in a tissue-specific manner by creating a constant abundance of a particular protein. For example, constant production of SOCS3 from alternating short and long transcripts can block signal transduction in a particular tissue regardless of the diurnal change of the baseline. In the current study we attempt to generalize this observation and propose a model of the molecular mechanism responsible for the observed pattern of counter-phase oscillation of alternative transcripts.
Let $n_1(t)$ denote the change in abundance (for instance, relative to the invariant sum of intensities of microarray control spots) of the long isoform in time, and let $n_2(t)$ denote the change in abundance of the short isoform of the transcript n in time.
Let $r_p$ describe the expression rate of the gene from which both isoforms are transcribed. Since they share the same promoter and all other functional sites except the 3′ UTR polyadenylation signal, the rate is the same for both short and long transcripts. Let p denote the probability of transcription resulting in production of the long isoform. Then 1-p is the probability of transcription resulting in the short isoform. The UTRs of these transcripts are different, thus we introduce separate variables for the degradation rates:
$r_{d1}$ describes the degradation rate for the long isoform $n_1$, and $r_{d2}$ the degradation rate for the short isoform $n_2$:
$$ \left\{\begin{array}{c}\frac{d{n}_1}{dt}=p{r}_p-{r}_{d1};\\ {}\frac{d{n}_2}{dt}=\left(1-p\right){r}_p-{r}_{d2};\end{array}\right. $$
Let us consider the case when the baseline of expression is modulated by a simple harmonic process, such as a circadian rhythm. Since the entire cell (or even the organism, consisting of a multitude of cells) is modulated by the same factors, we consider the period of oscillation equal in all equations. The baseline oscillation is described by the travelling wave equation
$$ {r}_p=a \sin \left(\omega t+{\alpha}_1\right);{r}_{d1}=b \sin \left(\omega t+{\alpha}_2\right);{r}_{d2}=c \sin \left(\omega t+{\alpha}_3\right); $$
Here we assume that b > c, which means that longer transcripts have a shorter life span. This assumption models the action of miRNA that can bind the longer transcript and facilitate its decay. The shorter isoform lacks the miRNA binding site and thus can last longer, yielding more copies of the encoded protein. The overall model takes the following form:
$$ \left\{\begin{array}{c}\frac{d{n}_1}{dt}=pa\mathit{\sin}\left(\omega t+{\alpha}_1\right)-b\mathit{\sin}\left(\omega t+{\alpha}_2\right);\\ {}\frac{d{n}_2}{dt}=\left(1-p\right)a\mathit{\sin}\left(\omega t+{\alpha}_1\right)-c\mathit{\sin}\left(\omega t+{\alpha}_3\right);\end{array}\right. $$
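A minimal numerical simulation of this system illustrates the rectifier effect. The parameter values below are illustrative assumptions only (not estimates from data), chosen with b > c so that the net drive terms of the two isoforms have opposite signs; the isoforms then oscillate in counter-phase while their sum stays nearly constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not fitted values); notation as above
p, a, b, c = 0.7, 1.0, 0.9, 0.1        # b > c: long isoform decays faster
w = 2 * np.pi / 24.0                   # 24-h circadian period
a1 = a2 = a3 = 0.0                     # phases of the driving terms

def rhs(t, n):
    dn1 = p * a * np.sin(w * t + a1) - b * np.sin(w * t + a2)
    dn2 = (1 - p) * a * np.sin(w * t + a1) - c * np.sin(w * t + a3)
    return [dn1, dn2]

sol = solve_ivp(rhs, (0.0, 96.0), [5.0, 5.0], dense_output=True, max_step=0.1)
t = np.linspace(48.0, 96.0, 2000)      # inspect days 3-4 only
n1, n2 = sol.sol(t)
for label, x in [("n1", n1), ("n2", n2), ("n1 + n2", n1 + n2)]:
    print(f"{label}: peak-to-trough amplitude = {x.max() - x.min():.3f}")
# n1 and n2 oscillate in counter-phase; their sum is (numerically) flat
```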
The formula (14) (see Methods for complete analytic solution) allows direct calculation of the phase shift from the estimated degradation rates of short and long isoforms. These values can be estimated experimentally.
Summing up the isoforms $n_1$ and $n_2$ we can estimate the overall level of expression and the amplitude of oscillation for the entire population of alternative transcripts of gene n. While $n_1 + n_2 = n$ at all times, the amplitude of the resulting curve for n depends on the phase shift between the isoforms $n_1$ and $n_2$. The phase lag between isoforms may take values between 0 and $2\pi$. In the middle of this range, when $\beta_2 - \beta_1 = \pi$, the amplitude of n is reduced to 0. In terms of biology, this means that gene expression which is oscillatory at its origin can produce a constant, steady production of peptides through the mechanism of differentially polyadenylated transcriptional isoforms. This mechanism provides the "power rectifier" element for the cellular circuitry. Figure 1 illustrates the action of the molecular circuit rectifier. The degradation rate of mRNA, which determines the turnover rate of mRNA copies and eventually the amount of synthesized protein, can be affected by many factors, such as post-transcriptional modification, mRNA transport, tertiary structure, etc. However, the best-known factor is the action of miRNA.
Molecular mechanism of a cellular circuit rectifier. a Two subpopulations of transcripts are created by the occurrence of a canonical distant PolyA signal and a proximal non-canonical signal. b Transcription produces two types of mRNA that differ by a stretch of RNA that may contain functional sites such as microRNA target areas. Both transcripts share the same coding part. c When both transcripts have the same turnover rate, the transcript abundance has an oscillating baseline. If the more abundant transcript decays faster, the peak abundance also shifts in time and can reach complete counter-phase (see Analytic Solution). In such a case the sum of the two transcripts approaches a non-oscillatory steady line
Pattern datamining
While frequently referencing the same or similar data sets from independent circadian studies, we could not help but notice that the pattern of alternative probe sets for the same gene oscillating in different phases is quite common in plants and animals. Here we present the results of a systematic search for expression patterns indicative of counter-phase transcripts.
We present a conservative estimation of counter-phase transcript occurrence. Standard expression microarrays are poorly suited for observation of alternative transcripts. The higher representation of the 3′ UTR is usually viewed as an unwanted bias that designers strive to avoid. Full-length mRNAs from the RefSeq database are given priority over alternative shorter ESTs. Engineers also try to avoid excess probe sets interrogating the same gene in order to make quantitative estimation of gene expression more consistent. As a result, we are only able to observe alternative polyadenylation through unintended imperfections in microarray design.
The phase estimation procedure described previously provides for each probe an estimate of the phase among one of six phase classes discretized by cyclic shifts $\pi i/6$ (i = 1..6) and a corresponding p-value for each estimate. The p-value calculation is obtained from the bootstrap analysis described in Algorithm 1. The latter can be used to filter probes with low statistical confidence in their phase estimate. We used the mouse annotation data (available from Affymetrix and in the shared GitHub source code) to identify multiple probe sets interrogating expression of the same gene. All probes that correspond to the same gene symbol are gathered in the same probe set. The next step of the analysis is to generate phase differences within each probe set. All probe pairs in each probe set are used to compute the absolute value of the phase difference, as sketched below.
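A compact sketch of this pairwise computation (column names and data layout are hypothetical; this is not the study's original code):

```python
import itertools
import pandas as pd

N_CLASSES = 6  # six phase classes, as described above

def phase_differences(phases: pd.DataFrame, p_cutoff: float = 0.1) -> pd.Series:
    """Absolute circular phase difference for all probe pairs within each gene.

    `phases` is assumed to have columns gene_symbol, phase_class (0..5),
    and p_value from the bootstrap phase estimation."""
    kept = phases[phases.p_value <= p_cutoff]
    diffs = []
    for _, grp in kept.groupby("gene_symbol"):
        for u, v in itertools.combinations(grp.phase_class, 2):
            d = abs(u - v)
            diffs.append(min(d, N_CLASSES - d))  # wrap around the cycle
    return pd.Series(diffs, name="phase_diff")

# phase_differences(phases).value_counts().sort_index() gives the histogram data
```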
Figure 2 shows the distribution of phase differences for three mouse tissues. We used a threshold p-value of 0.1 to filter out probes with very low confidence in their phase estimation. There is a peak around zero phase difference in all tissues. This result is expected, since the probes are designed to provide a robust estimation of expression levels: probe sets, and separate probes within each set, reporting results inconsistent with other probes and probe sets for the same gene tend to be eliminated from the chip at early design stages. As a result, the majority of alternative probe sets report the abundance of the same transcript and show no phase difference. There is a degree of uncertainty in the identification of phase, considering the low sampling rate and high level of technical variation in microarray data. Thus, we expect a high number of alternative probesets with a phase difference of one time point (second bar in the diagrams in Fig. 2). Likewise, there should be progressively fewer alternative probesets with larger phase differences. However, the diagrams show pronounced peaks corresponding to approximately opposite phases of oscillation.
Distribution of the number of probes as a function of phase difference for three mouse tissues (from left to right: white adipose tissue, brown adipose tissue, liver). In all three tissues there are many genes with multiple probe sets oscillating in a different phase. Moreover, there is a pronounced peak corresponding to probes oscillating in opposite phases
We tested the significance of these visible bumps on the diagrams in Fig. 2. Assume that in an ideal microarray design all probe sets report consistent transcript abundance and there is no phase difference between alternative probe sets. In this case the first bar in Fig. 2 would dominate, but we would still observe other bars due to technical variation and uncertainty in peak time estimation. However, if the perceived phase difference were caused by stochastic variation only, we would observe progressively fewer cases with larger phase differences: as the phase difference increases, the count is expected to decrease in an exponential manner. Therefore we can test the hypothesis that the observed distribution of phase differences follows some stochastic decay. To test this hypothesis we fitted the phase difference distribution with a Poisson distribution. The advantage of using a Poisson distribution is that it can capture random variables with stochastic decays; in addition, since we have a discrete number of bins for the phase difference, the Poisson distribution is a good choice for discrete support. After fitting the distribution to the phase difference data, we applied a Chi-Square test to verify whether this fit is statistically valid. Table 1 summarizes the Poisson distribution fitting and the hypothesis testing results. We observe that, in the five datasets, the null hypothesis can be rejected and that stochastic decay does not completely explain the phase difference distribution. We are aware that the Poisson distribution does not capture all possible distributions with stochastic decay. We also performed a non-parametric estimation of the phase difference distribution. The results are consistent with the tests in Table 1 (data not shown, see Additional file 1: Supplemental method). The distributions in Figs. 2 and 3 cannot be explained by stochastic variation and reflect a fraction of probes that oscillate consistently in opposite or near-opposite phase to each other. This observation holds for all analyzed datasets (see the complete list of probe sets with phase differences in Additional files 2 and 3).
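For readers who want to reproduce the test, the sketch below fits the Poisson mean by maximum likelihood and applies a chi-square goodness-of-fit test; the bin counts are illustrative, not the paper's data:

```python
# Hedged sketch of the stochastic-decay test with made-up bin counts.
import numpy as np
from scipy import stats

counts = np.array([120, 60, 25, 40])      # bins: phase difference of 0..3
bins = np.arange(len(counts))
lam = np.average(bins, weights=counts)    # MLE of the Poisson mean

# Expected counts under Poisson decay, renormalized to the observed total.
pmf = stats.poisson.pmf(bins, lam)
expected = counts.sum() * pmf / pmf.sum()

chi2, p = stats.chisquare(counts, expected, ddof=1)  # 1 fitted parameter
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")  # a small p rejects pure decay
```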
Table 1 Phase difference distribution and stochastic decay hypothesis testing
Distribution of the number of probes as a function of phase difference for Arabidopsis thaliana. The results are similar for both data sets, from the University of Warwick [14] (left figure) and from UC Davis [13] (right figure). In both data sets the highest bar corresponds to pairs of alternative probe sets that oscillate as expected with no phase difference. However, in both cases the second largest number of probe pairs oscillates with a significant phase difference
We applied the same analysis to Arabidopsis thaliana circadian gene expression data. The data come from published sources (GEO GSE8365, GSE5612); the primary analysis has been previously published [14, 15] and later re-analyzed and published again [16]. Additional challenges in identifying alternative probe sets in the Arabidopsis thaliana microarray come from the less extensive annotation (also available from Affymetrix). For this analysis we considered probe sets to represent the same gene if any of the following fields were identical: RefSeq Transcript ID, AGI and Entrez Gene. Figure 3 shows that overall the phase difference distribution is similar to the analysis based on mouse data, with some differences in the distribution shape. The occurrence of alternative transcripts oscillating with a pronounced phase difference in such a distant organism leads to the conclusion that the mechanism creating this phenomenon is likely common to all eukaryotic organisms.
Regardless of the source of oscillation, the rhythmic nature of expression demands a significant revision of the way we understand and model the function and regulation of genes. One previously published model predicted that for a rhythmically expressed gene the addition of miRNA may have two different effects: either the expected decrease or a surprising increase of transcript abundance, depending on the timing of miRNA action [17]. In the case of alternative polyadenylation, we first noted, investigated and reported a strange abnormality in the expression of alternative probe sets reporting activity of the same genes [13]. Disagreement in intensity among alternative probe sets is usually attributed to cross-hybridization, flaws in microarray design or manufacturing, or other factors reflecting technical rather than biological variation. Indeed, experiments comparing only single points in time are insufficient to explain such discordance. Observation of a complete circadian (or other periodic) time course leaves no doubt that at least some of the alternative probe sets report biologically relevant rather than technical variation.
Our model shows that such strange behavior of alternative probes is not only natural, but also serves an important function: it eliminates the effect of oscillation in the transcription mechanism. Other studies have already reported pervasive oscillation of the entire transcriptome (see [18, 19, 20] for review). The current study presents a mechanism for rectification of constant baseline oscillation. If there is such a mechanism, then the default state of gene expression must be rhythmic. Our observations show that this mechanism is common among plant, mouse and human genes. However, our study tends to underestimate the occurrence of alternative transcripts. On the chips used to produce these data, thousands of genes are interrogated by a single probe set only. In cases where the original CEL files are available for Affymetrix microarrays, it is possible to analyze more genes with alternative transcripts by low-level single-probe analysis. However, individual probes may not be uniformly distributed to represent all transcripts and are less reliable in quantitative estimation of transcript abundance. The true occurrence of this mechanism is yet to be determined in a specially designed experimental study using a different detection mechanism.
Most approaches in bioengineering and synthetic biology do not account for oscillation [21]. We believe a significant advancement in pathway engineering will require a better understanding of the principles on which complex biological systems are organized. One of these principles is oscillation in the production of cellular components, signal transduction and energy metabolism. Ignoring oscillation is only possible in the early steps or when implementing the most primitive constructions. This study offers one of the components for building artificial cellular systems, or re-engineering existing cells, based on the knowledge of the rhythmic nature of gene expression. This component is a functional analog of a diode in electronic circuits and can rectify oscillations occurring in biological circuits due to the oscillatory nature of gene expression.
The findings described in this paper may have practical applications in pathway engineering and synthetic biology. Our model provides the mechanism for re-engineering of existing biological pathways in a living cell or de novo design of cellular circuits. The model predicts that it is possible to find the parameters (such as a miRNA site with a certain affinity) regulating the ratio of alternatively polyadenylated transcripts. Manipulating such a ratio allows changing the amplitude of a particular gene's expression or even completely eliminating oscillation. Constant abundance of a gene product can be used for production purposes, to maximize the output of a peptide or an enzyme producing the product of interest. Alternatively, this mechanism can be engineered to block unwanted pathways such as apoptosis or cell motility, or to keep certain pathways active at all times. Likewise, the same model can be used to create a blueprint for constructing artificial genes with certain properties. For example, the formula given in the description can be used to select parameters of the artificial gene (affinity of early and late PolyA sites, affinity of the microRNA binding site) in order to create the desired amplitude of oscillating product abundance.
The mouse gene expression profiles were obtained in the original study of circadian gene expression in adipose tissues [22]. The AKR/J mice, acclimated to a 12 h light:12 h dark cycle, were harvested in sets of 3–5 mice at 4 h intervals, in duplicate, over a 24 h period. Total RNA samples from inguinal white adipose tissue (iWAT), brown adipose tissue (BAT), and liver were assayed with Affymetrix U74 GeneChip microarrays.
The plant (Arabidopsis thaliana) data sets
We used two independent data sets similar in experimental design [14, 15]. Seedlings were entrained in 12-h white light (the light source was cool white fluorescence tubes)/12-h dark cycles for 7 days before being released into free-running conditions of continuous white light at 22 °C. Starting at subjective dawn of day 8 [14] or day 9 [15], tissue was harvested every 4 h over the course of the next 44 h. Following standard protocols, labelled cRNA targets were prepared from total RNA and hybridized to Affymetrix Arabidopsis expression GeneChips according to the manufacturer's instructions.
Analytic solution
We integrate each equation separately with respect to time. The solution of the system is:
$$ \left\{\begin{array}{c}{n}_1(t)=-\frac{pa}{\omega}\mathit{\cos}\left(\omega t+{\alpha}_1\right)+\frac{b}{\omega}\mathit{\cos}\left(\omega t+{\alpha}_2\right);\\ {}{n}_2(t)=-\left(1-p\right)\frac{a}{\omega}\mathit{\cos}\left(\omega t+{\alpha}_1\right)+\frac{c}{\omega}\mathit{\cos}\left(\omega t+{\alpha}_3\right);\end{array}\right. $$
The rotating-vector description of simple harmonic oscillation provides a neat way of rewriting $n_1$ and $n_2$ as single harmonic oscillations:
$$ {n}_1(t)=A\mathit{\cos}\left(\omega t+{\beta}_1\right); $$
$$ {n}_2(t)=B\mathit{\cos}\left(\omega t+{\beta}_2\right); $$
The following geometric solution is illustrated in Fig. 4. The harmonic oscillations $x_1$ and $x_2$ can be represented as two vectors $\mathrm{a}_1$ and $\mathrm{a}_2$ that rotate around their tails, which are pivoted at the origin $O$. The angular speed of the rotation is equal to $\omega$. As the vectors rotate around the origin, their projections $x_1$ and $x_2$ on the horizontal axis vary cosinusoidally. Hence we have
$$ \gamma =\frac{2\pi -2\left(\pi -{\alpha}_1+{\alpha}_2\right)}{2}={\alpha}_1-{\alpha}_2; $$
$$ {A}^2={\left|{\mathrm{a}}_1\right|}^2+{\left|{\mathrm{a}}_2\right|}^2-2\mid {\mathrm{a}}_1\mid \mid {\mathrm{a}}_2\mid \mathit{\cos}\left({\alpha}_1-{\alpha}_2\right); $$
$$ {A}^2={\left(\frac{pa}{\omega}\right)}^2+{\left(\frac{b}{\omega}\right)}^2-\frac{2pab}{\omega^2}\mathit{\cos}\left({\alpha}_1-{\alpha}_2\right); $$
Geometric solution for the phase shift between oscillating transcript isoforms. See detailed description in the text
Therefore $n_1(t) = A\mathit{\cos}(\omega t + \beta_1)$, with
$$ A=\sqrt{{\left(\frac{pa}{\omega}\right)}^2+{\left(\frac{b}{\omega}\right)}^2-\frac{2pab}{\omega^2}\mathit{\cos}\left({\alpha}_1-{\alpha}_2\right)}; $$
$$ \mathit{\tan}{\beta}_1=\frac{\mid {a}_2\mid \mathit{\sin}{\alpha}_2-\mid {a}_1\mid \mathit{\sin}{\alpha}_1}{\mid {a}_2\mid \mathit{\cos}{\alpha}_2-\mid {a}_1\mid \mathit{\cos}{\alpha}_1}=\frac{b\mathit{\sin}{\alpha}_2-pa\mathit{\sin}{\alpha}_1}{b\mathit{\cos}{\alpha}_2-pa\mathit{\cos}{\alpha}_1}; $$
Similarly, we can get the expression $n_2(t) = B\mathit{\cos}(\omega t + \beta_2)$, where
$$ B=\sqrt{{\left(1-p\right)}^2{\left(\frac{a}{\omega}\right)}^2+{\left(\frac{c}{\omega}\right)}^2-\frac{2\left(1-p\right)ac}{\omega^2}\mathit{\cos}\left({\alpha}_1-{\alpha}_3\right)}; $$
$$ \mathit{\tan}{\beta}_2=\frac{c\mathit{\sin}{\alpha}_3-\left(1-p\right)a\mathit{\sin}{\alpha}_1}{c\mathit{\cos}{\alpha}_3-\left(1-p\right)a\mathit{\cos}{\alpha}_1}; $$
And the phase difference is:
$$ {\beta}_2-{\beta}_1=\mathit{\arctan}\left(\frac{c\mathit{\sin}{\alpha}_3-\left(1-p\right)a\mathit{\sin}{\alpha}_1}{c\mathit{\cos}{\alpha}_3-\left(1-p\right)a\mathit{\cos}{\alpha}_1}\right)-\mathit{\arctan}\left(\frac{b\mathit{\sin}{\alpha}_2-pa\mathit{\sin}{\alpha}_1}{b\mathit{\cos}{\alpha}_2-pa\mathit{\cos}{\alpha}_1}\right). $$
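The derivation is easy to verify numerically; the following check (with arbitrary test parameters, not fitted values) confirms that the derived $A$ and $\beta_1$ reproduce $n_1(t)$ as a single cosine:

```python
# Numeric sanity check of the amplitude/phase formulas (test values only).
import numpy as np

p, a, b = 0.4, 2.0, 1.5                   # arbitrary parameters
w = 2 * np.pi / 24.0                      # 24-h angular frequency
a1, a2 = 0.3, 2.1                         # phases alpha_1, alpha_2

t = np.linspace(0, 48, 500)
n1 = -(p * a / w) * np.cos(w * t + a1) + (b / w) * np.cos(w * t + a2)

A = np.sqrt((p * a / w) ** 2 + (b / w) ** 2
            - 2 * p * a * b / w ** 2 * np.cos(a1 - a2))
beta1 = np.arctan2(b * np.sin(a2) - p * a * np.sin(a1),
                   b * np.cos(a2) - p * a * np.cos(a1))

assert np.allclose(n1, A * np.cos(w * t + beta1))
print("n1(t) matches A*cos(w*t + beta1)")
```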
Phase assignment and phase confidence algorithm
The algorithm used in the analysis of the data is based on resampling techniques. We use the maximum entropy bootstrap algorithm to generate a large number of replications of a given gene expression time series. Then, we calculate a bootstrapped p-value to test for circadian genes, and finally we construct a bootstrap percentile confidence interval that is used to assign a phase to each oscillating gene.
The complete description with source code and test results has been published in [23].
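The sketch below illustrates the resampling idea on synthetic data. Note that it substitutes naive residual resampling for the maximum entropy bootstrap used in the published algorithm, and reads the phase off a least-squares cosine fit at a fixed 24-h period, so it is a simplified stand-in rather than the published method:

```python
# Simplified bootstrap phase-confidence sketch (synthetic data; naive
# residual resampling replaces the maximum entropy bootstrap of [23]).
import numpy as np

rng = np.random.default_rng(0)
w = 2 * np.pi / 24.0

def fit_cosine(y, t):
    """Least-squares fit y ~ bc*cos(wt) + bs*sin(wt) + const."""
    A = np.vstack([np.cos(w * t), np.sin(w * t), np.ones_like(t)]).T
    return np.linalg.lstsq(A, y, rcond=None)[0], A

t = np.arange(0, 48, 4.0)                       # 4-h sampling over 2 cycles
y = 2 * np.cos(w * t + 1.0) + rng.normal(0, 0.3, t.size)

coef, A = fit_cosine(y, t)
resid = y - A @ coef

phases = []
for _ in range(1000):                           # bootstrap replications
    yb = A @ coef + rng.choice(resid, size=resid.size, replace=True)
    (bc, bs, _), _ = fit_cosine(yb, t)
    phases.append(np.arctan2(-bs, bc))          # y ~ R*cos(w*t + phase)

lo, hi = np.percentile(phases, [2.5, 97.5])     # percentile interval
print(f"phase = {np.median(phases):.2f} rad, 95% CI [{lo:.2f}, {hi:.2f}]")
```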
AGI:
Arabidopsis Genome Initiative
BAT:
Brown adipose tissue
cRNA:
Antisense amplified RNA
GEO:
Gene Expression Omnibus
iWAT:
Inguinal white adipose tissue
JAK:
Janus Kinase
miRNA:
micro Ribonucleic Acid
mRNA:
Messenger Ribonucleic Acid
SOCS3:
Suppressor of cytokine signaling
STAT:
Signal Transducer and Activator of Transcription
UTR:
Untranslated region
Panda S, Hogenesch JB, Kay SA. Circadian rhythms from flies to human. Nature. 2002;417(6886):329–35.
Storch KF, Lipan O, Leykin I, Viswanathan N, Davis FC, Wong WH, et al. Extensive and divergent circadian gene expression in liver and heart. Nature. 2002;417(6884):78–83.
Keegan KP, Pradhan S, Wang JP, Allada R. Meta-analysis of drosophila circadian microarray studies identifies a novel set of rhythmically expressed genes. PLoS Comput Biol. 2007;3(11):e208.
Hogenesch JB, Panda S, Kay S, Takahashi JS. Circadian transcriptional output in the SCN and liver of the mouse. Novartis Found Symp. 2003;253:171–80. discussion 52-5, 02-9, 80-3 passim.
Ptitsyn AA, Zvonic S, Gimble JM. Digital signal processing reveals circadian baseline oscillation in majority of mammalian genes. PLoS Comput Biol. 2007;3(6):e120.
Zhang R, Lahens NF, Ballance HI, Hughes ME, Hogenesch JB. A circadian gene expression atlas in mammals: implications for biology and medicine. Proc Natl Acad Sci U S A. 2014;111(45):16219–24.
Elkon R, Ugalde AP, Agami R. Alternative cleavage and polyadenylation: extent, regulation and function. Nat Rev Genet. 2013;14(7):496–506.
Mayr C. Evolution and biological roles of alternative 3'UTRs. Trends Cell Biol. 2016;26(3):227–37.
Perez-Santangelo S, Schlaen RG, Yanovsky MJ. Genomic analysis reveals novel connections between alternative splicing and circadian regulatory networks. Brief Funct Genomics. 2013;12(1):13–24.
Petrillo E, Sanchez SE, Kornblihtt AR, Yanovsky MJ. Alternative splicing adds a new loop to the circadian clock. Commun Integr Biol. 2011;4(3):284–6.
Kojima S, Sher-Chen EL, Green CB. Circadian control of mRNA polyadenylation dynamics regulates rhythmic protein expression. Genes Dev. 2012;26(24):2724–36.
Kojima S, Green CB. Analysis of circadian regulation of poly(a)-tail length. Methods Enzymol. 2015;551:387–403.
Ptitsyn AA, Gimble JM. Analysis of circadian pattern reveals tissue-specific alternative transcription in leptin signaling pathway. BMC bioinformatics. 2007;8(Suppl 7):S15.
Covington MF, Harmer SL. The circadian clock regulates auxin signaling and responses in Arabidopsis. PLoS Biol. 2007;5(8):e222.
Edwards KD, Anderson PE, Hall A, Salathia NS, Locke JC, Lynn JR, et al. FLOWERING LOCUS C mediates natural variation in the high-temperature response of the Arabidopsis circadian clock. Plant Cell. 2006;18(3):639–50.
Ptitsyn A. Comprehensive analysis of circadian periodic pattern in plant transcriptome. BMC bioinformatics. 2008;9(Suppl 9):S18.
Ptitsyn A, Ptitsyna N, Elsebakhi E, Marincola F, Al-Ali R, Temanni MR, AlSaad R, Wang E. Modulation of mRNA circadian transcription cycle by microRNAs. In: Middle East Conference on Biomedical Engineering (MECBME); 17–20 Feb 2014; Doha, Qatar; 2014.
Ptitsyn AA, Gimble JM. True or false: all genes are rhythmic. Ann Med. 2011;43(1):1–12.
Klevecz RR, Li CM, Marcus I, Frankel PH. Collective behavior in gene regulation: the cell is an oscillator, the cell cycle a developmental process. FEBS J. 2008;275(10):2372–84.
Lloyd D, Eshantha L, Salgado J, Turner MP, Murray DB. Respiratory oscillations in yeast: clock-driven mitochondrial cycles of energization. FEBS Lett. 2002;519(1–3):41–4.
Raman K, Chandra N. Flux balance analysis of biological systems: applications and challenges. Brief Bioinform. 2009;10(4):435–49.
Zvonic S, Ptitsyn AA, Conrad SA, Scott LK, Floyd ZE, Kilroy G, et al. Characterization of peripheral circadian clocks in adipose tissues. Diabetes. 2006;55(4):962–70.
El Anbari M, Fadda A, Ptitsyn A. Confidence in phase definition for periodicity in genes expression time series. PLoS One. 2015;10(7):e0131111.
We would like to thank Dr. Jeffrey Gimble for kindly providing the raw data from timeline microarray experiments.
All initial data are taken from previously published public sources. Intermediate data used in this study are included in the supplemental materials. The murine circadian gene expression data is available by request from the corresponding author, [email protected]. The data from A. thaliana studies can be downloaded from the Gene Expression Omnibus database (GEO) (http://www.ncbi.nlm.nih.gov/geo) using accession number GSE8365 and from the NASCArrays database (http://bar.utoronto.ca/NASCArrays/index.php?ExpID=108) under the accession NASCARRAYS-108 (circadian time course).
Software availability. All scripts and test data sets can be downloaded from the GitHub project directory https://github.com/sidratools/, following the link to /bsabri/gx_phase_shift. All software is provided free of charge on an open-source basis.
Embry-Riddle Aeronautical University, Daytona Beach, FL, 32114, USA
Natalia Ptitsyna
Sidra Medical and Research Center, P.O. box 26999, Doha, Qatar
Sabri Boughorbel, Mohammed El Anbari & Andrey Ptitsyn
Present affiliation: Gloucester Marine Genomics Institute, Gloucester, MA, 01930, USA
Andrey Ptitsyn
Sabri Boughorbel
Mohammed El Anbari
NP designed the model and provided analytical solution; SB performed data mining and extraction of specific patterns of expression in multiple data; ME contributed the software and analysis of oscillation phase; AP designed the model and wrote the paper. All authors read and approved the final manuscript.
Correspondence to Andrey Ptitsyn.
Supplemental method. In addition to the Chi-Square test summarized in Table 1, we ran a Monte Carlo simulation with 1000 repetitions with the option simulate.p.value, as described at https://stat.ethz.ch/R-manual/R-devel/library/stats/html/chisq.test.html. This supplemental file provides the description and p-values obtained in the simulations. (DOCX 10 kb)
Supplemental Data Tables. This zip archive contains the results of analysis of phase difference among redundant probe sets in mouse liver, mouse brown fat, mouse white fat and two independent studies of Arabidopsis thaliana timeline gene expression. (ZIP 32 kb)
Supplemental Initial Data Tables. This zip archive contains the initial timeline data for probe sets in mouse liver, mouse brown fat, mouse white fat originally published in Zvonic et al. [21]. (ZIP 3710 kb)
Ptitsyna, N., Boughorbel, S., El Anbari, M. et al. The role of alternative Polyadenylation in regulation of rhythmic gene expression. BMC Genomics 18, 576 (2017). https://doi.org/10.1186/s12864-017-3958-1
Alternative transcription
Oscillatory gene expression
Cellular circuits
Molecular diode, mathematical modeling, datamining
Comparative and evolutionary genomics
Hyperbolic dynamics of discrete dynamical systems on pseudo-riemannian manifolds
Mohammadreza Molaei
Mahani Mathematical Research Center, Shahid Bahonar University of Kerman, Kerman, Iran
We express our thanks to the anonymous referee for his/her valuable comments.
Received July 21, 2017 Published April 2018
We consider a discrete dynamical system on a pseudo-Riemannian manifold and we define the concept of a hyperbolic set for it. We insert a condition in the definition of a hyperbolic set which implies the unique decomposition of a part of the tangent space (at each point of this set) into two unstable and stable subspaces, with exponentially increasing and exponentially decreasing dynamics on them. We prove the continuity of this decomposition via the metric created by a torsion-free pseudo-Riemannian connection. We present a global attractor for a diffeomorphism on an open submanifold of the hyperbolic space $H^2(1)$ which is not a hyperbolic set for it.
Keywords: hyperbolic set, pseudo-Riemannian manifold.
Mathematics Subject Classification: Primary: 37D05; Secondary: 53B30.
Citation: Mohammadreza Molaei. Hyperbolic dynamics of discrete dynamical systems on pseudo-riemannian manifolds. Electronic Research Announcements, 2018, 25: 8-15. doi: 10.3934/era.2018.25.002
Figure 1. The hyperbolic space $H^2(1)$
Figure 2. The black circle $C$ is a global attractor for $h$ but it is not a hyperbolic set for it
Combinatorial Proofs for Summations: A Possible Generalisation?
\begin{equation*} \sum_{r=1}^{n} r^\alpha (n-r)^\beta = ? \end{equation*} where $\alpha$ and $\beta$ are natural numbers
Here's something I wish to share with the Math SE community: a very interesting combinatorial proof that I came across some time ago. I strongly believe that this approach could lead to a possible generalisation of similar summations.
Those who wish to read the following warm-up proofs (I thought it a good idea to share what motivated the question I intend to ask) may do so while enjoying this little snack for the mind - the art of combinatorial proof. Others may skip directly to my question, right after these proofs.
\begin{equation*} 1 + 2 + 3 + \cdots + n = {n+1 \choose 2} \end{equation*}
The above identity is a very familiar one, apparently one that Gauss came up with when his teacher asked him to sum up the numbers from 1 to 100 (supposedly as punishment). Of course, his proof is well known - writing the sequence in the reverse order and summing it with the original one greatly reduces the computational effort.
Now, what's interesting is the combinatorial approach, which I summarise as follows (for people who may not be familiar with it):
In general, to give a combinatorial proof for a binomial identity, say $A=B$, you do the following:

1. Find a counting problem you will be able to answer in two ways.
2. Explain why one answer to the counting problem is $A$.
3. Explain why the other answer to the counting problem is $B$.

Since both $A$ and $B$ are answers to the same question, we must have $A=B$.
The tricky thing is coming up with the question. This is not always obvious, but it gets easier the more counting problems you solve. You will start to recognize types of answers as the answers to types of questions. More often what will happen is you will be solving a counting problem and happen to think up two different ways of finding the answer. Now you have a binomial identity and the proof is right there. The proof is the problem you just solved together with your two solutions.
Now, coming to the proof of the above-stated identity:
Consider the question: "How many subsets of $A = \{1,2,3,\ldots,n+1\}$ contain exactly two elements?"

Answer 1: We must choose 2 elements from the $n+1$ in set $A$, so there are exactly ${n+1 \choose 2}$ subsets.
Answer 2: We break this question down into cases, based on what the larger of the two elements in the subset is. The larger element can't be 1, since we need at least one element smaller than it.
Larger element is 2: there is 1 choice for the smaller element.
Larger element is 3: there are 2 choices for the smaller element.
Larger element is 4: there are 3 choices for the smaller element. And so on. When the larger element is $n+1$, there are $n$ choices for the smaller element. Since each two element subset must be in exactly one of these cases, the total number of two element subsets is $1 + 2 + 3 + \cdots + n\text{.}$
Answer 1 and answer 2 are both correct answers to the same question, so they must be equal. Therefore, \begin{equation*} 1 + 2 + 3 + \cdots + n = {n+1 \choose 2} \end{equation*}
To drive y'all another step closer to my question, here's another (just one) combinatorial proof: \begin{equation*} 1(n) + 2(n-1) + 3 (n-2) + \cdots + (n-1) 2 + n(1) = {n+2 \choose 3}. \end{equation*}
Consider the question "How many 3-element subsets are there of the set $\{1,2,3,\ldots, n+2\}\text{?}$"
Answer 1: We must select 3 elements from the collection of $n+2$ elements. This can be done in ${n+2 \choose 3}$ ways.
Answer 2: Break this problem up into cases by what the middle number in the subset is. Say each subset is $\{a,b,c\}$ written in increasing order. We count the number of subsets for each distinct value of $b$. The smallest possible value of $b$ is $2$ and the largest is $n+1$.
When $b=2$, there are $1(n)$ subsets: 1 choice for $a$ and $n$ choices (3 through $n+2$) for $c$
When $b=3$, there are $2(n-1)$ subsets: 2 choices for $a$ and $n-1$ choices for $c$
When $b=4$, there are $3(n-2)$ subsets: 3 choices for $a$ and $n-2$ choices ($5$ through $n+2$) for $c$
And so on. When $b=n+1$ there are $n$ choices for $a$ and only 1 choice for $c$, so, $n(1)$ subsets.
Therefore, the total number of subsets is $1(n) + 2(n-1) + 3 (n-2) + \cdots + (n-1) 2 + n(1)$.
Since Answer 1 and Answer 2 are answers to the same question, they must be equal.
Therefore, \begin{equation*} 1(n) + 2(n-1) + 3 (n-2) + \cdots + (n-1) 2 + n(1) = {n+2 \choose 3}. \end{equation*}
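For the skeptical reader, here is a quick numeric check of this identity (the left-hand side written out as $\sum_{r=1}^{n} r(n-r+1)$):

```python
# Quick numeric verification of 1(n) + 2(n-1) + ... + n(1) = C(n+2, 3).
from math import comb

for n in range(1, 20):
    assert sum(r * (n - r + 1) for r in range(1, n + 1)) == comb(n + 2, 3)
print("identity holds for n = 1..19")
```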
This sort of hints at a pattern in summations where powers of $r$ and $n-r$ are multiplied together (with $r$ varying from $1$ to $n$). Finally, my question is: \begin{equation*} \sum_{r=1}^{n} r^\alpha (n-r)^\beta = ? \end{equation*} where $\alpha$ and $\beta$ are natural numbers
I need help not only in solving for the answer to the above summation in terms of $\alpha$ and $\beta$, but also figuring out a combinatorial approach. The power of the art of combinatorial proof convinces me that the above summation, a generalisation of what we have seen till now - can be possibly evaluated using combinatorial techniques/arguments.
Please help, thanks a lot.
Edit: Something just struck me. Can we look at a possible recursion for the given sum? We know how to solve the easy cases, like $(\alpha,\beta)=(0,1)$ or $(1,0)$. Is it possible to find a recursion for $S(\alpha,\beta)$ in terms of $S(\alpha-1,\beta)$, or $S(\alpha,\beta-2)$, etc., where $S$ denotes the required sum with parameters $(\alpha,\beta)$?
sequences-and-series combinatorics summation binomial-theorem combinatorial-proofs
arya_stark
$\begingroup$ Combinatorial proof means that formal power series techniques are out of the question, doesn't it? $\endgroup$ – Clément Guérin Mar 1 '18 at 9:14
$\begingroup$ Also, as a sidenote, taking $\beta=0$ leaves you with the sum of $\alpha$-powers of the first $n$ integers. There are some formulations of this in terms of Stirling numbers of the second kind and binomial coefficients. The proof is not combinatorial (it uses some formula involving Stirling numbers). Stirling numbers count the number of partitions such that... and binomial coefficients count the number of subsets such that... Well my point is that both have combinatorial interpretations. Maybe you can figure out something in the case $\beta=0$ with said formula. $\endgroup$ – Clément Guérin Mar 1 '18 at 9:25
$\begingroup$ @ClémentGuérin, you may solve it using non-combinatorial techniques as well, for starters. Once we figure out the closed form of that summation, maybe we can brainstorm towards a combinatorial approach. $\endgroup$ – arya_stark Mar 1 '18 at 9:48
$\begingroup$ @schrodinger_16 There is an algorithm for finding out if the given formula has a closed form with a (certain) bounded polynomial degree. Check out A=B from Petkovsek, Wilf and Zeilberger, online here: math.upenn.edu/~wilf/AeqB.html $\endgroup$ – SK19 Mar 3 '18 at 11:23
$\begingroup$ @SK19, there is no doubt that this summation has a closed form. If given particular values of $\alpha$ and $\beta$, one may use binomial expansion to express that sum as a polynomial in r. Once this is done, we know that summation of r^k for values of k = 1,2,3,4... and so on can be evaluated. $\endgroup$ – arya_stark Mar 3 '18 at 11:28
The subject is indeed interesting, but here we can just introduce the main lines along which it could be analyzed.
Let's change symbols and consider the function $$ \bbox[lightyellow] { S(a,b,n) = S(b,a,n) = \sum\limits_{0\, \le \,k\, \le \,n} {k^{\,a} \left( {n - k} \right)^{\,b} } = \sum\limits_{1\, \le \,k\, \le \,n - 1} {k^{\,a} \left( {n - k} \right)^{\,b} } \quad \left| \matrix{ \;0 \le n \in \mathbb Z \hfill \cr \;0 < a,b \in \mathbb R \hfill \cr} \right. }$$ where for generality we allow $a$ and $b$ to be positive reals. We will deal specifically with the case in which $a$ or $b$ is null.
a) Preliminary considerations
$S(a,b,n)$ is clearly symmetric in $a$ and $b$;
it is the convolution of the discrete functions of $n$ : $n^a,\; n^b$;
it looks like the discrete analog of the integral of Bernstein basis polynomials
and actually it is proportional to the corresponding Riemann sum
$$ \bbox[lightyellow] { \eqalign{ & \left( \matrix{ a + b \cr a \cr} \right){{S(a,b,n)} \over {(n+1) n^{\,a + b} }} = {1 \over n}\sum\limits_{0\, \le \,k\, \le \,n} {\left( \matrix{a + b \cr a \cr} \right)\left( {{k \over n}} \right)^{\,a} \left( {1 - {k \over n}} \right)^{\,a + b - a} } \buildrel {n\, \to \,\infty } \over \longrightarrow \cr & \to \;\int_0^1 {b_{\,a,\,a + b} (x)dx} = \int_0^1 {\left( \matrix{ a + b \cr a \cr} \right)x^{\,a} \left( {1 - x} \right)^{\,b} dx} = {1 \over {\left( {a + b + 1} \right)}} \cr} } \tag{1} $$ so that we have an asymptotical relation in $n$, also having a clear
connection with the Beta function.
b) relation with S(a,0,n)
For $a,\,b$ non-negative integers, expanding $(n-k)^b$ we get $$ \bbox[lightyellow] { \eqalign{ & S(a,b,n) = \sum\limits_{0\, \le \,k\, \le \,n} {k^{\,a} \left( {n - k} \right)^{\,b} } = \sum\limits_{0\, \le \,k\, \le \,n} {\left( {n - k} \right)^{\,a} k^{\,b} } \quad \left| {\;0 \le a,b \in \mathbb Z} \right. \cr & = \sum\limits_{0\, \le \,j\, \le \,b} {\left( {\left( { - 1} \right)^{\,b - j} \left( \matrix{ b \cr j \cr} \right)\sum\limits_{0\, \le \,k\, \le \,n} {k^{\,a + b - j} } } \right)n^{\,j} } = \sum\limits_{0\, \le \,j\, \le \,a} {\left( {\left( { - 1} \right)^{\,a - j} \left( \matrix{ a \cr j \cr} \right)\sum\limits_{0\, \le \,k\, \le \,n} {k^{\,a + b - j} } } \right)n^{\,j} } \cr & = \sum\limits_{0\, \le \,j\, \le \,b} {\left( {\left( { - 1} \right)^{\,b - j} \left( \matrix{ b \cr j \cr} \right)S(a + b - j,0,n)} \right)n^{\,j} } = \sum\limits_{0\, \le \,j\, \le \,a} {\left( {\left( { - 1} \right)^{\,a - j} \left( \matrix{ a \cr j \cr} \right)S(a + b - j,0,n)} \right)n^{\,j} } \cr} } \tag{2} $$
Concerning $S(a,0,n)$, there are four basic expressions for it $$ \bbox[lightyellow] { \eqalign{ & S(m,0,n) = \sum\limits_{0\, \le \,k\, \le \,n} {k^{\,m} } \quad \left| {\;0 \le m,n \in \mathbb Z} \right. = \cr & = \sum\limits_{0\, \le \,k\, \le \,n} {\sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,m} \right)} {\left\{ \matrix{ m \cr j \cr} \right\}k^{\,\underline {\,j\,} } } } = \quad \quad (3.1) \cr & = \sum\limits_{\left( {0\, \le } \right)\,l\,\left( { \le \,m} \right)} {\;l!\;\left\{ \matrix{ m \cr l \cr} \right\}\left( \matrix{ n + 1 \cr l + 1 \cr} \right)} = \quad \quad (3.2)\cr & = \sum\limits_{\left( {0\, \le } \right)\,l\,\left( { \le \,m} \right)} {\left\langle \matrix{ m \cr l \cr} \right\rangle \left( \matrix{ n + 1 + l \cr m + 1 \cr} \right)} = \quad \quad (3.3) \cr & = {{n + 1} \over {m + 1}}\sum\limits_{0\, \le \,l\,\left( { \le \,m} \right)} {\left( \matrix{ m + 1 \cr l + 1 \cr} \right)\;b_{\,m - l} \;\left( {n + 1} \right)^{\,l} } \quad \quad (3.4) \cr} }$$ where:
- ${\left\{ \matrix{ m \cr l \cr} \right\}}$ denotes the Stirling N. of 2nd kind;
- ${\left\langle \matrix{ m \cr l \cr} \right\rangle }$ denotes the Eulerian N. (Worpitzky's identity);
- $b_n$ denotes the Bernoulli N. ( in the version $b_1 = -1/2$).
c) recurrence on $n$
We can write $$ \bbox[lightyellow] { \left\{ \matrix{ S(a,b,n) = 0\quad \left| {\;n < 0} \right. \hfill \cr S(a,b,0) = 0^{\,a} 0^{\,b} = \left[ {0 = a} \right]\left[ {0 = b} \right] \hfill \cr S(a,b,1) = \left[ {0 = a} \right] + \left[ {0 = b} \right] \hfill \cr S(a,b,2) = 0^{\,a} 2^{\,b} + 1^{\,a} 1^{\,b} + 2^{\,a} 0^{\,b} = 1 + 2^{\,b} \left[ {0 = a} \right] + 2^{\,a} \left[ {0 = b} \right] \hfill \cr \hfill \cr S(a,b,n) = \sum\limits_{0\, \le \,k\, \le \,n} {k^{\,a} \left( {\left( {n - 1} \right) + 1 - k} \right)^{\,b} } = \hfill \cr = n^{\,a} 0^{\,b} + \sum\limits_{0\, \le \,j\,} {\left( \matrix{ b \cr j \cr} \right)\sum\limits_{0\, \le \,k\, \le \,n - 1} {k^{\,a} \left( {n - 1 - k} \right)^{\,j} } } = \hfill \cr = n^{\,a} \left[ {0 = b} \right] + \sum\limits_{0\, \le \,j\,} {\left( \matrix{ b \cr j \cr} \right)S(a,j,n - 1)} \hfill \cr} \right. } \tag{4} $$ where, in the last two sums, the upper bound for $j$ would extend to infinity if $b$ were not an integer.
However, taking $b$ to be a non-negative integer, it is still valid for $a$ real.
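A quick numeric check of recurrence (4) against the direct sum; Python's convention `0**0 == 1` matches the indicator base cases above:

```python
# Verify recurrence (4) for small non-negative integers a, b, n.
from math import comb
from functools import lru_cache

def S_direct(a, b, n):
    return sum(k**a * (n - k)**b for k in range(n + 1))  # uses 0**0 == 1

@lru_cache(maxsize=None)
def S_rec(a, b, n):
    if n < 0:
        return 0
    if n == 0:
        return 1 if (a == 0 and b == 0) else 0
    return (n**a if b == 0 else 0) + sum(comb(b, j) * S_rec(a, j, n - 1)
                                         for j in range(b + 1))

assert all(S_direct(a, b, n) == S_rec(a, b, n)
           for a in range(4) for b in range(4) for n in range(7))
print("recurrence (4) verified")
```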
d) z-Transform
From the consideration that $S(a,b,n)$ is the convolution of the two signals $n^a$ and $n^b$, all considered as functions of $n$, the (unilateral) z-Transform follows easily. We consider here $a$ and $b$ non-negative integers. $$ \bbox[lightyellow] { \eqalign{ & F(a,z) = \sum\limits_{0\, \le \,n} {n^{\,a} z^{\,n} } \quad \left| {\,0 \le a \in \mathbb Z} \right.\quad = \cr & = \sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,m} \right)} {\left\{ \matrix{ a \cr j \cr} \right\}z^{\,j} \sum\limits_{0\, \le \,n} {n^{\,\underline {\,j\,} } z^{\,n - j} } } = \cr & = \sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,a} \right)} {\left\{ \matrix{ a \cr j \cr} \right\}z^{\,j} \sum\limits_{0\, \le \,n} {{{d^{\,j} } \over {dz^{\,j} }}\left( {1 - z} \right)^{\, - 1} } } = {1 \over {1 - z}}\sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,a} \right)} {j!\left\{ \matrix{ a \cr j \cr} \right\}\left( {{z \over {1 - z}}} \right)^{\,j} } = \cr & = {1 \over {\left( {1 - z} \right)^{\,a + 1} }}\sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,a} \right)} {j!\left\{ \matrix{ a \cr j \cr} \right\}z^{\,j} \left( {1 - z} \right)^{\,a - j} } = \cr & = {1 \over {\left( {1 - z} \right)^{\,a + 1} }}\sum\limits_{\left( {0\, \le } \right)\,l\,\left( { \le \,a} \right)} {\left( {\sum\limits_{\left( {0\, \le } \right)\,j\,\left( { \le \,l} \right)} {j!\left\{ \matrix{ a \cr j \cr} \right\}\left( { - 1} \right)^{\,l - j} \left( \matrix{ a - j \cr l - j \cr} \right)} } \right)z^{\,l} } = \cr & = {{p_a (z)} \over {\left( {1 - z} \right)^{\,a + 1} }} \cr} } \tag{5} $$
So $$ \bbox[lightyellow] { \eqalign{ & G(a,z) = \sum\limits_{0\, \le \,n} {S(a,0,n)z^{\,n} } = {{p_a (z)} \over {\left( {1 - z} \right)^{\,a + 2} }} \cr & G(a,b,z) = {{p_a (z)p_b (z)} \over {\left( {1 - z} \right)^{\,a + b + 2} }} \cr} } \tag{6} $$
G Cab
$\begingroup$ 1. That looks pretty interesting, the generalisation over reals is well done, however, is there a simpler closed form for integral values of a and b? $\endgroup$ – arya_stark Mar 5 '18 at 2:08
$\begingroup$ @schrodinger_16 the formulas (3.x) are the "closest" known, and they correspond to $b=0$ (or $a=0$). So for $a,b$ general integers ($0 \le$) there won't be closer than .. the starting formula itself. $\endgroup$ – G Cab Mar 5 '18 at 10:50
Ok I am writing this as an answer because I don't feel like displaying the formula in a comment. I denote by $\left\lbrace\begin{array}{l}n\\k\end{array}\right\rbrace$ the Stirling number of the second kind, i.e. the number of partitions with $k$ sets of a set with $n$ elements. You can show (to show this I use formal power series; the proof will look like my answer there Infinite Sum with Combinatoric Expression; you may also use the second formula in (1) in the answer given by Robjohn for the same question, which amounts to the same thing) that (there is a condition: $n\geq \alpha$):
$$\sum_{k=0}^nk^{\alpha}=\sum_{\ell=1}^{\alpha}\ell!\left\lbrace\begin{array}{l}\alpha\\\ell\end{array}\right\rbrace\binom{n+1}{\ell+1}=\sum_{\ell=1}^{\alpha}\frac{(n+1)!}{(n-\ell)!}\frac{1}{\ell+1}\left\lbrace\begin{array}{l}\alpha\\\ell\end{array}\right\rbrace$$
the condition is not really necessary for the first equality as long as you ask for the convention: $\binom{n+1}{\ell+1}=0$ if $\ell>n$.
I don't know if it can be called a "closed formula" for this sum but this is the best I know.
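If you want to sanity-check it numerically, here is a short sketch; the Stirling numbers are computed from the standard recurrence, so nothing depends on a particular library:

```python
# Numeric check of the Stirling-number formula for sums of powers.
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind via the usual recurrence."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def lhs(alpha, n):
    return sum(k ** alpha for k in range(n + 1))

def rhs(alpha, n):  # comb(n+1, l+1) is 0 when l+1 > n+1, as conventioned
    return sum(factorial(l) * stirling2(alpha, l) * comb(n + 1, l + 1)
               for l in range(1, alpha + 1))

assert all(lhs(a, n) == rhs(a, n) for a in range(1, 7) for n in range(12))
print("identity verified")
```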
As far as I know you can interpret it with nice drawings for $\alpha=1,2,3$ which is kind of a combinatorial proof but probably not the kind you are looking for. If you want I can tell you more about it.
Stirling numbers have a combinatorial interpretation, binomial coefficients too. Maybe you can guess something by just looking at the formula.
I have been looking for a combinatorial proof a while ago. I have not been successful. Doesn't mean there doesn't exist one in the literature, of course!
Clément Guérin
$\begingroup$ Note that it is $$ k^{\,\alpha } = \sum\limits_{\left( {0\, \le } \right)\,l\,\left( { \le \,\alpha } \right)} {\left\{ \matrix{ \alpha \cr l \cr} \right\}\,k^{\,\underline {\,l\,} } } = \sum\limits_{\left( {0\, \le } \right)\,l\,\left( { \le \,\alpha } \right)} {l!\left\{ \matrix{ \alpha \cr l \cr} \right\}\,\left( \matrix{ k \cr l \cr} \right)} \quad \left| {\;0 \le \alpha \in Z} \right. $$ $\endgroup$ – G Cab Mar 3 '18 at 7:57
$\begingroup$ while $$ \sum\limits_{0\, \le \,k\, \le \,n} {k^{\,\alpha } } = \sum\limits_{0\, \le \,k\, \le \,n} {\sum\limits_{\left( {0\, \le } \right)\,l\,\left( { \le \,\alpha } \right)} {l!\left\{ \matrix{ \alpha \cr l \cr} \right\}\,\left( \matrix{ k \cr l \cr} \right)} } = \sum\limits_{\left( {0\, \le } \right)\,l\,\left( { \le \,\alpha } \right)} {l!\left\{ \matrix{ \alpha \cr l \cr} \right\}\,\left( \matrix{ n + 1 \cr n - l \cr} \right)} \quad \left| {\;0 \le \alpha ,n \in Z} \right. $$ $\endgroup$ – G Cab Mar 3 '18 at 7:58
$\begingroup$ @GCab, agreed. Thanks for checking! I edit. $\endgroup$ – Clément Guérin Mar 3 '18 at 8:38
$\begingroup$ @ClémentGuérin something just struck me. Can we look at possible recursion, for the given sum? We know how to solve the easy cases, like ($\alpha$,$\beta$)= (0,1) or (1,0). Is it possible to find a recursion of S($\alpha$,$\beta$) in terms of S($\alpha$-1,$\beta$), or S($\alpha$,$\beta$-2), etc, where S denotes the required sum with parameters ($\alpha$,$\beta$) $\endgroup$ – arya_stark Mar 4 '18 at 13:38
Volume 10 Supplement 2
Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2015: systems biology
Identifying the topology of signaling networks from partial RNAi data
Yuanfang Ren1,
Qiyao Wang1,
Md Mahmudul Hasan1,
Ahmet Ay2 &
Tamer Kahveci1
BMC Systems Biology volume 10, Article number: 53 (2016)
Methods for inferring signaling networks using single gene knockdown RNAi experiments and reference networks have been proposed in recent years. These methods assume that RNAi information is available for all the genes in the signal transduction pathway, i.e., complete. This assumption does not always hold up since RNAi experiments are often incomplete and information for some genes is missing.
In this article, we develop two methods to construct signaling networks from incomplete RNAi data with the help of a reference network. These methods infer the RNAi constraints for the missing genes such that the inferred network is closest to the reference network. We perform extensive experiments with both real and synthetic networks and demonstrate that these methods produce accurate results efficiently.
Application of our methods to the Wnt signal transduction pathway has shown that they can be used to construct highly accurate signaling networks from experimental data in less than 100 ms. The two methods, which produce accurate results efficiently, show great promise for constructing real signaling networks.
Cells respond to external stimuli often initiated by external signaling molecules such as steroid hormones or growth factors. This response is tightly controlled by complex protein-protein interaction networks, namely, signal transduction pathways [1]. When an external molecule binds to a specific receptor molecule located in the cell membrane or inside the cell, the receptor undergoes a conformational change and triggers a chain of signaling events to propagate the external signal inside the cell. As the appropriate response to the external stimuli, the chain of biochemical reactions culminates in the activation or suppression of a target protein (or a set of proteins) known as the reporter protein.
Signaling networks are vital for the proper functioning of cells as they govern key cellular processes. For instance, the Mitogen-activated protein kinase (MAPK) signaling network is involved in the regulation of cellular proliferation, differentiation, mitosis, survival, and apoptosis [2, 3]. Any disruption in signal transduction in cells leads to a number of disorders such as cancer, Alzheimer's, Parkinson's, and kidney and cardiovascular disease [4–7]. It is paramount that we study the topology of signaling networks to gain insights into how cells respond to external stimuli, how deviations in signaling result in various diseases, and how cells respond to treatments.
Experimental methods such as yeast two-hybrid and RNA interference (RNAi) give us information about the signaling events inside the cells. In the RNAi experiment [8], mRNA levels of a predetermined set of genes are artificially knocked down [8, 9]. For each gene, the effect of the knockdown is measured in the reporter genes. The role of the knocked down gene in the signal transduction pathway is inferred by comparing the responses of RNAi treated and wild type cells [10, 11]. If the response deviates greatly in the RNAi treated cells compared to the wild type, it shows that the knocked down gene plays an important role in signal transduction from the receptor to the reporter.
A single gene knockdown RNAi experiment gives insight into the importance of a single gene in signal transduction from the receptor to the reporter gene. However, constructing the complete network topology from RNAi experiments is computationally challenging [12]. To alleviate the computational cost, many computational methods have been developed that use available experimental data such as gene expression, RNAi knockdown assays and protein-protein interaction networks [13–15]. These methods often employ Bayesian networks, probabilistic Boolean networks, combinatorial optimization methods and differential equation models [15–21]. Some inference algorithms start with a network topology called the reference network. These methods assume that the network to be constructed is similar (a few network edit operations away) to the reference network [20–23]. Methods that utilize prior knowledge construct accurate network topologies faster than methods that do not.
Signaling Network Constructor (SiNeC) [20] is an algorithm that infers signaling networks using a reference network and RNAi data. SiNeC starts from the signaling network of a reference organism and makes the minimum number of interaction additions or deletions to this reference network so that it satisfies the RNAi data (or RNAi constraints) of the target organism. SiNeC assumes that the RNAi experimental data is available for all the genes in the network. However, RNAi experiments are often noisy, and there are usually genes for which the RNAi data is not collected [24]. Therefore, the development of network construction methods for incomplete RNAi experimental data is of utmost importance.
Network construction using a reference network and complete RNAi data is NP-Complete [20]. If RNAi data is missing for a subset of genes, the complexity of the problem increases further. Assume that there are \(n\) genes for which RNAi data is missing. Each of these genes can be either critical for signal transduction from the receptor to the reporter genes, or noncritical, i.e., each gene has two possibilities. Therefore, for \(n\) missing genes, an optimal solution must evaluate all \(2^n\) possible configurations to compute the correct values for the missing genes. Evaluating all \(2^n\) constraint configurations is impractical, and such an exhaustive method fails as \(n\) increases.
In this article, we construct signaling networks using incomplete/missing RNAi data. We design and develop two iterative network construction algorithms, namely the holistic optimization and the prioritized optimization algorithms, to infer signaling networks. Assume that there are \(n\) genes with missing RNAi data. Holistic optimization evaluates each of these genes one by one to decide whether it is critical or noncritical, leading to \(O(n^2)\) constraint combinations. Prioritized optimization lowers the number of constraint combinations by exclusively setting each gene as critical and grouping the genes that yield networks with the same distance to the reference network (more on distance in Section 'Preliminary terms') into subsets of genes. This divides the set of \(n\) unknown genes into \(k\) subsets of mutually exclusive genes, where each subset is of size \(n_i\) (\(\sum _{i=1}^{k} n_{i} = n\)). In each iteration, prioritized optimization evaluates only the genes in a subset to see whether they are critical or noncritical, thus leading to only \(O(\sum _{i=1}^{k} n_{i}^{2})\) iterations. We also develop a node ordering algorithm named TopSoG that takes causality into account; both the holistic and prioritized optimization algorithms employ it as a subroutine.
We evaluate our methods using both synthetic and real signaling network datasets. To compare the performance with the gold standard, we also implement an exhaustive algorithm that evaluates all subsets of the genes with missing RNAi data and infers the network with the minimum distance to the reference network. We found that the proposed methods run much faster than the exhaustive algorithm and produce the same accuracy levels in their inferred networks. For instance, it takes less than 100 ms for our method to reconstruct highly accurate Wnt signaling networks for different organisms. We also evaluate our methods using synthetic networks by varying a broad spectrum of parameters, such as the number of genes with missing RNAi data, the number of nodes in the network and the amount of deviation between the reference and the target network to be constructed. We found our methods to be robust, as they produced highly accurate networks in all these scenarios.
The organization of the rest of the paper is as follows. In Section 'Method', we formally define the problem and propose two algorithms to solve it. We present the results of our extensive experiments in Section 'Results and discussion' and conclude the paper in Section 'Conclusions'.
In this section we present the two novel methods we developed to solve the signaling network construction problem. First, we present the key terms used in our method. Then, we briefly explain the SiNeC algorithm. Next, we describe our two methods in detail: the holistic optimization algorithm and the prioritized optimization algorithm. Last, we explain our new sorting algorithm TopSoG for the critical genes.
Preliminary terms
We start by introducing the key terms that will help present our method. First, we introduce an important concept: critical and noncritical genes in a network with a receptor and reporter gene pair.
Assume that we are given a directed network \(G=(V,E)\) with receptor gene \(v_s\) and reporter gene \(v_t\). We say that a gene \(v \in V\) is a critical gene if there is no path from \(v_s\) to \(v_t\) that does not contain \(v\). Otherwise, it is a noncritical gene.
A simple example in Fig. 1 clarifies this. In this figure, node \(v_a\) appears on all the paths from \(v_s\) to \(v_t\). Thus, node \(v_a\) is the only critical node. Single gene knockdown RNAi experiments discover whether a gene is important during the transmission of a signal from a receptor to a reporter gene. Let us denote the result of the RNAi experiment on the \(i\)th gene with an indicator variable \(c_i\). If a signal is unable to reach the reporter gene from the receptor gene after the \(i\)th gene is knocked down, the variable \(c_i = 1\). Otherwise, \(c_i = 0\). If the RNAi experiment for the \(i\)th gene is missing, we set \(c_i = -1\). We call such genes unknown genes in the rest of the paper. Suppose we want to construct a network with \(l\) genes; we represent the RNAi constraints imposed on all these genes with a vector of variables \(C=(c_1,c_2,\ldots,c_l)\).
A hypothetical signaling network. Nodes v_s and v_t are the receptor and reporter genes. Nodes v_a and v_b are constrained to be critical genes
CONSISTENT NETWORK Consider a directed network G = (V, E) with a receptor and reporter gene pair, and RNAi constraints C imposed on its set of genes. We say that G is consistent with C if every gene v_i is a critical gene when c_i = 1 and a noncritical gene when c_i = 0.
Notice that in Definition 2 above, rules are imposed only on the genes known to be critical or noncritical. Unknown genes (c_i = −1) can be either critical or noncritical. Next, we introduce another notion needed to define our problem.
DISTANCE BETWEEN TWO NETWORKS Assume that we are provided with two networks built on the same set of genes, G_1 = (V, E_1) and G_2 = (V, E_2). We denote set difference and set cardinality with the operators "∖" and "|·|", respectively. We define the distance between G_1 and G_2 as:
$$dist(G_{1},G_{2}) = |E_{1} \setminus E_{2}| + |E_{2} \setminus E_{1}|$$
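In code, this distance is simply the size of the symmetric difference of the two edge sets; a one-function sketch consistent with Definition 3 (assuming both networks are networkx digraphs over the same gene set):

```python
def dist(G1, G2) -> int:
    """Distance of Definition 3: edges of G1 missing from G2, plus vice versa."""
    E1, E2 = set(G1.edges()), set(G2.edges())
    return len(E1 - E2) + len(E2 - E1)
```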
In what follows we formally define the signaling network construction problem.
SIGNALING NETWORK CONSTRUCTION Assume that we are given a reference network G_R = (V, E_R) with respect to v_s and v_t, and a vector of RNAi constraints C. The problem is to construct a network G = (V, E) that is consistent with C and whose distance dist(G, G_R) to the reference is minimum.
It is important to note that Definition 4 assumes that the topology of the reference network is close to that of the target network. When reference networks from phylogenetically close organisms are available, this assumption has been shown to yield accurate results [20].
Next, we present the two novel algorithms we have developed for the problem defined above. Both algorithms apply a hill climbing strategy: they start from an initial configuration of constraints and then gradually update it. Since there are usually only a few critical genes in real signaling networks, in the initial configuration we set c_i = 0 (noncritical) for all genes with c_i = −1 (i.e., missing data).
Overview of the SiNeC algorithm
Before introducing our method, we take a small detour to briefly summarize the SiNeC algorithm, as it is necessary background for understanding our method. SiNeC is a recent network inference algorithm which uses a given reference network and the RNAi data to construct the target network [20]. It, however, assumes that the RNAi constraints for all genes are known. In this paper, we develop algorithms that utilize SiNeC and deal with the missing RNAi data problem.
Briefly, SiNeC works in three steps. (i) It first estimates the order in which the signal is propagated through the critical genes from the receptor to the reporter gene. SiNeC uses the Sloan algorithm [25] to generate a putative ordering. The Sloan algorithm assigns a priority value to each node based on its degree and its distance to the end node; it repeatedly removes the node with the highest priority and updates the priorities of the remaining nodes until all nodes are processed. This greedy strategy yields an ordering which requires that every path from receptor to reporter pass through the critical genes in that order. (ii) SiNeC then deletes edges that conflict with the ordering of critical genes. If there is a path of noncritical genes between two nonconsecutive critical genes, a signal can still get through without traversing the intermediate critical genes; SiNeC removes such paths using a minimum number of edge deletions to make the network consistent with the ordering found by the Sloan algorithm. (iii) SiNeC inserts missing edges to make the reference network satisfy the experimental RNAi constraints. It inserts an edge in either of the following cases: 1) no path exists between two consecutive critical genes, or 2) some noncritical gene lies on all the paths between two consecutive critical genes (which would otherwise make it a critical gene). For further details, interested readers can refer to Hashemikhabir et al. [20].
Holistic optimization algorithm
The holistic optimization algorithm starts constructing the network topology with each unknown gene set to noncritical. It then iteratively tries to alter the constraint of one unknown gene at a time from noncritical to critical. It is worth mentioning that after such an alteration, the constraints of all the genes are fixed; that is, there are no unknown genes left at this stage. For each such constraint vector, it uses the SiNeC algorithm to construct the network topology, and it only accepts the alteration with the best result. Algorithm 1 (holistic optimization) describes this process in detail. It consists of the following two steps.
Step 1: Initialization. It first sets the constraints of all unknown genes to noncritical. Then it uses these constraints to construct the network with minimum distance to the reference and records the resulting distance (Lines 2–6).
Step 2: Climbing. This is the key step. It iterates over the set of all unknown genes. For each such gene g_i, it first temporarily sets g_i to critical, that is, c_i = 1. It then uses this new constraint vector C and the given reference network G_R as the guide to construct a new network G_i by applying SiNeC (Line 11). After temporarily altering the constraint of every unknown gene in turn, it chooses the network G_m with the smallest distance to the reference G_R (Line 14). If the distance between G_m and G_R improves on the current best result, it fixes the constraint of gene g_m as critical (Lines 15–17). Otherwise, it concludes that no single constraint alteration can improve the result and simply returns the current best result (Line 19).
Here, we analyze the complexity of the holistic optimization algorithm. By far the most time consuming step is the network construction (Line 11) using SiNeC. Denoting the number of unknown genes by n, the algorithm executes this step \(O(n^{2})\) times: each of up to n climbing iterations evaluates up to n candidate alterations.
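The following sketch illustrates the two steps under stated assumptions: `sinec(C, G_R)` is a hypothetical stand-in for the SiNeC construction call of Line 11, `C` maps each gene to its constraint, and `dist` is the function sketched after Definition 3. It is a minimal reading of Algorithm 1, not the authors' Java implementation:

```python
def holistic_optimization(G_R, C, unknown, sinec):
    for g in unknown:                       # Step 1: all unknowns start noncritical
        C[g] = 0
    best_G = sinec(C, G_R)
    best_d = dist(best_G, G_R)
    remaining = set(unknown)
    while remaining:                        # Step 2: climb one constraint at a time
        trials = []
        for g in remaining:
            C[g] = 1                        # temporarily set g to critical
            trials.append((dist(sinec(C, G_R), G_R), g))
            C[g] = 0                        # revert the temporary alteration
        d_m, g_m = min(trials, key=lambda t: t[0])
        if d_m >= best_d:
            break                           # no single alteration improves the result
        C[g_m], best_d = 1, d_m             # finalize g_m as critical
        remaining.discard(g_m)
        best_G = sinec(C, G_R)
    return best_G, C
```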
Prioritized optimization algorithm
The holistic optimization algorithm carefully tries to construct a network close to the reference network. However, trying \(O(n^{2})\) alternative constraint combinations becomes prohibitively time consuming as n and the network size grow. In this section, we develop a method that alleviates this problem by reducing the number of alterations to the constraint vector.
Our next algorithm utilizes the distance between the network G_i and the reference network G_R, where G_i is obtained after altering the constraint of gene g_i to critical, one gene at a time. With these distances, it prioritizes the role of gene g_i in the network, i.e., whether g_i is critical or not: smaller values of dist(G_i, G_R) indicate a higher likelihood that g_i is a critical gene in the target network. Algorithm 2 (prioritized optimization) describes this idea in detail. Similar to holistic optimization, it consists of two steps.
Step 1: Initialization. As in holistic optimization, this step starts by initializing the constraints of all the unknown genes to noncritical. It constructs the network and records the distance to the reference (Lines 2–6). Then, for each unknown gene g_i, it temporarily alters its constraint to critical (i.e., c_i = 1), constructs a new network G_i, and stores the distance dist(G_i, G_R) in Dist[i] (Lines 7–12).
Step 2: Climbing. This step contains the major difference between our two methods. Unlike holistic optimization, the prioritized one iterates over only a subset of the unknown genes instead of the whole set. Let us denote this subset by U′ (Line 16). This subset consists of the unknown genes with the smallest value of Dist[i] obtained in the first step; it is likely that more than one gene attains this smallest value. For each unknown gene g_i in U′, prioritized optimization temporarily sets it as critical, constructs a network G_i using the new constraint vector C, and computes the distance dist(G_i, G_R) to the reference (Lines 17–21). It finalizes any constraint that provides a better result than the current best and continues this process iteratively (Lines 22–25). It returns the current best result once no single constraint alteration can improve it.
As in holistic optimization, constructing the network using SiNeC (Line 19) is the most time consuming step of prioritized optimization. We denote the number of unique dist(G_i, G_R) values among all unknown genes by k (k ≤ n). Each unique value defines a set U′, so there are k such sets in total; we denote their sizes by n_1, n_2, …, n_k (\(n = \sum _{i=1}^{k} n_{i}\)). Thus, prioritized optimization executes that step \(O(\sum _{i=1}^{k} {n_{i}^{2}})\) times. When k is large and the n_i have similar values, the time complexity of prioritized optimization is therefore significantly better than that of holistic optimization.
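A hedged sketch of this idea follows; it reads the grouping as bucketing unknown genes by their initial distance score and climbing group by group from the smallest score upward, which simplifies the exact control flow of Lines 16–25 (`sinec` and `dist` are as in the earlier sketches):

```python
from collections import defaultdict

def prioritized_optimization(G_R, C, unknown, sinec):
    for g in unknown:                       # Step 1: initialize and score once
        C[g] = 0
    best_d = dist(sinec(C, G_R), G_R)
    groups = defaultdict(list)              # bucket genes by their initial distance
    for g in unknown:
        C[g] = 1
        groups[dist(sinec(C, G_R), G_R)].append(g)
        C[g] = 0
    for d in sorted(groups):                # Step 2: climb within one subset U' at a time
        U_prime, improved = list(groups[d]), True
        while improved and U_prime:
            improved = False
            for g in list(U_prime):
                C[g] = 1                    # tentatively set g as critical
                d_i = dist(sinec(C, G_R), G_R)
                if d_i < best_d:
                    best_d, improved = d_i, True
                    U_prime.remove(g)       # finalize g and keep climbing
                else:
                    C[g] = 0                # revert: g stays noncritical
    return sinec(C, G_R), C
```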
Sorting critical genes
Both our holistic and prioritized optimization algorithms employ the SiNeC algorithm to construct the network topology once the constraints of all the genes are determined. Recall from Section 'Overview of the SiNeC algorithm' that an important step of SiNeC is to rank the critical genes. SiNeC applies the Sloan algorithm for this purpose. The Sloan algorithm ranks genes based on their degrees and distances to the reporter gene (see Section 'Overview of the SiNeC algorithm'). This strategy, however, fails to capture the causality between the genes in signal transfer and thus leads SiNeC to incorrect network topologies. Figure 1 illustrates this on a toy example. In this example, nodes v_s and v_t denote the receptor and reporter genes, respectively. Assume that nodes v_a and v_b are critical genes according to the given RNAi constraints; we therefore need to rank v_a and v_b. Intuitively, v_a should appear before v_b, as v_a can pass a signal to v_b and both have the same distance to the reporter. However, since v_b has a larger degree than v_a, the Sloan algorithm prefers v_b to come before v_a for a signal starting from the receptor. This causes many redundant edge insertions and deletions (e.g., it requires inserting an edge from v_s to v_b). More importantly, it results in an incorrect network topology. In summary, the Sloan algorithm is not tailored for signaling network construction, and better ranking algorithms are needed. Next, we develop a new gene ranking algorithm named Topological Sorting for General Graph (TopSoG).
The TopSoG algorithm (see Algorithm 3) is loosely based on the classical topological sorting algorithm [26], which is designed only for directed acyclic graphs (DAGs). A reference network in our problem, however, may contain cycles. To tackle this, we convert the reference network G_R = (V, E_R) to a DAG G_R′ = (V′, E′). Initially, we set G_R′ to be identical to the reference network G_R. We then update both V′ and E′ using the following strategy to convert it to a DAG. We start by applying Kosaraju's algorithm [26] to find the strongly connected components (SCCs) in G_R (Line 2). Let us denote the ith SCC by S_i. Each S_i defines a small subnetwork of G_R which contains the nodes in S_i and the edges incident to them. We compress each S_i and replace it with a single node in G_R′. For each S_i, if there is an incoming edge (u, v) where u ∈ V∖S_i and v ∈ S_i, we call v an entry point to S_i. Note that there can be multiple incoming edges to S_i, leading to possibly multiple entry points. Among all these entry points, we designate as the entrance to S_i the one whose sum of distances to all the other entry points is smallest (Lines 5–6). After selecting the entrance for every SCC, we replace each SCC with a single node, called a super node, using the strategy below. We first remove all the nodes in S_i from V′, along with the edges incident to them from E′. We then insert a new super node s_i into V′. For each edge (u, v) ∈ E_R with u ∈ V∖S_i and v ∈ S_i, we insert the edge (u, s_i) into E′. Similarly, for each edge (u, v) ∈ E_R with v ∈ V∖S_i and u ∈ S_i, we insert the edge (s_i, v) into E′. We repeat this process for each S_i. The resulting network G_R′ is guaranteed to be a DAG (Line 7). We are now ready to rank the nodes.
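A brief sketch of this compression step, assuming networkx as before (its condensation() routine uses Tarjan's SCC algorithm rather than Kosaraju's, but yields the same DAG); the entrance() helper follows the entry-point rule of Lines 5–6:

```python
import networkx as nx

def to_dag(G_R: nx.DiGraph) -> nx.DiGraph:
    """Collapse each SCC of G_R into a super node; the 'members' node
    attribute of the result records which genes each super node contains."""
    return nx.condensation(G_R)

def entrance(G_R: nx.DiGraph, S: set):
    """Entry points of SCC S are targets of edges arriving from outside S.
    The entrance minimizes the summed distance (inside S) to the other entry
    points. Assumes S receives at least one edge from the rest of the network."""
    entries = {v for u, v in G_R.in_edges(S) if u not in S}
    sub = G_R.subgraph(S)      # within an SCC, all pairwise distances exist
    return min(entries,
               key=lambda v: sum(nx.shortest_path_length(sub, v, w)
                                 for w in entries if w != v))
```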
In the ranking step, we first obtain a topological ranking R of all the nodes in G_R′ using the Depth-First Search (DFS) algorithm, in the order they are visited starting from v_s (Lines 10–11). Notice that some of the nodes in this ranking are super nodes; each actually represents a set of nodes which still needs to be ranked. To do that, we run DFS on the subnetwork S_i starting from the entrance node u_i, rank the nodes in S_i in the order they are visited, and replace s_i with the ranked list of nodes in S_i (Lines 14–16). We repeat this for each super node s_i in R and obtain a complete ranking of all the nodes in the original reference network G_R. Then we extract the ranking of the critical nodes from R (Line 19).
Finally, we emphasize that the DFS strategy used in our algorithm differs from the classical DFS algorithm [26]. When there are multiple unvisited successors, instead of arbitrarily selecting one to traverse next, we select the successor as follows. Consider a candidate successor node v. We denote the distance between v and the source node v_s in the original reference network G_R by d_s, and the distance between v and the target node v_t by d_t. Among all the unvisited successors, we select the one with the largest (1/d_s − 1/d_t) value, which indicates that it is close to v_s but far from v_t.
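A small sketch of this successor rule, assuming v differs from both endpoints (so neither distance is zero) and that both distances are defined; tie-breaking is left unspecified, as in the text:

```python
import networkx as nx

def pick_successor(G_R: nx.DiGraph, candidates, v_s, v_t):
    """Among unvisited successors, pick the one maximizing 1/d_s - 1/d_t,
    i.e., close to the receptor v_s and far from the reporter v_t."""
    def priority(v):
        d_s = nx.shortest_path_length(G_R, v_s, v)
        d_t = nx.shortest_path_length(G_R, v, v_t)
        return 1.0 / d_s - 1.0 / d_t
    return max(candidates, key=priority)
```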
Results and discussion
In this section, we evaluate the performance of our methods extensively on both synthetic and real datasets. We assess our methods in terms of both the quality of the results and the running time. Next, we introduce the datasets, the quality measures used in our experiments, and the implementation details.
Datasets. We use both synthetically generated and real datasets in our experiments. In the following, to simplify our notation, we use the size and density of a network to denote its number of nodes and its number of edges per node, respectively.
Synthetic dataset. We run experiments on synthetic networks to observe the performance of our methods under diverse parameters, including network size and mutation rate (noise). We randomly generate scale-free synthetic networks following the Barabási-Albert model [27], which is commonly used in the literature to simulate real biological network behavior, varying the network size. Using this model, we generate target networks of sizes 50, 75, 100, and 125. In particular, we generate 10 random networks with density three for each network size; thus, the dataset contains 40 (i.e., 4 × 10) target networks. Following the problem definition, we impose on each target network a receptor and reporter gene pair and RNAi constraints for the gene set. For each target network, we choose the receptor and reporter genes as follows. We first find the shortest paths between all pairs of genes and take the longest of these as the diameter of the network. We then set the source node of this path as the receptor gene and the sink node as the reporter gene. If more than one path can serve as the diameter, we choose one of them at random. Upon choosing the receptor and reporter gene pair, we set all the articulation points which appear on all the paths from the receptor gene to the reporter gene as the critical genes, and the remaining genes as noncritical.
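A sketch of this endpoint selection (Python/networkx, illustration only; ties broken uniformly at random, as described):

```python
import random
import networkx as nx

def pick_receptor_reporter(G: nx.DiGraph, seed=None):
    """Return (receptor, reporter): the endpoints of the longest among all
    pairwise shortest paths, with ties broken uniformly at random."""
    rng = random.Random(seed)
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    pairs = [(d, u, v) for u, dd in lengths.items()
             for v, d in dd.items() if u != v]
    d_max = max(d for d, _, _ in pairs)
    _, v_s, v_t = rng.choice([p for p in pairs if p[0] == d_max])
    return v_s, v_t
```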
Each target network has 7 reference networks that are obtained by applying a specific level of topological perturbation to it. To do this, we apply the degree preserving edge shuffling method [28] with a given mutation rate (i.e., noise). Specifically, we use seven linearly spaced mutation rates of 5 %, 10 %, …, 35 %; thus, in total 280 (i.e., 7 × 40) reference networks are created. A mutation rate of r means that r × |E| edges of the target network are shuffled to generate a reference network.
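A sketch of this perturbation step; networkx's directed_edge_swap (available in recent releases) performs swaps that preserve in- and out-degrees, standing in here for the shuffling method of [28]:

```python
import networkx as nx

def perturb(G_T: nx.DiGraph, rate: float, seed=None) -> nx.DiGraph:
    """Derive a synthetic reference from target G_T by shuffling roughly
    rate * |E| edges while preserving every node's in- and out-degree."""
    G = G_T.copy()
    n_swaps = int(rate * G.number_of_edges())
    nx.directed_edge_swap(G, nswap=n_swaps, max_tries=100 * n_swaps, seed=seed)
    return G
```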
Real dataset. This dataset consists of five Wnt signaling networks from the KEGG database. Specifically, they are from the organisms Bos mutus (bom), Python bivittatus (pbi), Pan paniscus (pps), Xenopus laevis (xla), and Mus musculus (mmu).
Quality measures. We use two quantifiable measures to evaluate the performance of our method. We first report the distance between the inferred network G and the reference network G_R, dist(G, G_R). This criterion measures how well our method constructs the network; smaller values indicate better results. We described this distance criterion formally in Definition 3. We then report the F-score, the accuracy of the result compared to the real network topology. This criterion measures how successfully our method builds the true biological network topology; larger values indicate better results. Note that the F-score can only be calculated when the true network is known. Next, we describe how to compute the F-score.
F-score. The F-score combines precision and recall to evaluate the accuracy of the result. We define them in terms of the numbers of true positives (TP), false positives (FP), and false negatives (FN). We calculate precision as \(\frac {TP}{TP + FP}\) and recall as \(\frac {TP}{TP + FN}\). Thus, we calculate the F-score as
$$\text{F-score} = \frac{2 \times precision \times recall}{precision + recall} $$
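A sketch of the computation, assuming TP, FP, and FN are counted over edges of the inferred versus the true network (the text implies, but does not state, edge-level counting):

```python
def f_score(G_pred, G_true) -> float:
    """Edge-level F-score: an edge is a TP when present in both networks."""
    pred, true = set(G_pred.edges()), set(G_true.edges())
    tp, fp, fn = len(pred & true), len(pred - true), len(true - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```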
Implementation details & environment. We implemented the holistic and prioritized optimization algorithms in Java. We conducted all the experiments on a Linux server with AMD Opteron dual-core processors (up to 2.2 GHz) and 3 GB of RAM.
Default parameter settings. To observe how robust our methods are on the synthetic dataset, we vary a broad spectrum of parameters, namely network size, noise, and the number of unknown genes. Notice that the topology of the reference network is affected by the network size and the noise level, while the inference method is affected by the number of unknown genes. In our experiments, unless stated otherwise, we set the default values of these three parameters as follows: network size 100, noise level 15 %, and number of unknown genes 15.
Effects of parameters on the inference methods
To test the robustness of our methods under various parameters, we run experiments on the synthetic dataset and compute the accuracy of the results. In this respect, we vary the following three parameters: (i) network size, (ii) noise, and (iii) the number of unknown genes in the network. To observe the impact of each parameter, we vary one parameter at a time and fix the others at their default values. To ensure the results are reliable, for each parameter setting we conduct experiments on 10 reference networks and report the average distance dist(G, G_R) and running time.
Effect of network size. First, we explore the impact of network size. We fix the noise to 20 % and the number of unknown genes to 15, and experiment with network sizes 50, 75, 100, and 125.
For all network sizes, we observe that our two methods both successfully build a network topology close to the reference network (Fig. 2a). Generally, both obtain roughly the same distance values; thus, in terms of result quality, no clear winner emerges. On the other hand, we also observe that the distance between G and G_R grows slightly as the network size increases. This is because, with the noise and density fixed, increasing the network size increases the number of shuffled edges.
Effect of parameters on the inference methods. a, b, and c show the average distance between the constructed and the reference networks for varying network size, noise and number of unknown genes respectively. d, e, and f show the running time of the inference methods for the same setup. The running time is reported in milliseconds (ms) and presented in log-scale
The running time of our methods is very low (Fig. 2d); even for networks with 125 nodes, it is only around 10 seconds. For all network sizes, we observe that prioritized optimization runs faster than the holistic optimization method. This is expected, since the former tests fewer constraint combinations. Moreover, the running time of each inference method grows with the network size. This is because the number of edges in the reference network contributes substantially to the complexity of our methods, and as the number of nodes grows at fixed density, the number of edges in the reference networks also grows.
Effect of noise. Next, we consider the impact of noise. We set the network size to 100 and the number of unknown genes to 15, and experiment with noise levels 5 %, 10 %, …, 35 %.
For all noise values, in terms of the distance between G and G_R, we observe results similar to those for network size (Fig. 2a and b). Generally, the resulting distance values are roughly the same, and both methods successfully build a network topology close to the reference network. On the other hand, we also observe that the distance increases with the noise: as the noise grows, the amount of deviation between the reference and the target network increases, so more edge insertions and deletions are needed in the reference network to stay consistent with the RNAi constraints.
The running time of our methods remains very low (milliseconds to seconds) (Fig. 2e). For all noise values, the prioritized optimization runs faster than the holistic one. Moreover, the running time increases as the noise level increases. One possible reason is that, as the difference between the reference and the target network grows, more time is needed to reach the smallest distance value.
Effect of the number of unknown genes. Finally, we focus on the impact of the number of unknown genes. We fix the network size to 100 and the noise to 20 %, and experiment with 10, 15, and 20 unknown genes.
For all numbers of unknown genes, as in our previous experiments, we observe similar distance results (Fig. 2c); both methods achieve small distance values. Interestingly, as the number of unknown genes increases, the distance values do not change noticeably. Thus, our methods are robust to changes in the number of unknown genes.
Similar to our other experiments, our methods demonstrate practical running times (Fig. 2f); both methods construct networks in milliseconds to seconds. We observe that the running time advantage of the prioritized optimization persists. Moreover, the running time increases only gradually with the number of unknown genes, which is very favorable since there are usually many unknown genes in practical applications.
In summary. Our experiments show that our methods are robust to various parameters. Under a variety of parameter settings, both the holistic and the prioritized optimization successfully infer a network topology with a small distance to the reference network. Among the three parameters, the fit between the predicted and actual network is affected most by the noise level. Although both methods yield similar distance values, the prioritized optimization runs much faster, and the network size affects the running time of both methods the most: the running time grows with the network size. Based on the above discussion, we conclude that the prioritized optimization is more desirable, since it obtains distance values similar to the holistic one in much less time. We therefore apply the prioritized optimization in the remaining experiments.
Ranking strategies: Sloan vs. TopSoG
Existing methods such as SiNeC [20] use the Sloan algorithm [25] to rank the critical genes in the network. We have already discussed how the Sloan algorithm works and its limitations (Section 'Overview of the SiNeC algorithm'), and we developed a new ranking algorithm named TopSoG (Section 'Sorting critical genes'). Here, we ask whether TopSoG indeed yields an improvement experimentally. We fix the network size and noise to 100 and 20 %, respectively, and vary the number of unknown genes from 10 to 20. We compare the performance of our prioritized optimization, in terms of the distance between the constructed and the reference networks and the running time, when it employs the Sloan and TopSoG algorithms.
Figure 3a presents the average distance between the constructed and the reference networks. We observe that TopSoG is superior to Sloan in minimizing the distance values regardless of the number of unknown genes. However, this improvement comes at the price of an increase in running time. Figure 3b shows the running time of our prioritized optimization for both ranking strategies. We see that, on average, Sloan is faster than TopSoG in all cases. That said, both strategies have practical running times, as both finish in less than a second. Thus, we conclude that the TopSoG algorithm is preferable, as the accuracy of the network topology is the primary target in network construction. In the rest of our experiments, we use TopSoG to rank critical genes.
Comparison of the Sloan and TopSoG ranking strategies. a shows the distance between the inferred and the reference networks. b reports the running time of the inference algorithm when employed with each strategy in milliseconds (ms)
Comparison with the exhaustive search method
As mentioned before, our inference methods employ a heuristic strategy which greedily determines the role of the next unknown gene. It is interesting to see how well our methods perform compared to the exhaustive approach, which takes all possible combinations of unknown genes into account. To answer this question, we conduct a set of experiments on the synthetic dataset. We vary the number of unknown genes from 10 to 20, with the network size and noise fixed at 100 and 20 %, respectively. For each number of unknown genes, we repeat the experiment with 10 reference networks and compute the average.
For all numbers of unknown genes, our method obtains high accuracy (Fig. 4a). Although our method is heuristic, it obtains similar, or even exactly the same, distance values as the optimal results produced by the exhaustive approach.
Comparison of the prioritized and the exhaustive methods. a shows the average distance between the inferred and reference networks. b reports the running time in milliseconds (ms)
Besides accuracy, we also consider the efficiency of our method. In terms of running time, our method has a great advantage (Fig. 4b). As the number of unknown genes grows, the running time of our strategy grows only quadratically, while that of the exhaustive search grows exponentially (we discuss the time complexity in Section 'Prioritized optimization algorithm'). Thus, when the networks are large or have a great number of unknown genes, the exhaustive strategy is impractical, whereas our method takes negligible time to produce results of almost the same quality.
Evaluations on real dataset
In the sections above, we demonstrated the robustness of our method under various parameters. Even though the Barabási-Albert model is used to simulate the behavior of real biological networks, slight differences might exist between the resulting and real network topological characteristics. To show the applicability of our method to real networks, in this section we evaluate it on a real dataset. Networks in this dataset are from the organisms Bos mutus (bom), Python bivittatus (pbi), Pan paniscus (pps), Xenopus laevis (xla), and Mus musculus (mmu). We use xla and mmu as the target networks; the rest are taken as references. When two genes are orthologs, a node (gene) in one network has a corresponding node in the other, but some nodes may not match between two organisms. If a node is absent in the target network, we remove it and its incident edges from the reference network. We vary the number of unknown genes n from 4 to 20. For each value of n, we set the constraints of n randomly picked genes from the target gene set to "unknown". Based on the network's topology, we decide the roles of the remaining nodes, i.e., whether each is critical or not. To ensure the results are reliable, for each parameter we conduct the experiment 200 times and compute the average F-score of the resulting network.
First, we fix xla as the target network and the rest as references. We set nemo-like kinase (KEGG entry: xla398295) and glycogen synthase kinase 3 beta (KEGG entry: xla399097) as the receptor gene and the reporter gene, respectively. As Fig. 5a shows, the F-score of the resulting topology is as high as 0.75 when bom or pps is the reference network. If mmu or pbi is the reference network, the accuracy drops slightly but remains high, which indicates that the choice of the reference impacts the accuracy of the result. Moreover, we find that the accuracy of our method is robust as the number of unknown genes grows. This is very promising, since we expect many unknown genes in real networks, especially for less studied organisms.
The F-score of the constructed Wnt signaling network using different reference networks. a shows the F-score for target network xla. b shows the F-score for target network mmu
Then we fix mmu as the target network. We set nemo-like kinase (KEGG entry: mmu18099) and naked cuticle 2 homolog (KEGG entry: mmu72293) as the receptor gene and the reporter gene, respectively. We make a similar observation: our method is robust to the growing number of unknown genes while maintaining high accuracy (Fig. 5b).
When the remaining organisms are used as target networks, we observe similar results (not shown). Finally, we turn our attention to the running time of our method: in this dataset, each network is inferred in less than 100 ms. In summary, our method is a practical tool for constructing real signaling networks because of its efficiency and high accuracy.
Conclusions
In this study, we presented two novel methods for constructing signaling networks from incomplete RNAi data under the guidance of a reference network. These methods infer a network topology which is consistent with the RNAi experiments and close to a given reference network. We also presented a new, biologically relevant gene ranking method for signaling network construction. Our experiments showed that the new ranking strategy greatly improves our methods in minimizing the distance to the reference. Moreover, both of our methods construct highly accurate signaling networks in much less time than an exhaustive search. We observed that although the accuracies of our two methods are comparable, the prioritized optimization method outperforms the holistic method in terms of running time. Application of our method to the real Wnt signaling network demonstrated its efficiency and applicability to real signaling networks.
References
Cooper GM. Signaling Molecules and Their Receptors. The Cell: A Molecular Approach. Sunderland: Sinauer Associates; 2000.
Bonni A, Brunet A, West AE, Datta SR, Takasu MA, Greenberg ME. Cell survival promoted by the Ras-MAPK signaling pathway by transcription-dependent and -independent mechanisms. Science. 1999; 286(5443):1358–62.
Zhang W, Liu HT. MAPK signal pathways in the regulation of cell proliferation in mammalian cells. Cell Res. 2002; 12(1):9–18. doi:10.1038/sj.cr.7290105.
Polakis P. The many ways of Wnt in cancer. Curr Opin Genet Dev. 2007; 17(1):45–51. doi:10.1016/j.gde.2006.12.007.
Belloni E, Muenke M, Roessler E, Traverso G, Siegel-Bartelt J, Frumkin A, Mitchell HF, Donis-Keller H, Helms C, Hing AV, Heng HH, Koop B, Martindale D, Rommens JM, Tsui LC, Scherer SW. Identification of Sonic hedgehog as a candidate gene responsible for holoprosencephaly. Nat Genet. 1996; 14(3):353–6. doi:10.1038/ng1196-353.
Deng M, Sun F, Chen T. Assessment of the reliability of protein-protein interactions and protein function prediction. In: Pac Symp Biocomputing (PSB 2003). Singapore: World Scientific; 2002. p. 140–51.
Hunter T. Signaling - 2000 and beyond. Cell. 2000; 100(1):113–27.
Fire A, Xu S, Montgomery MK, Kostas SA, Driver SE, Mello CC. Potent and specific genetic interference by double-stranded RNA in Caenorhabditis elegans. Nature. 1998; 391(6669):806–11.
Ipsaro JJ, Joshua-Tor L. From guide to target: molecular insights into eukaryotic RNA-interference machinery. Nat Struct Mol Biol. 2015; 22(1):20–8.
Kolben T, Peröbner I, Fernsebner K, Lechner F, Geissler C, Ruiz-Heinrich L, Capovilla S, Jochum M, Neth P. Dissecting the impact of frizzled receptors in Wnt/ β-catenin signaling of human mesenchymal stem cells. Biol Chem. 2012; 393(12):1433–47.
Brummelkamp TR, Nijman SM, Dirac AM, Bernards R. Loss of the cylindromatosis tumour suppressor inhibits apoptosis by activating NF- κB. Nature. 2003; 424(6950):797–801.
Moffat J, Sabatini DM. Building mammalian signalling pathways with RNAi screens. Nat Rev Mol Cell Biol. 2006; 7(3):177–87. doi:10.1038/nrm1860.
Singh R. Algorithms for the analysis of protein interaction networks. PhD thesis, Massachusetts Institute of Technology. 2011.
Yeang CH, Ideker T, Jaakkola T. Physical network models. J Comput Biol. 2004; 11(2-3):243–62. doi:10.1089/1066527041410382.
Vinayagam A, Stelzl U, Foulle R, Plassmann S, Zenkner M, Timm J, Assmus HE, Andrade-Navarro MA, Wanker EE. A directed protein interaction network for investigating intracellular signal transduction. Sci Signal. 2011; 4(189):8. doi:10.1126/scisignal.2001699.
Kaderali L, Dazert E, Zeuge U, Frese M, Bartenschlager R. Reconstructing signaling pathways from RNAi data using probabilistic Boolean threshold networks. Bioinformatics. 2009; 25(17):2229–35.
Müller P, Kuttenkeuler D, Gesellchen V, Zeidler MP, Boutros M. Identification of JAK/STAT signalling components by genome-wide RNA interference. Nature. 2005; 436(7052):871–5. doi:10.1038/nature03869.
Böck M, Ogishima S, Tanaka H, Kramer S, Kaderali L. Hub-centered gene network reconstruction using automatic relevance determination. PLoS ONE. 2012; 7(5):35077. doi:10.1371/journal.pone.0035077.
Mazur J, Ritter D, Reinelt G, Kaderali L. Reconstructing nonlinear dynamic models of gene regulation using stochastic sampling. BMC Bioinformatics. 2009; 10:448. doi:10.1186/1471-2105-10-448.
Hashemikhabir S, Ayaz ES, Kavurucu Y, Can T, Kahveci T. Large-scale signaling network reconstruction. IEEE/ACM Trans Comput Biol Bioinformatics (TCBB). 2012; 9(6):1696–708.
Ozsoy OE, Can T. A divide and conquer approach for construction of large-scale signaling networks from PPI and RNAi data using linear programming. IEEE/ACM Trans Comput Biol Bioinform. 2013; 10(4):869–83. doi:10.1109/TCBB.2013.80.
Ruths D, Tseng JT, Nakhleh L, Ram PT. De novo signaling pathway predictions based on protein-protein interaction, targeted therapy and protein microarray analysis. In: Systems Biology and Computational Proteomics. Heidelberg: Springer; 2007. p. 108–18.
Tu Z, Argmann C, Wong KK, Mitnaul LJ, Edwards S, Sach IC, Zhu J, Schadt EE. Integrating siRNA and protein-protein interaction data to identify an expanded insulin signaling network. Genome Res. 2009; 19(6):1057–67. doi:10.1101/gr.087890.108.
Boutros M, Ahringer J. The art and design of genetic screens: RNA interference. Nat Rev Genet. 2008; 9(7):554–66.
Sloan SW. An algorithm for profile and wavefront reduction of sparse matrices. Intl J Numerical Methods Eng. 1986; 23(5):239–51.
Cormen TH, Stein C, Rivest RL, Leiserson CE. Introduction to Algorithms, 2nd edn. Cambridge, MA: MIT press; 2001.
Albert R, Barabási A-L. Statistical mechanics of complex networks. Rev Modern Phys. 2002; 74(1):47.
Milo R, Kashtan N, Itzkovitz S, Newman M, Alon U. On the uniform generation of random graphs with prescribed degree sequences. 2003. arXiv preprint cond-mat/0312028.
We would like to thank the reviewers for their insightful comments and suggestions.
This article has been published as part of BMC Systems Biology Vol 10 Suppl 2 2016: Selected articles from the IEEE International Conference on Bioinformatics and Biomedicine 2015: systems biology. The full contents of the supplement are available online at http://bmcsystbiol.biomedcentral.com/articles/supplements/volume-10-supplement-2.
Publication of this article was funded by NSF under grant DBI-1262451.
Developed the two methods: WQY, RYF, MMH and TK. Conceived and designed the experiments: WQY, RYF, MMH and TK. Performed the data analysis: WQY, RYF, MMH, AA and TK. Performed the experiments and interpreted the results: WQY, RYF, MMH, AA and TK. Contributed to the writing of the manuscript: AA, WQY, RYF, MMH and TK. All authors read, provided comments, and approved the final manuscript.
Department of Computer & Information Science & Engineering, University of Florida, Gainesville, 32611, FL, USA
Yuanfang Ren, Qiyao Wang, Md Mahmudul Hasan & Tamer Kahveci
Department of Biology & Mathematics, Colgate University, Hamilton, 13346, NY, USA
Ahmet Ay
Correspondence to Yuanfang Ren.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Ren, Y., Wang, Q., Hasan, M.M. et al. Identifying the topology of signaling networks from partial RNAi data. BMC Syst Biol 10, 53 (2016). https://doi.org/10.1186/s12918-016-0301-4
Signal transduction networks
Network inference
RNAi data
T FAHMY
Articles written in Bulletin of Materials Science
Volume 42 Issue 5 October 2019 Article ID 0220
AC conductivity and broadband dielectric spectroscopy of a poly(vinyl chloride)/poly(ethyl methacrylate) polymer blend
T FAHMY HESHAM ELZANATY
Alternating-current (ac) conductivity and dielectric relaxation behaviour of a poly(vinyl chloride)/poly(ethyl methacrylate) polymer blend have been investigated intensively in the frequency range from 1 × 10⁻¹ to 2 × 10⁷ Hz over the temperature range from 300 to 393 K. The variation of σ_ac for the pure and polyblend samples shows a plateau region at high temperature and low frequency, and this plateau region shrinks with decreasing temperature. Values of the exponent $n$ are less than unity, indicative of correlated barrier hopping as the conduction mechanism. The values of the exponent $n$ are used to calculate the binding energy ($W_m$) of the charge carriers. The frequency dependence of ε' for the pure and polyblend samples shows a dielectric dispersion. The high values of the dielectric constant at low frequency and high temperature are attributed to space-charge effects due to electrode polarization. The complex electric modulus (M*) of the pure and polyblend samples has been investigated. The real part of the complex electric modulus, M', increases non-linearly with frequency and reaches a steady state at higher frequencies for all samples. On the other hand, the imaginary part, M'', is characterized by a relaxation peak. Different modes of relaxation, such as interfacial polarization and dipolar relaxation, are detected in the low- and high-frequency regions of the plot of M'' against frequency. The activation energy values of both the interfacial polarization and the α-relaxation are calculated.
Volume 43 All articles Published: 27 August 2020 Article ID 0243
AC conductivity and dielectric relaxation of chitosan/poly(vinyl alcohol) biopolymer polyblend
T FAHMY H ELHENDAWI W B ELSHARKAWY F M REICHA
Polyblend samples of chitosan/poly(vinyl alcohol) (PVA) have been prepared using a casting technique. Scanning electron microscopy, Fourier transform infrared spectroscopy and thermogravimetric analysis measurements revealed that chitosan and PVA are compatible with each other. Alternating-current (AC) conductivity and dielectric relaxation features of the pure and polyblend samples are analysed in the frequency range of 0.1 Hz to 100 kHz, covering a broad temperature range from room temperature to 423 K. The variation of the AC conductivity, $\sigma_{\rm AC}$, of the pure and chitosan/PVA polyblend samples is found to be characterized by a plateau region at low frequency and high temperature, and this plateau region grows with increasing temperature. Based on the behaviour of the exponent $s$ $vs$. temperature, the frequency dependence of the AC conductivity is found to be consistent with the overlapping-large polaron tunnelling (OLPT) model. The polyblend samples showed an improvement in their dielectric properties compared to the pure materials. The dielectric constant, $\epsilon '$, of the polyblend samples increased with increasing PVA content. Dielectric dispersion was observed in the variation of $\epsilon '$ against frequency for all samples. The high values of $\epsilon '$ for all samples at high temperature and low frequency are attributed to space charge polarization. Also, the loss tangent-frequency behaviour of pure chitosan, PVA and all polyblend samples showed two distinct relaxation peaks with different activation energies. The first relaxation peak is termed interfacial polarization or Maxwell–Wagner–Sillars polarization, due to the heterogeneity of the polyblend samples, whereas the second relaxation peak is termed $\delta$-relaxation for pure chitosan and $\alpha$-relaxation for PVA.
Volume 44 All articles Published: 15 May 2021 Article ID 0142
Characterization and molecular dynamic studies of chitosan–iron complexes
T FAHMY A SARHAN
Chitosan–iron (Cs–Fe) complexes were prepared electrochemically in an aqueous acidic medium in a one-compartment cell at different deposition times. The XRD patterns of the Cs–Fe complex samples have been investigated in the range from 5° to 50° and reveal that chitosan is characterized by crystalline peaks at 8.73°, 11.92° and 18.96°. In addition, the crystallinity of the Cs–Fe complex samples increases with increasing Fe$^{3+}$ content. Ultraviolet–visible (UV–Vis) and Fourier transform-infrared (FTIR) spectroscopies have been used to investigate the optical properties of the Cs–Fe complex samples. UV analysis showed that pure chitosan is characterized by an absorption band at 214 nm, resulting from the amide linkages, and a shoulder at 311 nm, which is attributed to intraligand $n \rightarrow \pi^*$ and $\pi \rightarrow \pi^*$ transitions of the chromophoric C=O group. On the other hand, two new bands are observed in the Cs–Fe complex samples at nearly 350 and 389 nm with increasing Fe$^{3+}$ content. The optical parameters of all the samples, such as the optical band gap energy ($E_g$), Urbach energy ($E_U$), dispersion energy ($E_d$) and oscillator energy ($E_o$), have been estimated. These parameters are significantly affected by the Fe$^{3+}$ content. The FTIR spectra revealed that many of the characteristic bands of pure chitosan are affected, either in position or in intensity, by the presence of Fe$^{3+}$, confirming that complex formation between chitosan and Fe$^{3+}$ occurred. Dielectric relaxation spectroscopy has been used to investigate the dielectric properties of pure chitosan and the Cs–Fe complex samples over a wide frequency range and a temperature range extending from room temperature to 433 K. The investigation showed that the presence of Fe$^{3+}$ modifies the dielectric constant (${\varepsilon}'$) and dielectric loss (${\varepsilon}"$) behaviour. The dielectric loss tangent (tan ${\delta}$) showed that pure chitosan is characterized by two different types of relaxation, whereas the Cs–Fe complex samples are characterized by only one relaxation process.
Combined transcriptome and metabolome analysis of Nerium indicum L. elaborates the key pathways that are activated in response to witches' broom disease
Shengjie Wang1,
Shengkun Wang1,
Ming Li1,
Yuhang Su1,
Zhan Sun1 &
Haibin Ma1
Nerium indicum Mill. is an ornamental plant found in parks, riversides, lakesides, and scenic areas in China and other parts of the world. Our recent survey indicated the prevalence of witches' broom disease (WBD) in Guangdong, China. To find out the possible defense strategies against WBD, we performed MiSeq-based ITS sequencing to identify the possible causal organism, and then performed de novo transcriptome sequencing and metabolome profiling in the phloem and stem tips of N. indicum plants suffering from WBD compared to healthy ones.
The survey showed that Wengyuen county and Zengcheng district had the highest disease incidence rates. The most prevalent microbial species in the diseased tissues was Cophinforma mamane. The transcriptome sequencing resulted in the identification of 191,224 unigenes, of which 142,396 could be annotated. There were 19,031 and 13,284 differentially expressed genes (DEGs) between diseased phloem (NOWP) and healthy phloem (NOHP), and between diseased stem (NOWS) and healthy stem (NOHS), respectively. The DEGs were enriched in MAPK signaling (plant), plant-pathogen interaction, plant hormone signal transduction, phenylpropanoid and flavonoid biosynthesis, and linoleic acid and α-linolenic acid metabolism pathways. In particular, we found that N. indicum plants activated phytohormone signaling, the MAPK signaling cascade, defense-related proteins, and the biosynthesis of phenylpropanoids and flavonoids as defense responses to the pathogenic infection. The metabolome profiling identified 586 metabolites, of which 386 and 324 were differentially accumulated in NOHP vs NOWP and NOHS vs NOWS, respectively. The differential accumulation of metabolites related to phytohormone signaling, linoleic acid metabolism, phenylpropanoid and flavonoid biosynthesis, nicotinate and nicotinamide metabolism, and the citrate cycle was observed, indicating the role of these pathways in defense responses against the pathogenic infection.
Our results showed that Guangdong province has a high incidence of WBD in most of the surveyed areas. C. mamane is suspected to be the pathogen causing WBD in N. indicum. N. indicum initiated the MAPK signaling cascade and phytohormone signaling, leading to the activation of pathogen-associated molecular patterns and the hypersensitive response. Furthermore, N. indicum accumulated high concentrations of phenolic acids, coumarins and lignans, and flavonoids under WBD. These results provide scientific tools for the formulation of control strategies for WBD in N. indicum.
Nerium indicum Mill. is a large upright evergreen shrub belonging to the family Apocynaceae. It is found all over the world (naturalized in tropical, sub-tropical, and temperate regions), especially in south-west Asia. In particular, it is native to Bangladesh, China, India, Nepal, Myanmar, and Pakistan [1]. In China it is distributed as far as Yunnan and is mostly used as an ornamental plant, owing to its large gorgeous flowers; this is why it is found in scenic areas, roadsides, riversides, parks, and lakesides. Its acclimatization to different growing conditions is also due to its ability to detoxify, absorb, and tolerate pollution such as automobile exhaust [2]. Other than ornamental uses, it is known for bioactive compounds of medicinal importance [1, 3]. Thus, this species has aesthetic, environmental, and medicinal importance and should be protected from biotic and abiotic stresses [4].
One of the most devastating diseases in Nerium species is witches' broom disease (WBD) [5]. Typical WBD symptoms include axillary bud sprouting (an abnormal brush-like cluster of dwarfed weak shoots), internode shortening, base thickening, and leaf yellowing; diseased plants show stunted growth [5]. Other than Nerium spp., this disease has also been reported in a number of plant species such as Mexican/acid lime (Citrus aurantifolia) [6, 7], cacao (Theobroma cacao), sesame (Sesamum indicum) [8], paulownia (Paulownia tomentosa) [9], and Balanites trifloral [10]. Research on different plant species has shown that the infection causes hypertrophy and hyperplasia in the diseased tissues, followed by the loss of apical dominance. Infection leads to the development of abnormal stems called green brooms. In the later stages, it causes necrosis and death of the diseased tissues [11].
Different plants have been explored to study the resistance mechanisms and defense responses against WBD. For example, in cacao, Scarpari et al. [12] reported changes in the contents of soluble sugars, asparagine, alkaloids, ethylene, and tannins during the development of WBD, indicating a coordinated biochemical response to the pathogen's infection [12]. Transcriptome profiling of Paulownia spp. suffering from WBD showed the involvement of Ca2+ signaling, the plant-pathogen interaction pathway, phosphorylation cascades, photosynthesis, and carbohydrate metabolism pathways [13]. In Mexican lime, whole-metabolome profiling during WBD progression revealed the differential accumulation of 40 different metabolites such as alcohols, organic acids, fatty acids, and amino acids [14]. Another study in Mexican lime indicated increased levels of catechin and epicatechin together with higher expression of enzymes related to the phenylpropanoid biosynthesis pathway [15]. Transcriptome sequencing of the same lime species indicated that plants respond to WBD by activating the plant-pathogen interaction pathway, along with changes in cell wall biosynthesis and degradation, regulation of hormone signaling, and sugar-related pathways [6]. In response to infection, soybean (Glycine max) initiates an array of defense responses such as activation of the plant-pathogen interaction pathway and auxin, cytokinin, ethylene, salicylic acid (SA), jasmonic acid (JA), and brassinosteroid signaling [16].
Advancements in transcriptome sequencing and metabolite profiling have accelerated the discovery of pathways associated with biotic and abiotic stresses in a variety of plant species [17, 18]. These advancements have been successfully utilized in exploring the plant-pathogen interaction during WBD in various plant species, as described above. However, there is no such report on how N. indicum plants respond to WBD. This combined transcriptome and metabolome study will enable researchers to find possible defense strategies of N. indicum plants against WBD and formulate corresponding control measures.
Disease incidence in Guangdong, China
A survey in Guangdong province, China identified that Shaoguan city (Wengyuen county) had the highest disease incidence rate (80.56%), followed by the southwest side of Dongguan city (48.08%) (Table 1; Fig. 1). On the other hand, the disease index was highest in Guangzhou city (Zengcheng district, 71.67%), followed by Shaoguan city (Wengyuen county, 44.33%). Overall, we observed that the disease is more prevalent in rural areas, while the disease incidence was lower along urban highways and in park landscapes. Incidence rate and disease index were correlated, except in Zengcheng district and Wengyuen county. This survey shows that the disease can affect more than 80% of the plantings in an area. We therefore conducted further experiments to understand the mechanisms through which N. indicum plants respond to WBD.
Table 1 Witches' broom disease incidence in ten different locations of Guangdong province, China
Surveyed N. indicum plants suffering from WBD, showing the characteristic symptoms (normal and arbuscular branches) in the larger panel. The smaller panels show arbuscular branches
Community composition and diversity of microorganisms in diseased N. indicum
First of all, we determined whether the causal agent of the WBD infection in N. indicum was a phytoplasma. We tested for the presence of phytoplasma because phytoplasma infection in certain plants such as jujube, paulownia, coconut, and chrysanthemum causes symptoms such as yellowing of leaves and clustering and dwarfing of the leaflets, which are sometimes considered similar to those of WBD (Announcement No. 4 of 2013 of the State Forestry Administration). The PCR and nested PCR analyses based on 16S rRNA showed no detection of phytoplasma in the symptomatic N. indicum samples (Fig. 2A-B; Supplementary Fig. 1). These PCR assay results were further confirmed by transmission electron microscopy, where no phytoplasma could be detected (Fig. 2C).
PCR and microscopic analyses of the diseased and healthy N. indicum tissues. A) PCR analysis of Jujube (diseased tissues), Paulownia arbuscular diseased tissues, and N. indicum tissues. M: Ladder, 1: Jujube sample, 2: Paulownia arbuscular diseased sample, 3–6: diseased N. indicum, 7: healthy N. indicum. B) Nested PCR detection assay (M: Ladder, 1–4: diseased N. indicum tissues, 5: Paulownia arbuscular diseased tissues). C) Transmission electron microscopy (i, ii: Phytoplasma in the phloem of the deciduous leaf of the Chrysanthemum; iii, iv: phloem of the N. indicum)
Moving further, we performed MiSeq-based ITS sequencing to identify the causal organism. The sequencing results showed that the average number of bases was 17,687,861 and the average sequence length was 239 bp (min. 153 bp, max. 465 bp). Alpha diversity analysis showed that the endophytic fungal diversity was higher (Table 2) in the symptomatic stems from Shaoguan, followed by Dongguan and Guangzhou, which is consistent with the incidence rates. Community composition analysis showed significant differences in the endophytic fungal community composition of healthy and diseased N. indicum tissues. Among the healthy tissues, the five most abundant fungal taxa were Capnodiales, Neodevriesia, Phaeomoniellaceae, Trichomeriaceae, and Strelitziana. Among the N. indicum tissues where brooms occurred, the five most abundant taxa were Cophinforma, Capnodiales, Ascomycota, Cladosporium, and Albonectria. The most prevalent microbial species in the healthy samples (NOH) were Capnodiales neodevriesia and Phaeomoniellaceae trichomeriaceae (Fig. 3A). On the contrary, Cophinforma mamane was the most prevalent microbial species in the diseased tissues (Fig. 3B). Interestingly, C. mamane was not identified in any healthy tissue, suggesting that this species is the potential pathogen causing WBD in N. indicum.
Table 2 Diversity index of endophytic fungi
Stacked bar plot showing relative abundances of fungal orders in the A) healthy and B) arbuscular tissues of N. indicum plants. Plots are based on sequencing data for ITS genes. NOH and NOW denote healthy and diseased samples, respectively. The GZ (1–3) suffixes in the sample names indicate replicates
Transcriptome sequencing of N. indicum
The sequencing of 12 libraries of N. indicum resulted in 42.93 to 51.21 million clean reads per library (average 47.07 million). Overall, the transcriptome sequencing yielded 86.7 Gb of clean data; the average clean data per sample reached 6 Gb. The error rate was ≤ 0.03%, and the Q20, Q30, and GC percentages were ≥ 97.94%, ≥ 93.89%, and 43.38%, respectively (Supplementary Table 1). The transcriptome sequencing resulted in the identification of 191,224 unigenes. Of these, 142,396 unigenes could be annotated in different databases, i.e., KEGG (111,585), NR (141,221), SwissProt (102,226), Trembl (139,139), KOG (89,276), GO (119,702), and Pfam (107,887) (Supplementary Fig. 2).
The overall gene expression distribution, i.e., Fragments Per Kilobase of Transcript per Million fragments mapped (FPKM), was lower in the healthy stem (NOHS) as compared to the diseased stem (NOWS), diseased phloem (NOWP), and healthy phloem (NOHP) (Fig. 4A). The Pearson Correlation Coefficient (PCC) for the biological and technical replicates was 0.68 to 0.99 (average PCC = 0.8) (Fig. 4B), indicating the reproducibility of the experiment and the reliability of the expression data. The 1st and 2nd principal components (PC1 and PC2) explained 19.96% and 17.555% of the variance, respectively (Fig. 4C). We found 13,284 and 19,031 DEGs in NOHP vs NOWP and NOHS vs NOWS, respectively (Fig. 4D; Supplementary Tables 2–3). The top-10 KEGG pathways in which DEGs were significantly enriched (in NOHP vs NOWP) were plant-pathogen interaction; stilbenoid, diarylheptanoid and gingerol biosynthesis; plant hormone signal transduction; phenylpropanoid biosynthesis; linoleic acid metabolism; flavonoid biosynthesis; metabolic pathways; biosynthesis of secondary metabolites; MAPK signaling pathway-plant; and steroid biosynthesis (Supplementary Fig. 3). In NOHS vs NOWS, the DEGs were significantly enriched in the MAPK signaling pathway, plant hormone signal transduction, flavonoid biosynthesis, biosynthesis of secondary metabolites, circadian rhythm, ABC transporters, zeatin biosynthesis, sesquiterpenoid and triterpenoid biosynthesis, plant-pathogen interaction, and α-linolenic acid metabolism (Supplementary Fig. 3). From these observations, it could be concluded that the shared pathways are highly relevant to the N. indicum responses to WBD.
Summary of gene expression in diseased Nerium indicum L. stem tip and phloem and respective controls. A Overall distribution of gene expression, B Pearson Correlation Coefficient between different treatments and their replicates, C Principal component analysis, and D Venn diagram of the differentially expressed genes. Where TNOWS, TNOHS, TNOWP, and TNOHP represent diseased stem tip, healthy stem tip, diseased phloem, and healthy phloem of N. indicum, respectively. 1, 2, and 3 with the tissue names represent replicates
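For orientation, the summary statistics in Fig. 4B-C can be reproduced from a log-transformed FPKM matrix in a few lines of code. The sketch below is illustrative only (random data stand in for the real matrix, and the authors' analysis was run in R):

```python
import numpy as np

# Illustrative stand-in for the real data: 5,000 genes x 12 libraries of FPKM values
rng = np.random.default_rng(0)
fpkm = rng.lognormal(mean=2.0, sigma=1.0, size=(5000, 12))

log_expr = np.log2(fpkm + 1)            # log-transform to stabilize variance
pcc = np.corrcoef(log_expr.T)           # 12 x 12 sample-sample PCC matrix (cf. Fig. 4B)

# PCA via SVD on mean-centered samples; percentage of variance per component (cf. Fig. 4C)
centered = log_expr.T - log_expr.T.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = 100 * s**2 / (s**2).sum()
print(pcc.shape, explained[:2])
```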
Of the DEGs between NOHP and NOWP, 562 and 2,330 genes were exclusively expressed in NOHP and NOWP, respectively. Of the 562 NOHP-exclusive genes, 276 were enriched in metabolic pathways, starch and sucrose metabolism, biosynthesis of secondary metabolites, and carbon metabolism (Supplementary Fig. 4A), while 915 of the 2,330 NOWP-specific genes were enriched in lysine biosynthesis, citrate cycle, linoleic acid metabolism, biosynthesis of amino acids, plant-pathogen interaction, and biosynthesis of secondary metabolites pathways (Supplementary Fig. 4B). The regulation of these pathways indicates that they are specifically engaged in the diseased phloem.
Of the DEGs between NOHS and NOWS, 215 and 976 genes were specific to NOHS and NOWS, respectively. The genes exclusively expressed in NOHS were enriched in biosynthesis of secondary metabolites and metabolic pathways (Supplementary Fig. 4C), whereas the NOWS-specific genes were enriched in glycolysis/gluconeogenesis, metabolic pathways, fatty acid degradation, biosynthesis of secondary metabolites, amino sugar and nucleotide sugar metabolism, and plant-pathogen interaction pathways (Supplementary Fig. 4D).
Differential regulation of signaling related pathways
Two major signaling-related pathways, i.e., the MAPK signaling-plant pathway and the plant-hormone signal transduction pathway, were differentially regulated between the diseased and healthy N. indicum tissues. There were 434 and 545 DEGs enriched in the MAPK signaling-plant and plant hormone signal transduction pathways, respectively.
MAPK Signaling-plant pathway
Specifically, in the MAPK signaling-plant pathway, we observed that signaling related to pathogen infection was activated. The key genes, i.e., BRI1-associated receptor kinase 1 (BAK1), mitogen-activated protein kinase kinase 1/2 (MKK1/2), MAP-kinase substrate 1 (MKS1), MAP-kinase (MPK), MPK3/6, ACS6, WRKY TF, pathogenesis-related protein 1 (PR1), and FRK1, were upregulated in NOWP as compared to NOHP (Fig. 5A). In the stem, MKK1/2, MKK4/5, and FRK1 were upregulated, while MEKK1, MPK3/6, and VIP1 were downregulated in NOWS as compared to NOHS (Fig. 5B; Supplementary Table 4). These observations indicate that MKKs, FRK1, and PR1 play important roles in early and late responses to WBD. Furthermore, we found that genes associated with pathogen attack (and H2O2) were upregulated in the WBD-affected tissues. Importantly, we noted increased expression of OXI1, ANP1, NDPK2, MPK3/6, and WRKY TFs, which are related to cell death and H2O2 production, in NOWP as compared to NOHP. Similarly, OXI1, MPK3/6, WRKY TF, and NDPK2 showed increased expression in NOWS as compared to NOHS. Additionally, RbohD was also upregulated in the diseased tissues; RbohD, together with MKK3, MPK8, and CaM4, is involved in the maintenance of the homeostasis of reactive oxygen species (ROS) [19, 20] (Fig. 5).
Plant-hormone signal transduction pathway
Differential regulation of the MAPK signaling-plant pathway in A) NOHP vs NOWP and B) NOHS vs NOWS. The red, green, and blue colors indicate increased, decreased, and mixed (increased/decreased) expression of the respective genes in the given pathways. Where NOWS, NOHS, NOWP, and NOHP represent diseased stem tip, healthy stem tip, diseased phloem, and healthy phloem of N. indicum. Permission to use the KEGG pathway map was obtained from the Kanehisa Laboratories (https://www.kanehisa.jp/)
Regarding the plant-hormone signal transduction pathway, we noted that almost all hormone signaling related pathways were differentially regulated. For auxin signaling, TRANSPORT INHIBITOR RESPONSE1 (TIR1) and GRETCHEN HAGEN 3 (GH3) showed decreased and increased expression in NOWP as compared to NOHP, respectively, while transcripts related to other genes in this pathway showed mixed regulation (some transcripts showed increased expression while others showed decreased expression). In the stem, all the genes showed mixed expression. These observations indicate that auxin signaling plays a minor role in defense against WBD in N. indicum. As far as cytokinin signaling is concerned, we noted that CYTOKININ RESPONSE 1 (CRE1) and type-A Arabidopsis response regulator (A-ARR) showed decreased and increased expression, respectively, in NOWP as compared to NOHP, whereas transcripts related to the CRE1, Arabidopsis histidine phosphotransfer protein (AHP), and A-ARR genes showed higher FPKM values in NOWS as compared to NOHS. This suggests an important role of cytokinin in the defense of N. indicum. For gibberellin signaling, gibberellin insensitive dwarf 1 (GID1) was upregulated in both diseased tissues as compared to their respective controls, whereas GID2 was downregulated in NOWP as compared to NOHP, while the downstream TF showed increased expression in NOWS as compared to NOHS. For abscisic acid (ABA) signaling, a PYR/PYL receptor gene was upregulated in NOWS as compared to NOHS, while all other genes showed mixed expression in diseased and healthy N. indicum phloem and stem (Supplementary Table 4; Fig. 6).
Differential regulation of the plant-hormone signal transduction pathway in A NOHP vs NOWP and B NOHS vs NOWS. The red, green, and blue colors indicate increased, decreased, and mixed (increased/decreased) expression of the highlighted genes. Where NOWS, NOHS, NOWP, and NOHP represent diseased stem tip, healthy stem tip, diseased phloem, and healthy phloem of N. indicum. Permission to use the KEGG pathway map was obtained from the Kanehisa Laboratories (https://www.kanehisa.jp/)
Ethylene signaling was differentially regulated in both tissue types; however, different sets of genes were regulated in the two tissues. For example, CONSTITUTIVE TRIPLE RESPONSE (CTR1), ETHYLENE INSENSITIVE 3 (EIN3), and ethylene responsive factor 1/2 (ERF1/2) were upregulated in NOWP as compared to NOHP, whereas ethylene receptor (ETR), CTR1, stress induced MAPK kinase kinase (SIMKK), and EIN3-binding F-box protein 1/2 (EBF1/2) were upregulated in NOWS as compared to NOHS. Transcripts related to other genes in ethylene signaling were both up- and downregulated. The regulation of a large number of ethylene signaling related genes indicates an important role of ethylene signaling in defense against WBD in N. indicum plants. Three genes, i.e., BAK1, touch 4 (TCH4), and cyclin-D3 (CYCD3), were upregulated, while two genes, i.e., BR Insensitive 2 (BIN2) and BRASSINAZOLE-RESISTANT 1 (BZR1/2), were downregulated in NOWP as compared to NOHP. On the contrary, all these genes (except BAK1) were downregulated in NOWS as compared to NOHS. These expression changes suggest that brassinosteroid signaling regulates the witches' broom infection/defense differently in N. indicum phloem and stem. The jasmonic acid (JA) signaling related gene JAR1 was downregulated in diseased phloem, while CORONATINE INSENSITIVE 1 (COI1) was upregulated in diseased stem as compared to their respective controls. It is known that salicylic acid (SA) plays important roles in plant defense against pathogens by participating in inducible defense mechanisms and systemic acquired resistance (SAR) [21]. In accordance with this, we also observed the differential regulation of the genes involved in SA signaling. Two genes, i.e., NONEXPRESSOR OF PATHOGENESIS-RELATED GENES 1 (NPR1) and pathogenesis related protein 1 (PR-1), were upregulated in NOWP as compared to NOHP, whereas NPR1 and the TGA transcription factor (TGA) showed increased expression in N. indicum stem with WBD (Supplementary Table 4; Fig. 6).
These observations suggest that N. indicum plants showing witches' broom symptoms in stem and phloem activate a network of signaling mechanisms associated with phytohormones, ROS, and disease resistance.
Differential regulation of defense-related pathways
Plant-pathogen interaction pathway
Calcium signaling plays an important role as a universal second messenger in plants' responses to biotic and abiotic stresses and is a major part of the plant-pathogen interaction pathway [22]. In the studied tissue comparisons, we observed changes in the expression of multiple genes associated with plants' Ca2+ signaling-based responses against invading pathogens. In NOWP, respiratory burst oxidase (Rboh) was upregulated while nitric-oxide synthase (NOS) was downregulated as compared to NOHP, whereas cyclic nucleotide gated channels (CNGCs) were downregulated in NOWS as compared to NOHS. Transcripts related to other genes in Ca2+ signaling showed varied regulation. Other genes that lead to the activation of defense related genes, i.e., WRKY TF (especially WRKY29), FLG22-induced receptor-like kinase 1 (FRK1), glycerol kinase (NHO1), and PR-1, showed increased expression in witches' broom diseased tissues. For example, we observed that BAK1/BKK1, pto-interacting protein 5 (PTI5), PTI6, MAPK-kinase kinase 1/2 (MKK1/2), and WRKY TFs were upregulated in NOWP, whereas the EF-TU receptor (EFR1) and MKK4/5 were upregulated in NOWS. All four defense related genes, i.e., WRKY, FRK1, NHO1, and PR-1, were upregulated in NOWP, whereas only WRKY and FRK1 were upregulated in NOWS in comparison to their respective controls. Transcripts related to cysteine and histidine-rich domain-containing protein RAR1 (RAR1) and enhanced disease susceptibility 1 protein (EDS1) showed increased expression in NOWP as compared to NOHP. EDS1 was also upregulated in NOWS as compared to NOHS. The diseased stem also showed increased expression of disease-resistance protein 2 (RPS2). These expression changes suggest that N. indicum plants defend themselves from the witches' broom infection by activating RAR1- and EDS1-driven responses (Supplementary Table 4). Overall, these observations indicate that N. indicum plants adopt a multi-level defense strategy to resist the witches' broom infection.
Linoleic acid and α-linolenic acid metabolism pathways
Eighty-six DEGs were enriched in the linoleic acid metabolism pathway; 43 and 70 genes in this pathway were differentially expressed in NOHP vs NOWP and NOHS vs NOWS, respectively. All the transcripts annotated as linoleic acid metabolism related genes, i.e., secretory phospholipase A2, linoleate 9S-lipoxygenases, cytochrome P450 family 2 subfamily J (CYP2J), and lipoxygenases, were variedly expressed between NOHP and NOWP. However, the transcript annotated as cytochrome P450 family 3 subfamily A4 (CYP3A4, Cluster-13228.84747) showed increased expression in the diseased phloem as compared to control. In addition to CYP2J, we also observed increased expression of linoleate 9S-lipoxygenases in NOWS as compared to NOHS, whereas secretory phospholipases A2 were downregulated in diseased stem as compared to control. These observations suggest that the biosynthesis of linoleic acid is unstable, whereas its degradation/breakdown increases in NOWP as compared to NOHP. On the contrary, the biosynthesis of linoleic acid decreased whereas its breakdown increased in NOWS as compared to NOHS (Supplementary Table 4; Fig. 7). We also observed increased expression of chloroplastic oxoene reductase (COR), acetyl-CoA acyltransferase 1, and phospholipase A1 in NOWP as compared to NOHP, whereas in NOWS, only acetyl-CoA acyltransferase 1 related transcripts showed increased expression (Supplementary Table 4; Fig. 7). These changes suggest that N. indicum plants may adopt a JA-induced defense system by changing linoleic acid levels.
Phenylpropanoid and flavonoid biosynthesis pathways
Since DEGs were significantly enriched in the phenylpropanoid biosynthesis and flavonoid biosynthesis pathways, we explored the differential regulation of these pathways in witches' broom diseased N. indicum plants as compared to healthy controls. There were 355 and 115 DEGs enriched in the phenylpropanoid biosynthesis and flavonoid biosynthesis pathways, respectively (Supplementary Table 4). Phenylpropanoid biosynthesis was largely affected. Most prominently, we noticed that the expression of trans-cinnamate 4-monooxygenase, caffeoyl-CoA O-methyltransferase (CCOAOMT1), 4-coumarate-CoA ligase (4CL), catalase-peroxidase (Kat), cinnamoyl-CoA reductase (CCR), ferulate-5-hydroxylase (F5H), phenylalanine ammonia-lyase (PAL), caffeoylshikimate esterase (CSE), and cinnamyl-alcohol dehydrogenase (CAD) increased in NOWP as compared to NOHP, whereas coniferyl-alcohol glucosyltransferase was downregulated and other genes showed varied expression patterns between NOWP and NOHP. In NOWS, we observed increased expression of trans-cinnamate 4-monooxygenase, CCOAOMT1, flavin prenyltransferase, Kat, coumaroylquinate (coumaroylshikimate) 3'-monooxygenase, F5H, PAL, CAD, CSE, and scopoletin glucosyltransferase (SGtf) (Supplementary Table 4).
From these observations, it could be inferred that the biosynthesis of caffeoyl-CoA and coniferyl aldehyde possibly increased in NOWP and NOWS, which might lead to increased lignin biosynthesis as compared to NOHP and NOHS. It could also be suggested that the N. indicum stem biosynthesizes lignin, syringin, coniferin, and 4-hydroxycinnamyl-alcohol-4-D-glucoside as a response against witches' broom infection, while the phloem of diseased plants only biosynthesizes lignin as compared to the healthy phloem (Supplementary Table 4).
The most interesting finding was that all the transcripts related to genes associated with the flavonoid biosynthesis pathway showed increased expression in NOWP and NOWS as compared to their respective controls. Only one gene (leucoanthocyanidin reductase, Cluster-13228.75100) showed reduced expression in NOWS as compared to NOHS (Supplementary Table 4). These expression changes suggest that large-scale flavonoid biosynthesis is either initiated or ongoing as a defense response when N. indicum plants present WBD symptoms.
Top DEGs with increased/decreased expression in diseased N. indicum phloem and stem
The top-10 genes with the highest log2 fold change values in the diseased phloem and stem as compared to their respective controls are shown in Tables 3–4. The highest log2 fold change value (13.97) was observed in NOWP for Cluster-13228.51868 (transcriptional activator of proteases prtt). This exclusive expression in NOWP as compared to NOHP suggests that proteolysis of proteins is highly increased in NOWP and the supply of amino acids is increased. This is consistent with the KEGG pathway enrichment results, where DEGs were also enriched in the amino acid biosynthesis pathway. Since Regulatory Particle Non-ATPase 4 (RPN4) is required for proteolysis, the exclusive expression of the transcriptional regulator RPN4 (Cluster-13228.57048) suggests active proteolysis in NOWP. Thus, it is possible that the witches' broom diseased N. indicum plants opt for proteolysis for higher amino acid supplies. This is consistent with the known fact that proteases act as hubs in plant immunity and with the function of proteases and related genes [23,24,25]. Other prominent genes that were exclusively expressed in NOWP included ubiquitin carboxyl-terminal hydrolase, β-amyrin 28-monooxygenase, viridiflorene synthase, abscisic-aldehyde oxidase, cytokinin dehydrogenase, cathepsin A, arrestin-related trafficking adapter, and expansin. The carboxyl-terminal hydrolase (Cluster-13228.83464) is a part of the circadian clock [26], whereas β-amyrin 28-monooxygenase (Cluster-13228.51250) and viridiflorene synthase (Cluster-13228.74406) are involved in triterpene and sesquiterpene biosynthesis [27]. The expression of abscisic-aldehyde oxidase (Cluster-13228.74406), cytokinin dehydrogenase (Cluster-13228.56658), and cathepsin A (Cluster-13228.110860) indicates the possible roles of ABA biosynthesis [28], zeatin biosynthesis [29], and jasmonic acid induced defense responses [30] in diseased phloem, respectively. This observation is consistent with the KEGG pathway annotation results, where we observed the differential regulation of both the plant-hormone signal transduction pathway and the zeatin biosynthesis pathway (Supplementary Fig. 4). Finally, expansin (Cluster-13228.65339) was also exclusively expressed in NOWP (Table 3).
The most strongly downregulated genes in NOWP as compared to NOHP indicate that processes such as ribosome functioning (Cluster-13228.67277), photosynthesis, possibly due to disturbances in linear electron flow by PROTON GRADIENT REGULATION 5 (Cluster-13228.107206) [31], maintenance of genome stability by protein downstream neighbor of Son (Cluster-13228.90657) [32], degradation of arginine to urea by arginase (Cluster-13228.124673) [33], auxin redistribution in response to gravity by LAZY (Cluster-13228.124659) [34], and thiamine metabolism (Cluster-13228.58619) [35] were significantly affected in the diseased phloem (Table 3).
The genes that were highly expressed in NOWS as compared to NOHS were related to the transition from the vegetative to the reproductive phase (lysine-specific histone demethylase 1A, Cluster-13228.98726) [36], MAPK-signaling (MAPKKK13, Cluster-13228.81744) [37], sucrose related pathways (fructose-bisphosphate aldolase, Cluster-13228.71255 [38], and GDP mannose 4,6-dehydratase, Cluster-13228.55109) [39], programmed cell death (KDEL-tailed cysteine endopeptidase, Cluster-13228.81725), secondary metabolite biosynthesis (tyrosine aminotransferase, Cluster-13228.107437), and disease resistance (WRKY19, Cluster-13228.71611). These genes were enriched in plant-pathogen interaction, sucrose metabolism related pathways, MAPK-signaling plant, and secondary metabolite biosynthesis, which is consistent with the KEGG pathway enrichment results and confirms the involvement of these pathways in resistance against WBD in N. indicum stem (Table 4).
The highly downregulated genes included heat shock protein, peroxin-5, LIFEGUARD4, ribulose-phosphate 3-epimerase, reversibly glycosylated polypeptide/UDP-arabinopyranose mutase, tetraspanin-4, and some other genes. These genes are associated with plant-pathogen interactions, e.g., knockdown of LIFEGUARD4 delays fungal development in Arabidopsis [40], with the pentose phosphate pathway or xylose metabolism [41], and with interaction with pathogens [42] (Table 4).
qRT-PCR analysis of selected genes
Heatmaps representing the log2 fold change values of differentially expressed genes. The gene names are followed by annotations as per the KEGG database. Where WS, HS, WP, and HP represent diseased stem tip, healthy stem tip, diseased phloem, and healthy phloem of N. indicum
Table 3 List of top-10 genes that were up/downregulated in diseased N. indicum phloem as compared to healthy phloem
Table 4 List of top-10 genes that were up/downregulated in diseased N. indicum stem as compared to healthy stem
The qRT-PCR analysis was performed to validate the reliability of the RNA-seq data. For this, we studied the relative expression of sixteen transcripts using Actin-2 as the internal reference gene. Overall, the sixteen genes showed expression trends similar to those of the RNA-seq data for the same genes, as indicated by R2 = 0.8258 (Fig. 8).
qRT-PCR analysis of the selected N. indicum genes. The bar graphs in the two panels show the relative transcript levels in NOHP vs NOWP (top) and NOHS vs NOWS (bottom). The error bars represent standard deviation. The panel on the right shows the correlation between RNA-seq (FPKM values) and the relative gene expression measured through qRT-PCR. Where NOWS, NOHS, NOWP, and NOHP represent diseased stem tip, healthy stem tip, diseased phloem, and healthy phloem of N. indicum
Metabolome profile of N. indicum
The UPLC-MS/MS analysis of the diseased and healthy N. indicum phloem and stem tissues resulted in the identification of 586 metabolites (Fig. 9A). The PCA showed grouping of the replicates for each treatment, indicating that the sampling was reliable. PC1 and PC2 explained 40.14% and 31.11% of the variability, respectively (Fig. 9B).
A Heatmap, B Principal component analysis, and C Venn diagram of the detected metabolites in N. indicum. Where NOWS, NOHS, NOWP, and NOHP represent diseased stem tip, healthy stem tip, diseased phloem, and healthy phloem of N. indicum. 1, 2, and 3 with the tissue names represent replicates
Based on the orthogonal partial least squares-discriminant analysis (OPLS-DA), we found that 386 and 324 metabolites were differentially accumulated in NOHP vs NOWP and NOHS vs NOWS, respectively (Supplementary Tables 5–6). The OPLS-DA models showed that the Q2 for NOHP vs NOWP and NOHS vs NOWS was 0.997 and 0.996, respectively (Supplementary Fig. 5), indicating that the OPLS-DA models are reliable. We found that 225 DAMs were common between both comparisons, whereas 161 and 99 DAMs were specific to NOHP vs NOWP and NOHS vs NOWS, respectively (Fig. 9C).
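OPLS-DA itself is not available in the common Python scientific stack; purely as an illustration of how DAMs are typically screened from such models, the sketch below fits an ordinary PLS-DA with scikit-learn and computes Variable Importance in Projection (VIP) scores. The VIP ≥ 1 threshold, the toy data, and the use of PLS instead of OPLS are all assumptions, not the study's actual parameters:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    """Variable Importance in Projection for a fitted PLS model."""
    t = pls.x_scores_    # scores, shape (n_samples, n_components)
    w = pls.x_weights_   # weights, shape (n_features, n_components)
    q = pls.y_loadings_  # Y loadings, shape (n_targets, n_components)
    p = w.shape[0]
    ssy = np.diag(t.T @ t) * (q ** 2).sum(axis=0)  # Y variance explained per component
    w_norm = w / np.linalg.norm(w, axis=0)
    return np.sqrt(p * ((w_norm ** 2) @ ssy) / ssy.sum())

# Toy data: 6 samples (3 healthy = 0, 3 diseased = 1) x 50 metabolite intensities
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 50))
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls, X)
dams = np.where(vip >= 1.0)[0]  # candidate DAMs; combined with fold-change cut-offs
print(len(dams))
```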
The DAMs between NOHP and NOWP were enriched in 83 different KEGG pathways. Most importantly, the DAMs were enriched in pentose and glucuronate interconversion, linoleic acid metabolism, biosynthesis of secondary metabolites, α-linolenic acid metabolism, and phenylpropanoid biosynthesis (Fig. 10A). Sixty-three and 17 DAMs were specific to NOWP and NOHP, respectively (Supplementary Table 5). Of these, the top-10 up/down accumulated metabolites are represented in Fig. 10B. The top-10 exclusively up-accumulated metabolites were classified as phenolic acids, amino acids and derivatives, and terpenoids (Supplementary Table 5).
A Scatter plot of KEGG pathways in which DAMs were enriched in NOHP vs NOWP, B Top-10 (up- and down-) accumulated metabolites in NOHP vs NOWP, C Scatter plot of KEGG pathways in which DAMs were enriched in NOHS vs NOWS, D Top-10 (up- and down-) accumulated metabolites in NOHS vs NOWS. Where NOWS, NOHS, NOWP, and NOHP represent diseased stem tip, healthy stem tip, diseased phloem, and healthy phloem of N. indicum
The DAMs between NOHS and NOWS were enriched in metabolic pathways, flavanol and flavonoid biosynthesis, pyruvate metabolism, nicotinate and nicotinamide metabolism, citrate cycle (TCA cycle), and alanine, aspartate, and glutamate metabolism (Fig. 10C). In total, the DAMs were enriched in 82 different KEGG pathways. Forty-five and four metabolites were exclusively accumulated in NOWS and NOHS, respectively. The top-10 up/down accumulated metabolites are represented in Fig. 10D. The most highly accumulated (top-10) metabolites in NOWS were classified as phenolic acids, flavonoids, tannins, alkaloids, and organic acids (Supplementary Table 6). The KEGG pathway specific accumulation of metabolites is discussed below.
Pathway specific DAM enrichment in diseased N. indicum phloem and stem
Plant hormone signal transduction pathway
We observed higher accumulation of IAA and N6-isopentenyladenine in NOWP as compared to NOHP, whereas we observed reduced accumulation of IAA, SA, and JA in NOWS as compared to NOHS (Supplementary Table 6). These observations suggest the involvement of hormone signaling in the diseased N. indicum tissues.
Alanine, aspartate, and glutamate metabolism
The differential metabolite accumulation showed that the levels of L-asparagine, L-aspartic acid, L-glutamic acid, citric acid, argininosuccinic acid, and α-ketoglutaric acid were reduced in NOWP as compared to NOHP. These metabolites, as well as succinic acid, fumaric acid, and γ-aminobutyric acid, decreased in NOWS as compared to NOHS (Supplementary Table 6). Overall, these observations suggest that the witches' broom infection reduced these metabolites as compared to the respective controls.
Linoleic acid and α-linolenic acid metabolism
We observed the accumulation of metabolites associated with α-linolenic acid metabolism in NOWP as compared to NOHP. In the case of NOWS vs NOHS, only one metabolite (9-hydroxy-12-oxo-15(Z)-octadecenoic acid) was differentially accumulated. Similarly, all the metabolites enriched in linoleic acid metabolism were up-accumulated in NOWP as compared to NOHP. The number of metabolites differentially accumulated between NOHS and NOWS was lower, but a similar accumulation trend was observed as in the case of phloem. These results are consistent with the transcriptome findings (Supplementary Table 4; Fig. 6).
Phenylpropanoid biosynthesis and flavonoid biosynthesis pathway
We observed higher accumulation of cinnamic acid, p-coumaric acid, caffeic acid, ferulic acid, p-coumaroylshikimic acid, coniferyl-aldehyde, coniferyl-alcohol, and p-coniferyl alcohol in diseased phloem tissues as compared to control. The increased accumulation of these metabolites probably leads to higher biosynthesis of coniferin (Supplementary Table 6). These observations are consistent with the enrichment of DEGs in the same pathway (Supplementary Table 4), thus confirming that coniferin biosynthesis is increased in response to the infection. Similarly, most of the metabolites enriched in this pathway were up-accumulated in NOWS as compared to NOHS (Supplementary Table 6). These observations suggest that increased biosynthesis of phenolic acids is a generalized response (regardless of infection site) of N. indicum plants against the infection.
The accumulation of flavonoids and phenolic acids biosynthesized in the flavonoid biosynthesis pathway was increased in NOWP and NOWS as compared to their respective controls (Supplementary Table 6). Similar observations were recorded in the transcriptome sequencing data (Supplementary Table 4).
Taken together, it could be proposed that large-scale flavonoid and phenolic acid biosynthesis occurs in the diseased N. indicum plants.
Citrate cycle and nicotinate and nicotinamide metabolism
Five and six metabolites were differentially accumulated in the citrate cycle in NOWP vs NOHP and NOWS vs NOHS, respectively. Among these, succinic acid was up-accumulated in NOWP as compared to NOHP, whereas the accumulation of all other metabolites decreased in response to infection. Similarly, all the metabolites (except phosphoenolpyruvate) were down-accumulated in NOWS as compared to NOHS (Supplementary Table 6). These observations indicate that witches' broom infection greatly influences the citrate cycle.
Reduced accumulation of nicotinate, nicotinate D-ribonucleoside, succinate, and 4-aminobutanoate was observed in NOWP as compared to NOHP (Supplementary Table 6). On the other hand, increased accumulation of β-nicotinamide mononucleotide, succinic acid, nicotinamide, nicotinic acid, and nicotinic acid adenine dinucleotide was observed in NOWP and/or NOWS as compared to the respective controls. These metabolites lie upstream of tryptophan metabolism; alanine, aspartate and glutamate metabolism; tropane, piperidine, and pyridine alkaloid biosynthesis; alanine metabolism; and the citrate cycle. Thus, their differential accumulation in response to the witches' broom infection possibly affects all the downstream pathways.
Dominant pathogen in witches' broom diseased N. indicum
We conducted a survey of the Guangdong province in China to determine the incidence of WBD in natural N. indicum plantations. The finding that WBD was more prevalent in rural areas than in urban ones can be linked to the lack of proper management practices. Since we found that this disease can impact > 80% of plantations, we performed detailed experiments to study the possible causal organism and the defense responses activated by N. indicum plants. We performed PCR, nested PCR, and transmission electron microscopy analyses and found that the disease is not associated with phytoplasmas, as is the case for Jujube madness disease [43], Paulownia arbuscular disease [9], and a similar disease reported previously [44] (Fig. 2). These analyses therefore exclude the possibility that the causal agent of N. indicum WBD is a phytoplasma. Instead, the results of ITS sequencing using MiSeq indicate that the possible pathogen causing WBD in N. indicum is Cophinforma mamane (Fig. 3). Though limited knowledge is available on Botryosphaeria mamane (a homotypic synonym of C. mamane), this fungus is not restricted to Sophora chrysophylla, since it has also been reported on Acacia mangium and Eucalyptus urophylla in Venezuela; these reports were based on ITS phylogeny as well as on the similarity of conidial characteristics [45]. Zhou and Stanosz [46] sequenced the ITS regions of these two strains of B. mamane (from A. mangium and E. urophylla); however, the ITS phylogeny reported by Naito, Tanaka, Taba, Toyosato, Oshiro, Takaesu, Hokama, Usugi and Kawano [44] has been debated [47]. A global survey on the ecology and diversity of endophytic fungi indicated the presence of C. mamane in different plant species such as Garcinia mangostana, Bixa orellana, and Catharanthus roseus [48], indicating that this fungus can use different plant species as hosts and infect them. An earlier study showed that B. mamane was associated with WBD symptoms such as branch contortions and swellings, leading to tissue death in S. chrysophylla [49]. This is very similar to the findings of our survey (Table 1). Other than S. chrysophylla, this fungal species has also been reported to be associated with the decline of table grapevine in North-eastern Brazil [50]. Thus, our data are in accordance with these reports, and the infection in N. indicum plants is potentially due to C. mamane.
There are several strategies for disease control (including WBD), such as the use of genetic resistance (genetic enhancement), cultural management such as removal of the diseased branches, agroforestry production systems, chemical control (fungicides), biological control (microbiological fungicides), and integrated pest management [51, 52]. Our survey results indicate that the diseased areas in Guangdong should be treated in order to avoid spreading of the disease to the whole province. Since we found a lower disease incidence in urban areas and parks, the disease can likely be managed through regular pruning and trimming of the diseased branches. In particular, the areas with > 80% disease incidence should be prioritized for removing or pruning the diseased trees, since the removal of diseased branches is the cheapest and most effective practice. For the areas where the disease incidence is lower, e.g., urban areas with < 5% disease index, attention should be given to maintaining tree vigor and enhancing disease resistance. A better strategy could be a multi-variety planting model, so that plants with variable resistance levels can grow together. In this regard, a detailed understanding of the infection and of the N. indicum responses could help devise a suitable control strategy. The key pathways that were regulated in the diseased N. indicum plants are discussed below.
MAPK cascade is possibly involved in responses against witches' broom disease in N. indicum
Our results showed the activation of the MAPK signaling-plant pathway in diseased phloem and stem (Figs. 2–3; Supplementary Table 4). MAPK cascades are highly conserved signaling networks in plants and play an important role in signaling when a plant is under pathogen attack [53]. In particular, pathogen/microbe-associated molecular patterns (PAMPs/MAMPs) result in the activation of mitogen activated protein kinases (MAPKs) [54]. The increased expression of BAK1, MKK1/2, MKS1, MPK, MPK3/6, ACS6, WRKY TF, PR1, and FRK1 (Fig. 4) in NOWP and/or NOWS suggests that N. indicum responds to witches' broom infection through both early and late defense responses. This proposition is based on the known roles of genes involved in the MAPK signaling cascade; most recently, three genes, i.e., MKK2, MPK2, and MKK4, were reported to be involved in responses against WBD in Chinese Jujube [55]. It was established in Arabidopsis that constitutive expression of MKK2 increased the expression of genes encoding enzymes for the biosynthesis of JA and ethylene [56]. Since these expression changes correspond to those of JA- and ethylene-related transcripts (Fig. 7), the defense response in N. indicum likely involves both phytohormone biosynthesis and MAPK signaling. Similarly, a tomato MKK4 (together with MKK2) has been reported to be induced by Botrytis cinerea, JA, and ethylene, whereas its silencing reduced the resistance of tomato plants against B. cinerea [57]. Thus, the expression changes in the diseased and healthy N. indicum plants correspond to these observations and indicate that N. indicum plants have a similar defense strategy. However, detailed characterization must be carried out to confirm the individual/combined role(s) of these genes. Another defense strategy that N. indicum plants might adopt as a part of the MAPK signaling cascade is the production of H2O2, which is known for its central role in signaling pathways within and between plant cells, especially under pathogen attack [58]. This is probably due to the increased expression of OXI1, ANP1, NDPK2, MPK3/6, and WRKY TFs in NOWP and/or NOWS as compared to their respective controls (Fig. 4; Supplementary Table 4). Since these genes are involved in programmed cell death (PCD), there is a possibility that the oxidative burst (due to the accumulation of H2O2) activates PCD in diseased N. indicum plant tissues [59]. This proposition is further supported by the changes in the expression of genes such as MKK3, RbohD, and OXI1 in NOWP and NOWS as compared to their controls; MKK3 and CaM4 act upstream of OXI1 [60]. Taken together, our results indicate that N. indicum plants activate MAPK signaling cascades, which result in early and late defense responses against the pathogen, H2O2 production, cell death, and maintenance of ROS homeostasis.
N. indicum may use plant-hormone signal transduction pathway during WBD
The enrichment of both DEGs and DAMs in the plant-hormone signal transduction pathway indicates that phytohormone signaling might play important roles during the witches' broom infection (Fig. 5; Supplementary Table 4). Earlier studies have shown that plant hormones such as SA, JA, and ethylene act as signal molecules and can trigger a range of defense responses [61]. The upregulation of NPR, TGA, and PR1 in NOWS and/or NOWP as compared to the controls indicates that SA signaling is activated in diseased N. indicum plants (Fig. 4; Supplementary Table 4). This is consistent with the metabolome findings, where we observed changes in SA in NOWS (Supplementary Table 6). Silencing of NPR genes (NPR1 and NPR3) in C. roseus altered the susceptibility to Periwinkle leaf yellowing [62]. Thus, it could be proposed that N. indicum plants suffering from WBD use SA-induced defense responses to activate PR1, which is known for its role in resistance against invading pathogens; most recently, the upregulation of PR1 was reported in cacao against WBD [63]. Apart from SA, JA signaling might also be a possible defense strategy in N. indicum plants against WBD. This proposition is based on the observations that genes and metabolites associated with JA signaling were differentially regulated/accumulated in the studied tissues (Figs. 5 & 7; Table 3; Supplementary Table 4). This is also consistent with the findings that large-scale JA changes occur in Chinese jujube leaves showing WBD symptoms due to phytoplasma infection [64]. These expression changes further support the above-discussed roles of the MKK2 and MKK4 genes in tomato and Arabidopsis [56, 57]. A similar defense strategy has been reported in Paulownia fortunei in response to paulownia witches' broom infection [65]. Apart from SA and JA signaling, the observation that a large number of transcripts were annotated in the ethylene signaling pathway indicates a probable role of ethylene in N. indicum against witches' broom infection. The ETR and CTR1 genes initiate the MAPK signaling cascade, which is consistent with the differential expression of the MKK transcripts in the diseased and healthy N. indicum tissues [66]. Their higher expression, along with SIMKK, EBF1/2, and ERF1/2, in NOWS and/or NOWP as compared to controls indicates that N. indicum plants initiate the MAPK signaling cascade as a response to WBD. These results are consistent with the observation of increased levels of ethylene in cocoa shoots after witches' broom infection [12]. The observation that most genes in the brassinosteroid signaling pathway were downregulated in NOWS but upregulated in NOWP as compared to their respective controls indicates that WBD may lead towards reduced cell division and elongation in the stem but increased cellular growth and division in the phloem. These observations are consistent with the disease morphology, i.e., shortened internodes and sprouting of the axillary buds [67]. Thus, the overall regulation of the plant-hormone signaling pathway in N. indicum suffering from WBD signifies the roles of the respective hormone signaling networks in resistance and responses to WBD. In particular, these observations highlight that JA, SA, and ethylene biosynthesis and signaling related genes interact with the key genes in the MAPK signaling pathway and help the plant withstand the disease.
Plant-pathogen interaction pathway is active in WBD suffering N. indicum plants
Plants respond to invading pathogens through multiple layers of specific immunity mechanisms [68]. When pathogens invade plants, cytosolic Ca2+ concentrations change rapidly, which is considered an essential early event during disease infection in plants [69]. The observation that the Rboh gene was upregulated in NOWP and/or NOWS as compared to controls indicates possible changes in Ca2+ levels in the diseased tissues, since both Rboh and CDPK genes are regulated by Ca2+. This upregulation leads to increased ROS accumulation, which eventually triggers defense reactions including cell wall reinforcement [70]. The upregulation of defense related genes, i.e., WRKY TFs [71], FRK1 [72], NHO1, and PR1 [73], as a result of the changes in the expression of MAPK signaling cascade related genes indicates a strong defense response to WBD [71,72,73,74]. It is interesting to note that the observed defense responses in N. indicum are similar to those in cacao and jujube [75]. The expression of genes such as MKK1/2, PTIs, and RAR1 is another signal apart from the possible changes in Ca2+ concentrations. PTI5/6 is known to regulate defense responses and disease resistance, e.g., in tomato against Stemphylium lycopersici [76, 77], while RAR1 is specifically required for plant innate immunity [72, 76, 77]. Thus, it could be stated that the defense mechanism of N. indicum plants against witches' broom infection includes possible changes in cytosolic Ca2+ concentrations, which, together with the MAPK signaling cascade, activate related defense pathways. However, these observations are a preliminary picture of the N. indicum-C. mamane interaction, and future studies should focus on the identification of resistant and tolerant N. indicum genotypes to WBD, followed by the identification of the perception of C. mamane by host plants.
Phenylpropanoid and flavonoid biosynthesis is increased in diseased N. indicum stem and phloem
Phenylpropanoid compounds perform a range of defense related functions in plants, such as preformed/inducible physical and chemical barriers. These compounds are also involved in local or systemic signaling and may induce defense related genes [78]. The enrichment of DEGs and DAMs in the phenylpropanoid biosynthesis pathway is consistent with earlier reports in Paulownia fortunei [65], cacao [63, 74], and green tea [65]. In particular, the upregulation of trans-cinnamate 4-monooxygenase, CCOAOMT1, 4CL, Kat, CCR, F5H, PAL, CSE, and CAD in NOWP and/or NOWS as compared to the respective controls possibly leads to the increased accumulation of lignans, coumarins, and phenolic acids (Supplementary Table 6). Earlier studies have demonstrated that higher expression of CCOAOMTs and CADs leads to higher lignin biosynthesis and disease resistance in Arabidopsis, respectively [79,80,81]. Similarly, it is known that F5H plays a role in phenylpropanoid biosynthesis in Arabidopsis [82], while the activity of PAL increases in Mexican lime plants showing WBD symptoms [15]. Thus, it could be proposed that the witches' broom infection in N. indicum leads to increased expression of the above-mentioned genes, which in turn causes the accumulation of phenolic acids, lignans, and coumarins.
Since the flavonoid biosynthesis pathway lies downstream of phenylpropanoid biosynthesis [83], the accumulation of flavonoids and phenolic acids could be expected. This higher accumulation of flavonoids and phenolic acids could be related to the increased expression of naringenin 3-dioxygenase (F3H), trans-cinnamate 4-monooxygenase (C4H), CCOAOMT, chalcone synthase (CS), chalcone isomerase (CHI), flavonol synthase (FLS), and other genes enriched in this pathway (Supplementary Table 4). A study on Mexican lime trees suffering from WBD showed the upregulation of these genes as compared to controls [6]. Thus, it could be concluded that increased flavonoid biosynthesis is a common response of different plant species to pathogen infection [78, 84] and that N. indicum plants manipulate flavonoid biosynthesis together with the phenylpropanoid biosynthesis pathway to resist this disease.
Linoleic acid and α-linolenic acid metabolism may be part of defense responses against WBD
Linoleic acid is a precursor of α-linolenic acid, which in turn is a precursor of JA, a hormone involved in defense responses in plants ([85] and references therein). The end products of α-linolenic acid metabolism are jasmonate/methyl-jasmonate, which induce defense responses in plants against different biotic stresses [86]. In this regard, the regulation of multiple genes controlling the latter stages of jasmonate/methyl-jasmonate biosynthesis, i.e., COR, acetyl-CoA acyltransferase 1, and phospholipase A1, in NOWP and/or NOWS suggests that N. indicum plants use a similar mechanism for defense against WBD (Supplementary Table 4; Fig. 5). These expression changes are consistent with the observation that metabolites associated with α-linolenic acid metabolism accumulated differentially in NOWP and NOWS as compared to their respective controls (Supplementary Table 6). These changes suggest that N. indicum plants adopt a strategy of JA-induced defense responses against the pathogen infection regardless of the site of infection, i.e., stem or phloem.
Possible roles of other pathways in defense responses in N. indicum against WBD
The differential regulation of other pathways, such as the citrate cycle, nicotinate and nicotinamide metabolism, and alanine, aspartate, and glutamate metabolism, indicates that N. indicum plants adopt a multi-layer response to the witches' broom infection. Previous studies have shown that the flux of the tricarboxylic acid (TCA) cycle may play a role during the setup of plant defenses, mainly because it is a central pathway for the generation of primary metabolites in order to recruit and redistribute energy flows [87, 88].
The differential accumulation of compounds enriched in the nicotinate and nicotinamide metabolism pathway, such as β-nicotinamide mononucleotide, succinic acid, nicotinamide, nicotinic acid, and nicotinic acid adenine dinucleotide, is consistent with the changed expression of the genes enriched in this pathway (Supplementary Tables 2–3). These compounds are precursors of nicotinamide adenine dinucleotide (NAD) [89]. The expression of genes associated with the above-mentioned metabolites in this pathway has previously been shown to increase in response to disease infection [90], which is consistent with our findings. It has also been reported that extracellular pyridine nucleotides induce PR genes in Arabidopsis [91]. Since we observed the upregulation of the PR gene in NOWS and NOWP, it is possible that a similar response exists in N. indicum with WBD. Furthermore, because nicotinate and nicotinamide metabolism lies upstream of alanine, aspartate, and glutamate metabolism as well as tryptophan metabolism [92], the differential accumulation of metabolites related to these pathways is understandable. At this point, it could be proposed that under WBD attack, N. indicum might stimulate the variable accumulation of metabolites related to alanine, aspartate, and glutamate metabolism. However, a detailed understanding of how these pathways might regulate the defense responses is needed through gene/pathway-specific investigations.
The survey of Guangdong province, China showed that WBD is widespread in the province and that the disease incidence can exceed 80%. Considering this alarming situation, we used MiSeq-based ITS sequencing and found that C. mamane was the most prevalent microbe in the diseased tissues. We also confirmed the absence of phytoplasma in the diseased N. indicum tissues. Further, our combined transcriptome sequencing and metabolome profiling of the phloem and stem of N. indicum suffering from WBD indicated multi-layer defense responses in this plant species. In particular, we found that the plant-pathogen interaction, plant-hormone signal transduction, MAPK-signaling (plant), phenylpropanoid biosynthesis, flavonoid biosynthesis, linoleic acid metabolism, α-linolenic acid metabolism, nicotinate and nicotinamide metabolism, and alanine, aspartate, and glutamate metabolism pathways are activated during WBD in phloem and stem. WBD in N. indicum triggers PAMP-related and other defense signals such as the MAPK signaling cascade, which possibly results in early and late defense responses against the pathogen, H2O2 production, cell death, and maintenance of ROS homeostasis. This study presents a wide range of target genes belonging to the above-mentioned pathways for gene-specific characterization and for developing WBD-resistant N. indicum plants using CRISPR/Cas and other gene manipulation techniques [93]. Besides, our sampling will allow a detailed characterization of the pathogen causing WBD in N. indicum in a future study.
Survey of diseased areas
The current study was based on Nerium indicum plants from Guangdong province in China. The samples were obtained from the wild, and no permissions were necessary to collect them. The collection of plant material also complies with the guidelines and legislation of the Chinese Academy of Forestry, China, and international bodies. The formal identification of the samples was undertaken by the corresponding author of this publication (Professor Haibin Ma). The voucher specimens have been deposited in the local herbarium of the Research Institute of Tropical Forestry, Chinese Academy of Forestry (Guangzhou, China), under the ID: NRX001PR20023.
We conducted a survey of ten different cities of Guangdong province, China, i.e., Dongguan, Gaozhou, Guangzhou, Huizhou, Jieyang, Meizhou, Shenzhen, Zhongshan, Zhuhai, and Shaoguan, and recorded the disease incidence (see Table 1 for the number of samples studied from each area). The samples were considered diseased if the plants/branches showed typical WBD symptoms. The disease incidence and disease index were determined according to the following formulas.
$$\text{Disease incidence rate}=\frac{\text{Number of diseased plants}}{\text{Total number of investigated plants}}\times 100$$
$$\text{Disease index}=\frac{\Sigma \left(\text{Number of plants in each disease grade}\times \text{Representative value of that grade}\right)}{\text{Total number of investigated plants}\times \text{Representative value of the highest disease grade}}\times 100$$
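In code form, the two formulas reduce to the following minimal sketch. The grade scale is taken to run from 0 (symptomless) to an assumed maximum grade of 4; the actual representative values are defined by the severity scale in Table 5, so max_grade here is an assumption:

```python
def disease_incidence(counts_by_grade: dict) -> float:
    """Percentage of investigated plants showing any WBD symptoms (grade > 0)."""
    total = sum(counts_by_grade.values())
    diseased = sum(n for grade, n in counts_by_grade.items() if grade > 0)
    return 100.0 * diseased / total

def disease_index(counts_by_grade: dict, max_grade: int = 4) -> float:
    """Severity-weighted index: sum(grade x count) / (total x highest grade) x 100."""
    total = sum(counts_by_grade.values())
    weighted = sum(grade * n for grade, n in counts_by_grade.items())
    return 100.0 * weighted / (total * max_grade)

# Example: 60 healthy plants, 25 at grade 1, 10 at grade 2, 5 at grade 4
survey = {0: 60, 1: 25, 2: 10, 4: 5}
print(disease_incidence(survey))  # 40.0
print(disease_index(survey))      # (25 + 20 + 20) / 400 * 100 = 16.25
```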
The disease severity was judged on the basis of the following scale (Table 5).
Table 5 Disease severity scale of witches' broom disease in N. indicum
The authors identified Nerium indicum cv. Plenum (reddish-flowered plants) plantations showing severe WBD symptoms in Shaoguan, China (longitude 114.1, latitude 24.5). The soil type of the sampling location is yellow-red. At the time of sampling, the plants were 8–10 years old. The average temperature of the area is 26 ℃.
Disease identification and pathogen confirmation
The occurrence of arbuscular symptoms was used as the basis for judging the infection. In affected plants, the growth of the main shoot stagnates, axillary buds or a large number of side branches germinate, and the arbuscular branches show shortened internodes. Swollen tissue also appears at the base of the sprouting witches' brooms. The base of the newly grown lateral branches appears swollen and reddish, and the leaves become smaller and yellower. The branches show reduced flowering, and the disease worsens year by year. The branches had tumors, and the cortex was decomposed and in a state of ulceration. Some branches were crumpled and curved. Upon cutting open the diseased branches, blackened fibrous bundles were observed. Triplicate Nerium plants suffering from WBD were sampled. Stem tips (NOWS) and phloem (NOWP) of the diseased plants were taken by peeling the stem with a scalpel, rinsed thrice with sterile water, and stored in liquid nitrogen for pathogen identification, RNA extraction, and metabolite analyses. Three healthy Nerium plants were used to harvest stem tips (NOHS) and phloem (NOHP) as controls (Fig. 11).
Plant samples showing the witches' broom disease symptoms. Dark orange arrows = germination of a large number of axillary buds and lateral branches, green arrows = swollen base, and dark yellow arrow = shortening of internodes
The NOWP tissues were dissected; a 5 mm × 5 mm black rotten xylem tissue piece was cut (in an ultra-clean workbench), immersed in 70% EtOH for 1 min, and sterilized in 3% hydrogen peroxide for 20 s. The tissue was then washed thrice with sterile water, dried on filter paper, placed on potato dextrose agar (PDA) plates (four tissue pieces per plate), and incubated for 3–4 days. The hyphae growing at both ends were transferred to a fresh PDA plate. The fungal hyphae were isolated and used for DNA extraction.
After surface sterilization, DNA was extracted from 1 g of each sample following the method of Griffiths, et al. [94] as modified by Monard, et al. [95]. The quality of the DNA was checked by 1% agarose gel electrophoresis. The PCR amplifications were carried out using barcode-specific primers, i.e., ITS1F and ITS2R, as reported earlier [96, 97]. The reactions were carried out on an ABI GeneAmp® 9700. The PCR reaction conditions were as reported earlier [95]. Each sample was amplified thrice; the triplicate products were pooled, loaded on a 2% agarose gel, and recovered using the AxyPrep DNA Gel Extraction Kit (Axygen, USA). The eluted sample was detected and quantified using the QuantiFluor™-ST Blue fluorescence quantification system (Promega, USA). The DNA was then prepared using the TruSeq™ DNA Sample Prep Kit for sequencing with MiSeq.
The sequencing reads were filtered in Flash (version 1.2.11) [98], and only quality reads were used for further bioinformatic analyses using a standard microbial analysis pipeline. We performed an OTU-cluster analysis at 97% identity using UPARSE (version 7.0.1090) [99], and the sequences were then globally aligned using USEARCH [100] against databases of high-quality ITS fungal gene sequences, i.e., UNITE (https://unite.ut.ee/), Silva (Release 138, http://www.arb-silva.de), RDP (Release 11.5, http://rdp.cme.msu.edu/), Greengenes (Release 135, http://greengenes.secondgenome.com/), and the functional gene database FGR (Release 7.3, http://fungene.cme.msu.edu/), to determine taxonomic classifications with the RDP Classifier [101]. A phylogenetic tree was constructed in IQ-TREE [102] after alignment in MAFFT [103].
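The alpha diversity indices in Table 2 are derived from the per-sample OTU abundance vectors produced by this pipeline. As a hedged illustration (the exact indices reported, e.g., Shannon, Simpson, or Chao1, are not restated here, and the counts below are toy values), two common indices can be computed as follows:

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over OTUs with nonzero counts."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

def simpson(counts):
    """Gini-Simpson index 1 - sum(p_i^2); larger values mean higher diversity."""
    total = sum(counts)
    return 1.0 - sum((n / total) ** 2 for n in counts)

otu_counts = [120, 45, 30, 5, 1]  # toy OTU abundances for one sample
print(round(shannon(otu_counts), 3), round(simpson(otu_counts), 3))
```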
To test whether the disease was caused by a phytoplasma or a fungus, we performed PCR and nested PCR reactions on the genomic DNA extracted from the diseased Jujube, Paulownia, and N. indicum samples. The genomic DNA was extracted as reported above. The 16S rRNA gene was amplified using genomic DNA as a template with the common phytoplasma primer pairs P1/P7 and R16mF2/R16mR1. PCR was carried out in a 30 µL reaction system: 1 µL DNA, 0.75 µL each of forward and reverse primer, 2 × PCR master mix (0.05 U·μL−1 Taq DNA polymerase, 4 mM MgCl2, and 0.4 mM dNTPs), and ddH2O. The reactions were carried out for 35 cycles under the following conditions: initial denaturation at 94 °C for 5 min; then 94 °C for 30 s, 53 °C for 40 s, and 72 °C for 2 min per cycle; followed by a final extension for 10 min at 72 °C. Products of both PCR types were detected by 1% agarose gel electrophoresis. Transmission electron microscopy was done as reported earlier by Park, et al. [104].
Transcriptome analyses
RNA extraction, library preparation, and sequencing
High-quality RNA was extracted from the triplicate samples of phloem and stem tips of both diseased and healthy plants. RNA was extracted using Spin Column Plant total RNA Purification Kit (Sangon Biotech, Shanghai, China). The quality of the RNA was tested by analyzing the integrity (using agarose gel electrophoresis and Agilent 2100 bioanalyzer) and concentration (by Qubit 2.0 Fluorometer).
To prepare the libraries, mRNA was purified from total RNA using poly-T oligo-attached magnetic beads and then broken into short fragments using fragmentation buffer. The short fragments were used to synthesize first-strand cDNA with random hexamers, buffer, dNTPs, and DNA polymerase I. The double-stranded cDNA was purified using AMPure XP beads. The purified cDNA was end-repaired, A-tailed, and ligated with sequencing adapters; AMPure XP beads were then used for fragment size selection, and finally PCR enrichment was performed to obtain the final cDNA library. Once the libraries were prepared, their quality was preliminarily tested with Qubit 2.0 and the insert size was checked on the Agilent 2100. This was followed by Q-PCR to determine the effective library concentration (> 2 nM). The libraries were then sequenced on the Illumina HiSeq platform (Illumina Inc., San Diego, CA, USA).
Bioinformatic analyses
Raw Illumina HiSeq sequencing data were processed for quality control by removing reads with adaptors, and removing paired reads if the N content in the sequencing reads exceeded 10% or if low-quality bases (Q ≤ 20) exceeded 50% of the read. This was followed by the determination of the error distribution and GC content of the sequencing reads.
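The filtering rules above translate directly into a per-read predicate. The sketch below is illustrative (the thresholds come from the text, but the function and its use are not the pipeline's actual code):

```python
def keep_read(seq: str, quals: list) -> bool:
    """Keep a read only if <= 10% of bases are N and <= 50% of bases have Q <= 20."""
    n_frac = seq.upper().count("N") / len(seq)
    lowq_frac = sum(q <= 20 for q in quals) / len(quals)
    return n_frac <= 0.10 and lowq_frac <= 0.50

def keep_pair(read1, read2) -> bool:
    """Paired reads are removed together if either mate fails the thresholds."""
    return keep_read(*read1) and keep_read(*read2)

# Example: a 10-bp read with one N and three low-quality bases passes both tests
print(keep_read("ACGTNACGTA", [30, 30, 12, 30, 15, 30, 30, 18, 30, 30]))  # True
```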
BLAST [105] was used to compare unigene sequences against the KEGG [106], NR [107], Swiss-Prot [108], GO [109], COG/KOG [110], and TrEMBL [111] databases. Furthermore, we predicted the unigenes' amino acid sequences and used HMMER to compare them with Pfam [112].
To quantify gene expression, the transcripts assembled by Trinity were used as the reference sequence, and the clean reads of each sample were mapped to it using bowtie2 [113] in RSEM [114]. We then calculated the Fragments Per Kilobase of transcript per Million fragments mapped (FPKM) as an index of the expression level of the transcripts. Overall FPKM values were visualized as box-plots in R. The expression data were then used to compute the Pearson Correlation Coefficient (PCC) and Principal Component Analysis (PCA) in R. Further, we used DESeq2 [115] to find the differentially expressed genes (DEGs) between the diseased and healthy samples. The Benjamini–Hochberg method [116] was then used to correct the p-values for multiple testing, yielding the false discovery rate (FDR), and the DEGs were screened on the basis of FDR < 0.05 and |log2 fold change| ≥ 1. Venn diagrams were prepared in InteractiVenn [117].
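To make the screening criteria concrete, the following minimal sketch (illustrative Python, not the RSEM/DESeq2 code actually used) shows the FPKM formula and a Benjamini–Hochberg adjustment with the FDR < 0.05 and |log2 fold change| ≥ 1 cut-offs:

```python
import numpy as np

def fpkm(counts: np.ndarray, lengths_bp: np.ndarray) -> np.ndarray:
    """FPKM = fragments / (transcript length in kb) / (total mapped fragments in millions)."""
    millions_mapped = counts.sum() / 1e6
    return counts / (lengths_bp / 1e3) / millions_mapped

def benjamini_hochberg(pvals: np.ndarray) -> np.ndarray:
    """BH-adjusted p-values (FDR)."""
    m = len(pvals)
    order = np.argsort(pvals)
    scaled = pvals[order] * m / (np.arange(m) + 1)
    # enforce monotonicity from the largest p-value downwards
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(m)
    adjusted[order] = np.clip(scaled, 0, 1)
    return adjusted

def is_deg(fdr: float, log2fc: float) -> bool:
    """Screening rule used above: FDR < 0.05 and |log2 fold change| >= 1."""
    return fdr < 0.05 and abs(log2fc) >= 1.0
```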
KEGG pathway enrichment analysis of the DEGs was done in KOBAS2.0 [118]; FDR < 0.05 was used to reduce false-positive prediction of enriched KEGG pathways. The degree of KEGG enrichment was measured by the Rich factor, the Q-value, and the number of genes enriched in each pathway, and was displayed as scatter plots (20 entries maximum).
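The enrichment quantities mentioned here can be made concrete. KOBAS's exact implementation is not reproduced below; the sketch simply shows the Rich factor (commonly defined as DEGs in a pathway divided by all annotated genes in that pathway) and a one-sided hypergeometric test with Benjamini–Hochberg correction, which is the usual basis for such Q-values.

```python
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def pathway_enrichment(n_deg_in_path: int, n_path: int,
                       n_deg: int, n_background: int):
    """Rich factor and one-sided hypergeometric p-value for one pathway.

    n_deg_in_path -- DEGs mapped to the pathway
    n_path        -- all annotated genes in the pathway
    n_deg         -- all DEGs with KEGG annotation
    n_background  -- all KEGG-annotated genes (the background)
    """
    rich_factor = n_deg_in_path / n_path
    # P(X >= n_deg_in_path) under the hypergeometric null
    p = hypergeom.sf(n_deg_in_path - 1, n_background, n_path, n_deg)
    return rich_factor, p

# Across many pathways, convert p-values to Q-values (FDR):
# _, q_values, _, _ = multipletests(p_values, method="fdr_bh")
```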
Finally, we used the iTAK software, which integrates PlnTFDB and PlantTFDB, to predict plant transcription factors (TFs); it assigns TF families and identifies TFs through HMM–HMM scan comparison [119].
Quantitative real-time PCR analysis
We selected sixteen genes from the N. indicum RNA-seq comparison data to validate the sequencing results. The Actin-2 gene was used as an internal control. The PCR reactions and determination of relative gene expression were carried out as reported earlier [120] on a Rotor-Gene 6000 machine (Qiagen, Shanghai, China) using primers designed in Primer 3 (http://frodo.wi.mit.edu/primer3/) (Table 6). The thermal cycling profile was as follows: 50 °C for 2 min and 95 °C for 2 min, followed by 40 cycles of 95 °C for 3 s and 60 °C for 30 s. We also performed melting curve analysis and verified single-product amplification over a temperature range of 55 to 95 °C, increasing by 1 °C per step. The volume of all reactions was 10 μL: 30 ng of cDNA, 5 μL of 1 × SYBR® Select Master Mix (Applied Biosystems, Carlsbad, CA, USA), and 0.2 μL (20 μM) of each primer. Three biological replicates were analyzed in independent runs.
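The relative-expression calculation is cited to [120]; the widely used 2^(−ΔΔCt) (Livak) method is a reasonable stand-in for illustration, though the cited procedure may differ in detail. The function below assumes that method, with Actin-2 as the internal control.

```python
def relative_expression(ct_target: float, ct_actin: float,
                        ct_target_ref: float, ct_actin_ref: float) -> float:
    """Livak 2^(-ddCt): expression of a target gene relative to the Actin-2
    control, normalised to a reference (e.g. healthy) sample."""
    d_ct_sample = ct_target - ct_actin        # normalise to the internal control
    d_ct_ref = ct_target_ref - ct_actin_ref   # same for the reference sample
    return 2.0 ** -(d_ct_sample - d_ct_ref)
```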
Table 6 List of forward and reverse primers that were used for the quantitative real-time PCR analysis of the N. indicum genes
Metabolome profiling
Sample preparation and extraction
The freeze-dried samples were crushed into powder with zirconia beads for 1.5 min at 30 Hz in an MM400 Retsch mixer mill. 100 mg of powder was then extracted at 4 °C for 12 h in 0.6 mL of 70% MeOH, followed by centrifugation for 10 min at 10,000 g. The extracts were absorbed (CNWBOND Carbon-GCB SPE Cartridge, 250 mg, 3 mL; ANPEL, Shanghai, China) and filtered (0.22 µm) prior to UPLC-MS/MS analyses.
UPLC conditions
To analyze the extracts, we used a UPLC-ESI–MS/MS system (UPLC, Shim-pack UFLC Shimadzu CBM30A system; MS, Applied Biosystems 4500 Q TRAP) with an Agilent SB-C18 UPLC column. The mobile phase consisted of solvent A (pure water + 0.1% formic acid) and solvent B (acetonitrile). The measurements were recorded with a gradient program starting at 95% A, 5% B. Within 9 min, a linear gradient to 5% A, 95% B was programmed, and this composition was kept for 1 min. Subsequently, a composition of 95% A, 5% B was restored within 1.10 min and kept for 2.9 min. During the measurements, the column oven temperature was kept at 40 °C and the injection volume was 4 µL. The effluent was alternatively connected to an ESI-triple quadrupole-linear ion trap (QTRAP)-MS.
ESI-Q TRAP-MS/MS
The ESI-Q TRAP-MS/MS settings were as reported earlier [121]. Briefly, LIT and triple quadrupole (QQQ) scans were acquired on a triple quadrupole-linear ion trap mass spectrometer (Q TRAP), API 4500 Q TRAP UPLC/MS/MS System, equipped with an ESI Turbo Ion-Spray interface, operating in positive and negative ion mode and controlled by Analyst 1.6.3 software (AB Sciex). The ESI source operation parameters were as follows: ion source, turbo spray; source temperature 550 °C; ion spray voltage (IS) 5500 V (positive ion mode)/−4500 V (negative ion mode); ion source gas I (GSI), gas II (GSII), and curtain gas (CUR) were set at 50, 60, and 30.0 psi, respectively; the collision gas (CAD) was high. Instrument tuning and mass calibration were performed with 10 and 100 μmol/L polypropylene glycol solutions in QQQ and LIT modes, respectively. QQQ scans were acquired as MRM experiments with the collision gas (nitrogen) set to 5 psi. The declustering potential (DP) and collision energy (CE) for individual MRM transitions were determined with further DP and CE optimization. A specific set of MRM transitions was monitored for each period according to the metabolites eluted within that period.
Metabolite data analyses
Unsupervised PCA was performed in R using the prcomp function. The original data were compressed into principal components, of which the first two (PC1 and PC2) were used to describe the characteristics of the original data set. Hierarchical cluster analysis and the PCC between samples were computed and represented as heatmaps in R using the pheatmap and cor functions.
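These analyses were run in R (prcomp, pheatmap, cor); for readers working in Python, an equivalent sketch of the PCA and sample-sample PCC steps might look like the following (the matrix layout is assumed to be samples × metabolites).

```python
import numpy as np

def pca_scores_and_pcc(x: np.ndarray):
    """x: samples x metabolites matrix.
    Returns the first two principal-component scores (cf. R's prcomp, which
    also centres the data) and the sample-sample Pearson correlation matrix."""
    xc = x - x.mean(axis=0)                       # column-centre, as prcomp does
    u, s, _ = np.linalg.svd(xc, full_matrices=False)
    scores = u * s                                # PC scores for each sample
    pcc = np.corrcoef(x)                          # Pearson correlation, sample vs sample
    return scores[:, :2], pcc
```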
To determine whether a detected metabolite was differentially accumulated, we used variable importance in projection (VIP) ≥ 1 and |log2 fold change| ≥ 1 as criteria. The VIP values were extracted from the OPLS-DA results, which were generated using the R package MetaboAnalystR; the data were log-transformed and mean-centered before OPLS-DA. We also performed a permutation test (200 permutations) to avoid overfitting.
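Applying those two criteria to a results table is a one-liner; the column names below ('vip', 'log2fc') are ours, chosen for the sketch.

```python
import pandas as pd

def screen_dams(results: pd.DataFrame) -> pd.DataFrame:
    """Keep metabolites with VIP >= 1 (from OPLS-DA) and |log2 fold change| >= 1."""
    return results[(results["vip"] >= 1) & (results["log2fc"].abs() >= 1)]
```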
The metabolites were annotated using the KEGG Compound database (http://www.kegg.jp/kegg/compound/) and then mapped to the KEGG Pathway database (http://www.kegg.jp/kegg/pathway.html). Pathways to which significantly regulated metabolites were mapped were then fed into metabolite set enrichment analysis (MSEA); their significance was determined by the hypergeometric test's p-values.
The raw transcriptome data have been submitted to NCBI SRA under the project number: PRJNA764871 (https://www.ncbi.nlm.nih.gov/bioproject/764871).
TIR1: Transport Inhibitor Response 1
GH3: Glycoside hydrolase 3
CRE1: CYTOKININ RESPONSE 1
A-AAR1: Type-A Arabidopsis response regulator 1
AHP: Arabidopsis histidine phosphotransfer proteins
GID1: Gibberellin insensitive dwarf 1
CTR1: CONSTITUTIVE TRIPLE RESPONSE 1
EIN3: ETHYLENE INSENSITIVE 3
ERF1/2: Ethylene responsive factor 1/2
SIMKK: Stress induced MAPK kinase kinase
EBF1/2: Ethylene binding factor 1/2
TCH4: Touch 4
CYCD3: Cyclin-D 3
BIN2: BR Insensitive 2
BZR1/2: BRASSINAZOLE-RESISTANT 1/2
JA: Jasmonic acid
COI1: CORONATINE INSENSITIVE 1
SAR: Systemic acquired resistance
PR-1: Pathogenesis related protein 1
TGA: TGA transcription factor
Rboh: Respiratory burst oxidase homolog
NOS: Nitric-oxide synthase
CNGCs: Cyclic nucleotide gated channels
FRK1: FLG22-induced receptor-like kinase 1
NHO1: Nonhost resistance 1
PTI5: Pto-interacting protein 5
MKK1/2: MAPK kinase 1/2
RAR1: Cysteine and histidine-rich domain-containing protein RAR1
EDS1: Enhanced disease susceptibility 1 protein
RPS2: Disease resistance protein RPS2
CYP2J: Cytochrome P450 family 2 subfamily J
CYP3A4: Cytochrome P450 family 3 subfamily A4
ceQORH: Chloroplastic oxoene reductase
CCOAOMT1: Caffeoyl-CoA O-methyltransferase
4CL: 4-Coumarate-CoA ligase
KatG: Catalase-peroxidase
CCR: Cinnamoyl-CoA reductase
F5H: Ferulate-5-hydroxylase
CSE: Caffeoylshikimate esterase
CAD: Cinnamyl-alcohol dehydrogenase
SGtf: Scopoletin glucosyltransferase
RPN4: Regulatory Particle Non-ATPase 4
DAMs: Differentially accumulated metabolites
NOHS: Stem tip from healthy N. indicum
NOWS: Stem tip from N. indicum with WBD symptoms
NOHP: Non-diseased N. indicum phloem
NOWP: Phloem from N. indicum with WBD symptoms
Dey P, Chaudhuri TK. Pharmacological aspects of Nerium indicum Mill: a comprehensive review. Pharmacogn Rev. 2014;8:156.
Tamboli R. Effect of vehicle air pollution on leaf structure of Nerium indicum L. plant on NH-4 divider. Adv Plant Sci. 2013;26:435–8.
Ma D, Chen Y, Lai Y, Zhang Z, Li X, Zhang D. Diverse resourcing of Nerium indicum leaves for bio-utilization. Therm Sci. 2020;24:1785–93.
Mulas M, Perinu B, Francesconi AHD. Evaluation of spontaneous oleander (Nerium oleander L.) as a medicinal plant. J Herbs Spices Med Plants. 2002;9:121–5.
West E. Witches' broom of Oleander. 1937.
Mardi M, Karimi Farsad L, Gharechahi J, Salekdeh GH. In-depth transcriptome sequencing of Mexican lime trees infected with Candidatus Phytoplasma aurantifolia. PLoS One. 2015;10:e0130425.
Ghosh D, Das A, Singh S, Singh S, Ahlawat Y. Occurrence of witches'-broom, a new phytoplasma disease of acid lime (Citrus aurantifolia) in India. Plant Dis. 1999;83:302.
Al-Sakeiti M, Al-Subhi A, Al-Saady N, Deadman M. First report of witches'-broom disease of sesame (Sesamum indicum) in Oman. Plant Dis. 2005;89:530.
Hiruki C. Paulownia witches'-broom disease important in East Asia. In: Proceedings of the International Symposium on Urban Tree Health 496. p. 63–8.
Win NKK, Lee S-Y, Bertaccini A, Namba S, Jung H-Y. 'Candidatus Phytoplasma balanitae' associated with witches' broom disease of Balanites triflora. Int J Syst Evol Microbiol. 2013;63:636–40.
Evans H. Pleomorphism in Crinipellis perniciosa, causal agent of witches' broom disease of cocoa. Trans Br Mycol Soc. 1980;74:515–23.
Scarpari L, Meinhardt L, Mazzafera P, Pomella A, Schiavinato M, Cascardo J, Pereira G. Biochemical changes during the development of witches' broom: the most important disease of cocoa in Brazil caused by Crinipellis perniciosa. J Exp Bot. 2005;56:865–77.
Liu R, Dong Y, Fan G, Zhao Z, Deng M, Cao X, Niu S. Discovery of genes related to witches broom disease in Paulownia tomentosa × Paulownia fortunei by a de novo assembled transcriptome. PLoS ONE. 2013;8:e80238.
Mollayi S, Zadali R, Farzaneh M, Ghassempour A. Metabolite profiling of Mexican lime (Citrus aurantifolia) leaves during the progression of witches' broom disease. Phytochem Lett. 2015;13:290–6.
Mollayi S, Farzaneh M, Ghanati F, Aboul-Enein HY, Ghassempour A. Study of catechin, epicatechin and their enantiomers during the progression of witches' broom disease in Mexican lime (Citrus aurantifolia). Physiol Mol Plant Pathol. 2016;93:93–8.
Jaiswal S, Jadhav PV, Jasrotia RS, Kale PB, Kad SK, Moharil MP, Dudhare MS, Kheni J, Deshmukh AG, Mane SS. Transcriptomic signature reveals mechanism of flower bud distortion in witches'-broom disease of soybean (Glycine max). BMC Plant Biol. 2019;19:1–12.
Guo J, Huang Z, Sun J, Cui X, Liu Y. Research progress and future development trends in medicinal plant transcriptomics. Front Plant Sci. 2021;12.
Panda A, Parida AK, Rangani J. Advancement of metabolomics techniques and their applications in plant science: Current scenario and future prospective. In Plant Metabolites and Regulation Under Environmental Stress: Elsevier; 2018. p. 1–36.
Krysan PJ, Colcombet J. Cellular complexity in MAPK signaling in plants: Questions and emerging tools to answer them. Front Plant Sci. 2018;9:1674.
Hettenhausen C, Schuman MC, Wu J. MAPK signaling: a key element in plant defense response to insects. Insect science. 2015;22:157–64.
Shah J. The salicylic acid loop in plant defense. Curr Opin Plant Biol. 2003;6:365–71.
Aldon D, Mbengue M, Mazars C, Galaud J-P. Calcium signalling in plant biotic interactions. Int J Mol Sci. 2018;19:665.
Misas-Villamil JC, van der Hoorn RA, Doehlemann G. Papain-like cysteine proteases as hubs in plant immunity. New Phytol. 2016;212:902–7.
Balakireva AV, Zamyatnin AA. Indispensable role of proteases in plant innate immunity. Int J Mol Sci. 2018;19:629.
Minina EA, Moschou PN, Bozhkov PV. Limited and digestive proteolysis: crosstalk between evolutionary conserved pathways. New Phytol. 2017;215:958–64.
Hayama R, Yang P, Valverde F, Mizoguchi T, Furutani-Hayama I, Vierstra RD, Coupland G. Ubiquitin carboxyl-terminal hydrolases are required for period maintenance of the circadian clock at high temperature in Arabidopsis. Sci Rep. 2019;9:1–12.
Bleeker PM, Spyropoulou EA, Diergaarde PJ, Volpin H, De Both MT, Zerbe P, Bohlmann J, Falara V, Matsuba Y, Pichersky E. RNA-seq discovery, functional characterization, and comparison of sesquiterpene synthases from Solanum lycopersicum and Solanum habrochaites trichomes. Plant Mol Biol. 2011;77:323.
Seo M, Koiwai H, Akaba S, Komano T, Oritani T, Kamiya Y, Koshiba T. Abscisic aldehyde oxidase in leaves of Arabidopsis thaliana. Plant J. 2000;23:481–8.
Prerostova S, Dobrev PI, Gaudinova A, Knirsch V, Körber N, Pieruschka R, Fiorani F, Brzobohatý B, Spichal L, Humplik J. Cytokinins: Their impact on molecular and growth responses to drought stress and recovery in Arabidopsis. Front Plant Sci. 2018;9:655.
Lisón P, Rodrigo I, Conejero V. A novel function for the cathepsin D inhibitor in tomato. Plant Physiol. 2006;142:1329–39.
Takagi D, Miyake C. Proton gradient regulation 5 supports linear electron flow to oxidize photosystem I. Physiol Plant. 2018;164:337–48.
Reynolds JJ, Bicknell LS, Carroll P, Higgs MR, Shaheen R, Murray JE, Papadopoulos DK, Leitch A, Murina O, Tarnauskaitė Ž. Mutations in DONSON disrupt replication fork stability and cause microcephalic dwarfism. Nat Genet. 2017;49:537–49.
Gardan R, Rapoport G, Débarbouillé M. Expression of the rocDEF operon involved in arginine catabolism in Bacillus subtilis. J Mol Biol. 1995;249:843–56.
Yoshihara T, Spalding EP, Iino M. AtLAZY1 is a signaling component required for gravitropism of the Arabidopsis thaliana inflorescence. Plant J. 2013;74:267–79.
Eser BE, Zhang X, Chanani PK, Begley TP, Ealick SE. From suicide enzyme to catalyst: the iron-dependent sulfide transfer in Methanococcus jannaschii thiamin thiazole biosynthesis. J Am Chem Soc. 2016;138:3639–42.
Yang W, Jiang D, Jiang J, He Y. A plant-specific histone H3 lysine 4 demethylase represses the floral transition in Arabidopsis. Plant J. 2010;62:663–73.
Craig EA, Stevens MV, Vaillancourt RR, Camenisch TD. MAP3Ks as central regulators of cell fate during development. Developmental dynamics: an official publication of the American Association of Anatomists. 2008;237:3102–14.
Alefounder P, Baldwin S, Perham R, Short N. Cloning, sequence analysis and over-expression of the gene for the class II fructose 1,6-bisphosphate aldolase of Escherichia coli. Biochem J. 1989;257:529–34.
Liao T-H, Barber G. Purification of guanosine 5′-diphosphate d-mannose oxidoreductase from Phaseolus vulgaris. Biochim Biophys Acta. 1972;276:85–93.
Weis C, Hückelhoven R, Eichmann R. LIFEGUARD proteins support plant colonization by biotrophic powdery mildew fungi. J Exp Bot. 2013;64:3855–67.
Stumpf P, Horecker B. The role of xylulose 5-phosphate in xylose metabolism of Lactobacillus pentosus. J Biol Chem. 1956;218:753–68.
Reimann R, Kost B, Dettmer J. Tetraspanins in plants. Front Plant Sci. 2017;8:545.
Zhao J, Liu M. Variation of mineral element contents in Chinese jujube with witches' broom disease. In: Proceedings of the I International Jujube Symposium 840. p. 399–404.
Naito T, Tanaka M, Taba S, Toyosato T, Oshiro A, Takaesu K, Hokama K, Usugi T, Kawano S. Occurrence of chrysanthemum virescence caused by "Candidatus Phytoplasma aurantifolia" in Okinawa. J Gen Plant Pathol. 2007;73:139–41.
Mohali S, Slippers B, Wingfield MJ. Identification of Botryosphaeriaceae from Eucalyptus, Acacia and Pinus in Venezuela. Fungal Diversity. 2007;25:103–25.
Zhou S, Stanosz GR. Relationships among Botryosphaeria species and associated anamorphic fungi inferred from the analyses of ITS and 5.8S rDNA sequences. Mycologia. 2001;93:516–27.
Phillips A, Alves A, Abdollahzadeh J, Slippers B, Wingfield MJ, Groenewald J, Crous PW. The Botryosphaeriaceae: genera and species known from culture. Stud Mycol. 2013;76:51–167.
Rashmi M, Kushveer J, Sarma V. A worldwide list of endophytic fungi with notes on ecology and diversity. Mycosphere. 2019;10:798–1079.
Gardner DE. Botryosphaeria mamane sp. nov. associated with witches'-brooms on the endemic forest tree Sophora chrysophylla in Hawaii. Mycologia. 1997;89:298–303.
Correia KC, Câmara MPS, Barbosa MAG, Sales R Jr, Agusti-Brisach C, Gramaje D, Leon M, Garcia-Jimenez J, Abad-Campos P, Armengol J. Fungal trunk pathogens associated with table grape decline in North-eastern Brazil. Phytopathol Mediterr. 2013:380–7.
Medeiros F, Pomella A, De Souza J, Niella G, Valle R, Bateman R, Fravel D, Vinyard B, Hebbar P. A novel, integrated method for management of witches' broom disease in Cacao in Bahia. Brazil Crop Protection. 2010;29:704–11.
Sousa Filho HR, de Jesus RM, Bezerra MA, Santana GM, de Santana RO. History, dissemination, and field control strategies of cocoa witches' broom. Plant Pathol. 2021;70:1971–8.
Meng X, Zhang S. MAPK cascades in plant disease resistance signaling. Annu Rev Phytopathol. 2013;51:245–66.
Zhang J, Zhou J-M. Plant immunity triggered by microbial molecular signatures. Mol Plant. 2010;3:783–93.
Liu Z, Zhao Z, Xue C, Wang L, Wang L, Feng C, Zhang L, Yu Z, Zhao J, Liu M. Three Main genes in the MAPK Cascade involved in the Chinese jujube-Phytoplasma interaction. Forests. 2019;10:392.
Brader G, Djamei A, Teige M, Palva ET, Hirt H. The MAP kinase kinase MKK2 affects disease resistance in Arabidopsis. Mol Plant Microbe Interact. 2007;20:589–96.
Li X, Zhang Y, Huang L, Ouyang Z, Hong Y, Zhang H, Li D, Song F. Tomato SlMKK2 and SlMKK4 contribute to disease resistance against Botrytis cinerea. BMC Plant Biol. 2014;14:1–17.
Awwad F, Bertrand G, Grandbois M, Beaudoin N. Reactive oxygen species alleviate cell death induced by thaxtomin A in Arabidopsis thaliana cell cultures. Plants. 2019;8:332.
Gechev TS, Hille J. Hydrogen peroxide as a signal controlling plant programmed cell death. J Cell Biol. 2005;168:17–20.
Liu Y, He C. A review of redox signaling and the control of MAP kinase pathway in plants. Redox Biol. 2017;11:192–204.
Takatsuji H, Jiang C-J. Plant hormone crosstalks under biotic stresses. In: Phytohormones: a window to metabolism, signaling and biotechnological applications. 2014. p. 323–50.
Sung Y-C, Lin C-P, Hsu H-J, Chen Y-L, Chen J-C. Silencing of CrNPR1 and CrNPR3 alters plant susceptibility to periwinkle leaf yellowing phytoplasma. Front Plant Sci. 2019;10:1183.
Dos Santos EC, Pirovani CP, Correa SC, Micheli F, Gramacho KP. The pathogen Moniliophthora perniciosa promotes differential proteomic modulation of cacao genotypes with contrasting resistance to witches broom disease. BMC Plant Biol. 2020;20:1–21.
Ye X, Wang H, Chen P, Fu B, Zhang M, Li J, Zheng X, Tan B, Feng J. Combination of iTRAQ proteomics and RNA-seq transcriptomics reveals multiple levels of regulation in phytoplasma-infected Ziziphus jujuba Mill. Horticulture research. 2017;4:1–13.
Fan G, Xu E, Deng M, Zhao Z, Niu S. Phenylpropanoid metabolism, hormone biosynthesis and signal transduction-related genes play crucial roles in the resistance of Paulownia fortunei to paulownia witches' broom phytoplasma infection. Genes & Genomics. 2015;37:913–29.
Huang Y, Li H, Hutchison CE, Laskey J, Kieber JJ. Biochemical and functional analysis of CTR1, a protein kinase that negatively regulates ethylene signaling in Arabidopsis. Plant J. 2003;33:221–33.
Zhao Y, Sun Q, Davis R, Lee I-M, Liu Q. First report of witches'-broom disease in a Cannabis spp. in China and its association with a phytoplasma of elm yellows group (16SrV). Plant Dis. 2007;91:227.
Staskawicz BJ. Genetics of plant-pathogen interactions specifying plant disease resistance. Plant Physiol. 2001;125:73–6.
Zhang L, Du L, Poovaiah B. Calcium signaling and biotic defense responses in plants. Plant Signal Behav. 2014;9: e973818.
Chang Y, Li B, Shi Q, Geng R, Geng S, Liu J, Zhang Y, Cai Y. Comprehensive analysis of respiratory burst oxidase homologs (Rboh) gene family and function of GbRboh5/18 on Verticillium wilt resistance in Gossypium barbadense. Front Genet. 2020;11.
Rossi FR, Gárriz A, Marina M, Romero FM, Gonzalez ME, Collado IG, Pieckenstain FL. The sesquiterpene botrydial produced by Botrytis cinerea induces the hypersensitive response on plant tissues and its action is modulated by salicylic acid and jasmonic acid signaling. Mol Plant Microbe Interact. 2011;24:888–96.
Yeh Y-H, Chang Y-H, Huang P-Y, Huang J-B, Zimmerli L. Enhanced Arabidopsis pattern-triggered immunity by overexpression of cysteine-rich receptor-like kinases. Front Plant Sci. 2015;6:322.
Yoda H, Ogawa M, Yamaguchi Y, Koizumi N, Kusano T, Sano H. Identification of early-responsive genes associated with the hypersensitive response to tobacco mosaic virus and characterization of a WRKY-type transcription factor in tobacco plants. Mol Genet Genomics. 2002;267:154–61.
da Hora Junior BT, de Faria Poloni J, Lopes MA, Dias CV, Gramacho KP, Schuster I, Sabau X, Cascardo JCDM, Di Mauro SMZ, da Silva Gesteira A. Transcriptomics and systems biology analysis in identification of specific pathways involved in cacao resistance and susceptibility to witches' broom disease. Mol Biosyst. 2012;8:1507–19.
Chen P, Chen L, Ye X, Tan B, Zheng X, Cheng J, Wang W, Yang Q, Zhang Y, Li J. Phytoplasma effector Zaofeng6 induces shoot proliferation by decreasing the expression of ZjTCP7 in Ziziphus jujuba. Hortic Res. 2022;9.
Jones JD, Dangl JL. The plant immune system. Nature. 2006;444:323–9.
Yang H, Zhao T, Jiang J, Chen X, Zhang H, Liu G, Zhang D, Du C, Wang S, Xu X. Transcriptome analysis of the Sm-mediated hypersensitive response to Stemphylium lycopersici in tomato. Front Plant Sci. 2017;8:1257.
Dixon RA, Achnine L, Kota P, Liu CJ, Reddy MS, Wang L. The phenylpropanoid pathway and plant defence—a genomics perspective. Mol Plant Pathol. 2002;3:371–90.
Zhang G, Zhang Y, Xu J, Niu X, Qi J, Tao A, Zhang L, Fang P, Lin L, Su J. The CCoAOMT1 gene from jute (Corchorus capsularis L.) is involved in lignin biosynthesis in Arabidopsis thaliana. Gene. 2014;546:398–402.
Tronchet M, Balague C, Kroj T, Jouanin L, Roby D. Cinnamyl alcohol dehydrogenases-C and D, key enzymes in lignin biosynthesis, play an essential role in disease resistance in Arabidopsis. Mol Plant Pathol. 2010;11:83–92.
Nawaz MA, Rehman HM, Imtiaz M, Baloch FS, Lee JD, Yang SH, Lee SI, Chung G. Systems identification and characterization of cell wall reassembly and degradation related genes in Glycine max (L.) Merill, a bioenergy legume. Sci Rep. 2017;7:1–16.
Anderson NA, Bonawitz ND, Nyffeler K, Chapple C. Loss of ferulate 5-hydroxylase leads to Mediator-dependent inhibition of soluble phenylpropanoid biosynthesis in Arabidopsis. Plant Physiol. 2015;169:1557–67.
Tohge T, de Souza LP, Fernie AR. Current understanding of the pathways of flavonoid biosynthesis in model and crop plants. J Exp Bot. 2017;68:4013–28.
Treutter D. Significance of flavonoids in plant resistance and enhancement of their biosynthesis. Plant Biol. 2005;7:581–91.
Mata-Pérez C, Sánchez-Calvo B, Begara-Morales JC, Luque F, Jiménez-Ruiz J, Padilla MN, Fierro-Risco J, Valderrama R, Fernández-Ocaña A, Corpas FJ. Transcriptomic profiling of linolenic acid-responsive genes in ROS signaling from RNA-seq data in Arabidopsis. Front Plant Sci. 2015;6:122.
Puentes A, Zhao T, Lundborg L, Björklund N, Borg-Karlson A-K. Variation in methyl jasmonate-induced defense among Norway spruce clones and trade-offs in resistance against a fungal and an insect pest. Front Plant Sci. 2021;12:962.
Sweetlove LJ, Beard KF, Nunes-Nesi A, Fernie AR, Ratcliffe RG. Not just a circle: flux modes in the plant TCA cycle. Trends Plant Sci. 2010;15:462–70.
Fernie AR, Carrari F, Sweetlove LJ. Respiratory metabolism: glycolysis, the TCA cycle and mitochondrial electron transport. Curr Opin Plant Biol. 2004;7:254–61.
Pétriacq P, de Bont L, Hager J, Didierlaurent L, Mauve C, Guérard F, Noctor G, Pelletier S, Renou JP, Tcherkez G. Inducible NAD overproduction in Arabidopsis alters metabolic pools and gene expression correlated with increased salicylate content and resistance to Pst-AvrRpm1. Plant J. 2012;70:650–65.
Miwa A, Sawada Y, Tamaoki D, Hirai MY, Kimura M, Sato K, Nishiuchi T. Nicotinamide mononucleotide and related metabolites induce disease resistance against fungal phytopathogens in Arabidopsis and barley. Sci Rep. 2017;7:1–12.
Zhang X, Mou Z. Extracellular pyridine nucleotides induce PR gene expression and disease resistance in Arabidopsis. Plant J. 2009;57:302–12.
Katoh A, Hashimoto T. Molecular biology of pyridine nucleotide and nicotine biosynthesis. Front Biosci. 2004;9:1577–86.
Zafar SA, Zaidi SS-e-A, Gaba Y, Singla-Pareek SL, Dhankher OP, Li X, Mansoor S, Pareek A. Engineering abiotic stress tolerance via CRISPR/Cas-mediated genome editing. J Exp Bot. 2020;71:470–9.
Griffiths RI, Whiteley AS, O'Donnell AG, Bailey MJ. Rapid method for coextraction of DNA and RNA from natural environments for analysis of ribosomal DNA-and rRNA-based microbial community composition. Appl Environ Microbiol. 2000;66:5488–91.
Monard C, Gantner S, Stenlid J. Utilizing ITS1 and ITS2 to study environmental fungal diversity using pyrosequencing. FEMS Microbiol Ecol. 2013;84:165–75.
Gardes M, Bruns TD. ITS primers with enhanced specificity for basidiomycetes-application to the identification of mycorrhizae and rusts. Mol Ecol. 1993;2:113–8.
Zhu H, Li B, Ding N, Hua Z, Jiang X. A Case Study on Microbial Diversity Impacts of a Wastewater Treatment Plant to the Receiving River. Journal of Geoscience and Environment Protection. 2021;9:206–20.
Magoč T, Salzberg SL. FLASH: fast length adjustment of short reads to improve genome assemblies. Bioinformatics. 2011;27:2957–63.
Edgar RC. UPARSE: highly accurate OTU sequences from microbial amplicon reads. Nat Methods. 2013;10:996–8.
Edgar RC. Search and clustering orders of magnitude faster than BLAST. Bioinformatics. 2010;26:2460–1.
Lan Y, Wang Q, Cole JR, Rosen GL. Using the RDP classifier to predict taxonomic novelty and reduce the search space for finding novel organisms. PLoS ONE. 2012;7: e32491.
Minh BQ, Schmidt HA, Chernomor O, Schrempf D, Woodhams MD, Von Haeseler A, Lanfear R. IQ-TREE 2: new models and efficient methods for phylogenetic inference in the genomic era. Mol Biol Evol. 2020;37:1530–4.
Katoh K, Rozewicki J, Yamada KD. MAFFT online service: multiple sequence alignment, interactive sequence choice and visualization. Brief Bioinform. 2019;20:1160–6.
Park J, Kim H-J, Huh YH, Kim KW. Ultrastructure of phytoplasma-infected jujube leaves with witches' broom disease. Micron. 2021;148: 103108.
Johnson M, Zaretskaya I, Raytselis Y, Merezhuk Y, McGinnis S, Madden TL. NCBI BLAST: a better web interface. Nucleic Acids Res. 2008;36:W5–9.
Kanehisa M, Goto S. KEGG: kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000;28:27–30.
Deng Y, Li J, Wu S, Zhu Y, Chen Y, He F. Integrated nr database in protein annotation system and its localization. Comput Eng. 2006;32:71–2.
Apweiler R. Functional information in SWISS-PROT: the basis for large-scale characterisation of protein sequences. Brief Bioinform. 2001;2:9–18.
Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight SS, Eppig JT. Gene ontology: tool for the unification of biology. Nat Genet. 2000;25:25–9.
Koonin EV, Fedorova ND, Jackson JD, Jacobs AR, Krylov DM, Makarova KS, Mazumder R, Mekhedov SL, Nikolskaya AN, Rao BS. A comprehensive evolutionary classification of proteins encoded in complete eukaryotic genomes. Genome Biol. 2004;5:1–28.
Apweiler R, Bairoch A, Wu CH, Barker WC, Boeckmann B, Ferro S, Gasteiger E, Huang H, Lopez R, Magrane M. UniProt: the universal protein knowledgebase. Nucleic Acids Res. 2004;32:D115–9.
Bateman A, Birney E, Cerruti L, Durbin R, Etwiller L, Eddy SR, Griffiths-Jones S, Howe KL, Marshall M, Sonnhammer EL. The Pfam protein families database. Nucleic Acids Res. 2002;30:276–80.
Langmead B, Salzberg SL. Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012;9:357–9.
Li B, Dewey CN. RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome. BMC Bioinformatics. 2011;12:1–16.
Varet H, Brillet-Guéguen L, Coppée J-Y, Dillies M-A. SARTools: a DESeq2-and edgeR-based R pipeline for comprehensive differential analysis of RNA-Seq data. PLoS ONE. 2016;11: e0157022.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Roy Stat Soc: Ser B (Methodol). 1995;57:289–300.
Heberle H, Meirelles GV, da Silva FR, Telles GP, Minghim R. InteractiVenn: a web-based tool for the analysis of sets through Venn diagrams. BMC Bioinformatics. 2015;16:1–7.
Xie C, Mao X, Huang J, Ding Y, Wu J, Dong S, Kong L, Gao G, Li C-Y, Wei L. KOBAS 2.0: a web server for annotation and identification of enriched pathways and diseases. Nucleic Acids Res. 2011;39:W316–22.
Zheng Y, Jiao C, Sun H, Rosli HG, Pombo MA, Zhang P, Banf M, Dai X, Martin GB, Giovannoni JJ. iTAK: a program for genome-wide prediction and classification of plant transcription factors, transcriptional regulators, and protein kinases. Mol Plant. 2016;9:1667–70.
Zhou Z, Gao H, Ming J, Ding Z, Zhan R. Combined Transcriptome and Metabolome analysis of Pitaya fruit unveiled the mechanisms underlying Peel and pulp color formation. BMC Genomics. 2020;21:1–17.
Chen L, Wu Q, He W, He T, Wu Q, Miao Y. Combined de novo transcriptome and metabolome analysis of common bean response to Fusarium oxysporum f. sp. phaseoli infection. Int J Mol Sci. 2019;20:6278.
This study was financially supported by A Specific Program for National Non-profit Scientific Institutions (CAFYBB2019SY021).
The Key Laboratory of National Forestry and Grassland Administration for Tropical Forestry Research, Research Institute of Tropical Forestry, Chinese Academy of Forestry, Longdong, Guangzhou, 510520, China
Shengjie Wang, Shengkun Wang, Ming Li, Yuhang Su, Zhan Sun & Haibin Ma
Conceptualization, H.M.; methodology, S.W. and S.W.; software, M.L. and Y.S.; validation, Z.S., H.M., M.L. and S.W.; formal analysis, S.W., Y.S. and M.L.; investigation, H.M., Y.S. and Z.S.; resources, H.M.; data curation, S.W.; writing—original draft preparation, S.W. and H.M.; writing—review and editing, S.W. and H.M.; visualization, Y.S. and Z.S.; supervision, M.L., S.W. and Z.S.; project administration, Z.S. and H.M.; funding acquisition, H.M. All authors have read and approved the final version of the manuscript.
Correspondence to Haibin Ma.
No approval was required by the host institute or the local, provincial, or national government for the collection of the samples. This study did not include any animal or human subjects.
No verbal or written consent was needed to publish the results.
Competing interests
Additional file 1: Supplementary Table 1.
Summary of transcriptome sequencing of infected and non-infected Nerium indicum L. tissues. Supplementary Table 2. Differential expression of genes between infected (NOWP) and non-infected (NOHP) N. indicum phloem. Supplementary Table 3. Differential expression of genes between infected (NOWS) and non-infected (NOHS) N. indicum stem. Supplementary Table 4. List of differentially expressed genes that were enriched in specific pathways in WBD infected N. indicum phloem (NOWP) and stem tip (NOWS) as compared to controls i.e., NOHP and NOHS, respectively. Supplementary Table 5. Details of the differentially accumulated metabolites between infected (NOWP) and non-infected (NOHP) N. indicum phloem. Supplementary Table 6. Details of the differentially accumulated metabolites between infected (NOWS) and non-infected (NOHS) N. indicum stem tip. Supplementary Table 7. Pathway specific differential accumulation of metabolites in infected N. indicum phloem and stem as compared to non-infected tissues.
Supplementary Figure 1. A PCR analysis of Jujube (diseased tissues), Paulownia arbuscular diseased tissues, and N. indicum tissues. Well # 1: Ladder, 2: Jujube sample, 3: Paulownia arbuscular diseased sample, 4-7: diseased N. indicum, 8: healthy N. indicum, 9: Jujube sample, and 10: Paulownia arbuscular diseased sample. B Nested PCR detection assay (M: Ladder, 1-4: diseased N. indicum tissues, 5: Paulownia arbuscular diseased tissues). Supplementary Figure 2. Summary of annotation; number of Nerium indicum L. genes annotated in different databases. Supplementary Figure 3. Scatter plots showing KEGG pathways in which the differentially expressed genes were enriched in NOHP vs NOWP and NOHS vs NOWS. NOHS, NOWS, NOHP, and NOWP represent non-infected stem, WBD infected stem, non-infected phloem, and WBD infected phloem, respectively. Supplementary Figure 4. Scatter plots showing KEGG pathways in which the specifically expressed genes were enriched in A) NOHP, B) NOWP, C) NOHS, and D) NOWS. NOHS, NOWS, NOHP, and NOWP represent non-infected stem, WBD infected stem, non-infected phloem, and WBD infected phloem, respectively. Supplementary Figure 5. OPLS-DA of the metabolites that were differentially accumulated between A NOHP vs NOWP and B NOHS vs NOWS, where NOWS, NOHS, NOWP, and NOHP represent infected stem tip, healthy stem tip, infected phloem, and healthy phloem of N. indicum.
Wang, S., Wang, S., Li, M. et al. Combined transcriptome and metabolome analysis of Nerium indicum L. elaborates the key pathways that are activated in response to witches' broom disease. BMC Plant Biol 22, 291 (2022). https://doi.org/10.1186/s12870-022-03672-z
Plant-pathogen interaction
Defense responses
MAPK-signaling cascade
Lignans and coumarins
Nerium spp | CommonCrawl |
Communications on Pure & Applied Analysis
March 2002, Volume 1, Issue 1
A semi-implicit moving mesh method for the focusing nonlinear Schrödinger equation
Hector D. Ceniceros
2002, 1(1): 1-18. doi: 10.3934/cpaa.2002.1.1
An efficient adaptive moving mesh method for investigation of the semi-classical limit of the focusing nonlinear Schrödinger equation is presented. The method employs a dynamic mesh to resolve the sea of solitons observed for small dispersion parameters. A second order semi-implicit discretization is used in conjunction with a dynamic mesh generator to achieve a cost-efficient, accurate, and stable adaptive scheme. This method is used to investigate with highly resolved numerics the solution's behavior for small dispersion parameters. Convincing evidence is presented of striking regular space-time patterns for both analytic and non-analytic initial data.
Hector D. Ceniceros. A semi-implicit moving mesh method for the focusing nonlinear Schr\u00F6dinger equation. Communications on Pure & Applied Analysis, 2002, 1(1): 1-18. doi: 10.3934/cpaa.2002.1.1.
An application of homogenization techniques to population dynamics models
B. E. Ainseba, W. E. Fitzgibbon, M. Langlais and J. J. Morgan
2002, 1(1): 19-33. doi: 10.3934/cpaa.2002.1.19
We are interested in partial differential equations and systems of partial differential equations arising in some population dynamics models, for populations living in heterogeneous spatial domains. Discontinuities appear in the coefficients of divergence form operators and in reaction terms as well. Global well-posedness results are given. For models offering a great degree of heterogeneity, we derive simpler models with constant coefficients by applying the homogenization method. Long term behavior is then analyzed.
B. E. Ainseba, W. E. Fitzgibbon, M. Langlais, J. J. Morgan. An application of homogenization techniques to population dynamics models. Communications on Pure & Applied Analysis, 2002, 1(1): 19-33. doi: 10.3934/cpaa.2002.1.19.
Global well-posedness of weak solutions for the Lagrangian averaged Navier-Stokes equations on bounded domains
Daniel Coutand, J. Peirce and Steve Shkoller
In this paper, we study the Lagrangian averaged Navier-Stokes (LANS-$\alpha$) equations on bounded domains. The LANS-$\alpha$ equations are able to accurately reproduce the large-scale motion (at scales larger than $\alpha >0$) of the Navier-Stokes equations while filtering or averaging over the motion of the fluid at scales smaller than α, an a priori fixed spatial scale.
We prove the global well-posedness of weak $H^1$ solutions for the case of no-slip boundary conditions in three dimensions, generalizing the periodic-box results of [8]. We make use of the new formulation of the LANS-$\alpha$ equations on bounded domains given in [20] and [14], which reveals the additional boundary conditions necessary to obtain well-posedness. The uniform estimates yield global attractors; the bound for the dimension of the global attractor in 3D exactly follows the periodic box case of [8]. In 2D, our bound is $\alpha$-independent and is similar to the bound for the global attractor for the 2D Navier-Stokes equations.
Daniel Coutand, J. Peirce, Steve Shkoller. Global well-posedness of weak solutions for the Lagrangian averaged Navier-Stokes equations on bounded domains. Communications on Pure & Applied Analysis, 2002, 1(1): 35-50. doi: 10.3934/cpaa.2002.1.35.
Boundary layers in weak solutions of hyperbolic conservation laws II. self-similar vanishing diffusion limits
K. T. Joseph and Philippe G. LeFloch
This paper is concerned with the boundary layers that arise in solutions of a nonlinear hyperbolic system of conservation laws in the presence of vanishing diffusion. We consider self-similar solutions of the Riemann problem in a half-space, following a pioneering idea by Dafermos for the standard Riemann problem. The system is strictly hyperbolic but no assumption of genuine nonlinearity is made; moreover, the boundary is possibly characteristic, that is, the wave speeds do not have a specific sign near the (stationary) boundary.
First, we generalize a technique due to Tzavaras and show that the boundary Riemann problem with diffusion admits a family of continuous solutions that remain uniformly bounded in the total variation norm. Careful estimates are necessary to cope with waves that collapse at the boundary and generate the boundary layer.
Second, we prove the convergence of these continuous solutions toward weak solutions of the Riemann problem when the diffusion parameter approaches zero. Following Dubois and LeFloch, we formulate the boundary condition in a weak form, based on a set of admissible boundary traces. Following Part I of this work, we identify and rigorously analyze the boundary set associated with the zero-diffusion method. In particular, our analysis fully justifies the use of the scaling $1/\varepsilon$ near the boundary (where $\varepsilon$ is the diffusion parameter), even in the characteristic case as advocated in Part I by the authors.
K. T. Joseph, Philippe G. LeFloch. Boundary layers in weak solutions of hyperbolic conservation laws II. self-similar vanishing diffusion limits. Communications on Pure & Applied Analysis, 2002, 1(1): 51-76. doi: 10.3934/cpaa.2002.1.51.
Global existence of solutions to a reaction diffusion system based upon carbonate reaction kinetics
Congming Li and Eric S. Wright
The carbonate system is an important reaction system in natural waters because it plays the role of a buffer, regulating the pH of the water. We present a global existence result for a system of partial differential equations that can be used to model the combined dynamics of diffusion, advection, and the reaction kinetics of the carbonate system.
Congming Li, Eric S. Wright. Global existence of solutions to a reaction diffusion system based upon carbonate reaction kinetics. Communications on Pure & Applied Analysis, 2002, 1(1): 77-84. doi: 10.3934/cpaa.2002.1.77.
Canonical forms and structure theorems for radial solutions to semi-linear elliptic problems
Y. Kabeya, Eiji Yanagida and Shoji Yotsutani
2002, 1(1): 85-102. doi: 10.3934/cpaa.2002.1.85
We propose a method to investigate the structure of positive radial solutions to semilinear elliptic problems with various boundary conditions. It has already been shown that the boundary value problems can be reduced to a canonical form by a suitable change of variables. We show structure theorems for canonical forms of equations with power nonlinearities and various boundary conditions. By using these theorems, it is possible to study the properties of radial solutions of semilinear elliptic equations in a systematic way, and to clarify the previously unknown structure of various equations.
Y. Kabeya, Eiji Yanagida, Shoji Yotsutani. Canonical forms and structure theorems for radial solutions to semi-linear elliptic problems. Communications on Pure & Applied Analysis, 2002, 1(1): 85-102. doi: 10.3934/cpaa.2002.1.85.
A new approach to study the Vlasov-Maxwell system
Sergiu Klainerman and Gigliola Staffilani
2002, 1(1): 103-125. doi: 10.3934/cpaa.2002.1.103
We give a new proof based on Fourier Transform of the classical Glassey and Strauss [6] global existence result for the 3D relativistic Vlasov-Maxwell system, under the assumption of compactly supported particle densities. Though our proof is not substantially shorter than that of [6], we believe it adds a new perspective to the problem. In particular the proof is based on three main observations, see Facts 1-3 following the statement of Theorem 1.4, which are of independent interest.
Sergiu Klainerman, Gigliola Staffilani. A new approach to study the Vlasov-Maxwell system. Communications on Pure & Applied Analysis, 2002, 1(1): 103-125. doi: 10.3934/cpaa.2002.1.103.
Optimal regularity of solution to a degenerate elliptic system arising in electromagnetic fields
H. M. Yin
In this paper we prove a fundamental estimate for the weak solution of a degenerate elliptic system: $\nabla\times [\rho(x)\nabla\times H]=F$, $\nabla\cdot H=0$ in a bounded domain in $R^3$, where $\rho(x)$ is only assumed to be in $L^{\infty}$ with a positive lower bound. This system is the steady-state of Maxwell's system for the evolution of a magnetic field $H$ under the influence of an external force $F$, where $\rho(x)$ represents the resistivity of the conductive material. By using Campanato-type techniques, we show that the weak solution to the system is Hölder continuous, which is optimal under this assumption. This result solves the regularity problem for the system under the minimum assumption on the coefficient. Some applications arising in inductive heating are presented.
H. M. Yin. Optimal regularity of solution to a degenerate elliptic system arising in electromagnetic fields. Communications on Pure & Applied Analysis, 2002, 1(1): 127-134. doi: 10.3934/cpaa.2002.1.127.
An Example of a Nested Decreasing Sequence of Bounded Closed Sets with Empty Intersection
Could someone provide me with an example of a metric space having a nested decreasing sequence of bounded closed sets with empty intersection? I first thought of the Cantor set, but the intersection is not empty!
real-analysis metric-spaces examples-counterexamples
Martin Sleziak
$\begingroup$ Can you think of sets which are bounded and closed, but not compact? Because if you ever include a compact set in your sequence, Cantor's intersection theorem will imply there is a non-empty intersection. (As is noted in an answer, we can't do this in $\mathbb R$ or $\mathbb R^n$ due to the Heine-Borel theorem) $\endgroup$ – Milo Brandt Aug 15 '15 at 3:49
$\begingroup$ @MiloBrandt: Your comment was indeed what I was hinting at in my answer. $\endgroup$ – user21820 Aug 15 '15 at 11:02
Let $\mathbb N$ be endowed with the discrete metric. In this metric space, every subset is bounded (although not necessarily totally bounded) and closed. Moreover, the subsets \begin{align*} A_1\equiv&\,\{1,2,3,4,\ldots\},\\ A_2\equiv&\,\{\phantom{1,\,}2,3,4,\ldots\},\\ A_3\equiv&\,\{\phantom{1,2,\,}3,4,\ldots\},\\ \vdots&\, \end{align*} are nested, and their intersection is empty.
However, if you stay within the realm of $\mathbb R$ endowed with the usual Euclidean metric, than you can't have a situation like the one above:
Claim: Suppose that $$A_1\supseteq A_2\supseteq A_3\supseteq\ldots$$ is a countable family of non-empty, closed, bounded subsets of $\mathbb R$. Then, $\bigcap_{n=1}^{\infty} A_n\neq\varnothing$.
Proof: By the Heine–Borel theorem, $A_n$ is compact for each $n\in\mathbb N$. For the sake of contradiction, suppose that $\bigcap_{n=1}^{\infty} A_n=\varnothing$. This is equivalent to $\bigcup_{n=1}^{\infty} A_n^{\mathsf c}=\mathbb R$. In particular, $$A_1\subseteq\bigcup_{n=1}^{\infty} A_n^{\mathsf c}.$$ Since $A_1$ is compact and the sets $(A_n^{\mathsf c})_{n=1}^{\infty}$ form an open cover of it, there must exist a finite subcover. That is, there exists some $m\in\mathbb N$ such that $$A_1\subseteq\bigcup_{n=1}^m A_n^{\mathsf c}=A_m^{\mathsf c},$$ where the second equality follows from the fact that $$A_1^{\mathsf c}\subseteq A_2^{\mathsf c}\subseteq A_3^{\mathsf c}\subseteq\ldots.$$ Now, $A_1\subseteq A_m^{\mathsf c}$ means that if a point is in $A_1$, then it must not be in $A_m$, so that $A_1\cap A_m=\varnothing$. But $A_m\subseteq A_1$ (given that the sets are nested), so that $A_1\cap A_m= A_m=\varnothing$, which contradicts the assumption that $A_m$ is not empty. This contradiction reveals that the intersection $\bigcap_{n=1}^{\infty} A_n$ must not be empty. $\quad\blacksquare$
triple_sec
$\begingroup$ Are these sets bounded? What I see is they are bounded from below not from above. Could You please explain why they are bounded? $\endgroup$ – Fabian Aug 15 '15 at 3:57
$\begingroup$ @Fabian You're thinking in terms of the Euclidean metric, in which case each set is unbounded, indeed. However, consider the discrete metric on $\mathbb N$. This is defined as $d(m,n)=1$ if $m\neq n$ and $d(m,n)=0$ if $m=n$. It is not difficult to show that this is a legitimate metric. Also, the distance between any two distinct points is 1. This implies that if you take a "ball" of "diameter" 2 around the point $n=1$, then this ball will contain the whole space! That is, $$\{m\in\mathbb N\,|\,d(m,n)<2\}=\mathbb N,$$ and the whole space fits into a ball, which implies boundedness. $\endgroup$ – triple_sec Aug 15 '15 at 4:05
$\begingroup$ Got it! Great explanation! Thank you triple_sec . $\endgroup$ – Fabian Aug 15 '15 at 4:09
$\begingroup$ @Fabian I added a proof that the intersection may never be empty if you stay in $\mathbb R$. $\endgroup$ – triple_sec Aug 15 '15 at 4:17
Another simple example is to look at the "punctured line": $(-\infty, 0) \cup (0, \infty)$, which is just the real numbers with $0$ removed. The sets $A_n = \{ x \ \in \Bbb R \,|\, |x| \le 1/n \text{ and } x \ne 0 \}$ are closed and bounded in the punctured line, but their intersection is empty.
Paul Sinclair
Do you know a theorem about nested bounded closed sets having non-empty intersection? If you do then you would need to find a metric space that does not satisfy the conditions of that theorem. $\mathbb{R}$ satisfies that theorem and hence you're not going to find a counter-example there. But there is a smaller metric space sitting inside it, namely $\mathbb{Q}$, that will give you a counter-example.
$\begingroup$ I do not understand the downvote. I feel that my hint (without explicitly stating $\mathbb{Q}$) was just right, rather than just giving a complete solution like the other answerers that leave nothing for the asker to try. $\endgroup$ – user21820 Aug 15 '15 at 11:01
$\begingroup$ Hello user21820, I upvoted your answer, and sorry to see users downvote for no reason. Thank you for your hint. $\endgroup$ – MATH Aug 15 '15 at 17:45
$\begingroup$ @MATH: Thanks. Anyway the downvoter removed his downvote already, so perhaps he is happier with the explicit example of $\mathbb{Q}$. =) $\endgroup$ – user21820 Aug 16 '15 at 2:21
I initially missed the fact that bounded sets were desired.
Let the metric space be $(0,1)$, the set of all real numbers between $0$ and $1$, not including the endpoints, with the usual metric $d(x,y)=|x-y|$.
Then the example can be that the $n$th set is $(0,\ 1/n]$. This is closed within this space; it contains all of its limit points in the space.
The first thing I thought of was $\displaystyle \bigcap_{n=0}^\infty [n,+\infty)$.
If the object called $+\infty$ were included in the space, with the appropriate topology so that $n\to\infty$, then these sets would not be closed, but would become closed if one added $+\infty$ to them as a new member, and then the intersection would not be empty because $+\infty$ would be a member of it.
Michael Hardy
$\begingroup$ The sets should be bounded. $\endgroup$ – Carl Mummert Dec 3 '15 at 21:30
If $(X,d)$ is a metric space then $$d'(x,y)=\min\{d(x,y),1\}$$ is a metric on $X$, as well. Moreover, the metrics $d$ and $d'$ generate the same topology (hence the same subsets of $X$ are closed in $(X,d)$ and in $(X,d')$).
Every subset is bounded in $(X,d')$.
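The only metric axiom for $d'$ that takes a moment to verify is the triangle inequality, and it follows directly from that of $d$: $$d'(x,z)=\min\{d(x,z),1\}\le\min\{d(x,y)+d(y,z),\,1\}\le\min\{d(x,y),1\}+\min\{d(y,z),1\}=d'(x,y)+d'(y,z),$$ where the last inequality holds because either one of the minima on the right equals $1$ (so the right-hand side is at least $1$, while the left-hand side is at most $1$), or both minima are the distances themselves.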
See also: Proof that every metric space is homeomorphic to a bounded metric space
So it suffices to find a metric space which contains some nested closed subsets with empty intersection. Then you can look at the same system of subsets in the modified metric, and you will have an example of closed bounded sets.
Martin Sleziak
Possibly a simpler example is the usual metric on the rationals, and the closed sets $$C_k=\left[{\lfloor \sqrt{2}k\rfloor\over k}, {\lceil \sqrt{2}k\rceil\over k }\right].$$ Intuitively, the intersection of the $C_k$s is $\{\sqrt 2\}$, but that's not rational, so in $\mathbb{Q}$ their intersection is empty.
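A quick numerical sketch makes the shrinking visible; the endpoints are rational by construction and squeeze toward $\sqrt 2$:

```python
import math

for k in (1, 10, 100, 1000):
    lo = math.floor(math.sqrt(2) * k) / k   # rational lower endpoint of C_k
    hi = math.ceil(math.sqrt(2) * k) / k    # rational upper endpoint of C_k
    print(k, lo, hi)                        # widths shrink to 0 around sqrt(2)
```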
Noah Schweber
$\begingroup$ Note that what's really being used here is that $\mathbb{Q}$ is incomplete - I'm essentially taking a non-convergent Cauchy sequence (one "converging" to $\sqrt{2}$) and turning it into a decreasing sequence of bounded closed sets with empty intersection. As e.g. Martin Sleziak's answer shows, this is not necessary - there are complete metric spaces in which nonetheless we have decreasing sequences of bounded closed sets with empty intersection. $\endgroup$ – Noah Schweber Dec 4 '15 at 0:07
The idea of a probability distribution
A random variable is a variable that is subject to variations due to random chance. One can think of a random variable as the result of a random experiment, such as rolling a die, flipping a coin, or picking a number from a given interval. The idea is that, each time you perform the experiment, you obtain a sample of the random variable. Since the variable is random, you expect to get different values as you obtain multiple samples. (Some values might be more likely than others, as in an experiment of rolling two six-sided dice and recording the sum of the resulting two numbers, where obtaining a value of 7 is much more likely than obtaining a value of 12.) A probability distribution is a function that describes how likely you will obtain the different possible values of the random variable.
It turns out that probability distributions have quite different forms depending on whether the random variable takes on discrete values (such as numbers from the set $\{1,2,3,4,5,6\}$) or takes on any value from a continuum (such as any real number in the interval $[0,1]$). Despite their different forms, one can do the same manipulations and calculations with either discrete or continuous random variables. The main difference is usually just whether one uses a sum or an integral.
Discrete probability distribution
A discrete random variable is a random variable that can take on any value from a discrete set of values. The set of possible values could be finite, such as in the case of rolling a six-sided die, where the values lie in the set $\{1,2,3,4,5,6\}$. However, the set of possible values could also be countably infinite, such as the set of integers $\{0, 1, -1, 2, -2, 3, -3, \ldots \}$. The requirement for a discrete random variable is that we can enumerate all the values in the set of its possible values, as we will need to sum over all these possibilities.
For a discrete random variable $X$, we form its probability distribution function by assigning a probability that $X$ is equal to each of its possible values. For example, for a six-sided die, we would assign a probability of $1/6$ to each of the six options. In the context of discrete random variables, we can refer to the probability distribution function as a probability mass function. The probability mass function $P(x)$ for a random variable $X$ is defined so that for any number $x$, the value of $P(x)$ is the probability that the random variable $X$ equals the given number $x$, i.e., \begin{align*} P(x) = \Pr(X = x). \end{align*} Often, we denote the random variable of the probability mass function with a subscript, so we may write \begin{align*} P_X(x) = \Pr(X = x). \end{align*}
For a function $P(x)$ to be a valid probability mass function, $P(x)$ must be non-negative for each possible value $x$. Moreover, the random variable must take on some value in the set of possible values with probability one, so we require that $P(x)$ must sum to one. In equations, the requirements are \begin{gather*} P(x) \ge 0 \quad \text{for all $x$}\\ \sum_x P(x) = 1, \end{gather*} where the sum is implicitly over all possible values of $X$.
For the example of rolling a six-sided die, the probability mass function is \begin{gather*} P(x) = \begin{cases} \frac{1}{6} & \text{if $x \in \{1,2,3,4,5,6\}$}\\ 0 & \text{otherwise.} \end{cases} \end{gather*}
If we rolled two six-sided dice, and let $X$ be the sum, then $X$ could take on any value in the set $\{2,3,4,5,6,7,8,9,10,11,12\}$. The probability mass function for this $X$ is \begin{gather*} P(x) = \begin{cases} \frac{1}{36} & \text{if $x \in \{2,12\}$}\\ \frac{2}{36}=\frac{1}{18} & \text{if $x \in \{3,11\}$}\\ \frac{3}{36}=\frac{1}{12} & \text{if $x \in \{4,10\}$}\\ \frac{4}{36}=\frac{1}{9} & \text{if $x \in \{5,9\}$}\\ \frac{5}{36} & \text{if $x \in \{6,8\}$}\\ \frac{6}{36} =\frac{1}{6} & \text{if $x = 7$}\\ 0 & \text{otherwise.} \end{cases} \end{gather*} $P(x)$ is plotted as a bar graph in the following figure.
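The two-dice mass function above can be checked by direct enumeration. A minimal sketch (using exact fractions so the total is visibly one):

```python
from collections import Counter
from fractions import Fraction

# Enumerate all 36 equally likely outcomes and tally the sums.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
pmf = {s: Fraction(c, 36) for s, c in counts.items()}

print(pmf[7], pmf[12])      # 1/6 1/36, matching the table above
print(sum(pmf.values()))    # 1 -- a valid probability mass function
```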
Continuous probability distribution
A continuous random variable is a random variable that can take on any value from a continuum, such as the set of all real numbers or an interval. We cannot form a sum over such a set of numbers. (There are too many, since such a continuum is uncountable.) Instead, we replace the sum used for discrete random variables with an integral over the set of possible values.
For a continuous random variable $X$, we cannot form its probability distribution function by assigning a probability that $X$ is exactly equal to each value. The probability distribution function we must use in this case is called a probability density function, which essentially assigns the probability that $X$ is near each value. For intuition behind why we must use such a density rather than assigning individual probabilities, see the page that describes the idea behind the probability density function.
Given the probability density function $\rho(x)$ for $X$, we determine the probability that $X$ is in any set $A$ (i.e., that $X \in A$) by integrating $\rho(x)$ over the set $A$, i.e., \begin{gather*} \Pr(X \in A) = \int_A \rho(x)dx. \end{gather*} Often, we denote the random variable of the probability density function with a subscript, so we may write \begin{gather*} \Pr(X \in A) = \int_A \rho_X(x)dx. \end{gather*}
The definition of this probability using an integral gives one important consequence for continuous random variables. If the set $A$ contains just a single element, we can immediately see that the probability that $X$ is equal to that one value is exactly zero, as the integral over a single point is zero. For a continuous random variable $X$, the probability that $X$ is any single value is always zero.
In other respects, the probability density function of a continuous random variable behaves just like the probability mass function for a discrete random variable, where we just need to use integrals rather than sums. For a function $\rho(x)$ to be a valid probability density function, $\rho(x)$ must be non-negative for each possible value $x$. Just as for discrete random variables, a continuous random variable must take on some value in the set of possible values with probability one. In this case, we require that $\rho(x)$ must integrate to one. In equations, the requirements are \begin{gather*} \rho(x) \ge 0 \quad \text{for all $x$}\\ \int \rho(x)dx = 1, \end{gather*} where the integral is implicitly over all possible values of $X$.
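As a concrete sketch (the density here is our illustration, not one taken from the text): for $\rho(x) = 2x$ on $[0,1]$, both requirements hold, and $\Pr(X \in [0, 1/2]) = \int_0^{1/2} 2x\,dx = 1/4$. Numerically:

```python
from scipy.integrate import quad

rho = lambda x: 2 * x            # a valid density on [0, 1]: non-negative ...
total, _ = quad(rho, 0, 1)       # ... and it integrates to one
prob, _ = quad(rho, 0, 0.5)      # Pr(X in [0, 1/2])
print(total, prob)               # 1.0 0.25
```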
For examples of continuous random variables and their associated probability density functions, see the page on the idea behind the probability density function.
The stratospheric wintertime response to applied extratropical torques and its relationship with the annular mode
Peter A. G. Watson and Lesley J. Gray
Climate Dynamics, volume 44, pages 2513–2537 (2015)
The response of the wintertime Northern Hemisphere (NH) stratosphere to applied extratropical zonally symmetric zonal torques, simulated by a primitive equation model of the middle atmosphere, is presented. This is relevant to understanding the effect of gravity wave drag (GWD) in models and the influence of natural forcings such as the quasi-biennial oscillation (QBO), El Niño-Southern Oscillation (ENSO), solar cycle and volcanic eruptions on the polar vortex. There is a strong feedback due to planetary waves, which approximately cancels the direct effect of the torque on the zonal acceleration in the steady state and leads to an EP flux convergence response above the torque's location. The residual circulation response is very different to that predicted assuming wave feedbacks are negligible. The results are consistent with the predictions of ray theory, with applied westerly torques increasing the meridional potential vorticity gradient, thus encouraging greater upward planetary wave propagation into the stratosphere. The steady state circulation response to torques applied at high latitudes closely resembles the Northern annular mode (NAM) in perpetual January simulations. This behaviour is analogous to that shown by the Lorenz system and tropospheric models. Imposed westerly high-latitude torques lead counter-intuitively to an easterly zonal mean zonal wind \((\overline{u})\) response at high latitudes, due to the wave feedbacks. However, in simulations with a seasonal cycle, the feedbacks are qualitatively similar but weaker, and the long-term response is less NAM-like and no longer easterly at high latitudes. This is consistent with ray theory and differences in climatological \(\overline{u}\) between the two types of simulations. The response to a tropospheric wave forcing perturbation is also NAM-like. These results suggest that dynamical feedbacks tend to make the long-term NH extratropical stratospheric response to arbitrary external forcings NAM-like, but only if the feedbacks are sufficiently strong. This may explain why the observed polar vortex responses to natural forcings such as the QBO and ENSO are NAM-like. The results imply that wave feedbacks must be understood and accurately modelled in order to understand and predict the influence of GWD and other external forcings on the polar vortex, and that biases in a model's climatology will cause biases in these feedbacks.
Understanding the stratospheric response to an applied zonal torque is a long-standing research problem. It is important as an idealised problem that is useful for forming conceptual ideas about stratospheric dynamics, for understanding phenomena such as the Brewer–Dobson circulation (e.g. Holton et al. 1995; Shepherd 2007; Plumb 2010) and for anticipating how changes in gravity wave parameterisations will affect models of the stratosphere (e.g. Cohen et al. 2013, 2014). The response of the zonal mean circulation to an applied torque in the absence of wave feedbacks is well-understood (Eliassen 1951; Plumb 1982; Garcia 1987; Haynes et al. 1991). However, feedbacks arising from features such as planetary waves seem to strongly affect the extratropical stratospheric response to natural forcings such as the quasi-biennial oscillation (QBO) (Watson and Gray 2014), to changes in gravity wave parameterisations (e.g. Holton 1984; McLandress and McFarlane 1993; Cohen et al. 2013, 2014; Sigmond and Shepherd 2014) and to increasing \(\hbox {CO}_{2}\) concentrations (Sigmond and Shepherd 2014), and these are not well-understood.
Here we examine the stratospheric responses to applied steady extratropical zonally symmetric zonal torques in a 3D primitive equation numerical model of the middle atmosphere, which explicitly calculates feedbacks arising from the large-scale dynamics. We consider torques with a simple structure, so that the dynamics can be more easily understood. We mostly consider westerly torques, but we also show that the response is close to being equal and opposite if the torque is reversed in sign. Therefore our results are likely to apply directly to understanding easterly torques, such as those due to parameterised gravity wave drag (GWD) in atmospheric models. A westerly torque may also represent a reduction in GWD resulting from a change in the parameterisation scheme.
There are two main objectives to this work. The first is to gain a better understanding of how the interaction between the zonal mean and wave parts of the flow affects the stratospheric response to applied forcings. The second is to test the suggestion of Watson and Gray (2014) that the long-term mean response of the extratropical stratosphere to an arbitrary forcing will tend to closely resemble the structure of the stratosphere's leading mode of low-frequency variability, the annular mode (AM) (e.g. Thompson and Wallace 1998; Kushner 2010). This may explain why the observed Northern Hemisphere (NH) responses to QBO, El Niño-Southern Oscillation (ENSO), solar cycle and volcanic influences are similar to the Northern annular mode (NAM) (e.g. Dunkerton and Baldwin 1991; Kodera 1995; Sassi et al. 2004; Labitzke 2005; Ruzmaikin et al. 2005; Watson and Gray 2014). For this second objective, the simple torques we study are useful to test whether the extratropical response to a simple forcing is also robustly NAM-like. If the suggestion of Watson and Gray (2014) is correct, then it offers a simple conceptual way to think about the action of feedbacks on long time scales in general cases.
Regarding the first objective, understanding wave-mean flow interaction in the context of the circulation response to a zonally symmetric torque could help understanding of more complex problems, such as the effect of GWD. The influence of the wave component of the flow on the zonal mean part is well-understood. Following Andrews et al. (1987), the transformed Eulerian mean (TEM) zonal momentum equation, derived using the full primitive equations of motion, is
$$\begin{aligned} \frac{\partial \overline{u}}{\partial t} + \overline{v}^{*} \left[ (a \cos \phi )^{-1} (\overline{u} \cos \phi )_{\phi } - f \right] + \overline{w}^{*} \overline{u}_{z} = D_{F} + \overline{X}. \end{aligned}$$
Here \(\overline{u}\) is the zonal mean zonal wind, \((\overline{v}^{*},\overline{w}^{*})\) is the residual meridional circulation, \(a\) is the Earth's radius, \(f\) is the Coriolis parameter, \(\overline{X}\) is the zonal mean mechanical forcing, \(t\) is time, \(\phi\) is the latitude and \(z\) is log-pressure height. Subscripts denote partial differentiation with respect to the subscripted variable.
$$\begin{aligned} D_{F}=(\rho _{0} a \cos \phi )^{-1} \nabla \cdot \mathbf {F} \end{aligned}$$
represents wave driving of the zonal flow. \(\mathbf {F}\) is the Eliassen–Palm (EP) flux, and \(\rho _{0}\) is a reference density profile.
\(D_{F}\) is typically negative in the winter extratropical stratosphere, and this weakens \(\overline{u}\) and drives a poleward meridional circulation (Eliassen 1951; Plumb 1982; Garcia 1987; Haynes et al. 1991). Moreover, Haynes et al. (1991) showed that in the steady state limit, the meridional circulation response is zero everywhere except directly below the region where the right hand side (RHS) of Eq. 1 is non-zero, so that only one overturning cell exists directly below this region. The effect of an additional mechanical zonal torque \((\delta \overline{X})\), with the \(D_{F}\) perturbation associated with wave feedbacks \((\delta D_{F})\) also added to the RHS of Eq. 1, can similarly be predicted: if \(\delta \overline{X}+\delta D_{F}\) is negative, the effect is to weaken \(\overline{u}\) and drive a poleward residual circulation; if the sum is positive then the effect is the opposite. \(\delta \overline{X}\) may represent the zonal mean zonal acceleration due to parameterised GWD in an atmospheric model, for example. \(\delta D_{F}\) is then the contribution to the change in EP flux divergence from the resulting planetary wave response.
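For concreteness, a minimal numerical sketch of evaluating \(D_{F}\) from gridded EP flux components is given below, using the standard spherical, log-pressure form of the divergence following Andrews et al. (1987). It is written in Python with numpy; the grid conventions, the surface reference density and the use of np.gradient are our illustrative assumptions, not details taken from the SMM.

```python
import numpy as np

a = 6.371e6   # Earth's radius [m]
H = 6950.0    # pressure scale height [m], as used in the SMM

def ep_flux_accel(F_phi, F_z, lat, z, rho_s=1.2):
    """D_F = (rho0 a cos(phi))^-1 div(F) on a (lat, z) grid.
    F_phi, F_z: EP flux components [nlat, nz]; lat in degrees,
    z in metres (log-pressure height). rho_s is an assumed surface
    reference density; rho0(z) = rho_s exp(-z/H)."""
    phi = np.deg2rad(lat)[:, None]
    rho0 = rho_s * np.exp(-z / H)[None, :]
    # div(F) = (a cos(phi))^-1 d(F_phi cos(phi))/dphi + d(F_z)/dz
    divF = (np.gradient(F_phi * np.cos(phi), np.deg2rad(lat), axis=0)
            / (a * np.cos(phi))
            + np.gradient(F_z, z, axis=1))
    return divF / (rho0 * a * np.cos(phi))
```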
A full understanding of the response to an applied torque \(\delta \overline{X}\) requires understanding the wave feedbacks, which has not been clearly elucidated in previous studies. One theoretical framework is "ray theory". Following Andrews et al. (1987) again, under the assumptions that the wave part of the circulation is small and varies quickly in space and time compared to the background zonal flow, the wave part can be considered to consist of propagating eddies. These eddies trace ray paths that bend towards regions of greater "refractive index" in the meridional plane (Matsuno 1970; Karoly and Hoskins 1982). For stationary waves, which dominate wave forcing of the NH extratropical stratosphere, the refractive index is given by
$$\begin{aligned} n^{2} = \frac{\overline{q}_{\phi }}{a \overline{u}} - \frac{s^{2}}{a^{2} \cos ^{2}\phi } - \frac{f^{2}}{4N^{2}H^{2}}, \end{aligned}$$
where \(s\) is the zonal wavenumber, \(N^{2}\) is a reference static stability, assumed to be constant, and \(H\) is the pressure scale height.
$$\begin{aligned} \overline{q}_{\phi } = 2\varOmega \cos \phi - \left[ \frac{(\overline{u} \cos \phi )_{\phi }}{a \cos \phi } \right] _{\phi } - \frac{a}{\rho _{0}} \left( \frac{\rho _{0} f^{2}}{N^{2}} \overline{u}_{z} \right) _{z} \end{aligned}$$
is the meridional potential vorticity (PV) gradient, with \(\varOmega\) the Earth's angular velocity about its spin axis. Under the same assumptions, the group velocity of these waves is parallel to \(\mathbf {F}\), so \(n^{2}\), together with knowledge of the wave sources, can be used to predict the direction of \(\mathbf {F}\). Despite the fact that the assumptions upon which this derivation is based are not realistic in the stratosphere, where wave amplitudes and wavelengths are large, this framework has been found to be useful for understanding qualitatively the climatology of \(\mathbf {F}\) (Matsuno 1970; Karoly and Hoskins 1982) and its changes during stratospheric sudden warmings (SSWs) (e.g. Palmer 1981; Butchart et al. 1982; McIntyre 1982; O'Neill and Youngblut 1982).
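As an illustrative sketch of these diagnostics (not the authors' code; finite differencing with numpy and the constant \(N^{2}\) value are assumptions), \(\overline{q}_{\phi }\) from Eq. 4 and \(n^{2}\) from Eq. 3 can be computed from a gridded \(\overline{u}(\phi ,z)\) field as follows.

```python
import numpy as np

a, H = 6.371e6, 6950.0
Omega = 7.292e-5   # Earth's angular velocity [s^-1]
N2 = 4.0e-4        # assumed constant reference static stability [s^-2]

def q_phi_and_n2(ubar, lat, z, s=1):
    """Meridional PV gradient (Eq. 4) and stationary-wave refractive
    index squared (Eq. 3) for zonal wavenumber s, from ubar[nlat, nz].
    n2 is meaningful for stationary waves only where ubar > 0."""
    phi = np.deg2rad(lat)[:, None]
    dphi = np.deg2rad(lat)
    f = 2.0 * Omega * np.sin(phi)
    rho0 = np.exp(-z / H)[None, :]  # normalised; any constant cancels in Eq. 4

    # Eq. 4: q_phi = 2 Omega cos(phi)
    #        - [(ubar cos(phi))_phi / (a cos(phi))]_phi
    #        - (a/rho0) (rho0 f^2 ubar_z / N^2)_z
    term_phi = np.gradient(
        np.gradient(ubar * np.cos(phi), dphi, axis=0) / (a * np.cos(phi)),
        dphi, axis=0)
    term_z = (a / rho0) * np.gradient(
        rho0 * f**2 / N2 * np.gradient(ubar, z, axis=1), z, axis=1)
    q_phi = 2.0 * Omega * np.cos(phi) - term_phi - term_z

    # Eq. 3.
    n2 = (q_phi / (a * ubar)
          - s**2 / (a**2 * np.cos(phi)**2)
          - f**2 / (4.0 * N2 * H**2))
    return q_phi, n2
```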
Regarding the second objective of this study, it has been found that the leading mode of variability resembles the responses to many different forcings in the Lorenz (1963) system (that which gives rise to the famous Lorenz butterfly attractor) (Palmer 1999; Palmer and Weisheimer 2011) and in models of the troposphere (Son and Lee 2006; Ring and Plumb 2007, 2008; Branstator and Selten 2009). Watson and Gray (2014) suggested that the stratosphere may behave in an analogous way and that this could explain why the NH extratropical responses to different natural forcings are NAM-like. If the suggestion is correct then the long-term mean stratospheric response to an applied torque will also be NAM-like.
The dynamics of the Lorenz (1963) system and the troposphere are quite different to the dynamics of the stratosphere, and the experiments performed here test whether the large-scale dynamics of the stratosphere displays the same behaviour. This behaviour is not fully general in the troposphere: for example, Son and Lee (2006) found that the steady state tropospheric response to heating perturbations in a general circulation model (GCM) is not always NAM-like, and Woollings (2008) showed that the NH extratropical tropospheric response to anthropogenic forcings simulated by various climate models is often not NAM-like. Understanding the circumstances in which this behaviour is manifested in the stratosphere may shed light on why it sometimes does and sometimes does not occur in other parts of the climate system as well.
The studies of Song and Robinson (2004) and Chen and Zurita-Gotor (2008) have previously shown the response to stratospheric zonal torques in models of the troposphere and stratosphere with zonally-symmetric boundary conditions, with their main objective being to examine the tropospheric response. However, the models that were used had weak planetary waves and are unlikely to have correctly simulated the changes in stratospheric waves. Cohen et al. (2013) examined the effect of changing parameters in a momentum-conserving gravity wave parameterisation in an atmospheric primitive equation model with zonal wavenumber-2 topography. They found that the EP flux due to resolved planetary waves adjusts to cancel out most of the change in the GWD. They found that this also occurs in the steady state response to a fixed zonally symmetric zonal torque. The experiments performed here extend upon this work, and complement the recent investigation of Cohen et al. (2014) into the mechanisms behind the cancellation.
The structure of the paper is as follows. Section 2 describes the model and experimental method. Section 3 presents the steady state and transient circulation responses to a variety of simple applied torques in perpetual January (PJ) simulations. Section 3.1 shows that the extratropical steady state circulation responses to torques placed at high latitudes are often NAM-like, and the transient \(\overline{u}\) and planetary wave responses are described in Sects. 3.2 and 3.3. Section 4 examines the responses in runs with a seasonal cycle (SC), for which the long-term responses do not appear as NAM-like. The wave feedbacks are found to be qualitatively similar to those in the PJ simulations but weaker, suggesting that the strength of the feedbacks is important for determining whether the long-term response becomes NAM-like. In Sect. 5 it is argued that the wave feedbacks can be understood through the torques' direct effect on \(\overline{q}_{\phi }\) and \(n^{2}\), and how this affects wave propagation using ray theory. This indicates that the strength of the wave feedback is largely determined by the \(\overline{u}\) climatology and wave sources at the tropopause. It is argued that differences in the \(\overline{u}\) climatology between the PJ and SC experiments explain the differences in the simulated response to torques in each case. We show in Sect. 6 that the steady-state \(\overline{u}\) response to perturbations in tropospheric wave forcing in PJ conditions are also NAM-like, supporting the hypothesis that arbitrary forcings will tend to give a NAM-like response when wave feedbacks are strong. Extended discussion of the results is given in Sect. 7 and the main conclusions are summarised in Sect. 8.
Model and methods
The Stratosphere–Mesosphere Model
The UK Met Office Stratosphere–Mesosphere Model (SMM) is a global primitive equation model of the stratosphere and mesosphere. The configuration used here is identical to that used by Gray et al. (2001), with \(5^{\circ }\) resolution in latitude and longitude and 2 km resolution in log-pressure height over the domain 0.01–100 hPa (very nearly 16–80 km, with \(H=6.95\,\hbox {km}\)). At 100 hPa the geopotential height (GPH) is specified and enters into forcing terms in the zonal and meridional momentum equations. The radiative contribution to the diabatic heating rate is computed by the MIDRAD scheme (Shine 1987). Gravity wave drag is parameterised simply by Rayleigh friction terms in the zonal and meridional momentum equations, with time scale varying smoothly from 116 days below 50 km to about 2 days in the upper mesosphere.
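The functional form of the transition in the Rayleigh friction time scale is not given above; purely as an illustrative sketch, a smooth profile consistent with the stated limits might look as follows (the transition height and width are assumptions, not values from the SMM).

```python
import numpy as np

def rayleigh_timescale(z_km, tau_strat=116.0, tau_meso=2.0,
                       z_mid=65.0, width=8.0):
    """Illustrative Rayleigh-friction time scale [days]: ~116 days
    below 50 km, decreasing smoothly to ~2 days in the upper
    mesosphere. z_mid and width are assumed, not from the paper.
    The damping rate 1/tau multiplies the winds in the momentum
    equations."""
    w = 0.5 * (1.0 + np.tanh((z_km - z_mid) / width))  # 0 -> 1 with height
    return 1.0 / ((1.0 - w) / tau_strat + w / tau_meso)
```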
This model allows the stratospheric dynamical response to be isolated, since wave generation by the troposphere is unchanged by the torques. Thus the behaviour of the internal middle atmosphere dynamics can be seen clearly. This is especially important for examining whether feedbacks from stratospheric dynamics tend to act to make the response NAM-like. The SMM is also computationally cheap enough for the response to many different torques to be examined under both PJ conditions and with a seasonal cycle.
The SMM has been shown to capture stratospheric variability reasonably well (e.g. Butchart et al. 1982; Fairlie et al. 1990; Fisher et al. 1993; O'Neill and Pope 1993) and has been used extensively to study stratospheric dynamics (e.g. Scaife and James 2000; Gray et al. 2001, 2004).
The Rayleigh friction does not conserve zonal momentum and this can give rise to physically unrealistic downward influences that may affect the response to applied forcings (Shepherd et al. 1996; Shepherd and Shaw 2004). Repeating selected experiments with zero or weakened Rayleigh friction north of 20N shows that the results are not strongly affected by the presence of Rayleigh friction (not shown).
Experimental method
Control runs were performed both under PJ conditions and with a seasonal cycle, defined by particular specifications of the solar radiation and 100 hPa GPH: Sects. 2.2.1 and 2.2.2 describe these "standard" control runs. To examine the responses to applied torques, runs were performed with an additional forcing term in the zonal momentum equation, as described in Sect. 2.2.3. Data were sampled at daily intervals.
The specified 100 hPa GPH was derived from a 240-year pre-industrial control run of the HadGEM2-CCS GCM, described by Watson and Gray (2014). The climatological 100 hPa GPH in HadGEM2-CCS is similar to that in ERA-40 (Uppala et al. 2005) (not shown) and this model exhibits realistic stratospheric variability (Mitchell et al. 2012; Osprey et al. 2013).
Perpetual January control run
The model was run for 40 years with radiative conditions set to those for January 15 in the MIDRAD scheme. The first two years were discarded in order to allow the model to adjust to the forcing conditions. The imposed 100 hPa GPH was the January climatology of that in the HadGEM2-CCS control run, with the zonally asymmetric component multiplied by a factor of 2 (Fig. 1). Climatological averaging smooths bursts of wave activity, so multiplying by this factor was found to be necessary in order for the model to exhibit irregular vacillations that are qualitatively similar to observed stratospheric variability. The wavenumber-1 and 2 amplitudes at 62.5N are 250 and 360 m respectively, which are similar to the amplitudes of specified bottom boundary single-wavenumber GPH eddies used in previous studies of stratospheric internal variability (e.g. Gray et al. 2003; Scott and Polvani 2006).
a Specified 100 hPa GPH used as the bottom-boundary forcing of the SMM in most of the perpetual January simulations, and b the zonally asymmetric component
Figure 2a shows the climatological \(\overline{u}\) and the difference from the ERA-40 January climatology. The modelled polar vortex is considerably weaker than that in ERA-40 and is further equatorward, with the peak \(\overline{u}\) around 30N in the middle and upper stratosphere compared to around 60N in ERA-40. Figure 2b shows the standard deviation of \(\overline{u}\), which is up to \(\sim 10\,\hbox {ms}^{-1}\) less than that in ERA-40 in the NH extratropical mid-stratosphere, though it still peaks at high latitudes despite the vortex being further south. In the tropical stratosphere the modelled standard deviation is up to \(\sim 20\,\hbox {ms}^{-1}\) too small due to the absence of the QBO and semi-annual oscillation (SAO). Figure 2c shows a typical time-height section of \(\overline{u}\) at 57.5N over 365 days, illustrating the irregular vacillation cycles with \(\overline{u}\) reversals and intensifications that often propagate down to the lower stratosphere, in a qualitatively similar way to observed variability. The EP flux climatology (Fig. 2d) is qualitatively similar to that in ERA-40 (not shown). However, it is too poleward in the high-latitude lower stratosphere and is smaller, with the upward component being ~25–40 % weaker in the model than in ERA-40 in the lower stratosphere. \(D_{F}\) is represented reasonably in the model, though. The \(\overline{q}_{\phi }\) climatology (Fig. 2e) is greatest near ~30N, too far equatorward compared to ERA-40, consistent with the vortex being too far equatorward (Fig. 2a).
The climate of the perpetual January SMM standard control run. a Mean \(\overline{u}\) (contours, in \(\hbox {ms}^{-1}\)) and the difference from the ERA-40 January climatology (colours) and b the same for the standard deviation of \(\overline{u}\). c Time-height section of \(\overline{u}\) at 57.5N for 1 year. d The mean NH EP flux (arrows, shown every 6 km in height) and \(D_{F}\) (colours) below 50 km. A reference arrow in the top left corner has a magnitude of \((5\times 10^{6},\,5\times 10^{4})\,\hbox {kg}/\hbox {s}^{2}\). e Mean NH \(\overline{q}_{\phi }\) (contours, in \(10^{-5}\,\hbox {s}^{-1}\)) and the difference from ERA-40 (colours). \(D_{F}\) and \(\overline{q}_{\phi }\) at 87.5N are not plotted as the differentiation error is large there
Overall, therefore, the PJ control run has a qualitatively reasonable extratropical stratosphere but quantitatively it is quite different from January observations. Since the SC control run exhibits a wintertime climatology in much closer agreement with observations (Sect. 2.2.2), this is likely to be due to the PJ boundary conditions, which are not realistic.
Figure 3 shows the regression of monthly mean \(\overline{u}\), GPH at 32 km (10 hPa) and EP flux and \(D_{F}\) onto the NAM index in the PJ control run. The NAM index is defined in a similar way to that in Watson and Gray (2014) and is the leading principal component of monthly mean pressure- and area-weighted 3D GPH between 16–48 km north of 20N. The sign convention is opposite to that of the usual definition, so that a more positive NAM index corresponds to weaker high-latitude \(\overline{u}\), for easier comparison with the \(\overline{u}\) responses to torques (Sect. 3). The \(\overline{u}\) NAM signature is a dipole with negative \(\overline{u}\) north of about 30N at 30 km and positive \(\overline{u}\) further south, with the positive \(\overline{u}\) tilting northwards with increasing height (Fig. 3a). This is qualitatively similar to that derived from observations (e.g. Kodera 1995; Kushner 2010) and to the leading EOF of \(\overline{u}\) in this control run (not shown).
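A sketch of how such a NAM index might be computed is given below (Python/numpy; the exact weighting and the SVD-based EOF calculation are our assumptions about implementation, not the authors' code).

```python
import numpy as np

def nam_index(gph_anom, lat, p):
    """Leading principal component of monthly mean 3D GPH anomalies
    gph_anom[ntime, nlev, nlat, nlon], already restricted to 16-48 km
    and north of 20N with the time mean removed. Weighting by
    sqrt(p) and sqrt(cos(lat)) is an assumed reading of 'pressure-
    and area-weighted'."""
    w = (np.sqrt(p)[:, None, None]
         * np.sqrt(np.cos(np.deg2rad(lat)))[None, :, None])
    X = (gph_anom * w[None]).reshape(gph_anom.shape[0], -1)
    U, S, _ = np.linalg.svd(X, full_matrices=False)  # EOF analysis
    pc1 = U[:, 0] * S[0]
    return (pc1 - pc1.mean()) / pc1.std()
```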
The NAM in the perpetual January SMM standard control run. a Regression of monthly mean \(\overline{u}\) onto the NAM index. b Similar regressions of EP flux (arrows) and \(D_{F}\) (colours). The reference arrow in the top left of (b) represents a flux \((F^{\phi },F^{z})=(5\times 10^{5},5\times 10^{3})\,\hbox {kg}/\hbox {s}^2\). \(D_{F}\) at 87.5N is not plotted as the differentiation error is large there
The NAM signature of 32 km GPH in the SMM (not shown) is also fairly similar to the signature derived from ERA-40 in January (Watson and Gray 2014), although the GPH signature is less zonally symmetric in the model. The EP flux and \(D_{F}\) NAM signatures (Fig. 3b) exhibit poleward and divergent flux above ~35 km that turns downward near the pole, with equatorward, convergent flux below. The EP flux signature is quite different to that in ERA-40 (Watson and Gray 2014), likely due to the PJ boundary conditions.
Seasonal cycle control run
A 45-year run was performed with a seasonal cycle of radiative conditions. Daily mean 100 hPa GPH values from the pre-industrial control run of HadGEM2-CCS were imposed, with linear interpolation between the middle of each day. The first two years were discarded.
Figure 4a shows the climatological November–February mean \(\overline{u}\) of the SC control run and the difference from the ERA-40 climatology. The vortex has a realistic structure with maximum \(\overline{u}\) near 60–70N. It is slightly too weak in the mid-stratosphere by \(\sim 5\,\hbox {ms}^{-1}\). Figure 4b shows the standard deviation of the November–February mean \(\overline{u}\), showing that the model displays realistic variability in the extratropics, and again too little in the tropics due to the absence of the QBO and SAO. Figure 4c shows an example time-height section of the 57.5N \(\overline{u}\) for one winter in the SC control run, showing the vortex strengthening into January, followed by an SSW in early March. Figure 4d shows a similar sequence of events in the 1983–4 winter in ERA-40, indicating that the model variability is physically realistic. Overall there are \(7.5\,\hbox {SSWs/decade}\) according to the criterion of Charlton and Polvani (2007), which is similar to the frequency of about 6/decade in ERA-40, and the fraction of SSWs falling in each month November–March is also similar to that in ERA-40 (not shown). Figure 4e shows the seasonal cycle of \(\overline{u}\) at 60N and 32 km in the SC control run, along with its standard deviation and its range, alongside that at 60N and 10 hPa in ERA-40, indicating that the model seasonal cycle in the mid-stratosphere has a reasonable time evolution and variability, but \(\overline{u}\) is too weak by \(\sim 5\,\hbox {ms}^{-1}\).
The climate of the SMM standard control run with a seasonal cycle. a November–February mean \(\overline{u}\) (contours, in \(\hbox {ms}^{-1}\)) and the difference from the ERA-40 November–February climatology (colours) and b the same for the standard deviation of \(\overline{u}\). c Time-height section of \(\overline{u}\) at 57.5N over November–March of one winter. d The same for the 60N \(\overline{u}\) in ERA-40 in the 1983–1984 winter for comparison. e Climatological seasonal cycle of \(\overline{u}\) interpolated onto \((60\hbox {N},\,32\,\hbox {km})\) in the SMM (black solid line) and the same for the \((60\hbox {N},\,10\,\hbox {hPa})\,\overline{u}\) in ERA-40 (red solid line). Dashed lines show the mean \(\overline{u}\) plus or minus one standard deviation in the SMM (black) and ERA-40 (red). Dotted lines show extreme values
In common with the PJ control run, the EP flux climatology (Fig. 5a) is in good qualitative agreement with that in ERA-40, but the flux is too small by ~25–40 %. The \(\overline{q}_{\phi }\) climatology (Fig. 5b) is similar to that in ERA-40, maximising around ~60–70N, though it is slightly too small in much of the high-latitude mid-stratosphere.
Climatological EP flux and \(\overline{q}_{\phi }\) in the SMM standard control run with a seasonal cycle. a November–February mean NH EP flux (arrows, shown every 6 km in height) and \(D_{F}\) (colours) below 50 km. The reference arrow in the top left corner has a magnitude of \((5\times 10^{6},\,5\times 10^{4})\,\hbox {kg}/\hbox {s}^{2}\). b November–February mean NH \(\overline{q}_{\phi }\) (contours, in \(10^{-5}\,\hbox {s}^{-1}\)) and the difference from ERA-40 (colours). \(D_{F}\) and \(\overline{q}_{\phi }\) at 87.5N are not plotted as the differentiation error is large there
The NAM signature of November–February mean \(\overline{u}\) in this control run (not shown) is similar to that in the PJ control run (Fig. 3a).
Runs with applied torques
A term of the form
$$\begin{aligned} A_{0}(\phi _{0},\delta \phi ,z_{0},\delta z) \exp \left( - \left[ \frac{\phi - \phi _{0}}{\delta \phi } \right] ^{2} - \left[ \frac{z-z_{0}}{\delta z} \right] ^{2} \right) \end{aligned}$$
was added to the zonal momentum equation, where \((\phi _{0},\,z_{0})\) is the position of the torque maximum, and \(\delta \phi\) and \(\delta z\) set the meridional and vertical scales of the applied torque respectively. \(A_{0}\) is a constant chosen such that (except in experiments designed to test the effect of varying the torque magnitude) the total zonal momentum added is equal to that of a torque with \(\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km},\,\delta \phi =11^{\circ },\,\delta z=10\,\hbox {km}\) and \(A_{0}=2.5\,\hbox {m/s/day}\). This will be referred to as the "standard torque". This torque strength was found to produce responses in quantities such as \(\overline{u}\) of the order of one standard deviation in the control runs, which is comparable to the magnitude of the influence of natural forcings such as the QBO on the vortex in the real atmosphere. These \(\delta \phi\) and \(\delta z\) values are the same as those used by Ring and Plumb (2007) and Song and Robinson (2004) respectively. A torque with oppositely signed \(A_{0}\) is also applied in the opposite hemisphere (with the sign of \(\phi _{0}\) reversed) so no net zonal momentum is added globally. (Excluding this was not found to affect the results substantially.) A variety of experiments were performed, where normally only one of the torque parameters is different from that of the standard torque, to test the sensitivity of the response to varying each individual parameter, though a few experiments have more than one parameter varied as explained in the text.
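For concreteness, the torque field of Eq. 5 with the standard parameters, including the mirrored opposite-hemisphere torque, can be evaluated directly; the following Python sketch (grid handling assumed) does this.

```python
import numpy as np

def applied_torque(lat, z_km, phi0=60.0, dphi=11.0,
                   z0=30.0, dz=10.0, A0=2.5):
    """Zonally symmetric zonal torque of Eq. 5 [m/s/day] on a
    (lat, z) grid, with an equal-and-opposite torque in the opposite
    hemisphere so that no net zonal momentum is added globally."""
    phi, zz = np.meshgrid(lat, z_km, indexing="ij")

    def gauss(p0):
        return np.exp(-((phi - p0) / dphi)**2 - ((zz - z0) / dz)**2)

    return A0 * (gauss(phi0) - gauss(-phi0))
```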
The steady state responses to torques under PJ conditions are presented in Sect. 3.1. These are defined as the time-mean differences over years 3–20 between 20-year runs with applied torques and the standard PJ control run.
Transient responses to the torques are examined in later sections. These are estimated using 90-day branch runs, performed with both PJ and seasonally varying boundary conditions, with initial conditions taken from the appropriate control run as described below. In these the torque strength was linearly increased from zero to full strength in the first ten days, then held constant. Lengthening this ramp-up period to twenty days was found not to change the results qualitatively, nor did reducing it to zero, apart from in the first few days, when spurious oscillations appeared in the uppermost model levels; the results are therefore not sensitive to this choice.
Branch runs under PJ conditions were started with initial conditions taken every 6 months from the 40-year standard PJ control run, not including the initial 2-year spin-up time. Thus 76 90-day branch runs were performed for each applied torque. The envelope of the autocorrelation function of \(\overline{u}\) at points in the extratropical stratosphere is about 0.2 or less at a lag of 6 months (not shown), so initial conditions taken 6 months apart are largely independent of each other.
SC branch runs were initiated at January 1 of each year of the SC control run, excluding the 2-year spin-up time. There were 43 90-day branch runs for each torque.
Statistical significances of mean responses to torques were calculated using Monte Carlo (MC) methods. This takes into account the non-Gaussianity of the distribution of responses in our ensembles due to non-linear dynamics, which may cause methods that assume the distribution is normal to give misleading results.
Statistical significance of the steady state responses to torques in the PJ runs was calculated according to the null hypothesis that 6-month averages of the data have the same distribution in the runs with applied torques and the control run. The probability that the magnitude of each response would exceed that in the data under this null hypothesis was calculated at each grid point using an MC permutation test. 1,000 surrogate control and perturbed run time series of the same length as those in the actual data were created by randomly combining 6-month averages from both the control and perturbed run data in each surrogate. Differences in the data were considered statistically significant at the 95 % level if they fell below the 2.5th percentile or exceeded the 97.5th percentile of the distribution of mean differences between the surrogate time series. Taking 6-month averages accounts for serial autocorrelation of the data—the calculated statistical significance is not very different if 12-month averages are used instead.
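A minimal sketch of this permutation test at a single grid point might look like the following (Python/numpy; array handling and random number generation are implementation assumptions).

```python
import numpy as np

def permutation_test_95(ctrl, pert, n_surr=1000, seed=0):
    """Two-sided MC permutation test on the difference in means of
    6-month averages (1D arrays) from control and perturbed runs.
    Returns True if the observed difference lies outside the
    2.5th-97.5th percentile range of the surrogate differences."""
    rng = np.random.default_rng(seed)
    obs = pert.mean() - ctrl.mean()
    pooled = np.concatenate([ctrl, pert])
    diffs = np.empty(n_surr)
    for i in range(n_surr):
        perm = rng.permutation(pooled)
        diffs[i] = perm[len(ctrl):].mean() - perm[:len(ctrl)].mean()
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return obs < lo or obs > hi
```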
Statistical significances of the transient responses were calculated using an MC bootstrap test. At each grid point, a surrogate data sample was generated according to the null hypothesis that the mean response is zero but other moments of the true distribution of responses equal those in the data. The mean of the responses for all pairs of branch and control runs was subtracted from the response for each pair, and the results were resampled with replacement to produce surrogate ensembles of the same size as the originals. The probability of the mean of this resampled data being larger than that for the real data was estimated using 1,000 data resamplings.
Confidence intervals of ensemble-mean quantities (in Figs. 15, 17) were calculated using an MC bootstrap method in which the responses for the individual ensemble members were resampled with replacement to produce surrogate ensembles of the same size as the originals. The 95 % confidence intervals are the range between the 2.5th–97.5th percentiles of the distribution of the means of 1,000 such surrogate data samples.
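Sketches of the two bootstrap procedures just described, at a single grid point, are given below under the same caveats (illustrative Python, not the authors' code).

```python
import numpy as np

def bootstrap_mean_test(responses, n_surr=1000, seed=0):
    """MC bootstrap significance test for a mean response: centre the
    responses so the null hypothesis (zero mean) holds, resample with
    replacement, and return a two-sided p-value for the observed mean."""
    rng = np.random.default_rng(seed)
    obs = responses.mean()
    centred = responses - obs
    surr = np.array([rng.choice(centred, len(responses)).mean()
                     for _ in range(n_surr)])
    return np.mean(np.abs(surr) >= np.abs(obs))

def bootstrap_ci_95(responses, n_surr=1000, seed=0):
    """95 % confidence interval for the ensemble-mean response:
    2.5th-97.5th percentiles of means of resampled ensembles."""
    rng = np.random.default_rng(seed)
    means = [rng.choice(responses, len(responses)).mean()
             for _ in range(n_surr)]
    return np.percentile(means, [2.5, 97.5])
```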
Perpetual January run responses
Steady state responses
Figure 6 shows the steady state \(\overline{u}\) response to applied torques under PJ conditions. Figure 6a shows responses to westerly torques centred at different positions in the high-latitude stratosphere, Fig. 6b shows responses to westerly torques centred at \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) with different meridional and height scales, and Fig. 6c shows responses to three westerly torques centred at \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) with different magnitudes and one easterly torque.
Steady state \(\overline{u}\) responses (colours) to applied torques (contours, at \(\pm 0.5,\pm 2,\pm 4,\pm 6\ldots \,\hbox {m/s/day}\), with westerly contours solid and easterly contours dashed) under perpetual January conditions. a Responses to torques centred at different locations in the high-latitude stratosphere, b the responses to torques with different meridional and vertical scales centred at (60N, 30 km), and c the responses to torques with different peak strengths centred at (60N, 30 km). A NAM-like response is displayed in most cases, with mostly easterly high-latitude \(\overline{u}\) responses to westerly torques. The number in the top left corner of each panel is the anomaly correlation north of 20N between the \(\overline{u}\) response and its NAM signature in the standard control run (Fig. 3a). Plot titles indicate the latitude \(\phi _{0}\) and height \(z_{0}\) where the torque is strongest. Unless otherwise specified in the title, the torques have meridional scale \(\delta \phi = 11^{\circ }\), vertical scale \(\delta z=10\,\hbox {km}\), and a strength such that the magnitude of the total zonal momentum added to the NH equals that of the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque with peak strength \(2.5\,\hbox {m/s/day}\). Responses are not plotted where they are not statistically significant above the 95 % level according to the test described in Sect. 2.3
It is interesting to see that applying a westerly torque at high latitudes results in an easterly \(\overline{u}\) response in most of the high-latitude stratosphere. This shows that non-linear feedbacks play a very important role in bringing about this response. The \(\overline{u}\) responses share many characteristics with the NAM signature, with the responses to applied westerly torques all showing mostly an easterly \(\overline{u}\) response at high latitudes, and a westerly \(\overline{u}\) response in the subtropics, with the region of westerly response sloping poleward with increasing height. The anomaly correlations between the \(\overline{u}\) responses and the NAM signature (Fig. 3a) north of 20N are also written on the panels, and are larger than 0.8 in nine out of twelve cases, and larger than 0.9 in seven. However, many responses have the same sign as the torques just below where the torques peak, which is a departure from the NAM signature.
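The anomaly correlations quoted here are pattern correlations between the response and the NAM signature in the meridional plane; a sketch under the assumption of cosine-of-latitude (area) weighting is given below.

```python
import numpy as np

def anomaly_correlation(resp, nam, lat):
    """Pattern correlation between a u-bar response and the NAM
    signature on a (lat, z) grid north of 20N. Cosine-of-latitude
    weighting is an assumption about the calculation."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(resp)
    ra = resp - np.average(resp, weights=w)
    na = nam - np.average(nam, weights=w)
    return (np.average(ra * na, weights=w)
            / np.sqrt(np.average(ra**2, weights=w)
                      * np.average(na**2, weights=w)))
```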
The responses to torques centred at \((\phi _{0}=60\hbox {N},\,z_{0}=40\,\hbox {km})\) and \((\phi _{0}=75\hbox {N},\,z_{0}=30\,\hbox {km})\) (Fig. 6a) have anomaly correlations with the NAM signature of only 0.75 and 0.44 respectively. However, it can still be seen visually that the main qualitative features of these responses are similar to those of the NAM signature, with a negative response in most of the extratropics and a positive subtropical response that tilts poleward with increasing height. The correlations may be relatively low because the peak torque magnitudes are relatively large - if the magnitudes of these torques are reduced by 60 %, the anomaly correlations rise to 0.85 and 0.70 respectively.
The response to the torque centred at \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) with peak strength 7.5 m/s/day (Fig. 6c) is less like the NAM signature in the control run. However, the NAM signature diagnosed from the run with the 7.5 m/s/day torque applied shows a better qualitative resemblance to the \(\overline{u}\) response, with positive \(\overline{u}\) between ~40–60N and ~16–50 km (not shown). This illustrates that when the applied forcing becomes large, the relationship between the response and the NAM signature becomes complicated due to the fact that the NAM signature is itself affected by the forcing.
The responses to torques centred in middle latitudes at \((\phi _{0}=30\hbox {N},\,z_{0}=30\,\hbox {km})\) and \((\phi _{0}=45\hbox {N},\,z_{0}=30\,\hbox {km})\) (not shown) are small and not NAM-like. Therefore the response only seems to be NAM-like for torques centred at high latitudes.
The steady state GPH responses to the torques in the middle stratosphere (not shown) are also NAM-like for torques that produce a NAM-like \(\overline{u}\) response. Responses on individual pressure levels have anomaly correlations of ~0.8–0.9 with the NAM signature in the latitude-longitude plane.
Overall, therefore, the steady state responses to applied torques centred at 60N or polewards are very like the NAM. The projection of the torque onto the NAM signature is opposite to the sign of the response of the NAM index (not shown), in contrast to what Ring and Plumb (2007) found for the tropospheric response to applied torques.
The responses to torques centred at \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) are quite linear with respect to varying the peak torque amplitude in the range 1–5 m/s/day, but non-linearity becomes substantial as the torque magnitude increases to 7.5 m/s/day (Fig. 6c). Changing the sign of the 2.5 m/s/day torque centred at (60N, 30 km) gives a \(\overline{u}\) response that is nearly equal and opposite (the centre-left panel of Fig. 6a and leftmost panel of Fig. 6c), indicating that our analysis also likely applies for easterly torques in general, such as that due to an increase in GWD, with signs of the responses reversed.
The responses are not sensitive to the precise experimental set up. The steady state response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque is similar in runs with perpetual November, December and February radiative conditions and bottom boundary, and in PJ conditions with the eddy component of the prescribed 100 hPa GPH equal to 1.5 times that of the HadGEM2-CCS January climatology rather than two times (not shown).
Transient \(\overline{u}\) responses
Examining the NAM-like steady state responses to torques does not give any insight into how these responses come about. In order to better understand the underlying dynamics, the transient response is now examined. We focus on the response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque since this is qualitatively similar to the responses to all torques centred at 60N or polewards.
Figure 7 shows the ensemble-mean transient \(\overline{u}\) response at (60N, 30 km) to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque over the first 50 days, and the \(\pm 2\) standard error and \(\pm 2\) standard deviation ranges. The mean response is positive and the range of responses is very small up to ~day 15, in agreement with the expected direct response to the applied torque. Shortly after this the range increases substantially and not all ensemble members have a response of the same sign, due to the chaotic nature of the dynamics. The ensemble-mean response becomes negative just after day 30.
The ensemble-mean transient \(\overline{u}\) response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque under perpetual January conditions interpolated to \((60\hbox {N},\,30\,\hbox {km})\) (solid black line), the mean \(\pm\) two standard errors (dashed black lines), the mean \(\pm\) two standard deviations (dotted black lines) and the torque (red line)
Figure 8 shows the ensemble-mean transient \(\overline{u}\) response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque as a function of latitude and height at various times. On day 1, the structure of the response resembles the torque very well near (60N, 30 km), consistent with the expected similarity between the response and the forcing on short time scales (Watson and Gray 2014). The response is already negative in the lower and middle stratosphere to the north and south of the torque position. Up to day 10, the positive response spreads northwards, but between days 10 and 15 there is a substantial zonal deceleration in the high latitude upper stratosphere, bringing the response there close to zero. A negative response develops in the high latitude upper stratosphere and descends with time from ~day 20, and a positive response develops in the subtropics, so that shortly afterwards the overall response resembles the steady state response shown in Fig. 6 (although the response does not become entirely steady within the 90-day length of the runs).
The ensemble-mean transient \(\overline{u}\) response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque under perpetual January conditions, averaged over the time periods of the branch runs indicated in the panel titles. Contours show the torque with levels at 0.05 and \(0.2\,\hbox {m/s/day}\) on day 1, at 0.25 and \(1\,\hbox {m/s/day}\) on day 5 and at 0.5 and \(2\,\hbox {m/s/day}\) from day 10 onwards. Responses are not plotted where they are not statistically significant above the 95 % level according to the test described in Sect. 2.3
Transformed Eulerian-mean diagnostics
In order to further investigate the dynamics of the stratospheric response, the transient response of each term in the TEM zonal momentum equation (Eq. 1) to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque is shown in Fig. 9. On day 1, \(\partial \overline{u}/\partial t\) (1st column) is dominated by the torque. The EP flux response (2nd column) is small, with its convergence increasing where the torque is being applied, associated with increased upward flux and equatorward flux from higher latitudes. The sum of the torque and \(D_{F}\) response (3rd column) is positive in most of the extratropics. As expected from the discussion in Sect. 1, the residual circulation response (4th column) is therefore equatorward in most of the stratosphere, with a negative associated acceleration term in Eq. 1:
$$\begin{aligned} -\overline{v}^{*} \left[ (a \cos \phi )^{-1} (\overline{u} \cos \phi )_{\phi } - f \right] - \overline{w}^{*} \overline{u}_{z}. \end{aligned}$$
This residual circulation response makes \(\partial \overline{u}/\partial t\) less than half the peak torque strength of 0.25 m/s/day near (60N, 30 km), and also causes \(\partial \overline{u}/\partial t\) to be negative in the low latitude lower and middle stratosphere.
In columns from left to right are the ensemble-mean transient responses to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque under perpetual January conditions of: \(\partial \overline{u}/\partial t\), the EP flux (arrows) and \(D_{F}\) (colours), the sum of \(D_{F}\) and the applied torque, and the residual circulation (arrows) and associated acceleration (colours). Data are averaged over a different time period of the branch runs on each row. \(\partial \overline{u}/\partial t\) data are multiplied by a scaling factor as indicated in the plot titles. Contours in the first and third columns show the torque, with contours at 0.05 and \(0.2\,\hbox {m/s/day}\) in the first row, at 0.4 and \(1.6\,\hbox {m/s/day}\) in the second row and at 0.5 and \(2\,\hbox {m/s/day}\) from the third row onwards. In the second and fourth columns, arrows in the top left corner of each panel indicate the size of an arrow that represents the EP flux or residual circulation value written in brackets alongside. The EP flux and residual circulation response is only plotted where either the \(\phi\)- or \(z\)-component is statistically significant above the 95 % level, and are plotted every 6 km in height. Stippling in the first, second and fourth columns shows where responses in \(\partial \overline{u}/\partial t,\,D_{F}\) and the acceleration associated with the residual circulation are statistically significant above the 95 % level respectively (Sect. 2.3). \(D_{F}\) responses at 87.5N are not plotted as the differentiation error is large there
On days 6–10, \(\partial \overline{u}/\partial t\) is larger due to the torque growing linearly in the first 10 days. The EP flux response is also much larger, with a stronger convergence response near (60N, 30 km) that is more than half the strength of the torque. The residual circulation response is qualitatively similar to that on day 1 and stronger.
Over days 11–15, \(\partial \overline{u}/\partial t\) becomes negative in the extratropical upper stratosphere due to the strengthening of the EP flux convergence response. The EP flux convergence response in the middle stratosphere now also cancels most of the direct effect of the torque. As a consequence, the residual circulation response has changed qualitatively so that it is poleward in the upper stratosphere, with an equatorward return flow in the lower stratosphere.
The negative \(\partial \overline{u}/\partial t\) response is largest over days 16–20, when it also extends to the high-latitude lower stratosphere. After day 20, the extratropical \(\partial \overline{u}/\partial t\) is smaller, and the EP flux and residual circulation responses evolve slowly to become more like the steady state responses (not shown). The \(\overline{u}\) response in the high-latitude lower stratosphere becomes negative in this period (Fig. 8). In the subtropics, the \(\overline{u}\) response becomes positive and similar to the NAM signature in this region (Fig. 3a), and this is associated with the Coriolis force acting on the poleward residual circulation response (Fig. 9).
The EP flux response after day 20 resembles the NAM signature (Fig. 3b) only in some respects. Both exhibit a convergent equatorward EP flux in the mid-latitude stratosphere and a poleward flux in the uppermost extratropical stratosphere. However, the NAM signature does not display the convergent upward EP flux in the high-latitude lower stratosphere shown in the response.
Figure 10 shows time series of the ensemble-mean responses of the terms in Eq. 1 at (60N, 30 km) to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque. This shows that up to about day 15 the \(\partial \overline{u}/\partial t\) response here is driven by the torque and resisted by responses in the EP flux and residual circulation. After ~day 25, the EP flux convergence response tends to be larger than the torque, so the \(\partial \overline{u}/\partial t\) response tends to be negative, with the residual circulation response contributing positively. So it can be seen that the EP flux convergence response drives the easterly \(\overline{u}\) acceleration in the high-latitude lower stratosphere after ~day 25. These features of the responses are generally statistically significant (see stippling in Fig. 9). The local Rayleigh friction contribution is very small (not shown).
Ensemble-mean transient responses of \(\partial \overline{u}/\partial t\) (black), the torque (green), \(D_{F}\) (orange), the sum of \(D_{F}\) and the torque (red) and the acceleration associated with the residual circulation (blue) to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque under perpetual January conditions as a function of time since initiation of each branch run, interpolated to (60N, 30 km). The \(D_{F}\) response becomes larger than the torque after ~day 25, and this drives the \(\overline{u}\) response negative
The response of the terms in the TEM zonal momentum equation to other torques centred at 60N and polewards (not shown) is qualitatively similar, with the EP flux responding to approximately cancel the applied torque and converge more in the upper stratosphere from about day 10, bringing about deceleration of \(\overline{u}\). The EP flux convergence responses to the \((\phi _{0}=60\hbox {N},\,z_{0}=40\,\hbox {km})\) and \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km},\,\delta z=2.5\,\hbox {km})\) torques do not become larger than the torques - in these cases the acceleration associated with the residual circulation response opposes the torque at all times, and the EP flux convergence response just becomes temporarily large enough for the \(\partial \overline{u}/\partial t\) response to become negative.
In summary, under PJ conditions the responses to torques are strongly affected by feedbacks from the wave part of the circulation. For torques placed at 60N or polewards, these feedbacks cause an EP flux convergence response that opposes the direct effect of the torque. This convergence response causes the \(\overline{u}\) response to become negative in the polar lower stratosphere, and temporarily can become larger than the torque. Altogether this brings about NAM-like \(\overline{u}\) and GPH responses after a few weeks.
Responses in runs with a seasonal cycle
In this section, the response to a torque in the SC runs is examined and compared to that in PJ runs. Figure 11 shows the ensemble-mean transient \(\overline{u}\) response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque in SC branch runs up to day 60. The response in the first 25 days goes through approximately the same stages as that under PJ conditions in the first 15 days (Fig. 8), with acceleration initially at high latitudes and a negative \(\overline{u}\) response developing further south. However, the response remains positive in most of the polar stratosphere at later times.
As in Fig. 8 but for the transient \(\overline{u}\) response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque in runs with a seasonal cycle. The response does not become as NAM-like as in the perpetual January runs or easterly at high latitudes
The \(\overline{u}\) response also does not become clearly NAM-like within 90 days. There is some qualitative similarity between the days 41–60 \(\overline{u}\) response and the NAM signature (cf. Fig. 3a), with there being a meridional dipole in the response, and with the subtropical part sloping poleward with increasing height. However, the vertical structures of the response and the NAM signature are quite different at high latitudes. The subtropical part of the \(\overline{u}\) response is also larger than the high-latitude part, with the reverse being true for the NAM signature. The anomaly correlation between the days 41–60 \(\overline{u}\) response and the NAM signature of January–February mean \(\overline{u}\) in the standard control run is only \(-0.34\). \(\partial \overline{u}/\partial t\) is small by this time (not shown), so it seems unlikely that the response could become more NAM-like in the duration of a winter. Therefore the response is much less NAM-like in the SC experiments than in the PJ experiments.
Figure 12 shows the ensemble-mean transient response of the terms in the TEM zonal momentum equation to the same torque. Again this shows the same sequence of stages as for the PJ response (Fig. 9), with the torque initially causing \(\partial \overline{u}/\partial t\) to be greater, and a more equatorward middle and lower stratospheric residual circulation. The EP flux convergence increases in the region where the torque is strongest, and later it increases in the upper stratosphere, causing \(\overline{u}\) to decelerate there. However, the EP flux response is weaker and this sequence again unfolds more slowly than in the PJ case. It takes until days 16–20 for the EP flux response to largely cancel out the direct effect of the torque in the high-latitude middle stratosphere and to cause \(\partial \overline{u}/\partial t\) to become negative in some places north of 60N. Under PJ conditions these events happen during days 11–15. This difference may be because the zonal mean flow and EP flux co-evolve and respond to changes in each other—the feedbacks are weaker in the SC runs, which may make the time-derivatives of each smaller, so the sequence of changes proceeds more slowly. The EP flux convergence response in the high-latitude upper stratosphere also never becomes as large as it does under PJ conditions, and in the lower stratosphere north of ~70N it never becomes large enough to make \(\partial \overline{u}/\partial t\) negative, though it does temporarily become larger than the torque and make \(\partial \overline{u}/\partial t\) negative at (60N, 30 km) when averaged over days 26–60. This weaker EP flux response appears to be the reason why the \(\overline{u}\) response does not turn negative in the high-latitude lower and middle stratosphere in the SC runs, and hence why it does not appear NAM-like.
As in Fig. 9 but for the transient responses to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque in runs with a seasonal cycle. Contours in the first and third columns showing the torque are plotted slightly differently to those in Fig. 9, with contours at 0.05 and 0.2 m/s/day in the first row, at 0.45 and 1.8 m/s/day in the second row and at 0.5 and 2 m/s/day from the third row onwards. The responses are qualitatively similar to those shown in Fig. 9, but the EP flux response is weaker
The responses to the same torques that were used in the PJ experiments were examined. The responses to torques centred at 60N or poleward are qualitatively similar to the response to the \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque (not shown), and reversing the sign of the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque approximately reverses the response. The responses are also qualitatively similar for branch runs begun on October 22 rather than January 1, except that the EP flux response is weaker and the sequence unfolds more slowly (not shown), likely because wave forcing at the lower boundary is weaker at this time.
Understanding the wave response
To summarise Sects. 3.3 and 4, the transient responses to westerly torques centred in the high-latitude stratosphere in the PJ and SC runs each proceed through three stages (with the response to an easterly torque being similar but opposite in sign):
The torque causes acceleration of \(\overline{u}\) and induces EP flux convergence where the torque is strongest, and it drives a residual circulation that is equatorward in the stratosphere.
The EP flux becomes more upward and convergent in the extratropical upper stratosphere, causing \(\overline{u}\) here to decelerate and the residual circulation response to become poleward in the upper stratosphere.
The \(\overline{u}\) acceleration reduces to zero in the high-latitude stratosphere. In the case of the PJ runs it becomes temporarily strongly negative, due to EP flux convergence increasing, resulting in a very NAM-like \(\overline{u}\) response. The EP flux response is weaker in the SC runs, and this produces a very different long-term mean response that is less NAM-like.
It can be understood from the discussion in Sect. 1, and references therein, why the \(\partial \overline{u}/\partial t\) response tends to have the same sign as the sum of the torque and \(D_{F}\) response, and why the residual circulation responds such that the associated acceleration contribution tends to have the opposite sign (Figs. 9, 12). A complete explanation of the response must account for the change in the EP flux, however. In what follows we argue that the EP flux response, and the differences in the response between the PJ and SC experiments, can be understood using ray theory. This implies that these differences are largely due to differences in the \(\overline{u},\,\overline{q}_{\phi }\) and planetary wave climatologies in the two cases.
Figure 13a shows the transient response of the refractive index for stationary waves squared (\(n^{2}\), given by Eq. 3 using reference temperature \(T_{s}=240\,\hbox {K}\) and scale height \(H=6.95\,\hbox {km}\)) to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque after 1 day under PJ conditions, along with the zonal wavenumber-1 component of the EP flux response. We consider the different wave numbers separately because the predictions of ray theory apply to each wavenumber separately. The presented \(n^{2}\) response is a "trimmed mean" (Wilks 2006) of the responses over the ensemble members, where the top and bottom 10 values were excluded at each grid point to remove large outliers that arise after a few days at a few grid points because some ensemble members have small \(\overline{u}\) values, which makes \(n^{2}\) very sensitive to changes in \(\overline{u}\) and \(\overline{q}_{\phi }\). The important features identified in the following discussion are similar if the mean of the responses is used, and the features in the first few days can also be seen in the responses for individual ensemble members.
a Trimmed-mean response of the squared refractive index for stationary waves (\(n^{2}\), colours) and zonal wavenumber-1 EP flux (arrows) to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque under perpetual January conditions after 1 day. Hatching shows where trimmed-mean \(n^{2}\) for zonal wavenumber-1 is negative in the control run. The reference arrow to the left of the plot represents EP flux (5,000, 50) \(\hbox {kg/s}^{2}\). b Response of the meridional PV gradient \(\overline{q}_{\phi }\). c Contribution to the response in \(\overline{q}_{\phi }\) associated with the "\(u_{\phi \phi }\) term" \(-\overline{u}_{\phi \phi }/a\) and d the same for the "\(u_{zz}\) term" \(-af^{2}\overline{u}_{zz}/N^{2}\), which dominate the \(\overline{q}_{\phi }\) response. These responses are consistent with EP flux convergence increasing near 60N because \(\overline{q}_{\phi }\) increases and enhances planetary wave propagation into this region
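The diagnostics behind Fig. 13 are simple to reproduce. Below is a hedged numpy sketch: the expression for \(n^{2}\) is the standard stationary-wave form with constant buoyancy frequency N, which Eq. 3 of this paper is assumed to follow, and the trimming drops the 10 largest and 10 smallest ensemble values at each grid point before averaging.

```python
import numpy as np

A_E = 6.371e6
H = 6.95e3   # scale height quoted in the text (m)

def n_squared(lat, ubar, qphi, N, s=1):
    """Squared refractive index for a stationary wave of zonal
    wavenumber s (standard QG form, assumed equivalent to Eq. 3)."""
    f = 2.0 * 7.292e-5 * np.sin(lat)
    return (qphi / (A_E * ubar)
            - (s / (A_E * np.cos(lat)))**2
            - (f / (2.0 * N * H))**2)

def trimmed_mean(ens, n_trim=10, axis=0):
    """Mean over ensemble members after discarding the n_trim largest
    and n_trim smallest values at each grid point (cf. Wilks 2006)."""
    srt = np.sort(ens, axis=axis)
    idx = [slice(None)] * ens.ndim
    idx[axis] = slice(n_trim, ens.shape[axis] - n_trim)
    return srt[tuple(idx)].mean(axis=axis)
```

The division by \(\overline{u}\) makes clear why a few members with small \(\overline{u}\) produce the outliers that motivate the trimming.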
There is very good agreement between the structure of the \(n^{2}\) change and the wavenumber-1 EP flux response expected according to ray theory between ~40–90N and ~16–40 km on day 1. Between ~50–70N, \(n^{2}\) increases, and it decreases to the north and south. Correspondingly the EP flux below and to the north and south of (60N, 30 km) is directed more into this region, and wavenumber-1 EP flux convergence increases here (not shown). There is also a large increase in \(n^{2}\) south of ~30N in the stratosphere, though this is not associated with a substantial change in the EP flux. The wavenumber-1 EP flux response is similar to the full EP flux response (Fig. 9). The wavenumber-2 EP flux response (not shown) is also qualitatively similar.
The \(D_{F}\) response is not directly predicted by ray theory, but it may be anticipated that EP flux convergence will tend to increase where wave propagation increases, for example due to dissipative processes acting on the waves.
The refractive index response \(\delta n^{2}\) is initially approximately given by
$$\begin{aligned} \delta n^{2} \approx \frac{1}{a\overline{u}}\left( \delta \overline{q}_{\phi } - \frac{\overline{q}_{\phi }}{\overline{u}}\,\delta \overline{u}\right) , \end{aligned}$$
where \(\delta \overline{q}_{\phi }\) and \(\delta \overline{u}\) are the \(\overline{q}_{\phi }\) and \(\overline{u}\) responses respectively, derived from Eq. 3 by taking \(\delta \overline{q}_{\phi }/\overline{q}_{\phi }\) and \(\delta \overline{u}/\overline{u}\) to be small and noting that \(N^{2}\) is assumed to be constant. On day 1 in the extratropical lower and middle stratosphere, the largest contribution to \(\delta n^{2}\) is associated with \(\delta \overline{q}_{\phi }\) (Fig. 13b), with the \(\delta \overline{u}\) term providing a smaller, oppositely signed contribution (not shown), as can be inferred from Fig. 8. \(\delta \overline{q}_{\phi }\) arises mostly from a change in \(\overline{u}_{\phi \phi }\) and partly from a change in \(\overline{u}_{zz}\). These terms decrease where \(\overline{u}\) is being accelerated most rapidly by the direct effect of the torque around (60N, 30 km), and the terms \(-\overline{u}_{\phi \phi }/a\) and \(-af^{2}\overline{u}_{zz}/N^{2}\) in the expansion of Eq. 4 are associated with most of the increase in \(\overline{q}_{\phi }\) here (Fig. 13c, d). Therefore \(n^{2}\) increases here mainly because the curvature of \(\overline{u}\) becomes more negative as a "nose" is pushed out in the \(\overline{u}\) profile around (60N, 30 km) (top left panel of Fig. 8). \(\overline{u}_{\phi \phi }\) increases (so \(-\overline{u}_{\phi \phi }/a\) decreases) to the north and south of this region and \(\overline{u}_{zz}\) increases (so \(-af^{2}\overline{u}_{zz}/N^{2}\) decreases) above and below, giving negative contributions to \(\delta \overline{q}_{\phi }\).
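The two curvature contributions identified above can be checked directly by finite differences. A minimal sketch, assuming \(\overline{u}\) is held on a regular latitude-height grid and \(N^{2}\) is constant as in the model; the differencing scheme is an illustrative choice:

```python
import numpy as np

A_E = 6.371e6

def qphi_curvature_terms(lat, z, ubar, N2):
    """The "u_phiphi" and "u_zz" contributions to the meridional QG PV
    gradient, -u_phiphi/a and -a*f^2*u_zz/N^2 (expansion of Eq. 4).

    lat in radians, shape (ny,); z in metres, shape (nz,);
    ubar shape (nz, ny); N2 constant (1/s^2)."""
    f = 2.0 * 7.292e-5 * np.sin(lat)
    u_phiphi = np.gradient(np.gradient(ubar, lat, axis=1), lat, axis=1)
    u_zz = np.gradient(np.gradient(ubar, z, axis=0), z, axis=0)
    return -u_phiphi / A_E, -A_E * f**2 * u_zz / N2
```

Applied to the \(\overline{u}\) responses of Fig. 8, these terms should in principle reproduce the dipole patterns of Fig. 13c, d.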
There is a good correspondence between the extratropical lower and middle stratospheric transient EP flux and refractive index responses in the first few days for all the torques in the PJ simulations (not shown).
In the SC simulations, the EP flux response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque in the first few days is also consistent with changes in the refractive index (not shown) caused by changes in the QG PV gradient, with the peak refractive index change after one day being about half that in the PJ runs shown in Fig. 13. This is because the climatological \(\overline{u}\) (Figs. 2a, 4a) and \(\overline{q}_{\phi }\) (Figs. 2e, 5b) are both about twice as large near (60N, 30 km) in the SC control run as in the PJ control run, and \(\delta \overline{u}\) and \(\delta \overline{q}_{\phi }\) are similar (which is expected since the direct \(\overline{u}\) response to the torques ought to be similar). So by Eq. 5, \(\delta n^{2}\) is half as large in the SC runs. Therefore it seems that the initial mean EP flux response is weaker in the SC runs because the vortex is stronger and further poleward on average.
On days 6–10, the PJ \(n^{2}\) response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque (Fig. 14) is largely qualitatively similar to that on day 1, though it has become positive in the stratosphere near the pole, due to the \(\overline{q}_{\phi }\) response becoming positive there (not shown). The upward EP flux response near 60N in the lower stratosphere is still qualitatively consistent with the \(n^{2}\) response and ray theory.
The trimmed-mean response of the squared refractive index for stationary waves (\(n^{2}\), colours) and zonal wavenumber-1 EP flux (arrows) to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque under perpetual January conditions, averaged over days 6–10 of the branch runs. Hatching shows where trimmed-mean \(n^{2}\) for zonal wavenumber-1 is negative in the control run. The reference arrow to the left of the plot represents EP flux \((10^{5},10^{3})\,\hbox {kg/s}^{2}\). The high-latitude lower stratospheric upward EP flux response is consistent with the positive \(n^{2}\) response and ray theory
The first stage of the transient response to a westerly torque can therefore be explained by the torque directly increasing \(\overline{q}_{\phi }\) by affecting the curvature of \(\overline{u}\), so that Rossby waves propagate more into the region where the torque is applied, which increases EP flux convergence here.
In the second stage of the response, the EP flux response becomes strongly upward and equatorward above ~35 km, after ~day 11 in the PJ simulations (Fig. 9). However, \(\delta n^{2}\) remains negative in the mid-latitude upper stratosphere, and is both positive and negative at different points in the high-latitude upper stratosphere (not shown). Therefore it is not immediately clear that the refractive index diagnostic is useful for explaining this stage. We show in the following analysis, though, that the response is still consistent with ray theory once changes in the wave propagation from below are accounted for.
Figure 15 shows the high-altitude response of \(\widetilde{D_{F}}=2\pi a^{2} \cos (\phi ) \nabla \cdot \mathbf {F}\) averaged over days 11–25 plotted against the 45–80N \(F^{z}\) response \((\delta F^{z})\) at 30 km averaged over days 6–10, for all the torques in the PJ and SC experiments. The \(\widetilde{D_{F}}\) response is integrated over 30–80N and 35–50 km. The integral of \(\delta F^{z}\) is defined as
$$\begin{aligned} \int _{45\mathrm {N}}^{80\mathrm {N}} 2 \pi a^{2} \cos (\phi ) \delta F^{z}|_{z=30 \, \mathrm {km}} \, \mathrm {d}\phi . \end{aligned}$$
This expression follows from applying Stokes' theorem to equations 2.5 and 2.6 of Dunkerton et al. (1981) to infer the negative contribution of \(\delta F^{z}\) to the response of \(\widetilde{D_{F}}\) integrated within a closed surface of which the surface at 30 km between 45–80N forms a part. The day 11–25 period includes that of greatest \(\overline{u}\) deceleration and EP flux convergence responses in the upper stratosphere in both PJ and SC runs (Figs. 9, 12).
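Numerically, the integral above is a one-dimensional quadrature in latitude. A minimal sketch, assuming \(\delta F^{z}\) has already been interpolated to the 30 km level:

```python
import numpy as np

A_E = 6.371e6

def integrated_dFz(lat_deg, dFz_30km, lat_min=45.0, lat_max=80.0):
    """Integral of 2*pi*a^2*cos(phi)*dF^z over [lat_min, lat_max]
    (degrees) at z = 30 km, following the expression in the text."""
    phi = np.deg2rad(lat_deg)
    mask = (lat_deg >= lat_min) & (lat_deg <= lat_max)
    integrand = 2.0 * np.pi * A_E**2 * np.cos(phi) * dFz_30km
    return np.trapz(integrand[mask], phi[mask])
```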
The ensemble and day 11–25 mean responses in \(\widetilde{D_{F}}\) integrated over 30–80N and 35–50 km plotted against the ensemble and day 6–10 mean \(F^{z}\) responses at 30 km integrated between 45 and 80N, for all the torques in the perpetual January (red) and seasonal cycle (blue) experiments. The solid lines are the least-squares linear fits to each set of experiments. Error bars show the 2.5th–97.5th percentiles of the distribution of the mean according to an MC bootstrap estimate. Unless otherwise specified in the legend, the torques peak at \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\), have meridional scale \(\delta \phi = 11^{\circ }\) and vertical scale \(\delta z=10\,\hbox {km}\), and have a magnitude such that the total zonal momentum added to the NH equals that of the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque with peak magnitude 2.5 m/s/day. These results are consistent with enhanced upward planetary wave propagation into the high-latitude lower stratosphere leading to increased propagation into the upper stratosphere at later times with consequent greater EP flux convergence, with this wave response being qualitatively similar but weaker in runs with a seasonal cycle than in perpetual January runs
The high-altitude \(\widetilde{D_{F}}\) response during days 11–25 is more negative for torques that cause a greater \(F^{z}\) response in the middle stratosphere between days 6–10, with correlations of \(-0.96\) and \(-0.93\) across the PJ and SC experiments respectively. As argued previously, the \(F^{z}\) response between days 6–10 seems to be due to torques increasing \(\overline{q}_{\phi }\). The regression coefficient of the mean \(\widetilde{D_{F}}\) response against the mean \(F^{z}\) response is very similar in each set of simulations. This indicates that the high-altitude \(\widetilde{D_{F}}\) response in the SC experiments is weaker than in the PJ experiments primarily because the earlier \(F^{z}\) response is weaker.
Physically, this is consistent with the explanation that the mid-stratospheric upward EP flux response in days 6–10 is associated with propagating waves with an upward group velocity, with an associated transfer of easterly momentum into the upper stratosphere at later times, resulting in deceleration of \(\overline{u}\) that is enhanced by the lower air density here. The results also suggest that the extratropical response to the \((\phi _{0}=30\hbox {N},\,z_{0}=30\,\hbox {km})\) and \((\phi _{0}=45\hbox {N},\,z_{0}=30\,\hbox {km})\) torques is weak (Fig. 6) because these torques are too far south to cause substantial enhancement of upward wave propagation at high latitudes.
In the third phase of the response, the PJ \(D_{F}\) response in the high-latitude stratosphere becomes large enough to cause the \(\overline{u}\) response to turn negative (Fig. 8), but this does not occur in the SC experiments (Fig. 11). The high-latitude \(\widetilde{D_{F}}\) response between days 21–40, which is the period when it drives the high-latitude mid-stratospheric \(\overline{u}\) response to become negative in the PJ runs (Fig. 8), is well correlated with the \(\widetilde{D_{F}}\) response in the same place in days 6–10 in both sets of runs (Fig. 17). So the stronger \(\widetilde{D_{F}}\) response over days 21–40 in the PJ runs is related to their stronger initial response.
The \(n^{2}\) response to westerly torques continues to be positive in the high-latitude lower and middle stratosphere in both the PJ and SC runs past day 10 (Fig. 16a). This is partly because \(\overline{q}_{\phi }\) is increased near where the torque is strongest and where it causes more negative curvature of \(\overline{u}\) (Fig. 16b). \(\delta n^{2}\) continues to be larger in the PJ runs than in the SC runs (not shown), and would be expected to lead to greater EP flux convergence in the extratropics in the PJ runs, as in the first stage of the response. This may be what allows the overall \(\overline{u}\) response to turn easterly in the PJ runs. \(\delta n^{2}\) differs between the PJ and SC runs in this period in part because their control climatologies are different, as in the first stage of the response, which may account for why the early and late \(\widetilde{D_{F}}\) responses are closely related (Fig. 17).
a Trimmed-mean response of the squared refractive index for stationary waves \((n^{2})\) to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque under perpetual January conditions, averaged over days 21–60 of the branch runs. b Corresponding mean response of the meridional PV gradient \(\overline{q}_{\phi }\). The torque increases \(\overline{q}_{\phi }\) in the lower and middle stratosphere near ~45–60N, contributing to \(n^{2}\) being larger, encouraging wave propagation into the stratosphere in the long term
The ensemble and day 21–40 mean responses in \(\widetilde{D_{F}}\) integrated over 45–80N and 16–40 km plotted against the day 6–10 mean \(\widetilde{D_{F}}\) responses in the same region, for all the torques in the perpetual January (red) and seasonal cycle experiments (blue), plotted as in Fig. 15. There is a close relationship between the \(\widetilde{D_{F}}\) responses at early and late times, and the weaker days 6–10 response in the seasonal cycle runs compared to that in the perpetual January runs is related to the weaker response in days 21–40, and hence to the \(\overline{u}\) response being NAM-like in the perpetual January runs but not in those with a seasonal cycle
In summary, the wave feedbacks are qualitatively similar in the PJ and SC experiments, with the difference just being their strength. The feedbacks are consistent with ray theory, which predicts that as a westerly torque increases \(\overline{q}_{\phi }\) at high latitudes, wave propagation into the stratosphere will increase. Ray theory indicates that the differences between the PJ and SC feedback strengths arise due to differences in the control run \(\overline{u}\) climatologies, and possibly also differences in the planetary wave climatologies. Ray theory may also explain why the PJ \(\overline{u}\) response to torques with smaller meridional and height scales is larger (Fig. 6), as these have larger direct effects on \(\overline{q}_{\phi }\), so the wave response is larger. As the response to a torque tends towards its steady state, however, changes in the mesosphere and phenomena such as wave reflection could complicate the details of the wave behaviour, and this has not been fully investigated here, though it seems unlikely to be of primary importance.
Cohen et al. (2013) argued that increased convergence of the EP flux associated with resolved waves in the region of an applied torque is necessary, because otherwise the flow would eventually become unstable due to \(\overline{q}_{\phi }\) changing sign. The fact that in our experiments the EP flux becomes more convergent in the region where the torques peak within a day, for torques centred at 45N or poleward (shown for the \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque in Figs. 9, 12), implies, though, that wave feedbacks begin counteracting the torques before instability develops. \(\overline{q}_{\phi }\) remains positive in our runs with applied torques, except very near 90N (not shown). This is consistent with the analysis of Cohen et al. (2014) and Sigmond and Shepherd (2014).
Comparison of the PJ response to 100 hPa GPH forcing and torques
In order to further test the idea that an arbitrary extratropical forcing will tend to give rise to an NAM-like response if planetary wave feedbacks are large enough, Fig. 18 shows the steady state \(\overline{u}\) differences between two PJ SMM runs without applied torques but with different prescribed 100 hPa GPH. The first is the "standard" PJ control run (Sect. 2.2.1). The second is a similar run with the eddy component of HadGEM2-CCS 100 hPa GPH multiplied by 1.5 rather than 2. In other words, the \(\overline{u}\) differences in Fig. 18 are due to a steady forcing by a 100 hPa GPH perturbation equal to a quarter that shown in Fig. 1b. This could represent an increased planetary wave forcing on the vortex due to a change in the tropospheric state.
The steady state \(\overline{u}\) response to increased planetary wave forcing at 100 hPa under perpetual January conditions. The number in the top left corner is the anomaly correlation north of 20N between the \(\overline{u}\) response and its NAM signature in the standard control run (Fig. 3a). Data are not plotted where they are not statistically significant above the 95 % level according to the test described in Sect. 2.3. The response is very NAM-like, similar to the responses to torques in Fig. 6, supporting the idea that arbitrary high-latitude forcings will give NAM-like steady state responses when feedbacks are strong enough
The \(\overline{u}\) response in Fig. 18 closely resembles the NAM \(\overline{u}\) signature and the steady state \(\overline{u}\) responses to torques centred at 60N or poleward (Figs. 3a, 6), with an anomaly correlation of 0.91 with the NAM signature (calculated in the same way as in Sect. 3.1). The easterly high-latitude \(\overline{u}\) response in the steady state is the same sign as the expected direct effect of increased wave forcing, so wave feedbacks do not always act to make the steady state response opposite in sign to the direct effect of a forcing in the PJ simulations.
Therefore the circulation response to increased planetary wave forcing in the PJ simulations is also NAM-like, and this adds further support to the idea that feedbacks will give a NAM-like response to an arbitrary forcing in the high-latitude stratosphere when they are sufficiently strong.
Overall, our results are generally consistent with those of other modelling studies. The approximate cancellation of the direct effect of an applied torque placed in the stratosphere by the planetary wave response in the steady state was also seen by Cohen et al. (2013, 2014) in both primitive equation and more comprehensive models with an interactive troposphere. Sigmond and Shepherd (2014) also found that this occurred in a GCM, albeit with incomplete cancellation at high latitudes. A similar effect was also seen by McLandress and McFarlane (1993) and Manzini and McFarlane (1998) in experiments on parameterised GWD, but was not investigated in depth.
However, the easterly response of the high-latitude stratospheric \(\overline{u}\) to a westerly torque seen in the PJ experiments has not been previously reported, though there are parallels with other systems: in tropospheric experiments, a similar reversal of the \(\overline{u}\) response to broad tropical heating by wave feedbacks was found by Sun et al. (2013), and Palmer (1999) showed that the response can have the opposite sign to an applied forcing in the Lorenz (1963) system. The studies of Song and Robinson (2004) and Chen and Zurita-Gotor (2008) indicate that the stratospheric \(\overline{u}\) response has the same sign as the torque in models with weak stationary wave forcing. However, the strong EP flux response in the results presented here indicates that the stationary wave forcing is very important for bringing about the steady state responses in the SMM simulations. In an additional PJ SMM experiment, with the imposed 100 hPa GPH wave amplitude reduced to a quarter of that in the PJ experiments described in Sect. 2.2.1, so that planetary wave activity is substantially lessened, the steady state \(\overline{u}\) response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) westerly torque is also westerly at high latitudes (not shown).
The easterly responses to westerly torques in the PJ runs can be understood as being due to the torque having two opposing effects on \(\overline{u}\): to directly make the flow more westerly, and to increase the meridional PV gradient so that planetary wave propagation into the stratosphere increases, which leads to easterly acceleration. There is nothing restricting the latter effect to be smaller than the former. Wave propagation into the stratosphere stays enhanced even after the \(\overline{u}\) response has become negative in the PJ runs because the \(D_{F}\) response is broader meridionally than the torque (bottom row of Fig. 9), making the total acceleration less negative where the torque peaks, affecting the curvature of \(\overline{u}\). As a result, the response of the meridional PV gradient to the torque is always positive around ~45–60N (Fig. 16b), enhancing upward wave propagation.
Cohen et al. (2013) find in a model with substantial stationary wave forcing that an easterly torque produces an easterly \(\overline{u}\) response at high latitudes. This is consistent with our results in the SC runs (Sect. 4) and is consistent with our analysis if their model simulates a weaker wave response to the applied torque than that in our PJ experiments.
The consistency we have found between the planetary wave responses to applied torques and ray theory is in accordance with previous studies that have shown that ray theory appears to be successful at explaining the tropospheric response to lower stratospheric heating (Simpson et al. 2009) and the extratropical stratospheric response to QBO forcing (Garfinkel et al. 2012). This provides evidence that ray theory is generally useful for understanding the responses to applied forcings in the stratosphere (though this does not preclude other frameworks from also being useful, such as the non-linear approach of O'Neill and Pope (1988), or the PV-based approaches of Cohen et al. (2014) and Scott and Liu (2014)).
Further tests exploring the reasons for differences between perpetual January and seasonal cycle runs
The analysis of Sect. 5 suggests that the long-term response to a torque is NAM-like in the PJ runs but less so in the SC runs because planetary wave feedbacks are stronger in the former. Additional possible explanations were also explored. For example, Son and Lee (2006) found that the tropospheric responses to heating perturbations are more AM-like when the leading circulation EOF dominates the variability more. However, we did not find that the first EOF of GPH explains very different fractions of the total variability in the PJ and SC control runs. Another possible explanation is suggested by the fluctuation-dissipation theorem, which predicts that the size of the response of the leading principal component (PC) to an applied forcing is inversely proportional to that PC's autocorrelation time scale (Leith 1975), but we found that this time scale was not very different between the PJ and SC control runs.
To test the effect of removing daily 100 hPa GPH variability in the SC runs whilst retaining the seasonal cycle, we also performed an additional SC experiment, but prescribed monthly-mean rather than daily-mean 100 hPa GPH, with the eddy component multiplied by a factor of 2 as in the PJ runs. The \(\overline{u}\) climatology and long-term mean response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque (not shown) were similar to those in the PJ runs, with the response being NAM-like and easterly at high latitudes. This supports the previous analysis showing that the response depends on the \(\overline{u}\) climatology, as the response changed to become like that in the PJ experiments at the same time as the \(\overline{u}\) climatology changed to become like that in the PJ control run. This experiment also shows that the presence of seasonally-varying boundary conditions does not prevent the response from becoming NAM-like.
We performed a further SC experiment with daily variability of 100 hPa GPH, but with the eddy component multiplied by a factor of 1.5 (the SMM became unstable with eddy amplitudes scaled by a factor of 2). The \(\overline{u}\) climatology and response to the standard \((\phi _{0}=60\hbox {N},\,z_{0}=30\,\hbox {km})\) torque were similar to those in simulations with no scaling of the eddy component (Sect. 4), and the \(\overline{u}\) response was not NAM-like. Therefore it does not seem that the difference between the responses to torques in SC and PJ simulations is due to the mean wave amplitude being larger in the latter. This is also consistent with the analysis showing that the response depends on the \(\overline{u}\) climatology, as neither changed much.
By the process of elimination, the presence of high-frequency variability in the imposed 100 hPa GPH in the SC runs therefore seems to be an important factor that causes the \(\overline{u}\) climatology and responses to applied torques to differ from those in the PJ runs. However, it is difficult to directly test this because it is not clear how to include realistic daily 100 hPa GPH variability in the PJ runs to see if this makes them more like the SC runs.
Sources of error in the SMM
It is emphasised that the SMM is a simplified model that is only expected to show behaviour that is qualitatively similar to the true stratospheric dynamics. In particular, imposing GPH at 100 hPa and parameterising GWD using Rayleigh friction may introduce unrealistic effects. However, Fig. 6 indicates that the \(\overline{u}\) response to torques is qualitatively similar for torques placed at different heights above the bottom boundary, indicating that the presence of the bottom boundary does not strongly affect the response. The presented responses will not include feedbacks from the tropospheric response to the stratospheric circulation change, though. Feedbacks from changes in gravity waves are also not properly represented, though it is not expected that the qualitative nature of the results presented here would change if a more realistic gravity wave parameterisation were included, as the contribution from planetary waves usually dominates the stratospheric wave drag. As noted in Sect. 2.1, greatly weakening the NH Rayleigh friction was not found to alter the qualitative nature of the stratospheric responses to the applied torques, indicating that the main results of this study are not strongly affected by including this in the SMM.
We have investigated the NH wintertime steady state and transient circulation responses to applied zonally symmetric zonal torques that were simulated by a primitive equation model of the middle atmosphere, using both idealised perpetual January (PJ) boundary conditions and a more realistic seasonal cycle (SC). This is relevant for understanding how the extratropical stratosphere responds to perturbations to the zonal mean circulation made directly by gravity wave parameterisations and also external forcings, such as the QBO, ENSO, solar cycle and volcanic eruptions. Our experiments indicate that feedbacks from the wave part of the extratropical circulation have a substantial effect on the spatial structure of the overall responses, and can even determine their sign. Therefore it seems necessary to understand these feedbacks to understand the extratropical stratospheric response to such forcings.
For both PJ and SC boundary conditions, westerly torques placed in the high-latitude NH stratosphere have two main effects: to directly accelerate \(\overline{u}\), and to cause greater upward EP flux, consistent with upward propagation of large-scale waves being enhanced because the torque increases the meridional PV gradient, consistent with ray theory. This wave feedback leads to increased EP flux convergence where the torque is applied, and this temporarily drives the overall zonal acceleration to become opposite in sign to the direct effect of the torque after about a couple of weeks (Sects. 3.3 and 4). This is possible because there is nothing to restrict the easterly acceleration due to the wave response to be smaller than the direct westerly acceleration of the torque.
In the PJ runs, the wave feedback can cause an easterly high-latitude long-term mean \(\overline{u}\) response to a westerly torque. This happens because the \(D_{F}\) response during and after this stage is broader meridionally than the torques we used (bottom row of Fig. 9), so the total acceleration is less negative near where the torque strength peaks, affecting the curvature of \(\overline{u}\). Consequently the response of the meridional PV gradient to the torque is always positive (Fig. 16b) and wave propagation continues to be enhanced even as the \(\overline{u}\) response reduces to zero, so the \(\overline{u}\) response becomes easterly.
The \(D_{F}\) response is roughly equal and opposite to most torques in the steady state in both the PJ and SC simulations. The EP flux convergence also increases in the upper stratosphere, decelerating the upper stratospheric \(\overline{u}\). The residual circulation response to applied torques (Sects. 3.3 and 4) appears very different to the expected response to an applied torque when the EP flux convergence is held fixed (e.g. Haynes et al. 1991), which is important for understanding how changes in gravity wave parameterisations affect the meridional circulation (Cohen et al. 2013, 2014).
In the PJ runs, the long-term responses of \(\overline{u}\) and GPH to high-latitude torques are generally NAM-like (Sect. 3.1), due to the strong wave feedbacks. These stratospheric NAM-like responses are analogous to the NAM-like responses of tropospheric models to various forcings (e.g. Son and Lee 2006; Ring and Plumb 2007, 2008; Branstator and Selten 2009) and the responses of the Lorenz (1963) system to applied forcings (Palmer 1999; Palmer and Weisheimer 2011). This indicates that stratospheric and tropospheric dynamics are similar in this regard and that this behaviour could be quite general. This broadly supports the suggestion of Watson and Gray (2014) that feedbacks cause the long-term NH extratropical stratospheric response to many forcings to be NAM-like, as long as the feedbacks are strong enough. This may explain why the observed responses of the polar vortex to the QBO, ENSO and solar cycle influences are NAM-like (e.g. Dunkerton and Baldwin 1991; Sassi et al. 2004; Labitzke 2005; Ruzmaikin et al. 2005; Watson and Gray 2014), supposing that planetary wave feedbacks are sufficiently strong in the real stratosphere. This is also supported by the fact that the response to 100 hPa GPH wave forcing in the SMM is also NAM-like, though feedbacks do not reverse the sign of the response in this case (Sect. 6). However, torques placed in middle latitudes do not produce a NAM-like response (Sect. 3.1), because it seems they do not strongly affect wave propagation into the stratosphere (Sect. 5, though we have not investigated the responses to these torques in detail). This is perhaps because their main effect is on wave breaking in middle latitudes (Cohen et al. 2014).
The long-term responses in SC runs are not very NAM-like, however, and the high-latitude \(\overline{u}\) responses are the same sign as the torque (Sect. 4). This is because the wave feedbacks are weaker than those in the PJ experiments, though qualitatively similar, consistent with ray theory and differences in the climatological \(\overline{u}\) between the SC and PJ runs (Sect. 5). The model may not simulate the overall magnitude of the feedbacks accurately, since the climatological EP flux is too weak (Sect. 2.2.2) and feedbacks associated with gravity waves and the tropospheric response were not included. This means that it is not clear from these experiments which of the PJ and SC simulations predicts the long-term responses more accurately. It would therefore also be interesting to examine the stratospheric response to torques in a more realistic model.
The overall picture that emerges is that wave feedbacks act to make the response to torques at high latitudes NAM-like on long time scales, but the total response will only become NAM-like if the feedbacks are strong enough. Therefore the effect of feedbacks on long time scales seems quite conceptually straightforward relative to the detailed physics of what happens shortly after a torque is applied.
Since the sign of the \(\overline{u}\) response differs between the PJ and SC experiments, our results indicate that models must simulate the magnitude of the feedbacks quantitatively accurately in order to simulate a long-term response that is even qualitatively accurate. This is likely to also apply to forcings other than torques. According to ray theory, biases in the \(\overline{u}\) climatology and in the representation of tropospheric wave sources will lead to biases in the simulated strength of wave feedbacks. For example, too large a climatological \(\overline{u}\) and \(\overline{q}_{\phi }\) would be expected to make the wave feedbacks in response to given \(\overline{u}\) and \(\overline{q}_{\phi }\) perturbations too weak (Sect. 5). Comparing a model's EP flux to that in observations may also be particularly useful for assessing a model's ability to accurately simulate planetary waves and therefore its ability to simulate the wave feedbacks that shape the response to external forcings. Our results are reminiscent of the findings of Sigmond and Scinocca (2010), who showed that the tropospheric response to a doubling of \(\hbox {CO}_{2}\) is quite sensitive to the \(\overline{u}\) climatology.
The demonstration that the steady state stratospheric response to a forcing may have the opposite sign to the forcing (Sect. 3.1) has important implications for studies of the mechanisms by which external forcings influence the polar vortex—in principle it could be the case that the direct effect of a forcing has the opposite sign to the long-term mean response. As far as we are aware, this possibility has not been considered in any previous studies of the effect on the vortex of forcings such as the QBO, ENSO and the solar cycle. Feedbacks may greatly modify the response from what is expected based on simple arguments. It also highlights the difficulties of using diagnostics such as composite differences to understand forcing mechanisms, since these may be dominated by the effects of feedback processes (Watson and Gray 2014).
The implication that the extratropical stratospheric response to an external forcing is affected by the climatology may be relevant for understanding non-linearity in the way different forcings combine to affect the polar vortex, such as the suggested non-linear combined influence of the QBO and ENSO (Garfinkel and Hartmann 2007; Wei et al. 2007) and of the QBO and solar cycle (e.g. Labitzke 1987; Matthes et al. 2004). When one forcing affects the background circulation, this would be expected to change the circulation response to other forcings, and this effect may contribute to the reported non-linearities.
Our results provide further evidence that ray theory can be helpful for understanding the stratospheric extratropical response to external forcings, though other physical explanations may also be consistent with the results presented here. It is not altogether clear why strong wave feedbacks result in a circulation response that is similar to the NAM. Deeper analysis, beyond the scope of this work, may indicate if the NAM-like response occurs, for example, due to regime behaviour (Palmer 1999), or to NAM-like anomalies having a long decay time scale (Branstator and Selten 2009).
Our work suggests that it may be helpful to examine the transient responses to applied forcings in the troposphere to better understand whether feedbacks generally tend to make the response AM-like there, rather than just the long-term mean responses that have been the focus of previous work that we are aware of (e.g. Son and Lee 2006; Ring and Plumb 2007, 2008; Woollings 2008; Branstator and Selten 2009). In cases when the long-term response to a forcing is not AM-like, the transient response can be used to distinguish between the possibilities that the feedback processes are indeed different compared to situations when the response is AM-like or that the feedbacks are similar but simply weaker.
"Wave" is used here in general to mean any zonally asymmetric component of the circulation, which may not be a propagating structure.
The anomaly correlation between 2D anomaly patterns \(\mathbf {x}\) and \(\mathbf {y}\) is defined here as
$$\begin{aligned} \frac{\sum _{i=1}^{n_{i}} \sum _{j=1}^{n_{j}} w_{ij} x_{ij} y_{ij}}{\sqrt{\sum _{i=1}^{n_{i}} \sum _{j=1}^{n_{j}} w_{ij} x_{ij}^{2}} \sqrt{\sum _{i=1}^{n_{i}} \sum _{j=1}^{n_{j}} w_{ij} y_{ij}^{2}}}, \end{aligned}$$
where \(n_{i}\) and \(n_{j}\) are the number of gridpoints along each respective dimension and \(w_{ij}\) is the cosine of the latitude (the results are not sensitive to varying this weighting function).
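This weighted correlation translates directly into a few lines of numpy; here the latitude axis is assumed to be the last dimension of the anomaly arrays.

```python
import numpy as np

def anomaly_correlation(x, y, lat_deg):
    """Cosine-latitude-weighted anomaly correlation between two 2D
    anomaly patterns x and y (latitude along the last axis)."""
    w = np.cos(np.deg2rad(lat_deg))   # w_ij depends on latitude only
    num = np.sum(w * x * y)
    den = np.sqrt(np.sum(w * x**2) * np.sum(w * y**2))
    return num / den
```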
Andrews DG, Holton JR, Leovy CB (1987) Middle atmosphere dynamics. Academic Press, London
Branstator G, Selten F (2009) "Modes of variability" and climate change. J Clim 22(10):2639–2658
Butchart N, Clough SA, Palmer TN, Trevelyan PJ (1982) Simulations of an observed stratospheric warming with quasigeostrophic refractive index as a model diagnostic. Q J R Meteorol Soc 108(457):475–502
Charlton AJ, Polvani LM (2007) A new look at stratospheric sudden warmings. Part I: climatology and modeling benchmarks. J Clim 20:449–469
Chen G, Zurita-Gotor P (2008) The tropospheric jet response to prescribed zonal forcing in an idealized atmospheric model. J Atmos Sci 65(7):2254–2271
Cohen NY, Gerber EP, Bühler O (2013) Compensation between resolved and unresolved wave driving in the stratosphere: implications for downward control. J Atmos Sci 70:3780–3798
Cohen NY, Gerber EP, Bühler O (2014) What drives the Brewer–Dobson circulation? J Atmos Sci 71:3837–3855. doi:10.1175/JAS-D-14-0021.1
Dunkerton T, Hsu CPF, McIntyre ME (1981) Some Eulerian and Lagrangian diagnostics for a model stratospheric warming. J Atmos Sci 38(4):819–844
Dunkerton TJ, Baldwin MP (1991) Quasi-biennial modulation of planetary-wave fluxes in the Northern Hemisphere winter. J Atmos Sci 48(8):1043–1061
Eliassen A (1951) Slow thermally or frictionally controlled meridional circulation in a circular vortex. Astrophys Nor 5:19
Fairlie TDA, Fisher M, O'Neill A (1990) The development of narrow baroclinic zones and other small-scale structure in the stratosphere during simulated major warmings. Q J R Meteorol Soc 116(492):287–315
Fisher M, O'Neill A, Sutton R (1993) Rapid descent of mesospheric air into the stratospheric polar vortex. Geophys Res Lett 20(12):1267–1270
Garcia RR (1987) On the mean meridional circulation of the middle atmosphere. J Atmos Sci 44(24):3599–3609
Garfinkel CI, Hartmann DL (2007) The effects of the quasi-biennial oscillation and the El Nino Southern Oscillation on polar temperatures in the stratosphere. J Geophys Res 112:D19112. doi:10.1029/2007JD008481
Garfinkel CI, Shaw TA, Hartmann DL, Waugh DW (2012) Does the Holton–Tan mechanism explain how the quasi-biennial oscillation modulates the Arctic polar vortex? J Atmos Sci 69(5):1713–1733
Gray LJ, Drysdale EF, Lawrence BN, Dunkerton TJ (2001) Model studies of the interannual variability of the northern-hemisphere stratospheric winter circulation: the role of the quasi-biennial oscillation. Q J R Meteorol Soc 127(574):1413–1432
Gray LJ, Sparrow S, Juckes M, O'Neill A, Andrews DG (2003) Flow regimes in the winter stratosphere of the Northern Hemisphere. Q J R Meteorol Soc 129(589):925–945
Gray LJ, Crooks S, Pascoe C, Sparrow S, Palmer M (2004) Solar and QBO influences on the timing of stratospheric sudden warmings. J Atmos Sci 61(23):2777–2796
Haynes PH, McIntyre ME, Shepherd TG, Marks CJ, Shine K (1991) On the 'downward control' of extratropical diabatic circulations by eddy-induced mean zonal forces. J Atmos Sci 48(4):651–679
Holton JR (1984) The generation of mesospheric planetary waves by zonally asymmetric gravity wave breaking. J Atmos Sci 41(23):3427–3430
Holton JR, Haynes PH, McIntyre ME, Douglass AR, Rood RB, Pfister L (1995) Stratosphere–troposphere exchange. Rev Geophys 33:403–439
Karoly DJ, Hoskins BJ (1982) Three dimensional propagation of planetary waves. J Meteor Soc Japan 60:109–123
Kodera K (1995) On the origin and nature of the interannual variability of the winter stratospheric circulation in the Northern-hemisphere. J Geophys Res 100(D7):14,077–14,087
Kushner PJ (2010) Annular modes of the troposphere and stratosphere. Stratos Dyn Transp Chem Geophys Monogr Ser 190:59–91
Labitzke K (1987) Sunspots, the QBO, and the stratospheric temperature in the north polar region. Geophys Res Lett 14(5):535–537
Labitzke K (2005) On the solar cycle-QBO relationship: a summary. J Atmos Solar Terr Phys 67(1–2):45–54
Leith CE (1975) Climate response and fluctuation dissipation. J Atmos Sci 32(10):2022–2026
Lorenz EN (1963) Deterministic nonperiodic flow. J Atmos Sci 20(2):130–141
Manzini E, McFarlane NA (1998) The effect of varying the source spectrum of a gravity wave parameterization in a middle atmosphere general circulation model. J Geophys Res 103(D24):31,523–31,539
Matsuno T (1970) Vertical propagation of stationary planetary waves in the winter Northern Hemisphere. J Atmos Sci 27:871–883
Matthes K, Langematz U, Gray LJ, Kodera K, Labitzke K (2004) Improved 11-year solar signal in the Freie Universität Berlin climate middle atmosphere model (FUB-CMAM). J Geophys Res 109:D06101. doi:10.1029/2003JD004012
McIntyre ME (1982) How well do we understand the dynamics of stratospheric warmings? J Meteor Soc Japan 60:37–65
McLandress C, McFarlane NA (1993) Interactions between orographic gravity wave drag and forced stationary planetary waves in the winter northern hemisphere middle atmosphere. J Atmos Sci 50:1966–1990
Mitchell DM, Osprey SM, Gray LJ, Butchart N, Hardiman SC, Charlton-Perez AJ, Watson P (2012) The effect of climate change on the variability of the Northern Hemisphere stratospheric polar vortex. J Atmos Sci 69(8):2608–2618
O'Neill A, Pope VD (1988) Simulations of linear and nonlinear disturbances in the stratosphere. Q J R Meteorol Soc 114(482):1063–1110
O'Neill A, Pope VD (1993) The coupling between radiation and dynamics in the stratosphere. Adv Space Res 13(1):351–358
O'Neill A, Youngblut C (1982) Stratospheric warmings diagnosed using the transformed Eulerian-mean equations and the effect of the mean state on wave propagation. J Atmos Sci 39:1370–1386
Osprey SM, Gray LJ, Hardiman SC, Butchart N, Hinton TJ (2013) Stratospheric variability in twentieth-century CMIP5 simulations of the Met Office climate model: high-top versus low-top. J Clim 26(5):1607–1625
Palmer TN (1981) Diagnostic study of a wavenumber-2 stratospheric sudden warming in a transformed Eulerian-mean formalism. J Atmos Sci 38(4):844–855
Palmer TN (1999) A nonlinear dynamical perspective on climate prediction. J Clim 12(2):575–591
Palmer TN, Weisheimer A (2011) Diagnosing the causes of bias in climate models—why is it so hard? Geophys Astrophys Fluid Dyn 105(2–3):351–365
Plumb RA (1982) Zonally symmetric Hough modes and meridional circulations in the middle atmosphere. J Atmos Sci 39:983–991
Plumb RA (2010) Planetary waves and the extratropical winter stratosphere. Stratos Dyn Transp Chem Geophys Monogr Ser 190:23–39
Ring MJ, Plumb RA (2007) Forced annular mode patterns in a simple atmospheric general circulation model. J Atmos Sci 64(10):3611–3626
Ring MJ, Plumb RA (2008) The response of a simplified GCM to axisymmetric forcings: applicability of the fluctuation-dissipation theorem. J Atmos Sci 65(12):3880–3898
Ruzmaikin A, Feynman J, Jiang X, Yung YL (2005) Extratropical signature of the quasi-biennial oscillation. J Geophys Res 110:D11111. doi:10.1029/2004JD005382
Sassi F, Kinnison D, Boville BA, Garcia RR, Roble R (2004) Effect of El Nino-Southern Oscillation on the dynamical, thermal, and chemical structure of the middle atmosphere. J Geophys Res 109(D17):D17108. doi:10.1029/2003JD004434
Scaife AA, James IN (2000) Response of the stratosphere to interannual variability of tropospheric planetary waves. Q J R Meteorol Soc 126(562):275–297
Scott RK, Liu YS (2014) On the formation and maintenance of the stratospheric surf zone as inferred from the zonally averaged potential vorticity distribution. Q J R Meteorol Soc. doi:10.1002/qj.2377
Scott RK, Polvani LM (2006) Internal variability of the winter stratosphere. Part I: time-independent forcing. J Atmos Sci 63(11):2758–2776
Shepherd TG (2007) Transport in the middle atmosphere. J Meteor Soc Japan 85B:165–191
Shepherd TG, Shaw TA (2004) The angular momentum constraint on climate sensitivity and downward influence in the middle atmosphere. J Atmos Sci 61(23):2899–2908
Shepherd TG, Semeniuk K, Koshyk JN (1996) Sponge layer feedbacks in middle-atmosphere models. J Geophys Res 101(D18):23,447–23,464
Shine KP (1987) The middle atmosphere in the absence of dynamical heat fluxes. Q J R Meteorol Soc 113:603–633
Sigmond M, Scinocca JF (2010) The influence of the basic state on the Northern Hemisphere circulation response to climate change. J Clim 23(6):1434–1446
Sigmond M, Shepherd TG (2014) Compensation between resolved wave driving and parameterized orographic gravity wave driving of the Brewer–Dobson circulation and its response to climate change. J Clim 27:5601–5610. doi:10.1175/JCLI-D-13-00644.1
Simpson IR, Blackburn M, Haigh JD (2009) The role of eddies in driving the tropospheric response to stratospheric heating perturbations. J Atmos Sci 66(5):1347–1365
Son SW, Lee S (2006) Preferred modes of variability and their relationship with climate change. J Clim 19(10):2063–2075
Song Y, Robinson WA (2004) Dynamical mechanisms for stratospheric influences on the troposphere. J Atmos Sci 61(14):1711–1725
Sun L, Chen G, Lu J (2013) Sensitivities and mechanisms of the zonal mean atmospheric circulation response to tropical warming. J Atmos Sci 70(8):2487–2504
Thompson DWJ, Wallace JM (1998) The Arctic Oscillation signature in the wintertime geopotential height and temperature fields. Geophys Res Lett 25(9):1297–1300
Uppala SM, Kållberg PW, Simmons AJ, Andrae U, Bechtold VDC, Fiorino M, Gibson JK, Haseler J, Hernandez A, Kelly GA et al (2005) The ERA-40 re-analysis. Q J R Meteorol Soc 131(612):2961–3012
Watson PAG, Gray LJ (2014) How does the quasi-biennial oscillation affect the stratospheric polar vortex? J Atmos Sci 71(1):391–409
Wei K, Chen W, Huang R (2007) Association of tropical Pacific sea surface temperatures with the stratospheric Holton–Tan Oscillation in the Northern Hemisphere winter. Geophys Res Lett 34:L16814. doi:10.1029/2007GL030478
Wilks DS (2006) Statistical methods in the atmospheric sciences, 2nd edn. Academic Press, San Diego
Woollings T (2008) Vertical structure of anthropogenic zonal-mean atmospheric circulation change. Geophys Res Lett 35(19):L19702. doi:10.1029/2008GL034883
We would like to thank J. Anstey, D. Mitchell and S. Osprey for helpful discussions, and S. Osprey also for providing instruction on running the SMM. We would also like to thank N. Cohen and one anonymous reviewer for their comments. P. Watson was supported by a Natural Environment Research Council studentship.
Atmospheric, Oceanic and Planetary Physics, Clarendon Laboratory, University of Oxford, Parks Road, Oxford, OX1 3PU, UK
Peter A. G. Watson & Lesley J. Gray
National Centre for Atmospheric Science, Oxford, UK
Lesley J. Gray
Peter A. G. Watson
Correspondence to Peter A. G. Watson.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Watson, P.A.G., Gray, L.J. The stratospheric wintertime response to applied extratropical torques and its relationship with the annular mode. Clim Dyn 44, 2513–2537 (2015). https://doi.org/10.1007/s00382-014-2359-2
Issue Date: May 2015
Extratropics
Northern annular mode
Gravity wave drag
Experimental and numerical study on the screw connection strength of bamboo-oriented strand board compared with wood-oriented strand board
Kaiting Zhang, Fuli Wang, Runmin Xu, Xinhui Fan, Bin Yan, Chuangye Li, Shengquan Liu, Yong Guo & Yuxia Chen
The utilization of abundant bamboo resources can alleviate the wood shortage problem. Bamboo-oriented strand board (BOSB), which offers the highest bamboo utilization ratio and excellent mechanical properties, is considered a good engineering and furniture material. The strength of joints affects the safety of BOSB structures. This study aims to investigate the effect of screw spacing on the tensile and compressive stiffness and strength of corner joints made from BOSB, compared with wood-oriented strand board (WOSB), by an experimental method combined with the finite element method (FEM). The results showed that (1) the strength and stiffness of the corner joint were significantly affected by the screw spacing, which affected the compressive strength and stiffness of WOSB more significantly; (2) the bending moment and stiffness coefficient of the compressed BOSB joint decreased with increasing spacing, while those of the tensile joint first increased and then decreased, reaching maximum values at a spacing of 48 mm; (3) compared with the WOSB joints, the BOSB joints had higher strength and stiffness, and joint failure was due to yielding of the self-drilling screws, which was also verified by the numerical results; (4) the bending moment of the BOSB joints was about 2.5 times that of the WOSB joints, while the difference between the stiffness coefficients was small; (5) the elastic deformations obtained from the experimental tests and the FEM were similar. When the screw spacing was 48 mm, the von Mises stresses in the BOSB joint were smaller and the bending strength and stiffness larger, making 48 mm the most suitable screw spacing.
China is one of the countries with the richest bamboo resources and is known as the "Bamboo Kingdom" [1]. Its bamboo utilization ranks first in the world in terms of product varieties, scale, and output [1,2,3,4]. In recent years, the development of bamboo-oriented strand board (BOSB) made of bamboo shavings [5,6,7] has improved the utilization of bamboo and made it possible to use small-diameter and poor-quality bamboo [8]. Moreover, BOSB has more excellent mechanical properties [1] and better dimensional stability [9, 10] than wood-oriented strand board (WOSB), and is considered a good engineering and furniture material [11, 12]. The strength of furniture is not only affected by the mechanical properties of the materials, but also depends on the stiffness and stability of the joints [13]. The joints in furniture are the weakest elements in terms of their strength and stiffness [14]. Therefore, many studies focus on the bending moment capacity and stiffness of joints [15]. So far, the mechanical properties of wood composite joints, such as those of PB (particleboard), MDF (medium-density fiberboard), HDF (high-density fiberboard), and WOSB, as well as new adhesives and connectors, have been studied comprehensively. However, there has been little research on the strength of BOSB joints.
Because of its high hardness, BOSB is better suited to threaded connections, which provide greater mechanical strength than non-threaded connections [16,17,18]. Compared with two-in-one, three-in-one (with embedded nut), wood screw, and other threaded connectors, self-drilling screws have a narrower thread end and a smaller thread spacing, and their thread has a greater effect on the shear and extrusion deformation of bamboo fiber, so their connection strength is greater than that of other threaded connectors [19]. Self-drilling screws are widely used in the design of contemporary furniture, especially wooden and non-disassemblable furniture [20]. Screws can be used as an auxiliary for joining connectors and materials, and can also be used directly for fixing corner joints, such as the joints between back plates and side plates, between feet and boxes, and between laminate plates and side plates. In 2018, Guo et al. found that the screw withdrawal resistance of BOSB is much higher than that of conventional particleboard in all directions [21]. The durability of BOSB fixed with self-drilling screws is greater than that of WOSB and glued laminated bamboo [17]. However, the performance of BOSB corner joints fixed by screws has not been studied.
Moreover, the screw spacing affects the strength of joints in furniture [22,23,24,25]. Improper screw installation spacing affects both the efficiency and cost of production and the stability and safety of furniture. However, there is no research on the influence of installation spacing on the connection strength of BOSB. A review of the literature shows that the strength of most furniture corner joints is obtained by experimental methods [13, 26], which are destructive and non-repeatable, and which yield only the failure strength and failure pattern of the joints. Therefore, knowledge of the stress distribution in joints is limited, which is not conducive to the optimization of furniture structures [27]. With the development of finite element software, finite element analysis has gradually been applied to the structural design of furniture [2, 28, 29]. In the process of furniture design, the finite element method (FEM) can be used as a fast and effective simulation method: mechanical analysis of complex materials and models under different loads can be performed through simulations [2], and the simulations are repeatable.
Therefore, experimental tests and FEM analysis are used to investigate the effect of screw spacing on the corner joint strength and stiffness of BOSB and WOSB, in order to optimize the screw connection. The specific objectives of this study are to (1) investigate the effect of screw spacing on the strength and stiffness of BOSB and WOSB joints by experimental and FEM methods; (2) verify the accuracy of the FEM against the experimental results; and (3) determine the optimal self-drilling screw installation spacing for BOSB and WOSB joints.
Properties of selected materials
The samples were BOSB and WOSB, which were considered to be isotropic parallel to their wide surfaces. The density and moisture content were measured according to ASTM D4442-92 [30] and ASTM D2395-93 [31], respectively. The Young's modulus and tensile strength (σT) were measured by uniaxial tension tests [32,33,34]. Poisson's ratio was tested with the strain gauge method according to ASTM D3039 [35]. The conventional yield point R0.2 was determined using the uniaxial tensile test method. Based on the stress–strain diagram, the yield strength R0.2 (MPa) (Fig. 1) represents the stress value that produces 0.2% residual deformation [29]. Average values were obtained from repeated tests on three square plate specimens.
Diagram used to determine conventional proportionality limit and yield point
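The offset construction in Fig. 1 is easy to automate. The sketch below is illustrative only: it assumes a monotonically loaded stress–strain record whose portion below 0.1% strain is linear, and the fitting threshold and interpolation scheme are our choices, not necessarily those used in the original analysis.

```python
import numpy as np

def offset_yield(strain, stress, offset=0.002):
    """0.2%-offset yield strength R_0.2 from a stress-strain curve.
    Assumes the curve starts above, and eventually crosses, the
    offset line stress = E * (strain - offset)."""
    elastic = strain < 0.001                      # assumed linear range
    E = np.polyfit(strain[elastic], stress[elastic], 1)[0]
    diff = stress - E * (strain - offset)
    i = np.argmax(diff < 0)                       # first crossing index
    t = diff[i - 1] / (diff[i - 1] - diff[i])     # linear interpolation
    return stress[i - 1] + t * (stress[i] - stress[i - 1])
```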
The self-drilling screws are stainless steel, with a nominal diameter (D) of 3.97 mm, inner rod diameter (d) of 2.76 mm, length (L) of 39.73 mm, thread length (L/Thread) of 31.5 mm, and screw pitch (P) of 1.40 mm (shown in Fig. 2); their yield strength and modulus of elasticity were measured according to standard LY/T 3219-2020 [36].
Dimensions of self-drilling screw
Preparation of joints
The self-drilling screws were bought from a local commercial supplier (Hefei, China). The dimensions of the BOSB pieces were 150 × 100 × 15 mm and 135 × 100 × 15 mm, and those of the WOSB pieces were 150 × 100 × 18 mm and 132 × 100 × 18 mm. The screw spacing (S) was 16, 32, 48, 64, or 80 mm, with the screws symmetrically distributed and fixed along the centerline of the plate thickness (shown in Fig. 3). The self-drilling screws were installed with a 3.4 mm guide hole, penetrating one plate and then inserting into the other. Ten replicates of each joint were prepared, giving 100 specimens in total.
Illustration of screw installation
Joint mechanical properties were determined in compression and tension tests (Fig. 4). Tests were performed using a mechanical testing machine (model: WDW-100E, Jinan Chenda Testing Machine Manufacture Co., Ltd., Jinan, China) at a crosshead speed of 10 mm/min. The experimental tests directly provided the dependence between the force P and the displacement DP. The strength of the joint was calculated as:
Method of joint analysis: a compression, b tension
$$M_{\mathrm{C}}=P_{\max}a',$$
$$M_{\mathrm{T}}=0.5P_{\max}e',$$
where MC was the bending moment resistance of the joint under compression loading (N m), MT was the bending moment resistance of the joint under tension loading (N m), Pmax was the maximum load in each test sample (N), and a' and e' were the moment arms in compression and tension, respectively (m).
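For readers reproducing the calculation, the two formulas translate directly into code. The following is a minimal Python sketch; the helper names are ours, not the authors':

```python
def moment_compression(P_max, a_prime):
    """M_C (N*m): ultimate load P_max (N) times the compression moment arm a' (m)."""
    return P_max * a_prime

def moment_tension(P_max, e_prime):
    """M_T (N*m): half the ultimate load P_max (N) times the tension moment arm e' (m)."""
    return 0.5 * P_max * e_prime
```

For instance, with the reported ultimate tension force of 1460 N for the 16 mm BOSB joint, a moment arm e' of roughly 0.096 m would reproduce the reported MT ≈ 70 N m.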
For the selected load diagrams, the stiffness coefficient of the joint, K (N m/rad), was expressed as the quotient of the bending moment for joint posts or rails, 0.4MC or 0.4MT, and the rotation angle ∆φ between the joint arms. This angle was determined based on the changes in joint geometry [15, 29, 34]. The stiffness coefficient K for joints subjected to compression (Fig. 4a), within the range of linear elasticity of the joint, was described by the equations [15]:
$$K_{\mathrm{C}}=\frac{0.4M_{\mathrm{C}}}{\Delta \varphi },$$
$$\Delta \varphi =\frac{\pi }{90}\left({\varphi }_{1}-{\varphi }_{2}\right),$$
$$a'=\frac{\sqrt{2}}{2}a-a'',$$
$$a''=\sqrt{b^{2}-c^{2}},$$
$${\varphi }_{1}=\arctan\left(\frac{\frac{\sqrt{2}}{2}a}{a'}\right),$$
$${\varphi }_{2}=\arcsin\left(\frac{\frac{\sqrt{2}}{2}a-DP_{0.4P_{\max}}}{\sqrt{c^{2}+\left(a-b\right)^{2}}}\right),$$
$$DP_{0.4P_{\max}}=0.4\times DP_{\max}.$$
The stiffness of joints subjected to tension (Fig. 4b), within the range of linear elasticity, was calculated from the following equations:
$$K_{\mathrm{T}}=\frac{0.4M_{\mathrm{T}}}{\Delta \varphi },$$
$$e'=\frac{\sqrt{2}}{2}\left(a-b\right),$$
$$a''=\sqrt{b^{2}-c^{2}},$$
$$0.5{\varphi }_{1}=\arctan\left(\frac{e'}{f}\right),$$
$$0.5{\varphi }_{2}=\arctan\left(\frac{e''}{f-DP_{0.4P_{\max}}}\right),$$
$$f=e+\frac{\sqrt{2}}{2}b,$$
$$e''=\sqrt{e'^{2}+f^{2}-\left(f-DP_{0.4P_{\max}}\right)^{2}}.$$
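For concreteness, a minimal sketch of the compression-side calculation is given below. The geometry symbols a, b, c follow Fig. 4a (their numerical values are not restated here), and we assume, per the π/90 factor above, that φ1 and φ2 enter the Δφ formula in degrees:

```python
import math

def stiffness_compression(P_max, DP_max, a, b, c):
    """Stiffness coefficient K_C (N*m/rad) of a compression-loaded corner joint,
    following the equations above. a, b, c are the joint dimensions of Fig. 4a (m),
    P_max is the ultimate load (N), DP_max the displacement at P_max (m)."""
    a_pp = math.sqrt(b ** 2 - c ** 2)                 # a''
    a_p = math.sqrt(2) / 2 * a - a_pp                 # a' (moment arm)
    dp_04 = 0.4 * DP_max                              # displacement at 0.4 * P_max
    phi1 = math.degrees(math.atan((math.sqrt(2) / 2 * a) / a_p))
    phi2 = math.degrees(math.asin(
        (math.sqrt(2) / 2 * a - dp_04) / math.sqrt(c ** 2 + (a - b) ** 2)))
    d_phi = math.pi / 90 * (phi1 - phi2)              # rotation angle (rad)
    return 0.4 * (P_max * a_p) / d_phi                # K_C = 0.4 * M_C / d_phi
```

The tension-side coefficient K_T follows the same pattern, using e', e'', f, and the two half-angle relations above.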
The numerical model of joints
Modeling and numerical simulations were performed using the Siemens NX 12.0 program and ANSYS Workbench 17.0 software, respectively. The geometry, loading, and boundary conditions of the model were based on Fig. 3. The thread features of the screw were ignored in the numerical simulation. Bonded interaction was applied between the hole in the board and the screw. The contact property between boards was specified with a friction coefficient of 0.1. The element sizes of the board and screw were approximately 3 mm and 5 mm, respectively, while for the contact regions the element size was approximately 3 mm to make the model more accurate. In general, the 10-node modified quadratic tetrahedron element C3D10M was used (about 48,240 elements and 74,640 nodes per model). In addition, geometric nonlinearity was considered to represent the large deformation of the structure.
Results and analysis
Properties of materials
As seen in Table 1, the mean density of BOSB was 806.61 kg/m3, about 1.4 times that of WOSB. The Young's modulus and tensile strength of BOSB were both about three times those of WOSB. The data in Table 1 were used for the finite element analysis.
Table 1 Physical and mechanical properties of board and screw
Strength of joints
It could be observed in Fig. 5a, b that the curves for the BOSB joints were smooth with no rapid changes after the maximum forces, whereas the curves of the WOSB joints dropped rapidly after reaching the maximum force. This means that the BOSB joints fixed by self-drilling screws had good durability, and the strength of the joints did not decrease rapidly after reaching the ultimate load, which could ensure the safety of the joint. Numerical calculations for the examined joints are also presented in Fig. 5, and the shape of the loading curves obtained from the FEM was similar to the test curves.
The displacement–force curves of joints: a, b tension and compression test curves of BOSB joints; c, d tension and compression curves of WOSB joints
As seen from Fig. 5, the displacement–force curves could be approximately divided into two stages. In the first stage, the correlation between force and displacement was almost linear, consistent with Hooke's law [25]. The angle of a curve with the horizontal axis represented the stiffness of the joint [33]. In the case of compression, the angle of the BOSB joint with screw spacing of 48 mm was the largest, which represented the maximum strength and stiffness of this joint. The angle was likewise the largest for the WOSB joint with screw spacing of 48 mm. Moreover, the angles of the BOSB joints with spacings of 32, 64, and 80 mm were almost the same, but their maximum forces differed obviously, indicating that the three joints had similar stiffness but different strength. Among the BOSB joints, the compression ultimate force was largest at a screw spacing of 64 mm (505 N) and smallest at 80 mm (456 N). In the tensile tests, the BOSB joint with screw spacing of 16 mm had the maximum ultimate force (1460 N), which was 36% higher than that of the BOSB joint with spacing of 80 mm (1071 N). The WOSB joints with screw spacings of 48, 64, and 80 mm had the same angle between the curve and the horizontal axis, indicating similar stiffness, while the WOSB joint with screw spacing of 48 mm had the highest ultimate force (501 N), followed by that of 64 mm (477 N).
Overall, the strength of joints subjected to tension was almost twofold greater than that of compressed joints, and the deflection of joints in the tension test was approximately twofold smaller than that of compressed samples. At the same time, the maximum force of BOSB was about three times that of WOSB.
Typical damage to the arms caused by joint compression or tension is illustrated in Fig. 6. It should be noted that in the case of the BOSB joints, the self-drilling screws were bent (due to yielding of the screws) and the shavings near them were pulled out, resulting in the failure of the joint. In the case of the WOSB joints, however, the board cracked and large pieces were pulled out (the board was damaged). This was also the reason that obvious peaks appeared in the displacement–force curves of Fig. 5b, c.
Typical damage of joints: a BOSB, b WOSB
An important and reliable indicator of joint strength was provided by the maximum bending moment. Figure 7 shows the effect of screw spacing on the bending moment. It could be seen from Fig. 7a that in the tensile test, the bending moment of the BOSB joint with screw spacing of 16 mm (MT = 69.97 N m) was 38% higher than that of the BOSB joint with screw spacing of 80 mm (MT = 50.80 N m). This indicated that screw spacing significantly affected the strength of the BOSB joint [28]. In contrast, screw spacing had little effect on the WOSB joint in tension.
Bending moment of joints at: a tension, b compression
The bending moment of joints subjected to tension was greater than that of joints in compression. For the BOSB joint, MT was about 1.5 times MC, while for the WOSB joint MT was about 2 times MC. The MC of the BOSB joint was almost 3.5- to 4-fold higher than that of the WOSB joint. It can be seen from Fig. 7b that the bending moment of the BOSB joint in the compression test first increased and then decreased with increasing screw spacing, and the maximum bending moment, at a screw spacing of 48 mm (MC = 47.40 N m), was 23% higher than the smallest bending moment, at a screw spacing of 80 mm (MC = 38.39 N m). For the WOSB joints, the highest bending moment, at a screw spacing of 48 mm (MC = 13.65 N m), was 40% higher than the smallest, at a screw spacing of 80 mm (MC = 9.78 N m), which indicated that screw spacing had a much more obvious effect on the bending moment of WOSB joints in the compression test.
Analysis of variance (Tables 2 and 3) revealed a significant difference between the bending moments of joints connected with different screw spacings. For the tension test, F (BOSB) = 7.652 > F (WOSB) = 4.785; thus screw spacing affected the bending moment of the BOSB joint more than that of the WOSB joint. For the compression test, however, F (BOSB) = 4.573 < F (WOSB) = 15.780, which indicated that screw spacing affected the bending moment of the WOSB joint more than that of the BOSB joint.
Table 2 ANOVA results of tension moment of joints
Table 3 ANOVA results of compression moment of joints
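To make the ANOVA step concrete, the sketch below runs a one-way ANOVA over bending-moment replicates grouped by screw spacing using SciPy. The numbers are purely illustrative placeholders, not the measured data:

```python
from scipy import stats

# Illustrative placeholder replicates of tension bending moment (N*m) per spacing;
# the real analysis uses the ten measured replicates for each joint.
moments_by_spacing = {
    16: [69.1, 71.2, 68.4, 70.9, 70.3],
    32: [63.5, 61.8, 64.0, 62.2, 63.1],
    48: [60.2, 58.9, 61.5, 59.7, 60.8],
    64: [55.4, 57.0, 54.8, 56.1, 55.9],
    80: [50.1, 52.3, 49.8, 51.0, 50.6],
}
F, p = stats.f_oneway(*moments_by_spacing.values())
print(f"F = {F:.3f}, p = {p:.4f}")  # p < 0.05 -> spacing effect is significant
```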
Summing up, when the screw spacing was 48 mm, the BOSB joints had the largest compressive bending moment and the smallest difference from the tensile bending moment, so this screw spacing was beneficial to the safety of the furniture structure [37]. In this case, the WOSB joint also had better strength. Moreover, the strength of screw-fixed joints was greater than that of joints with two-in-one and three-in-one connectors [19].
Stiffness of joints was evaluated based on the change of the stiffness coefficient K (N m/rad) as a function of the rotation angle Δφ (rad). Analysis of the results given in Fig. 8 indicated that the Δφ–K curves were smooth, and the stiffness coefficient K increased nonlinearly with increasing rotation angle Δφ before reaching its maximum value. In the tension test curves, the peak width of the BOSB joints was greater than that of the WOSB joints, which indicated that the BOSB joints had better durability. The curves also showed that the stiffness coefficient of the BOSB joints was greater than that of the WOSB joints. Additionally, the stiffness coefficient curves showed that, in the compression test, joints reached their maximum stiffness coefficient at a rotation angle almost twofold greater than that of joints subjected to tension. Besides, when the stiffness coefficient reached its maximum, the displacement DP of the BOSB joint in the compression and tensile tests was around 7.7 mm and 5.5 mm, respectively, both greater than those of WOSB. This confirms that the BOSB joint has better durability.
Variation of stiffness coefficient K (N·m/rad) in the function of the rotation angle Δφ (rad) of joint: a compression, b tension
Figure 9 shows the effect of the screw spacing on the stiffness coefficient. It could be seen from Fig. 9a that in the tension test the stiffness coefficient of the BOSB joint decreased with increasing screw spacing, while that of the WOSB joint increased. In the compression test, the stiffness coefficients of the BOSB and WOSB joints tended to increase and then decrease with increasing spacing, reaching their maxima at a screw spacing of 48 mm. Compared with the difference in bending moment, the difference in stiffness coefficient between the WOSB and BOSB joints was slightly smaller. For BOSB joints with screw spacing of 48 mm, the stiffness coefficient was KT = 324.54 N m/rad in tension and KC = 357.06 N m/rad in compression, a difference of as little as 9%. This showed that the loading mode had little influence on the stiffness coefficient of BOSB joints with screw spacing of 48 mm. For BOSB joints with screw spacing of 32 mm, the stiffness coefficient in tension, KT = 349.68 N m/rad, was much greater than the minimum in compression, KC = 233.78 N m/rad; this difference of 33% was significant. The difference in stiffness coefficient of the BOSB joint with screw spacing of 16 mm between tension and compression was 18%. For the structural design of frame furniture, such a trend is highly disadvantageous. This is connected with a commonly observed type of deformation, e.g., in the side plating of bedsteads, where joints are subjected alternately to tension and compression [15]. Thus, the BOSB joint with screw spacing of 48 mm, having comparable stiffness in tension and compression, ensures high structural reliability. The ANOVA results show that screw spacing had a significant influence on the stiffness coefficient of both BOSB and WOSB joints, and that it affected the tensile stiffness more than the compression stiffness.
Stiffness coefficient for joints subjected to: a tension, b compression
Results of numerical calculations for modeled joints
Figure 10 presents the von Mises stresses of joints subjected to compression. From Fig. 10, it can be observed that the highest stresses were concentrated at the bottom of the holes, meaning that as the load increased, the first damage occurred in this part of the plate [29]. The developing stresses were caused by the pressure of the self-drilling screws on the side surface of the hole. The boards on the inside of the joint also carried large stresses due to compression. The results showed that the places with higher stress were the most vulnerable to failure, which was confirmed by the failure modes in the experimental tests and also demonstrated the validity of the numerical analysis. Moreover, Fig. 10 shows that under the same force, the stresses in the BOSB joints with screw spacings of 48 and 64 mm were significantly smaller than those with screw spacings of 16, 32, and 80 mm. This indicated that the safety of the BOSB joints with screw spacings of 48 and 64 mm was higher than that of the BOSB joints with other screw spacings, which was also verified by the experimental data.
Von Mises stresses of joints subjected to compression
Figure 11 presents the von Mises stresses of joints subjected to tension. The maximum stress was again concentrated near the hole, on the inside of the screw bend. For boards parallel to the screw installation direction, the leverage on the screw caused the boards to delaminate and crack. The typical damage of the arms illustrated in Fig. 6, caused by joint compression or tension, corresponded with the results of the numerical calculations, indicating the same places of failure. Comparing the von Mises stresses of the different joints in Fig. 11, it was found that the maximum stress of the joints with screw spacings of 48 and 64 mm was smaller, which indicates that these joints were safer. This result corresponds to the experimental results.
Von Mises stresses of joints subjected to tension
In this study, the effect of screw spacing on the corner joint stiffness and strength of BOSB was studied using FEM and experimental tests, in comparison with WOSB, to optimize the self-drilling screw spacing. The following conclusions were drawn:
The bending moment and stiffness coefficient were significantly affected by screw spacing. Screw spacing affected the mechanical properties of BOSB joints more significantly than those of WOSB joints in the tension test, while in the compression test it affected the bending moment and stiffness coefficient of WOSB joints more significantly than those of BOSB joints. The bending moments of BOSB joints subjected to compression and tension were almost 3.5- to 4-fold greater than those of WOSB joints, while the difference in stiffness coefficient was small. Moreover, the bending moment of BOSB joints subjected to tension was almost twofold greater than that of compressed samples.
For BOSB, the screw spacing of 48 mm could ensure the maximum strength and stiffness of corner joints and improve the safety of furniture.
The failure of BOSB joints was caused by the screws yielding and pulling out from the board (without cracking). Due to the low mechanical strength of WOSB, the boards cracked during joint bending, resulting in the failure of the corner joints.
A comparison of the displacement and failure modes of joints between the experimental tests and the FEM showed that the results agreed, confirming that the finite element analysis was effective.
BOSB: Bamboo-oriented strand board
FEM: Finite element method
WOSB: Wood-oriented strand board
PB: Particleboard
MDF: Medium-density fiberboard
HDF: High-density fiberboard
Biswas D, Bose SK, Hossain MM (2011) Physical and mechanical properties of urea formaldehyde-bonded particleboard made from bamboo waste. Int J Adhes Adhes 31(2):84–87
Fu Y, Fang H, Dai F (2017) Study on the properties of the recombinant bamboo by finite element method. Compos Part B-Eng 115:151–159. https://doi.org/10.1016/j.compositesb.2016.10.022
Bahari SA, Grigsby WJ, Krause A (2017) Thermal stability of processed PVC/bamboo blends: effect of compounding procedures. Eur J Wood Wood Prod 75(2):147–159. https://doi.org/10.1007/s00107-016-1148-5
Akinlabi ET, Anane-Fenin K, Akwada DR (2017) Bamboo Taxonomy and Distribution Across the Globe. Bamboo. https://doi.org/10.1007/978-3-319-56808-9_1
Febrianto F, Jang JH, Lee SH, Santosa IA, Kim NH (2015) Effect of Bamboo Species and Resin Content on Properties of Oriented Strand Board Prepared from Steam-treated Bamboo Strands. BioResources 10(2):2642–2655. https://doi.org/10.15376/biores.10.2.2642-2655
Febrianto F, Sahroni HW, Bakar ES, Kwon GJ, Kwon JH, Kwon J (2012) Properties of oriented strand board made from Betung bamboo (Dendrocalamus asper (Schultesf) Backer ex Heyne). Wood Sci Technol 46(1–3):53–62. https://doi.org/10.1007/s00226-010-0385-8
Apriani MT, Febrianto F, Karlinasari L (2012) Physical and Mechanical Properties of Bamboo Oriented Strand Board Made from Steamed Pretreated Bamboo Strands under Various Bamboo Species and Resin Content. Bogor, Institute Pertanian Bogor
Chaowana P (2013) Bamboo: an alternative raw material for wood and wood-based composites. J Mater Sci Res 2(2):90–102. https://doi.org/10.5539/jmsr.v2n2p90
Wan-Si FU, Huang J (2007) A Study on Manufacturing Technology for Bamboo OSB with PF Resin. China Wood Indus 21(2):7–9
Zhang H, Du (2007) Research and development of production technology of bamboo waferboard and oriented strand board based on biological characteristics and timber adaptability. J Bamboo Res 26(2):43–48
Sumardi I, Suzuki S (2013) Parameters of Strand Alignment Distribution Analysis and Bamboo Strandboard Properties. BioResources 8(3):4459–4467. https://doi.org/10.15376/biores.8.3.4459-4467
Bakar ES, Nazip M, Anokye R, Hua LS (2019) Comparison of three processing methods for laminated bamboo timber production. J Forestry Res 30(2):363–369. https://doi.org/10.1007/s11676-018-0629-2
Kasal A, Yuksel M, Fathollahzadeh A, Ziya Y, Yildirim EN (2011) Ultimate failure load and stiffness of screw jointed furniture cabinets constructed of particleboard and medium-density fiberboard. Forest Prod J 61(2):155–160. https://doi.org/10.13073/0015-7473-61.2.155
Ratnasingam J, Ioras F (2013) Effect of adhesive type and glue-line thickness on the fatigue strength of mortise and tenon furniture joints. Eur J Wood Wood Prod 71(6):819–821. https://doi.org/10.1007/s00107-013-0724-1
Smardzewski J, Imirzi HO, Lange J, Podskarbi M (2015) Assessment method of bench joints made of wood-based composites. Compos Struct 123(5):123–131. https://doi.org/10.1016/j.compstruct.2014.12.039
Feng W (2009) The Application of FEM in Furniture Construction Design. China Forest Prod Ind 36(4):41–43
Sun Y, Jiang Z, Zhang X, Sun Z, Liu H (2019) Behavior of glued laminated bamboo and bamboo-oriented strand board sheathing-to-framing connections. Eur J Wood Wood Prod 77(6):1189–1199. https://doi.org/10.1007/s00107-019-01454-3
Liu XS, Liu XJ, Ji-Qing LI (2017) The test and analysis of nail holding power about recombinant bamboo used in furniture. Wood Process Mach 28(5):10–12
Xu R, Zhang K, Ren L, Wang F, Chen Y (2021) Connection Performance Examination of a New Bamboo-Oriented Strand Board Connector. BioResources 16(2):2906–2920. https://doi.org/10.15376/biores.16.2.2906-2920
Kukun T, Smardzewski J, Kasal A (2020) Experimental and numerical analysis of mounting force of auxetic dowels for furniture joints. Eng Struct 226(1):111351. https://doi.org/10.1016/j.engstruct.2020.111351
Guo Y, Zhu S, Chen Y (2018) Contrastive analysis of screw withdrawal resistance between bamboo oriented strand board and conventional particleboard. Wood Res 63(6):1071–1080
Sapiee SF, Lau HH (2013) Influence of screw spacing on the strength of self-drilling screw connection for the high strength cold-formed steel. Adv Mat Res 712–715(1):1054–1057. https://doi.org/10.4028/www.scientific.net/AMR.712-715.1054
Roy K, Lau HH, Ting TCH, Masood R, Lim JBP (2019) Experiments and finite element modelling of screw pattern of self-drilling screw connections for high strength cold-formed steel. Thin Wall Struct 145:106393. https://doi.org/10.1016/j.tws.2019.106393
Zhang J, Eckelman CA (1993) Rational design of multi-dowel corner joints in case construction. Forest Prod J 43(11):52–58. https://doi.org/10.1080/02773819408003114
Hao J, Xu L, Wu X, Li X (2020) Analysis and modeling of the dowel connection in wood T type joint for optimal performance. Compos Struct 253:112754. https://doi.org/10.1016/j.compstruct.2020.112754
Kasal A, Erdil YZ, Zhang J, Efe H, Avci E (2008) Estimation equations for moment resistances of L-type screw corner joints in case goods furniture. Forest Prod J 58(9):21–27. https://doi.org/10.1007/s10342-008-0227-5
Hu W, Liu N (2020) Numerical and optimal study on bending moment capacity and stiffness of mortise-and-tenon joint for wood products. Forests 11(5):501. https://doi.org/10.3390/f11050501
Smardzewski J, Lewandowski W, Imirzi HÖ (2014) Elasticity modulus of cabinet furniture joints. Mater Design 60:260–266. https://doi.org/10.1016/j.matdes.2014.03.066
Smardzewski J, Slonina M, Maslej M (2017) Stiffness and failure behaviour of wood based honeycomb sandwich corner joints in different climates. Compos Struct 168(5):153–163. https://doi.org/10.1016/j.compstruct.2017.02.047
ASTM D2395-14 (2011) Standard Test Methods for Density and Specific Gravity (Relative Density) of Wood and Wood-Based Materials. ASTM International: West Conshohocken.
ASTM D4442-07 (2012) Standard test methods for direct moisture content measurement of wood and wood-base materials. ASTM International: West Conshohocken
Koc KH, Kizilkaya EES, Korkut DS (2011) The use of finite element method in the furniture industry. Afr J Bus Manage 5(3):855–865. https://doi.org/10.1626/pps.9.83
Krzyaniak U, Smardzewski J (2019) Strength and stiffness of new designed externally invisible and demountable joints for furniture cases. Eng Struct 199:109674. https://doi.org/10.1016/j.engstruct.2019.109674
Podskarbi M, Smardzewski J (2019) Numerical modelling of new demountable fasteners for frame furniture. Eng Struct 185(15):221–229. https://doi.org/10.1016/j.engstruct.2019.01.135
ASTM D3039/D3039M-17 (2017) Standard Test Method for Tensile Properties of Polymer Matrix Composite Materials. ASTM International: West Conshohocken
LY/T 3219 (2020) Self-tapping screws for timber structures. State Forestry Administration of the People's Republic of China: Standards Press of China
Smardzewski J (2009) The reliability of joints and cabinet furniture. Wood Res 54(1):67–76. https://doi.org/10.1145/1553374.1553534
The authors would like to thank the anonymous reviewers and editor for their valuable comments and suggestions for improving the quality of this paper.
The authors would like to express their heartfelt gratitude to the National Science and Technology Major Project of China (2016 YFD 0600905), China Natural Science Foundation (3180047) and the National Key Research and Development Program (2017YFD0600201).
Kaiting Zhang, Fuli Wang, Runmin Xu and Xinhui Fan contributed equally to this work
College of Forest and Garden, Anhui Agricultural University, Hefei, 230036, China
Kaiting Zhang, Fuli Wang, Runmin Xu, Xinhui Fan, Bin Yan, Chuangye Li, Shengquan Liu, Yong Guo & Yuxia Chen
Conceptualization, SL, YC, and YG; data curation, KZ, XF, CL, and YB; formal analysis, KZ and RX; funding acquisition, YC, SL, and YG; investigation, KZ and XF; methodology, KZ and FW; resources, YC and YG; supervision, YC and YG; validation, YC, SL, and YG; writing—original draft preparation, KZ; writing—review and editing, KZ and FW. All authors read and approved the final manuscript.
Correspondence to Shengquan Liu, Yong Guo or Yuxia Chen.
Zhang, K., Wang, F., Xu, R. et al. Experimental and numerical study on the screw connection strength of bamboo-oriented strand board compared with wood-oriented strand board. J Wood Sci 67, 69 (2021). https://doi.org/10.1186/s10086-021-01999-z
Keywords: Screw spacing; Corner joint; Strength and stiffness
Development of a composite catalyst from anthill and eggshell: an optimization study on biodiesel production from virgin and waste vegetable oils
Adeyinka Sikiru Yusuff ORCID: orcid.org/0000-0002-6630-6411
Waste Disposal & Sustainable Energy volume 1, pages 279–288 (2019)
The primary goal of this study is to develop a composite material from the anthill and chicken eggshell and to use it as a catalyst for the synthesis of biodiesel from virgin and waste vegetable oils. The anthill–eggshell composite (AEC) catalyst was prepared using an incipient wetness impregnation method. Central composite design (CCD) was applied to investigate the effects of catalyst preparation parameters (calcination temperature, calcination time, and anthill proportion in the AEC) on the yields of biodiesel from the two oils. Based on the CCD, two quadratic models were developed to correlate the AEC preparation parameters to the two responses. Analysis of variance (ANOVA) was performed to verify the reliability of the models and also, identify the factor that mostly affects the experimental design responses. Optimization results showed that the predicted values of biodiesel yield from the models for the two oils agreed reasonably well with the experimental values. The optimum conditions for the preparation of AEC catalyst for the transesterification process were calcination temperature of 1000 °C, calcination time of 4 h, and anthill proportion of 20% to achieve 97.13% yield of biodiesel from virgin vegetable oil. At the same optimum parameters, the yield of biodiesel from waste vegetable oil was found to be 70.92%.
Fossil fuel depletion and environmental degradation have been serious concerns in the last decade. To date, about one-fourth of total pollutant emissions result from power generation using fossil fuels [1]. Fossil fuel is a non-renewable fuel whose use is associated with depletion and with the emission of large volumes of greenhouse gases, which are regarded as the main contributor to global warming. Thus, shifting from fossil fuels to renewable fuels is an antidote to these menaces. In this view, triglycerides from plant oil or animal fat are reacted with a primary alcohol (methanol/ethanol), in the presence of a proper catalyst, to obtain biodiesel in the form of (m)ethyl esters. Many researchers have produced biodiesel from plant oils or animal fats by transesterification using conventional homogeneous catalysts such as potassium hydroxide, sodium hydroxide, hydrochloric acid, or tetraoxosulphate (VI) (sulfuric) acid, but the aftermath of homogeneous processes is not usually desirable, as many involve difficulties in removing the catalyst from the reaction products, subsequently resulting in excess wastewater generation and high cost. As a pivotal approach to overcoming such worries, heterogeneous catalysts are used as a low-cost replacement [2], and several researchers have shown that the use of suitable solid catalysts can provide solutions to the aforementioned problems [3].
Several solid catalysts, such as pure metal oxides [4], mixed metal oxides [5], and sulphated metal oxides [6], have been employed as heterogeneous catalysts for biodiesel production. The application of biomass-derived solid catalysts for biodiesel production has also been reported, including birds' eggshell [7,8,9], fishbone [10], animal bone [11], solid waste coral fragment [12], alum [13], montmorillonite clay [3], modified peanut husk ash [14], and many more. Most of these materials are cheap sources of calcium oxide (CaO) and other alkaline earth metal oxides and reduce the biodiesel production cost [15]. Eggshell is one of the most abundant agricultural wastes and can be converted into a solid base catalyst (CaO). Recent findings have shown that CaO is the most efficient and most easily synthesized catalyst for the transesterification of oil with an alcohol to produce biodiesel [16]. However, a literature survey reveals that CaO catalysts leach during the catalytic reaction. Therefore, the CaO derived from eggshell needs to be supported on a thermally stable material, which can prevent leaching of the active ingredient and provide more specific surface area and pores for the active species [17]. Various catalyst supports such as alumina (Al2O3), silica (SiO2), and zirconia (ZrO2) have been widely used in the transesterification process due to their thermal and mechanical stability, as well as their better textural properties [18]. However, these metal oxides in their pure form are expensive. As reported by Henne [19], anthill contains several metal oxides, including SiO2 and Al2O3, and is available in abundance. Thus, in this study, anthill was chosen as the catalyst support. An anthill is a composite of clay and other materials formed at the entrances of ant colonies [19]. It has been used in various applications, including ceramics, cement, bricks, and sand casting [20], and as an adsorbent for wastewater treatment [21].
In the present study, the aim was to prepare a composite catalyst from anthill and eggshell, optimize the preparation process condition and use it for transesterification of virgin and waste vegetable oils to produce biodiesel. To the best of the authors' knowledge, no study has been carried out on the optimization of process conditions for the preparation of composite catalyst for transesterification of vegetable oil to biodiesel using response surface methodology (RSM). RSM is one of the experimental design techniques that are commonly used for process analysis and modeling. In this technique, the main objective is to optimize the response surface that is influenced by various process parameters. RSM also estimates the interaction between controlled experimental variables and measured response [22, 23].
Response surface methodology is a combination of mathematical and statistical optimization tools useful for developing, improving, and optimizing processes [24, 25]. The application of statistical experimental design techniques in the process development of catalysts can result in reduced process variability and less resources requirement (time, raw materials, and experimental run) [26, 27]. In using RSM, three steps are necessary: the design of experiments, model equation development and analysis and optimization of parameters [28]. Generally, there are two important optimization methods under RSM, namely, central composite and Box–Behnken designs [28, 29]. However, both of these design methods are exclusive and provide a definite experimental design to address various approaches used in analyzing data [30]. Meanwhile, these two design methods are different in the number of experimental runs required and in the combination of the levels used in the experiments [31]. The central composite design is commonly used to design the experimental procedures to acquire necessary information for examining the lack of fit without necessarily considering numerous design points [28]. Besides, it is a powerful tool for evaluating the output parameter(s) of most of the steady-state processes [32]. Therefore, in this work, the central composite design (CCD) has been applied to the optimization of process conditions for the preparation of the AEC catalyst. The variables considered were the calcination temperature, calcination time, and anthill proportion in AEC. Also, the activity of the prepared catalysts was tested for the transesterification of vegetable oil.
Waste vegetable oil (WVO) and chicken eggshells used for this study were collected from students' cafeteria 1, Afe Babalola University (ABUAD), Ado-Ekiti, Nigeria. The density, acid value, and saponification value of the WVO are 0.9147 g/cm3, 3.945 mgKOH/g, and 183.1 mgKOH/g, respectively. However, the free fatty acid (FFA) content of the oil was equivalent to 1.973 wt%, and since it is less than 2 wt%, it implies that the WVO could be directly converted to biodiesel via a one-step transesterification process [7, 33]. The type II anthill, used in this study, is situated behind Fidelity Bank, ABUAD, Ado-Ekiti, Nigeria. Synthesis-grade methanol, heptane (solvent), and propylene acetate (internal standard) were all procured from Nizo Chemical Enterprise, Akure, Nigeria. Virgin vegetable oil was purchased from the King's market, Ado-Ekiti, Nigeria.
AEC catalyst preparation
The harvested anthill was first crushed into powder and sieved through a 0.3 mm mesh to obtain particles smaller than 0.3 mm. The sieved anthill powder was kept in a covered plastic container. The eggshells were first soaked for a day and washed thoroughly with tap water to remove all impurities and the inner white membrane, followed by another washing with distilled water. The cleaned eggshells were then dried in an oven at 110 °C for 24 h. The dried eggshells were ground by a mechanical grinder to obtain a fine powder, which was sieved through a 0.3 mm aperture mesh to obtain the finest eggshell powder. The obtained eggshell powder was kept in a soft polythene bag placed in a sealed plastic container.
The procedure employed to synthesize the AEC catalyst referred to our previous work [34]. The AEC catalyst was prepared, by mixing different proportions of the anthill, and eggshell powders, as suggested by central composite design (Table 1). Typically, 20 g of the mixture of anthill and eggshell powders was formulated by varying their mixing proportions. The mixture of anthill and eggshell powders was poured into a beaker while an adequate amount of distilled water was added, and stirred for 2 h on a hot plate to homogenize the mixtures. The obtained slurry was heated up at 125 °C in an oven overnight to remove excess water. Finally, raw AEC samples at various mixing ratios of anthill to eggshell were calcined at various temperatures (600–1000 °C) and corresponding time in the range of 1–4 h using a muffle furnace with a heating rate of 10 °C/min. The calcined AEC samples were kept in a desiccator containing silica pellets to prevent moisture contamination.
Table 1 Experimental ranges and level of the independent test variables
Design of experiment
In the present study, central composite design, a form of RSM, was used to optimize the process conditions for the preparation of the AEC catalyst. To evaluate the effect of operating variables on the performance of the AEC catalyst, three main factors were considered: calcination temperature (x1, °C), calcination time (x2, h), and anthill proportion in the AEC (x3). A total of twenty experiments were conducted: 2^3 = 8 cube points, 6 replications at the center point, and 6 axial points. Experimental data were analyzed using the Design-Expert software version 7.0.0 (STAT-EASE Inc., Minneapolis, USA). Table 1 presents the experimental ranges and levels of the chosen input variables for biodiesel yield (Y).
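As a sketch of how this run layout can be generated in coded units (the axial distance α is not stated in the source; a face-centred α = 1 is assumed here so that axial points stay within the stated factor ranges):

```python
import itertools
import numpy as np

alpha = 1.0  # face-centred CCD assumed; the source does not state its axial distance
cube = list(itertools.product([-1.0, 1.0], repeat=3))        # 2^3 = 8 cube points
axial = [tuple(alpha * s if j == i else 0.0 for j in range(3))
         for i in range(3) for s in (-1.0, 1.0)]             # 6 axial points
center = [(0.0, 0.0, 0.0)] * 6                               # 6 center replicates
design = np.array(cube + axial + center)                     # 20 runs x 3 factors
print(design.shape)                                          # (20, 3)
```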
The main goal is to establish the best variables for the catalyst preparation process, from the developed models using experimental data. The desired goal in terms of biodiesel yields (responses) was described as "maximization" to determine the optimum process parameters for the maximum biodiesel yields. The responses were obtained via the transesterification process and used to develop mathematical relations, which correlate the responses (biodiesel yields) and catalyst preparation process parameters studied according to the second-order polynomial response equation given in Eq. 1.
$$Y_{i} = b_{0} + b_{1}x_{1} + b_{2}x_{2} + b_{3}x_{3} + b_{12}x_{1}x_{2} + b_{13}x_{1}x_{3} + b_{23}x_{2}x_{3} + b_{11}x_{1}^{2} + b_{22}x_{2}^{2} + b_{33}x_{3}^{2},$$
where \(Y_{i}\) is the response variable of biodiesel yield, \(b_{0}\) is the intercept, the \(b_{i}\) are regression coefficients for linear effects, the \(b_{ik}\) are regression coefficients for interaction effects, the \(b_{ii}\) are regression coefficients for quadratic effects, and the \(x_{i}\) represent the coded experimental levels of the variables.
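Given the coded design matrix and the twenty measured yields, the b-coefficients of Eq. 1 can be recovered by ordinary least squares. A minimal sketch follows (the function name is ours):

```python
import numpy as np

def quadratic_features(X):
    """Design-matrix columns [1, x1, x2, x3, x1x2, x1x3, x2x3, x1^2, x2^2, x3^2]
    matching the terms of Eq. 1, for an (n, 3) array of coded factor levels."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# With X the (20, 3) coded design and y the twenty measured yields (%):
# b, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
```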
Activity study
The biodiesel was produced via the transesterification of oil with methanol using a batch reactor (a 250 mL three-neck round bottom flask). One of the side necks was used to insert a thermometer for regular temperature monitoring; the other side neck was fitted with a reflux condenser to minimize loss of methanol and required quantities of reactants and catalyst were fed through the middle neck. The reactor was then placed on a temperature-controlled magnetic stirrer to maintain the required reaction temperature and a fixed 300 rpm stirring rate. The reactor content was heated to 60 °C before proper mixing commenced. The reaction parameters were fixed at a catalyst loading of 5 wt%, methanol to oil molar ratio of 6:1, reaction temperature of 60 °C, and reaction time of 2 h for all the experiments [35]. At the end of a reaction, the reactor contents were cooled to room temperature, and the catalyst was removed from the product mixture by centrifugation. The methyl ester contents were analyzed using gas chromatography–mass spectrometry (Varian 4000 GC/MS/MS system). The GC column was an Agilent J&W capillary column (DB-624, length: 30 mm, diameter: 0.320 mm and film thickness: 1.8 µm) with helium as the carrier gas.
The yield (Yi) of biodiesel produced was calculated by the following equation:
$${\text{Biodiesel yield }}\left( {Y_{i} } \right),\% = \frac{{M_{is} A_{b} }}{{M_{b} A_{is} }} \times 100$$
where Mis is the mass of the internal standard added to the biodiesel sample, Ais is the peak area of the internal standard, Mb is the mass of the biodiesel sample, and Ab is the peak area of the biodiesel sample [7].
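Eq. 2 amounts to a one-line calculation; a small sketch (argument names are ours):

```python
def biodiesel_yield(m_is, a_is, m_b, a_b):
    """Yield (%) per Eq. 2 from internal-standard mass m_is (g) and peak area a_is,
    and biodiesel sample mass m_b (g) and peak area a_b."""
    return (m_is * a_b) / (m_b * a_is) * 100.0
```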
Characterization of the prepared AEC catalyst
The optimal composite catalyst was characterized for its morphological structure, porosity development, and elemental composition, using a scanning electron microscope equipped with an energy-dispersive X-ray (EDX) analyzer (SEM–EDX, JEOL-JSM 7600F). Fourier transform infrared (FTIR) spectrophotometer (IRAffinity-1S, Shimadzu, Japan) was used on the prepared AEC catalyst to determine various surface functional groups. The spectra were recorded from 4000 to 500 cm−1. Also, the Brunauer–Emmett–Teller (BET) surface area and pore size distribution of fresh and spent AEC catalysts were determined by the surface area analyzer (Quantachrome instrument, NOVA station A model, 11:03, USA) at the temperature of 77 K.
Development of regression model equation
The complete design matrix, as generated by the Design-Expert software and the experimental results obtained during activity evaluation of AEC samples, are presented in Table 2, and second-order polynomial models (Eqs. 3 and 4) were used to correlate the dependent and independent variables. The final models in terms of coded factors for the virgin oil biodiesel (VOB) yield, (Y1) and waste oil biodiesel (WOB) yield, (Y2) are shown in Eqs. 3 and 4, respectively.
Table 2 Experimental design matrix and expected responses
$$Y_{1} = 76.55 + 9.41x_{1} + 1.78x_{2} - 5.66x_{3} - 2.64x_{1} x_{2} - 1.65x_{1} x_{3} - 0.33x_{2} x_{3} + 2.299x_{1}^{2} + 2.73x_{2}^{2} - 1.29x_{3}^{2}$$
$$Y_{2} = 42.14 + 8.23x_{1} + 1.14x_{2} - 2.07x_{3} - 0.27x_{1} x_{2} - 1.80x_{1} x_{3} - 3.42x_{2} x_{3} + 5.48x_{1}^{2} + 7.11x_{2}^{2} - 1.56x_{3}^{2}$$
The yields of biodiesel produced from the virgin and waste oils were calculated by Eq. 2 and are presented in Table 2. The adequacy of the models was evaluated based on the values of the correlation coefficient (R2) and adjusted R2 (Adj-R2). The R2 value quantitatively evaluates the correlation between the predicted and observed output values. The experimental and predicted biodiesel yields obtained from the two models (Eqs. 3 and 4) were compared. Generally, there was good agreement between the predicted and observed values of biodiesel yield, with R2 = 0.9584 for VOB yield and R2 = 0.9806 for WOB yield. This indicates that 95.84% and 98.06% of the total variation in VOB and WOB yields, respectively, is explained by the process variables considered, while 4.16% of the variation in VOB yield and 1.94% of the variation in WOB yield are not explained by the corresponding models. Adj-R2, which measures the goodness of a model fit, corrects the R2 value for the sample size and the number of terms in the model using the degrees of freedom in its computation. If a model has many terms and the sample size is not very large, Adj-R2 may be visibly smaller than R2 [27]. In the current study, the Adj-R2 values for the WOB and VOB models were very close to their corresponding R2 values (Tables 3 and 4). Thus, the experimental responses agreed reasonably well with the values predicted by the two models.
Table 3 Analysis of variance (ANOVA) for response surface quadratic model for VOB yield
Table 4 Analysis of variance (ANOVA) for response surface quadratic model for WOB yield
Furthermore, the fitness of the models was examined by analysis of variance (ANOVA—Type III). The ANOVA for VOB and WOB yields are presented in Tables 3 and 4, respectively. As indicated in Tables 3 and 4, the model F values for the VOB and WOB yields were found to be 25.59 and 56.21, respectively, which indicated that the two models were significant. According to the ANOVA, values of "Prob > F" less than 0.0500 indicate model terms are significant. In this case, \(x_{1}\), \(x_{2}\), \(x_{3},\)\(x_{1} x_{2}\), \(x_{1}^{2}\), and \(x_{2}^{2}\) were the significant model terms to the VOB yield, whereas \(x_{1}\), \(x_{3}\), \(x_{1} x_{3}\), \(x_{2} x_{3}\), \(x_{1}^{2}\), \(x_{2}^{2}\), and \(x_{3}^{2}\) were significant model terms to the WOB yield. It can be observed that the two models were adequate to predict the two responses (VOB and WOB yields) within the range of parameters considered in this study.
Figure 1a and b depict the graphs of the predicted values against the experimental values for VOB and WOB yields, respectively. It was found in both cases that the predicted response values agreed reasonably well with the corresponding experimental values within the range of the operating conditions. However, Fig. 1b revealed that the predicted WOB yield values were nearly close to experimental values, indicating that the relationship between the catalyst preparation variables and WOB yield was best described by the model developed. Also, Fig. 1a displayed that the model (Eq. 3) captured the correlation between the composite catalyst preparation variables and the VOB yield. These observations implied that the independent variables considered in this study had effects on both VOB and WOB yields.
Predicted vs. experimental yield of a VOB and b WOB
Virgin oil biodiesel (VOB) yield
The ANOVA (Table 3) revealed that all three composite catalyst preparation variables had influential effects on the VOB yield. The calcination temperature (\(x_{1}\)), calcination time (\(x_{2}\)), and anthill proportion in the composite catalyst (\(x_{3}\)) exhibited F values of 140.74, 5.02, and 50.84, respectively. The calcination temperature was the most influential factor on VOB yield, followed by the anthill proportion, as these exhibited the largest F values compared with calcination time. Meanwhile, only the quadratic effects of calcination temperature and calcination time on the VOB yield were significant. Figure 2 illustrates the effect of calcination temperature and calcination time on virgin oil conversion to biodiesel at an anthill proportion of 30% in the AEC catalyst. As is obvious from the figure, the VOB yield increased with increasing temperature and time. The reason for this observation is thought to be that calcination of the raw catalyst at elevated temperature and for a longer time resulted in complete removal of adsorbed gases and created cavities on its surface, which paved the way for the adsorption of methanol [36]. In other words, higher temperatures coupled with prolonged calcination caused rearrangement of the solid and pore opening on its surface, which enhanced the adsorption of methanol. It can also be observed that calcination temperature had more effect on the catalyst performance in the conversion of virgin oil to biodiesel than calcination time, as confirmed by the large F value obtained for calcination temperature (Table 3). This result agrees with the work of Olutoye et al. [2], who reported that calcination temperature and time had significant effects on the morphological properties of a heterogeneous catalyst prepared from barium-modified clay.
Three-dimensional response surface plot of VOB yield (effect of calcination temperature and time)
Waste oil biodiesel (WOB) yield
Based on the F value (Table 4), calcination temperature (\(x_{1}\)) showed the highest F value of 204.69, indicating that it had the most significant effect on the conversion of waste oil to biodiesel, compared to other parameters. The composition of an anthill in AEC catalyst (\(x_{3}\)) was the second significant parameter among the variables studied with an F value of 12.91 and a p value of less than 0.05. However, the quadratic effects of the three variables on the WOB yield were all significant. Figure 3a depicts the combined effect of calcination temperature and calcination time on the WOB yield at fixed anthill proportion in the AEC catalyst of 30%. As it is obvious from Fig. 3a, a slight improvement in WOB yield with increasing calcination time was observed, whereas the same figure showed a sharp increase in WOB yield with an increase in calcination temperature. The presumed reason is that prolonged or short period calcination could reduce the surface area and performance of the prepared catalyst, as a longer calcination period might result in agglomeration of catalyst particles (sintering), whereas shorter calcination time could not guarantee the formation of pores on the catalyst surface [3].
Three-dimensional response surface plot of WOB yield: a effect of calcination time and temperature; b effect of anthill proportion in AEC catalyst and calcination time
To study the combined effect of calcination time and anthill proportion in the AEC catalyst on WOB yield, experiments were conducted with calcination time varying from 2 to 4 h and anthill proportion varying from 20 to 40% at a constant calcination temperature of 800 °C. The result is depicted in Fig. 3b, which shows that the WOB yield increases with an increase in anthill proportion in the AEC catalyst. The anthill/eggshell impregnation ratio played a significant role in the performance of the AEC catalyst. Anthill comprises mixed metal oxides which can serve as catalyst support [37]. However, when anthill is mixed with eggshell powder in an appropriate quantity and calcined at high temperature, the mixture tends to degrade as a result of the evolution of adsorbed gases (CO2, SO2, and others) from the solid. The thermal treatment opens up the pores on the catalyst surface for improved activity and forms synergetic mixed metal oxides that are strongly basic [2]. This observation is corroborated by the EDX analysis.
Optimization of process variables
The optimum values of the process parameters for the preparation of the AEC catalyst were 1000 °C, 4 h, and 20% for calcination temperature (\(x_{1}\)), calcination time (\(x_{2}\)), and anthill proportion in the AEC catalyst (\(x_{3}\)), respectively, as can also be seen in Table 5. At these optimum values, the experimental values of VOB and WOB yields were found to be 97.13% and 70.92%, respectively. The experimental values obtained were close to those predicted by the models, with slight errors between the predicted and experimental values, calculated as 0.67% and 0.50% for VOB and WOB yields, respectively. This implies that the strategy of optimizing the AEC catalyst preparation conditions to obtain maximal biodiesel yields by RSM in this study was successful.
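The reported optimum can be cross-checked against the fitted models. The sketch below evaluates Eqs. 3 and 4 over the coded cube and maximizes the two yields jointly, using a plain sum as a crude stand-in for the desirability function used by Design-Expert; mapping coded ±1 to the range limits is our assumption:

```python
import numpy as np

# Coefficients of Eqs. 3 and 4, ordered [b0, b1, b2, b3, b12, b13, b23, b11, b22, b33]:
B_VOB = np.array([76.55, 9.41, 1.78, -5.66, -2.64, -1.65, -0.33, 2.299, 2.73, -1.29])
B_WOB = np.array([42.14, 8.23, 1.14, -2.07, -0.27, -1.80, -3.42, 5.48, 7.11, -1.56])

def predict(b, x1, x2, x3):
    """Evaluate the second-order model of Eq. 1 at coded factor levels."""
    return (b[0] + b[1] * x1 + b[2] * x2 + b[3] * x3 + b[4] * x1 * x2
            + b[5] * x1 * x3 + b[6] * x2 * x3
            + b[7] * x1 ** 2 + b[8] * x2 ** 2 + b[9] * x3 ** 2)

g = np.linspace(-1.0, 1.0, 41)
X1, X2, X3 = np.meshgrid(g, g, g, indexing="ij")
score = predict(B_VOB, X1, X2, X3) + predict(B_WOB, X1, X2, X3)
i = np.unravel_index(np.argmax(score), score.shape)
print(X1[i], X2[i], X3[i])                       # -> 1.0, 1.0, -1.0
print(predict(B_VOB, X1[i], X2[i], X3[i]))       # ~96.5% VOB
print(predict(B_WOB, X1[i], X2[i], X3[i]))       # ~69.6% WOB
# Coded (+1, +1, -1) corresponds to 1000 C, 4 h, 20% anthill -- the reported optimum,
# with predicted yields close to the reported 97.13% and 70.92%.
```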
Table 5 Optimum conditions and model validation for AEC catalyst preparation
Also, the results obtained indicate that the performance of the AEC catalyst for the conversion of virgin and waste oils to biodiesel was satisfactory, with the maximum triglyceride conversion close to 100% for virgin oil. As reflected in the literature, Xie et al. [38] reported that 87% of virgin vegetable oil was converted to biodiesel by a ZnO/KF catalyst, which implies that the composite catalyst developed in this work was effective for the conversion of virgin vegetable oil to biodiesel. However, the lower biodiesel yield recorded in the case of waste vegetable oil was due to its high FFA content, which was only slightly less than 2 wt%. As reported in the literature, transesterification will not occur if the FFA content of the biodiesel feedstock is higher than 2 wt%, unless the high FFA content is first reduced by acid esterification [35]. Nevertheless, the optimal AEC catalyst proved to be effective for the production of biodiesel from high-FFA feedstock via a single-step transesterification process. Further research, such as optimization of the transesterification process itself, is recommended to investigate the activity of the optimal catalyst and to enhance the biodiesel yield by considering a two-step transesterification process; investigating the stability of the spent catalyst during reuse is also necessary. The process adopted to develop the catalyst is considered cost-effective and practicable, as both anthill and waste chicken eggshells are available in abundance, so an effective composite catalyst with excellent activity can be derived from their combination.
Characterization of AEC catalyst prepared under optimum conditions
The SEM image depicted in Fig. 4 reveals the surface morphology of the AEC catalyst prepared under optimum conditions. The prepared catalyst surface has rough, irregular, and larger particles. More so, pores of different shapes and sizes are observed on its surface, thus there is a better possibility for the methanol to be adsorbed. However, the presence of the pores on the optimal AEC catalyst might be attributed to the elimination of adsorbed gases, organic matter, and moisture content [39].
Scanning electron micrograph of AEC catalyst prepared under optimum conditions (calcination temperature = 1000 °C, calcination time = 4 h, and anthill/eggshell percentage composition = 20%/80%)
Figure 5 shows the FTIR spectrum of the AEC catalyst prepared under optimum conditions. The sharp and broad absorption bands, respectively, at 3644 cm−1 and 3450 cm−1 are assignable to hydroxyl bond from adsorbed moisture. The absorption band observed at 2359 cm−1 is due to the \(C \equiv C\) stretching. The bands at 1479 cm−1 and 1417 cm−1 are attributed to the CH3 antisymmetric deformation and C–O asymmetric stretching modes, respectively. The optimal AEC catalyst also shows another set of bands at 1057 cm−1, 767 cm−1 and 459 cm−1 and can be, respectively, attributed to the O-Si–O stretching, Al–Mg–OH and Si–O–Al vibration of the clay sheet [3]. These detected functional groups are essential to the activity of the catalyst as they provide sufficient adsorptive sites for reactants [16].
FTIR spectrum of AEC catalyst prepared under optimum conditions (calcination temperature = 1000 °C, calcination time = 4 h, and anthill/eggshell percentage composition = 20%/80%)
Table 6 shows the result of the EDX analysis. Calcium is the main component in the prepared catalyst, and no carbon is detected in the AEC sample, indicating that the CaCO3 contained in the chicken eggshell was completely decomposed into CaO and CO2. The table also shows high contents of Si, Al, Fe, and O, and the EDX result indicates that the mineral compositions of the studied sample are CaO, SiO2, Al2O3, and Fe2O3. Due to the high Ca content, the prepared AEC catalyst can be regarded as a heterogeneous base catalyst. After calcination, the oxygen atom in the CaO represents a Lewis base site and the calcium ion a Lewis acid site, and the good activity exhibited by the AEC catalyst during the transesterification reaction was attributed to its high basicity. Moreover, the prepared catalyst can be referred to as a supported catalyst, as it contains SiO2 and Al2O3, which are good catalyst supports [18, 37].
Table 6 Elemental analysis for optimal AEC catalyst
The BET surface area and pore size distribution analyses were conducted to determine the textural properties of the fresh and spent AEC catalysts, and the results are presented in Table 7. The surface area of the fresh catalyst was determined to be 48.12 m2/g by BET analysis, whereas that of the used catalyst was 7.03 m2/g. Additionally, a distinct decrease in the pore volume and average pore radius of the catalyst was observed. This was attributed to agglomeration of the catalyst particles caused by glycerol and unreacted oil blocking the catalyst surface [7]. The studied composite has a relatively large specific surface area (48.12 m2/g), indicating that it can be considered an effective catalytic material for the transesterification of vegetable oil with alcohol to produce fatty acid alkyl esters, especially when compared with lithium-based chicken bone (8.62 m2/g) [40], barium-modified montmorillonite K10 (13.114 m2/g) [41], and KOH-modified zinc oxide (4.35 m2/g) [42].
Table 7 Analysis of BET and pore size distribution of the fresh and spent AEC catalysts
The preparation process condition for the AEC catalyst was optimized, and the optimum values of the process variables were 1000 °C, 4 h, and 20% for calcination temperature, calcination time, and anthill proportion in the AEC catalyst, respectively. At these optimum values, the yields of VOB and WOB were 97.13% and 70.92%, respectively. Analysis of variance showed high correlation coefficients for the two responses (R2 = 0.9584 for VOB and R2 = 0.9806 for WOB), thus indicating better agreement between the predicted and the experimental values. A detailed characterization of the AEC sample prepared under optimum conditions revealed that the catalyst was of good quality and contained components that were actively involved in the transesterification process.
Quddus MR. A novel mixed metallic oxygen carriers for chemical looping combustion: Preparation, characterization and kinetic modeling, Doctoral dissertation, Ontario: Department of Chemical and Biochemical Engineering, University of Western; 2013.
Olutoye MA, Adeniyi OD, Yusuff AS. Synthesis of biodiesel from palm kernel oil using mixed clay-eggshell heterogeneous catalysts. Iranica J Ener Environ. 2016;7(3):308–14.
Olutoye MA, Hameed BH. A highly active clay-based catalyst for the synthesis of fatty acid methyl ester from waste cooking palm oil. Appl Catal A Gen. 2012;450:57–62.
Kumar V, Kant P. Biodiesel production from sorghum oil by transesterification using zinc oxide as catalyst. Pet Coal. 2014;56(1):35–40.
Sun H, Ding Y, Duan J, et al. Transesterification of sunflower oil to biodiesel on ZrO2 supported La2O3 catalyst. Bioresour Technol. 2013;101(3):953–8.
Jitputti J, Kitiyanan B, Rangsunvigit P, et al. Transesterification of crude palm kernel oil and crude coconut oil by different solid catalysts. Chem Eng J. 2006;116:61–6.
Tan YH, Abdullah MO, Hipolito CN, et al. Waste ostrich and chicken-eggshells as heterogeneous base catalyst for biodiesel production from used cooking oil: catalyst characterization and biodiesel yield performance. Appl Energy. 2015;2(1):1–13.
Sharma YC, Singh B, Korstad J. Application of an efficient nonconventional heterogeneous catalyst for biodiesel synthesis from Pongamia pinnata oil. Ener Fuels. 2010;24:3223–31.
Cho YB, Seo G. High activity of acid treated of quail eggshell catalysts in the transesterification of palm oil with methanol. Bioresour Technol. 2010;101:8515–24.
Sulaiman S, Khairudin N, Jamah P, et al. Characterization of fish bone catalyst for biodiesel production. Int J Bio Food Veter Agric Eng. 2014;8(5):464–6.
Obadiah A, Swaroopa GA, Kumar SV, et al. Biodiesel production from Palm oil using calcined waste animal bone as catalyst. Bioresour Technol. 2012;116:512–6.
Roschat W, Kacha M, Yoosuk B, et al. Biodiesel production based on heterogeneous process catalyzed by solid waste coral fragment. Fuel. 2012;98:194–202.
Aderemi BO, Hameed BH. Alum as a heterogeneous catalyst for the transesterification of palm oil. Appl Catal A Gen. 2009;370:54–8.
Dai Y, Chen K, Wang Y, et al. Application of peanut husk ash as a low-cost solid catalyst for biodiesel production. Int J Chem Eng Appl. 2014;5(3):1–8.
Shah B, Sulaimana S, Jamal P, et al. Production of heterogeneous catalyst for biodiesel synthesis. Int J Chem Environ Eng. 2014;5(2):73–5.
Refaat AA. Biodiesel production using solid metal oxide catalyst. Int J Environ Sci Tech. 2011;8(1):203–21.
Zabeti M, Wan Daud WMA, Aroua MK. Activity of solid catalysts for biodiesel production: a review. Fuel Process Technol. 2011;90:770–7.
Taufiq-Yap YH, Abdullah NF, Basri M. Biodiesel production via transesterification of palm oil using NaOH/Al2O3 catalysts. Sains Malaysiana. 2011;40(60):587–94.
Henne GA. Anthill as a resource for ceramics, A Dissertation submitted to the School of Graduate Studies, Kwame Nkrumah University of Science and Technology. 2009.
Akinwekomi AD, Omotoyinbo JA, Folorunso D. Effect of high alumina cement on selected foundry properties of anthill clay. Leo Elect J Pract Technol. 2012;1:37–46.
Yusuff AS, Olateju II. Experimental investigation of adsorption capacity of anthill in the removal of heavy metals from aqueous solution. Environ Qual Manage. 2018;52(3):1–7.
Muthu K, Viruthagiri T. Study on solid based calcium oxide as a heterogeneous catalyst for the production of biodiesel. J Adv Chem Sci. 2015;1(14):160–3.
Karacan F, Ozden U, Karacan S. Optimization of manufacturing conditions for activated carbon from Turkish lignite by chemical activation using response surface methodology. Appl Therm Eng. 2007;21:1212–8.
Bezerra MA, Santelli RE, Oliveira EP, et al. Response surface methodology (RSM) as a tool for optimization in analytical chemistry. Talanta. 2008;76:965–77.
Tan IAW, Ahmad AL, Hameed BH. Preparation of activated carbon from coconut husk: optimization study on removal of 2,4,6-trichlorophenol using response surface methodology. J Hazard Mater. 2008;153:709–17.
Kralik P, Kusic H, Koprivanac N, et al. Degradation of chlorinated hydrocarbons by UV/H2O2: the application of experimental design and kinetic modeling approach. Chem Eng J. 2010;158:154–66.
Giwa A, Akpan UG, Hameed BH. Optimization of photocatalytic degradation of an anthraquinone dye using design of experiment. J Eng Res. 2012;17(3):20–31.
Montgomery DC. Design and analysis of experiments. Hoboken: Wiley; 2005.
Myers RH, Montgomery DC. Response surface methodology: process and product optimization using designed experiments. 2nd ed. Hoboken: Wiley; 2002.
Said KAM, Amin MAM. Overview on the response surface methodology (RSM) in extraction processes. J Appl Sci Process Eng. 2015;2(1):8–17.
Khataee AR, Dehghan G, Ebadi A, et al. Biological treatment of a dye solution by macroalgae Chara sp: effect of operational parameters, intermediates identification and artificial neural network modeling. Bioresour Technol. 2010;101:2252–8.
Obeng DP, Morrell S, Napier-Munn TJ. Application of central composite rotatable design to modeling the effect of some operating variables on the performance of the three-product cyclone. Int J Miner Process. 2005;76:181–92.
Canakci M, Van Gerpen J. Biodiesel production from oils and fats with high free fatty acids. Trans ASAE. 2001;44(6):1429–36.
Yusuff AS. Preparation and characterization of composite anthill-chicken eggshell adsorbent: optimization study on heavy metal adsorptions using response surface methodology. J Environ Sci Tech. 2017;10(3):120–30.
Yusuff AS, Adeniyi OD, Olutoye MA, et al. Performance and emission characteristics of diesel engine fuelled with waste frying oil biodiesel-petroleum diesel blend. Int J Eng Res Afr. 2017;32:100–11.
Chu L, Zhang Z, Peng P, et al. Leaching S from pressure acid leaching residue of zinc concentrate: Parameters optimization using response surface methodology. In: Wang S, Dutrizac JE, Free ML, Hwang JY, Kim D, editors. T.T. Chen honorary symposium on hydrometallurgy, electrometallurgy and materials characterization. Hoboken: Wiley; 2012.
Yusuff AS, Popoola LT. Optimization of biodiesel production from waste frying oil over alumina supported chicken eggshell catalyst using experimental design tool. Acta Polytech. 2019;59(1):88–97.
Xie W, Peng H, Chen L. Calcined Mg-Al hydrotalcites as solid base catalysts for methanolysis of soybean oil. J Mol Catal A: Chem. 2006;246(1–2):24–32.
Leofanti G, Tozzola G, Padovan M, et al. Catalyst characterization: characterization techniques. Catal Today. 1997;34:307–27.
AlSharifi M, Znad H. Development of a lithium based chicken bone (Li-Cb) composite as an efficient catalyst for biodiesel production. Renew Energy. 2019. https://doi.org/10.1016/j.renene.2019.01.052.
Olutoye MA, Wong SW, Chin LH, et al. Synthesis of fatty acid methyl esters via transesterification of waste cooking oil by methanol with a barium-modified montmorillonite K10 catalyst. Renew Energy. 2016;86:392–8.
Yacob A, Bello AM, Ruskam A, et al. Catalytic performance by kinetics evaluation of novel KOH-modified zinc oxide in the heterogeneous transesterification of rice bran oil to biodiesel. In: 6th International Conference on Environmental Science and Technology, 2015;84(17):101–107.
Department of Chemical and Petroleum Engineering, College of Engineering, Afe Babalola University, Ado-Ekiti, Ekiti State, Nigeria
Adeyinka Sikiru Yusuff
Correspondence to Adeyinka Sikiru Yusuff.
Yusuff, A.S. Development of a composite catalyst from anthill and eggshell: an optimization study on biodiesel production from virgin and waste vegetable oils. Waste Dispos. Sustain. Energy 1, 279–288 (2019) doi:10.1007/s42768-019-00015-x
Revised: 18 September 2019
Issue Date: December 2019
Chicken eggshell
Conversion of vegetable oil to biodiesel
Heterogeneous catalyst
Central composite design
08-01-2019 | Original Paper | Issue 5/2019 | Open Access
Toddler Screening for Autism Spectrum Disorder: A Meta-Analysis of Diagnostic Accuracy
Journal: Journal of Autism and Developmental Disorders > Issue 5/2019
Ana B. Sánchez-García, Purificación Galindo-Villardón, Ana B. Nieto-Librero, Helena Martín-Rodero, Diana L. Robins
Important notes
The online version of this article (https://doi.org/10.1007/s10803-018-03865-2) contains supplementary material, which is available to authorized users.
Population level (level 1) screening for autism spectrum disorder (ASD) has been the subject of numerous papers, particularly since the American Academy of Pediatrics published a policy statement more than a decade ago (Council on Children with Disabilities 2006 ). The most commonly studied tool is the Modified Checklist for Autism in Toddlers (M-CHAT; Robins et al. 1999 ), and its revision, the M-CHAT-revised, with follow-up (M-CHAT-R/F; Robins et al. 2009 ). However, the variety of screening tools for prospective identification of early signs of autism has encouraged the publication of different systematic reviews (Daniels et al. 2014 ; McPheeters et al. 2016 ). See Table 1 for the tools included in the current meta-analysis, and references for more information about each tool.
Table 1 Sample characteristics and individual outcomes of the included studies. The original table reports, for each study, the screening test(s), the FN identification strategy, the total N, and the age in months; the numeric entries are not reproduced here, and the study/tool pairings were as follows:
Study 1: Nygren et al. (2012), M-CHAT; M-CHAT + JOBS
Study 4: Baird et al. (2000), CHAT
Study 5: Wiggins et al. (2014), PEDS + PATH
Study 7: Kamio et al. (2014), M-CHAT_JV
Study 8: Stenberg et al. (2014), M-CHAT
Study 9: Chlebowski et al. (2013), M-CHAT/Yale Screener + STAT
Study 10: Canal-Bedia et al. (2011), M-CHAT
Study 11: Barbaro and Dissanayake (2010), SACS
Studies 12 and 13: Inada et al. (2011), M-CHAT (short version 9, cut-off 1); M-CHAT (full version)
Study 14: Dereu et al. (2010), CESDD
Study 15: Miller et al. (2011), ITC + M-CHAT
Study 16: Robins et al. (2014), M-CHAT-R/F
Study 17: Honda et al. (2005), YACHT-18
Study 18: Baranek (2015), FYI, M-CHAT, and SRS
FN false negative, FP false positive, TP true positive, TN true negative, NA not available from paper, M-CHAT modified checklist for autism in toddlers, JOBS joint attention-observation schedule, CHAT checklist for autism in toddlers, PEDS parents' evaluation of developmental status, M-CHAT_JV modified checklist for autism in toddlers, Japanese version, STAT screening tool for autism in toddlers and young children, SACS social attention and communication study, CESDD checklist for early signs of developmental disorders, ITC infant-toddler checklist, M-CHAT-R/F modified checklist for autism in toddlers, revised, with follow-up, YACHT-18 young autism and other developmental disorders checkup tool, FYI first year inventory, SRS social responsiveness scale
a FN strategy = methods to identify false negative screening cases, or children with ASD who were missed by the screening tool(s) of interest
bTotal N with missing cases
The U.S. Preventive Services Task Force (USPSTF; Siu and Preventive Services Task Force 2016) concluded that there was insufficient evidence to provide a recommendation regarding universal toddler screening for ASD. At the same time, it emphasized the potential of the M-CHAT as a universal screening tool, as evidenced by empirical results (R. Canal-Bedia, personal communication, May 9, 2016). Hence, a systematic study of the psychometric data available across studies is needed.
Meta-analysis is an important resource for summarizing, in quantitative terms, the accuracy of diagnostic tests, providing a higher level of evidence. For this reason, the current study conducted a meta-analysis to review empirical data from the studies and tools used since the first ASD population screening was performed in England (Baron-Cohen et al. 1996).
In this kind of study, the reference test may be imperfect because a gold standard is not available in practice. We used the Bayesian hierarchical model (HSROC; Rutter and Gatsonis 2001) to carry out the meta-analysis. The model is robust in adjusting for the imperfect nature of the reference standard of autism tools in a bivariate meta-analysis of diagnostic test sensitivity, specificity, and other psychometric parameters. Another bivariate model was proposed by Reitsma et al. (2005), in which the vector (logit(sensitivity), logit(specificity)) is assumed to follow a bivariate normal distribution. However, Harbord and Whiting (2009) showed that the likelihood functions of the HSROC and bivariate models are algebraically equivalent and yield identical pooled sensitivity and specificity. Dendukuri et al. (2012) demonstrated the usefulness of the HSROC model when no gold standard test is available.
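To make the bivariate formulation concrete, here is a minimal sketch, assuming hypothetical 2 × 2 counts, of the per-study quantities a Reitsma-type model works with: each study contributes a (logit(Se), logit(Sp)) pair, here computed with a 0.5 continuity correction to guard against zero cells.

```python
# Minimal sketch of the data preparation behind a bivariate meta-analysis of
# diagnostic accuracy; the counts below are illustrative only.
import math

studies = {                     # (TP, FN, FP, TN), hypothetical counts
    "study_A": (38, 12, 210, 9740),
    "study_B": (21, 9, 150, 11320),
    "study_C": (55, 20, 480, 18445),
}

def logit(p):
    return math.log(p / (1.0 - p))

for name, (tp, fn, fp, tn) in studies.items():
    # 0.5 continuity correction guards against zero cells
    se = (tp + 0.5) / (tp + fn + 1.0)
    sp = (tn + 0.5) / (tn + fp + 1.0)
    print(f"{name}: logit(Se) = {logit(se):+.3f}, logit(Sp) = {logit(sp):+.3f}")
```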
Therefore, in this study we used a Bayesian meta-analysis, and the main aim was to evaluate the accuracy of the different screening tools. The second objective was to calculate the pooled psychometric properties across the different studies in order to evaluate the tools' effectiveness and support their recommendation internationally (R. Canal-Bedia, personal communication, May 9, 2016).
This systematic review was guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Moher et al. 2009).
Criteria for Selection of Studies
Included papers focused on the screening and diagnosis of ASD and other developmental disorders in the general population, also known as level 1 screening. In cases where studies had duplicated data, only the most complete one was selected in order to avoid an unrealistic increase in the homogeneity between studies, and emphasis was placed on studies validating screening tools, which were often the most complete samples. Therefore, we excluded studies focused on tools that were not designed to screen for ASD, screening studies not applied to the general population (level 1), and all those that did not provide sufficient data to construct a 2 × 2 contingency table of screening × diagnosis (such as those without confirmatory diagnoses), or had a low quality rating in the quality assessment.
A systematic literature search identified studies that reported tools and procedures used for the early detection of ASD. The articles were obtained from the CINAHL, ERIC, PsycINFO, PubMed and WOS databases using several combinations of the relevant keywords and Medical Subject Headings (MeSH), including the categories of terms suggested by Daniels et al. (2014). All articles published between January 1992 and April 2015 were considered eligible. Only articles published in English and reporting a screening age range of 14 to 36 months were included. The search strategy for PubMed is described in Appendix 1. An additional search was conducted for grey literature captured by other search engines such as Google Scholar; we also searched the reference lists of included articles and of any relevant review articles identified through the search, used the 'related articles' function in PubMed, and contacted experts to locate significant but as yet unpublished studies.
Assessment of Methodological Quality
Two reviewers conducted quality assessment of the included studies with the QUADAS-2 tool (Quality Assessment of Diagnostic Accuracy Studies-2; Whiting et al. 2004). Any discrepancies were referred to a third reviewer. QUADAS is a validated quality checklist (Deeks 2001; Whiting 2011; Whiting et al. 2006) composed of 14 items that encompass the most important sources of bias and variation observed in diagnostic accuracy studies. Studies were classified as having low or high risk of bias, and their applicability was graded as low or high.
The following data items were extracted from each study using a data collection form: first author and year of publication; size and characteristics of the study population; raw cell values [true positive ( TP), true negative ( TN), false positive ( FP), false negative ( FN); and psychometric properties, specifically sensitivity ( Se), specificity ( Sp), positive and negative predictive values ( PPV, NPV), positive and negative likelihood ratio values ( LR+; LR−), and diagnostic odds ratio ( DOR)]. See Appendix 2 for definitions of bio-statistical terms. Psychometric properties which were not provided in the studies were calculated based on raw cell values. Clarification was requested from the authors via e-mail when we observed discrepancies between the data reported and the data calculated. Details of the search and results are shown (see Tables 1, 2).
Table 2 Individual diagnostic outcomes of the included studies (Nygren et al. 2012; Baird et al. 2000; Wiggins et al. 2014; Kamio et al. 2014; Stenberg et al. 2014; Chlebowski et al. 2013; Canal-Bedia et al. 2011; Barbaro and Dissanayake 2010; Inada et al. 2011; Dereu et al. 2010; Miller et al. 2011; Robins et al. 2014; Honda et al. 2005; Baranek 2015), reporting Se, Sp, PPV, NPV, LR+ and LR− with 95% CIs per study; the numeric entries are not reproduced here.
Se sensitivity, Sp specificity, PPV positive predictive value, NPV negative predictive value, LR+ positive likelihood ratio, LR− negative likelihood ratio, NA not available from paper
Data Synthesis and Statistical Analysis
We calculated the pooled Se, Sp, LR+, LR−, PPV, NPV and DOR for the included studies. Separate pooling of sensitivity and specificity may lead to biased results because different thresholds were used in different studies (Deeks 2001; Moses et al. 1993). Therefore, we used the Hierarchical Summary Receiver Operating Characteristic model (HSROC; Rutter and Gatsonis 2001) to estimate the diagnostic accuracy parameters and to generate a summary receiver operating characteristic curve with HSROC, an R package available from CRAN (Schiller and Dendukuri 2015). The model is robust for including studies with different reference standards and potential negative correlation in paired measures (Se/Sp) across studies (Trikalinos et al. 2012). This kind of analysis models the variation in diagnostic accuracy and cut-off values, and identifies sources of heterogeneity, a common feature of diagnostic and screening test accuracy reviews.
The model is called a "hierarchical model" because it takes into account statistical distributions at two levels: within-study variability in sensitivity and specificity at the first level, and between-study variability at the second (Macaskill 2004). The main goal of the model is to estimate an SROC curve across different thresholds.
Estimation of the model requires Markov chain Monte Carlo (MCMC) simulation (Rutter and Gatsonis 2001). To carry out this Bayesian estimation, we specified prior distributions over the set of unknown parameters, with assumptions similar to those of Higgins et al. (2003), in order to obtain posterior predictions of Se and Sp. According to Harbord and Whiting (2009), the true estimates of Se and Sp in each study can be obtained by empirical Bayes estimates, although we acknowledge that many of the included studies were limited in their ability to confirm that negative cases were in fact true negatives.
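The full HSROC posterior requires MCMC, but the underlying Bayesian updating can be illustrated with a single-study conjugate sketch. The counts and the uniform Beta(1, 1) priors below are both assumptions for illustration (the paper's priors follow Higgins et al. 2003); the sketch draws posterior samples of Se and Sp and reports approximate medians and 95% credible intervals.

```python
# Single-study conjugate illustration of posterior inference on Se and Sp:
# with a Beta(a, b) prior and binomial 2x2 counts, the posterior is
# Beta(a + successes, b + failures). Not the full HSROC hierarchy.
import random

tp, fn, fp, tn = 42, 14, 380, 17500   # hypothetical 2x2 counts
a, b = 1.0, 1.0                       # uniform Beta(1, 1) prior (assumption)

random.seed(0)
se_draws = sorted(random.betavariate(a + tp, b + fn) for _ in range(10000))
sp_draws = sorted(random.betavariate(a + tn, b + fp) for _ in range(10000))

lo, hi = int(0.025 * 10000), int(0.975 * 10000) - 1
print(f"Se ~ {se_draws[5000]:.3f}, 95% CrI ({se_draws[lo]:.3f}, {se_draws[hi]:.3f})")
print(f"Sp ~ {sp_draws[5000]:.3f}, 95% CrI ({sp_draws[lo]:.3f}, {sp_draws[hi]:.3f})")
```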
In order to establish whether there was inconsistency and heterogeneity in the meta-analysis, we summarized test performance using a forest plot with the corresponding Higgins I² index (Higgins and Thompson 2002) and assessed heterogeneity by visual inspection of the SROC plots and by Cochran's Q test (p > 0.1) (Cochran 1954). Summary DORs were estimated by the DerSimonian–Laird random-effects model (DerSimonian and Laird 1986), following the recommendations of Macaskill et al. (2010), because I² was greater than 50% and the Q test p-value was < 0.1. Since variability of results among studies was confirmed, an investigation of heterogeneity was necessary and subgroup analyses were used. Egger's test (Song et al. 2002) was calculated to assess publication bias using STATA 12.0.
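For reference, these statistics are straightforward to compute from per-study 2 × 2 tables. The sketch below, with hypothetical counts, implements Cochran's Q, Higgins' I², and DerSimonian–Laird random-effects pooling of the log DOR using their standard formulas.

```python
import math

studies = [(38, 12, 210, 9740), (21, 9, 150, 11320), (55, 20, 480, 18445)]

log_dor, var = [], []
for tp, fn, fp, tn in studies:
    tp, fn, fp, tn = tp + 0.5, fn + 0.5, fp + 0.5, tn + 0.5  # continuity correction
    log_dor.append(math.log((tp * tn) / (fp * fn)))
    var.append(1 / tp + 1 / fn + 1 / fp + 1 / tn)            # variance of log DOR

# Fixed-effect (inverse-variance) weights and Cochran's Q
w = [1 / v for v in var]
t_bar = sum(wi * ti for wi, ti in zip(w, log_dor)) / sum(w)
Q = sum(wi * (ti - t_bar) ** 2 for wi, ti in zip(w, log_dor))
k = len(studies)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance tau^2 and random-effects pooling
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)
w_re = [1 / (v + tau2) for v in var]
pooled = sum(wi * ti for wi, ti in zip(w_re, log_dor)) / sum(w_re)

print(f"Q = {Q:.2f}, I^2 = {I2:.1f}%, tau^2 = {tau2:.3f}, "
      f"pooled DOR = {math.exp(pooled):.1f}")
```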
Finally, we obtained a crosshair plot and a ROC ellipses plot to summarize the confidence intervals of Se and FP rate in each study with the R package mada (meta-analysis of diagnostic accuracy; Doebler 2015). LR+, LR−, PPV, NPV and DOR were calculated using SAS for Windows, version 9.4 (Cary, NC).
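The per-study psychometric properties themselves require no special software. Here is a minimal sketch with hypothetical counts, matching the definitions given in Appendix 2:

```python
# Per-study diagnostic accuracy measures from a 2x2 table (hypothetical counts)
def diagnostics(tp, fn, fp, tn):
    se = tp / (tp + fn)               # sensitivity
    sp = tn / (tn + fp)               # specificity
    return {
        "Se": se,
        "Sp": sp,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": se / (1 - sp),
        "LR-": (1 - se) / sp,
        "DOR": (tp / fn) / (fp / tn),
    }

for key, value in diagnostics(tp=38, fn=12, fp=210, tn=9740).items():
    print(f"{key}: {value:.3f}")
```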
Study Selection
The initial literature search identified 1883 studies. Six hundred and sixty-seven duplicate records were eliminated to obtain 1216 non-duplicated articles, 1114 of which were excluded after title and abstract screening through the application of inclusion/exclusion criteria, and 87 were excluded after full-text screening or methodological quality assessment and data extraction (see Supplemental Table 1). One additional study that qualified for inclusion was identified from the search of grey literature. Finally, 14 studies (Baird et al. 2000; Barbaro and Dissanayake 2010; Canal-Bedia et al. 2011; Chlebowski et al. 2013; Dereu et al. 2010; Honda et al. 2005; Inada et al. 2011; Kamio et al. 2014; Miller et al. 2011; Nygren et al. 2012; Robins et al. 2014; Stenberg et al. 2014; Wiggins et al. 2014; Baranek 2015) were eligible for inclusion in our review. We present the flow chart showing the selection process in Fig. 1.
Study selection flow chart following PRISMA guidelines
Methodological Quality of the Included Studies
We used the QUADAS-2 tool for quality assessment and the kappa coefficient to examine inter-rater agreement for our initial overall quality score, and resolved any item discrepancies through discussion. Inter-rater agreement was kappa = 0.643 (95% CI; p < 0.01). In Fig. 2, we summarize the results of the methodological quality assessment for all 20 studies: (Baird 2000; Barbaro 2010; Canal-Bedia et al. 2011; Chlebowski 2013; Dereu 2010; Dietz 2006; Honda 2005, 2009; Inada 2011; Kamio 2014; Kleinman 2008; Miller 2011; Nygren et al. 2012; Pierce 2011; Robins 2008, 2014; Stenberg 2014; VanDenHeuvel 2007; Wetherby 2008; Wiggins et al. 2014).
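Cohen's kappa corrects the raw agreement rate for agreement expected by chance. A minimal sketch with hypothetical rating vectors (the actual QUADAS-2 ratings are not reproduced in the paper):

```python
# Cohen's kappa for two raters over categorical quality ratings
from collections import Counter

rater1 = ["low", "low", "high", "unclear", "low", "high", "low", "unclear"]
rater2 = ["low", "high", "high", "unclear", "low", "high", "unclear", "unclear"]

n = len(rater1)
p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n   # observed agreement

# Expected agreement from the marginal label frequencies of each rater
c1, c2 = Counter(rater1), Counter(rater2)
labels = set(rater1) | set(rater2)
p_exp = sum((c1[l] / n) * (c2[l] / n) for l in labels)

kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"observed = {p_obs:.3f}, expected = {p_exp:.3f}, kappa = {kappa:.3f}")
```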
Methodological quality graph depicting the cumulative findings of the methodological quality analysis
As Fig. 2 shows, two bar graphs report the assessment of risk of bias and applicability, with the percentage of studies rated as unclear, high, or low shown on the x-axis in 20% intervals. The concerns regarding applicability cover three domains: patient selection, index test, and reference standard. The risk of bias dimension comprises four domains: patient selection, index test, reference standard, and flow and timing. Across the majority of studies, concern about applicability was rated low for the reference standard and patient selection, and unclear for the index test. Regarding risk of bias, the majority of studies showed high risk for flow and timing, unclear risk for the index test, and low risk for the reference standard and patient selection.
During this process we excluded the following studies: Honda (2009), Pierce (2011), Robins (2008), VanDenHeuvel (2007), Wetherby (2008). Supplemental Table 1 lists the papers excluded during the quality analysis and data extraction processes.
Characteristics of the Included Studies
One hundred and two full-text articles were assessed for eligibility, 14 (13.72%) of which were included in the quantitative synthesis. Some articles evaluated more than one index test (Inada et al. 2011; Nygren et al. 2012; Wiggins et al. 2014), which is why the meta-analysis covers 18 sets of psychometric values: 35.71% of the studies came from the USA, 35.71% from Europe, 21.42% from Japan and 7.14% from Australia. The sample includes 191,803 toddlers, with ages ranging from 16.7 to 29 months. Sex data were available for 158,965 toddlers, of whom 73,431 (46.19%) were female.
The studies presented great variability in terms of the data reported. Twelve of the 18 data sets (66.6%) provided all the primary outcomes required to populate 2 × 2 contingency tables. Data pertaining to Se were presented in 77.7% of the data sets, Sp in 55.5%, PPV in 77.7%, NPV in 44.4%, and LR+ and LR− in 22.2%. The main characteristics and clinical outcomes of the included studies are presented in Tables 1 and 2.
Diagnostic Accuracy of Screening Tools
The accuracy of screening tools was evaluated in 14 studies that assessed the test characteristics of various screening tools (18 in all). The pooled Se was 0.72 (95% CI 0.61–0.81) and the Sp was 0.98 (95% CI 0.97–0.99). The positive likelihood ratio (LR+) was 131.27 (95% CI 50.40–344.48) and the negative likelihood ratio (LR−) was 0.22 (95% CI 0.13–0.45). The diagnostic odds ratio (DOR) was 596.09 (95% CI 174.32–2038.34). The positive predictive value (PPV) was 97.78 (95% CI 97.71–97.84) and the negative predictive value (NPV) was 93.13 (95% CI 93.02–93.24). The above is summarized in Table 3, while the corresponding HSROC plot is presented in Fig. 3. The Se of each individual study varied between 0.22 and 0.95 whereas the Sp ranged from 0.81 to 0.99 (see Table 4).
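As a quick internal consistency check, the pooled likelihood ratios and the pooled DOR satisfy the relation given in Appendix 2 up to rounding:

$$\mathrm{DOR}=\frac{LR+}{LR-}=\frac{131.27}{0.22}\approx 596.7,$$

which is close to the reported pooled DOR of 596.09.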
Table 3 Parameters estimated between studies (point estimate = median), both for the entire meta-analysis (N = 18) and for the subgroup analysis (N = 9): HSROC THETA, HSROC LAMBDA, HSROC Beta, σα, σθ, and overall Se and Sp, each with MC error and lower and upper credible interval bounds; the numeric entries are not reproduced here.
MC error of each parameter smaller than 10% of its posterior standard deviation
Se sensitivity, Sp specificity
aTHETA = the overall mean cut-off value for defining a positive test
bLAMBDA = the overall diagnostic accuracy
cBeta = the logarithm of the ratio of the standard deviation of test results among patients with the disease and among patients without the disease
dσ α = the between-study standard deviation of the difference in means
eσ θ = the between-study standard deviation in the cut-off
ROC ellipses plot with confidence regions, which describe the uncertainty of the pair of sensitivity and false positive rate. The size of the circles indicates the weight of each study. Studies indicated by study number (see Table 1)
Table 4 Estimates of diagnostic precision and outcomes in single studies: screening test, THETA (95% CI), ALPHA (95% CI), prevalence (95% CI), sensitivity (Se) (95% CI) and specificity (Sp) (95% CI); the numeric entries are not reproduced here.
aTHETA = the overall mean cut-off value for defining a positive test
bALPHA = the 'accuracy parameter' measures the difference between TP and FP within-study parameters
cPrevalence within-study parameters
Exploration of Heterogeneity
A considerable degree of heterogeneity was observed in sensitivities (Q = 337.62, df = 17.00, p < 0.001) and specificities (Q = 30901.50, df = 17.00, p < 0.001). The heterogeneity in test accuracy between studies may be due to differences in the cut-offs used in different studies, among other factors (Doebler et al. 2012). To delve deeper into these results, we evaluated the confidence intervals describing the relationship between the psychometric properties. The ROC ellipse plot of the confidence intervals in Fig. 3 shows the studies responsible for high levels of heterogeneity and how cut-off values vary, and demonstrates a moderate negative correlation between sensitivities and false positive rates (rs = − 0.355); that is, Se tends to decrease as the FP rate increases.
According to this analysis, study 18 (Baranek 2015 ), study 14 (Dereu et al. 2010 ), studies 12 and 13 (Inada et al. 2011 ) and study 15 (Miller et al. 2011 ) show the largest confidence intervals both for Se and FP rate, and study 4 (Baird et al. 2000 ), study 10 (Canal-Bedia et al. 2011 ), study 7 (Kamio et al. 2014 ) and study 8 (Stenberg et al. 2014 ) indicate large confidence intervals only in Se.
The SROC curve summarizes the relationship between Se and (1 − Sp) across studies, taking into account the between-study heterogeneity. We constructed a SROC curve using all studies selected; see Fig. 3. It is worth noting that it is a significant graphical tool for understanding how the diagnostic accuracy of the different test depends on the different cut-off (Doebler et al. 2012 ).
As Fig. 4 shows, the prediction region covers a larger range of Se than of Sp. This may be because most studies had considerably more participants with screen-negative than screen-positive results, leading to greater sampling variability in the estimates of Se than of Sp. The figure also demonstrates an asymmetry of the test performance measures towards higher Sp with higher variability in Se, providing indirect evidence of threshold variability: as the threshold increases, Se decreases while Sp increases.
Hierarchical summary receiver operating characteristic curve (HSROC) plot shows test accuracy (using all studies selected). According to Schiller and Dendukuri ( 2015 ) individual studies are represented by round circles. The size of the circles is proportional to the number of patients included in the study, the height of ovals indicates the number of affected individuals and the width indicates the number of non-affected individuals. The filled red circle is the pooled sensitivity and specificity across the studies taking into account the between-study heterogeneity. The blue dotted-curve defines the 95% prediction region. The red dot-dashed-curve marks the boundary of the 95% credible region for the pooled estimates
The posterior predictive value of Se was 0.71 (95% CI 0.22–1) with a standard error of 0.23 and that of Sp was 0.98 (95% CI 0.81–1) with a standard error of 0.07.
Subgroup of Analysis
A large degree of heterogeneity was observed, which may be due to different factors (Macaskill et al. 2010; Trikalinos et al. 2012). In order to investigate its source in the current sample, we followed the recommendations of these authors and conducted analyses on a subgroup of studies. The new meta-analysis excluded the following studies, based on graphical analysis and the Cochran Q test (p > 0.1): Study 4 (Baird et al. 2000), Study 7 (Kamio et al. 2014), Study 8 (Stenberg et al. 2014), Study 10 (Canal-Bedia et al. 2011), Studies 12 and 13 (Inada et al. 2011), Study 14 (Dereu et al. 2010), Study 15 (Miller et al. 2011), and Study 18 (Baranek 2015).
Regarding the between-study parameter estimates, the subgroup analysis demonstrated an increased Se: the pooled sensitivity was 0.77 (95% CI 0.69–0.84) and the Sp was 0.99 (95% CI 0.97–0.99). The posterior predictive p-value of Se was 0.81 (95% CI 0.39–1) and that of Sp was 0.97 (95% CI 0.76–1, SD = 0.08).
Parameters estimated between studies by the HSROC model are shown in Table 3, which demonstrates that the parameters estimated for the subgroup analysis are higher than those obtained in the first meta-analysis; notably, the between-study standard deviation in the cut-off and the between-study standard deviation of the difference in means decreased.
The estimates for individual studies were grouped by parameters and are shown in Table 5.
Table 5 Estimates of diagnostic precision and outcomes in single studies for the sub-analysis of nine studies: Se (95% CI) and Sp (95% CI) per screening test; the numeric entries are not reproduced here.
Figure 5 shows that the prediction region again covers a larger range of Se than Sp, although less so than in the first meta-analysis. The figure also shows less asymmetry of the test performance measures and therefore less heterogeneity: the region containing the measurements of Se and Sp is narrower than the one shown in Fig. 4.
Hierarchical summary receiver operating characteristic curve (HSROC) plot showing test accuracy (using the subgroup of studies)
Publication Bias
The estimated Egger bias coefficient was 3.21 (95% CI − 0.49 to 6.92) with a standard error of 1.5, giving a p-value of 0.08. The test thus does not provide strong evidence that the results are biased by the presence of small-study effects.
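Egger's test is a regression diagnostic. A minimal sketch, assuming hypothetical per-study effects and standard errors (the paper used STATA 12.0), regresses the standard normal deviate on precision and reports the intercept and its standard error:

```python
# Egger's regression test: regress effect/SE on 1/SE; a non-zero intercept
# suggests small-study effects. Effects and SEs below are hypothetical.
import math

effect = [2.1, 1.8, 2.6, 1.5, 2.9, 2.2]   # e.g., log DOR per study
se = [0.45, 0.30, 0.60, 0.25, 0.80, 0.40]

x = [1 / s for s in se]                    # precision
y = [e / s for e, s in zip(effect, se)]    # standard normal deviate

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
intercept = ybar - slope * xbar

# Standard error of the intercept from the OLS residuals (n - 2 df)
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in resid) / (n - 2)
se_intercept = math.sqrt(s2 * (1 / n + xbar ** 2 / sxx))
print(f"intercept = {intercept:.2f} (SE {se_intercept:.2f}); "
      f"compare intercept/SE to a t distribution with n-2 df")
```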
Interest in early detection of ASD is increasing, due to the growing evidence that early intervention improves prognosis. Low-risk screening, as part of pediatric primary care, for example, is one of the most widely studied strategies to promote early detection.
Consequently, the information reported by systematic reviews of screening accuracy is valuable for both research and practice. Systematic reviews such as those carried out by Daniels et al. (2014) and McPheeters et al. (2016) represented an important advance over traditional or narrative reviews, which were characterized by a lack of systematization. However, a meta-analysis is a systematic review that also uses statistical methods to analyze the results of the included studies. Data from systematic reviews with meta-analyses add value because the statistical analysis converts the results of primary studies into a measure of integrated quantitative evidence, which benefits both the scientific community and the clinicians who use the tools covered by such meta-analyses.
Meta-analysis of screening studies is a complex but critical approach to examining evidence across measures and scoring thresholds in different populations (Gatsonis and Paliwal 2006). We employed a Bayesian hierarchical model (Rutter and Gatsonis 2001), which is robust in adjusting for the imperfect nature of the reference standard of autism tools, in a bivariate meta-analysis of diagnostic test sensitivity, specificity and other psychometric parameters. This kind of meta-analysis statistically compares the accuracy of different diagnostic screening tests and describes how test accuracy varies. Therefore, it is more likely to lead to a 'gold standard' than other types of reviews, which can be influenced by biases associated with the publication of single studies.
The HSROC model was used to estimate the screening accuracy parameters and a summary in each study as functions of an underlying bivariate normal model. This model has been recommended when there is no standard cut-off to define a positive result (Bronsvoort et al. 2010 ; Dukic and Gatsonis 2003 ; Macaskill 2004 ) in order to allow the meta-analytic assessment of heterogeneity between studies while taking into consideration both within- and between-study variability. Furthermore, it is also optimally suited when more information is available, for example, when the studies have reported results from more than one modality (Rutter and Gatsonis 2001 ) like our case. The advantages of the model have been discussed (Gatsonis and Paliwal 2006 ; Leeflang et al. 2013 ; Macaskill 2004 ; Rutter and Gatsonis 2001 ) and support its selection in this meta-analysis.
This review included 14 studies that assessed the test characteristics of various screening tools (18 in all) for detecting autism, plus a subgroup analysis retaining nine studies that demonstrated lower heterogeneity. Initial findings of the overall meta-analysis show that the tools used in level 1 ASD screening are accurate at detecting the presence of ASD [pooled sensitivity 0.72 (95% CI 0.61–0.81)] and highly accurate at detecting its absence [pooled specificity 0.98 (95% CI 0.97–0.99)]. More importantly, we demonstrate the tools' performance in identifying autism: DOR 596.09 (95% CI 174.32–2038.34). The clinical utility of the level 1 screening tools reviewed here is clear, because the pooled positive likelihood ratio (LR+) was 131.27 (95% CI 50.40–344.48) and the negative likelihood ratio (LR−) was 0.22 (95% CI 0.13–0.45); an LR+ > 1 indicates that positive results are associated with the disease. Although these findings are informative to clinicians, the limitations of the last assertion must be understood: the accuracy of an LR depends on the quality of the studies that generated the pooled sensitivity and specificity, so the data must be interpreted with caution. Finally, the pooled positive predictive value (PPV) was 97.78 (95% CI 97.71–97.84) and the pooled negative predictive value (NPV) was 93.13 (95% CI 93.02–93.24).
A limitation of this meta-analysis comes from the methodological limitations of the included studies; 55% of the included studies were assessed to have high risk or unclear risk of bias in the quality analysis with QUADAS, particularly in the domains of flow and timing, and in the index test. We recommend that future screening studies include a flowchart with information about the method of recruitment of patients, sample, order of test execution, follow up and other details related to the process to improve replicability and to better inform readers about potential bias.
The second concern is about the heterogeneity of the psychometric data in the included studies. In this respect, according to Doebler et al. ( 2012 ), in diagnostic meta-analysis the observed sensitivities and specificities can vary across primary studies and heterogeneity should be assumed in results of this kind of meta-analysis (Macaskill et al. 2010 ). This assertion has been acknowledged in this work and justifies the choice of the model HSROC, which is a more robust model for addressing heterogeneity compared to some of the other meta-analysis models.
Following the recommendations of Macaskill et al. (2010) and Trikalinos et al. (2012), we conducted a subgroup analysis to assess the pooled Se and Sp without the studies driving heterogeneity. The pooled sensitivity and specificity improved with the exclusion of these studies. Consequently, the parameters estimated for this set of studies suggested good performance for both ruling out and ruling in ASD, since the pooled Se was 0.77 (95% CI 0.69–0.84, SD = 0.03) and the Sp was 0.99 (95% CI 0.97–0.99; SD ≤ 0.01); the posterior predictive p-value of Se was 0.81 (95% CI 0.39–1, SD = 0.18), and high specificity was maintained at 0.97 (95% CI 0.76–1, SD = 0.08). These posterior predictive values of Se and Sp are important because the true estimates of Se and Sp in each study can be found by empirical Bayes estimates (Harbord and Whiting 2009).
One important aspect to bear in mind is that only about 66.6% of the data sets provided all the primary outcomes required to populate 2 × 2 contingency tables: Se was presented in 77.7% of the data sets, Sp in 55.5%, PPV in 77.7%, NPV in 44.4%, and LR+ and LR− in 22.2%. This leads us to recommend that authors of screening studies include sufficient detail to calculate all psychometric properties, to improve the quality of systematic reviews and future meta-analyses. It would also be valuable for authors of future studies to reflect on why such a low percentage of primary studies provide those data. Some authors use caution in presenting psychometric properties when the negative cases cannot be confirmed to be true negatives. Although this is a notable limitation of cross-sectional screening studies, given that confirmatory evaluations are prohibitive in very large samples, the number of truly negative cases likely greatly outnumbers the cases later identified as false negatives, so interpreting the TN cell of the 2 × 2 matrix as "presumed TN" is a reasonable assertion. Looking further at the omission of specific psychometric values, a remarkably low percentage of studies include LR+ and LR−, and a number do not report NPV. LR+ and LR− may not have been commonly included because they were not emphasized in the American Academy of Pediatrics' policy statement, which highlighted Se and Sp. The reduced emphasis on NPV may reflect the fact that predictive values are affected by the base rate of the disorder in the sample being studied (such that PPV and NPV may vary dramatically across sampling strategies), whereas Se and Sp are not influenced by base rate. We recommend that future studies report comprehensive psychometrics in order to promote understanding of the findings. In addition, it is often difficult to ascertain characteristics of the study, the study cohort, and technical aspects (Gatsonis and Paliwal 2006). In future studies, a unified approach to presenting screening results is necessary to avoid the inconsistency and heterogeneity observed here.
The present results suggested improved screening accuracy when meta-analysis was restricted to a subset of studies with reduced heterogeneity (see Table 3 for a comparison of parameters for the complete meta-analysis and the subgroup meta-analysis). The subgroup findings add specific knowledge for clinicians and researchers regarding each tool used for toddler ASD screening.
We estimated parameters for each study in both meta-analyses (see Tables 4, 5). The results from the subgroup analysis suggest that the Se of individual studies varied between 0.78 and 0.88. Those tables also report other data of particular use to clinicians in this field, such as the different cut-off points, the 'accuracy parameter' (the difference between TP and FP in each study), and the prevalence, which was estimated at or near 1% depending on the study.
Finally, in light of the results obtained by computing the summary measures with and without the studies shown as outliers (Tables 3, 4, 5), we suggest that the tools used in level 1 screening are adequate to detect ASD in the 14–36 month age range. Thus, we confirm, in quantitative terms, the USPSTF finding that screening detects ASD.
A systematic review and meta-analysis of screening tools to detect ASD in toddlers determined that these measures detect ASD with high Se and Sp. Studies were restricted to low-risk samples of children younger than 3 years old, in order to evaluate the use of these screening tools in primary pediatric care. Given that children who start ASD-specific early intervention before age 3 have improved outcomes compared with children who go untreated prior to preschool, it is essential to disseminate strategies that identify children in need of intervention as young as possible. Consistent with the recommendation of the American Academy of Pediatrics (Johnson et al. 2007), the results of the current study show the validity of low-risk screening to identify ASD in children under 3 years old.
The authors thank their colleagues from the AJ Drexel Autism Institute and others that have supported this research. Special thanks to Dr. Newschaffer, to members of Dr. Robins' team, and especially at UNC Chapel Hill to Dr. Baranek who contributed as yet unpublished data to this meta-analysis. We likewise wish to thank all the researchers whom we contacted during the search for grey literature. Also, the authors express special appreciation to Dr. Canal-Bedía, Magán-Maganto and de Pablos who took part in the process of qualitative review and to Dr. Verdugo-Alonso for providing ongoing support for this project. Finally, we thank the Fulbright Commission for supporting this Project.
Compliance with Ethical Standards
Ethical Approval
The information and analysis in this research are based on data gathered in previous primary studies, in which ethical approval and informed consent were obtained by the investigators from all individual participants.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The search strategy for PubMed, carried out in May 2015
#1 "Autistic Disorder" [Majr] OR "Autistic Disorder" [Title/Abstract] OR "Autistic Disorders" [Title/Abstract] OR "Autism" [Title/Abstract] OR "Child Development Disorders, Pervasive" [Majr] OR "Pervasive Developmental Disorder" [Title/Abstract] OR "Pervasive Developmental Disorders" [Title/Abstract] OR "PDD" [Title/Abstract] OR "Autistic Spectrum Disorder" [Title/Abstract] OR "Autistic Spectrum Disorders" [Title/Abstract] OR "Autism Spectrum Disorder" [Title/Abstract] OR "Autism Spectrum Disorders" [Title/Abstract] OR "ASD" [Title/Abstract]
#2 "Diagnosis" [Mesh:noexp] OR "Diagnosis" [Subheading] OR "Diagnosis" [Title/Abstract] OR "Early Diagnosis" [Mesh:noexp] OR "Early Diagnosis" [Title/Abstract] OR "Detection" [Title/Abstract] OR "Early Detection" [Title/Abstract] OR "Early Identification" [Title/Abstract] OR "Early Intervention" [Title/Abstract] OR "Early Prediction" [Title/Abstract]
#3 "Screening" [Title/Abstract] OR "Early Screening" [Title/Abstract] OR "Mass Screening" [Majr:noexp] OR "Mass Screening/instrumentation" [Majr:noexp] OR "Mass Screening/methods" [Majr:noexp] OR "Mass Screening" [Title/Abstract] OR "Screening Tool" [Title/Abstract] OR "Screening Tools" [Title/Abstract] OR "Screening Test" [Title/Abstract] OR "Screening Instrument" [Title/Abstract] OR "Screening Instruments" [Title/Abstract] OR "Checklist" [MeSH Terms] OR "Checklist" [Title/Abstract] OR "Checklists" [Title/Abstract] OR "Follow-up" [Title/Abstract]
#4 (#2 AND #3)
#6 "Infant" [MeSH Terms:noexp] OR "Child, Preschool" [MeSH Terms] OR "Infant" [Title/Abstract] OR "Infants" [Title/Abstract] OR "Preschool Child" [Title/Abstract] OR "Preschool Children" [Title/Abstract] OR "Toddler" [Title/Abstract] OR "Toddlers" [Title/Abstract]
#8 "1992/01/01" [PDAT]: "2015/04/31" [PDAT]
#9 English[Lang]
#10 (#7 AND #8 AND #9)
Definitions for Bio-Statistical Terms that may not be Familiar to Readers
Cochran Q Statistic for Heterogeneity is used to determine whether variations between primary studies represent true differences or are due to chance. A p value < 0.05 indicates the presence of heterogeneity due to the low statistical strength of Cochran's Q test.
$$Q=\sum_{i=1}^{k} w_i\left(T_i-\bar{T}\right)^2$$
Diagnostic accuracy relates to the ability of a test to discriminate between the target condition and health. This discriminative ability can be quantified by the measures of diagnostic accuracy: sensitivity and specificity, positive and negative predictive values (PPV, NPV), likelihood ratios, the area under the ROC curve (AUC), and the diagnostic odds ratio (DOR).
Diagnostic Odds Ratio (DOR) a measure of the effectiveness of a diagnostic test:
$$\mathrm{DOR}=\frac{LR+}{LR-}=\frac{TP/FN}{FP/TN}$$
Egger's test is a simple linear regression of the magnitude of the effect divided by its standard error on the inverse standard error, which tests whether the intercept is statistically significant at p < 0.1.
Graphical analysis the starting point for investigating heterogeneity in diagnostic or screening accuracy reviews, typically through visual assessment of study results in forest plots and in ROC space.
Grey literature is generally understood to mean literature that is not formally published in accessible sources. It can be another source of bias in meta-analytical studies.
I² Measure for Heterogeneity indicates the percentage of variance in a meta-analysis that is attributable to study heterogeneity. I² values range from 0 to 100%; values of 25%, 50%, and 75% are interpreted as low, moderate, and high, respectively:
$$I^2=\begin{cases}\dfrac{Q-(k-1)}{Q}\times 100\% & \text{if } Q>k-1\\ 0 & \text{if } Q\le k-1\end{cases}$$
Negative Likelihood Ratio (LR−) shows how much the odds of the target condition are decreased when the index test is negative.
$$LR- =(1 - Se)/Sp$$
Negative Predictive Value (NPV) probability of no target condition among patients with a negative index test result.
$$NPV=(TN)/(TN+FN)$$
Positive Predictive Value (PPV) probability of the target condition among patients with a positive index test result.
$$PPV=TP/(TP+FP)$$
Positive Likelihood Ratio ( LR+) shows how much the odds of the target condition are increased when the test index is positive.
$$LR+=Se/(1 - Sp)$$
Publication bias is the term for what occurs whenever the research that appears in the published literature is systematically unrepresentative of the population of completed studies.
The posterior predictive p-value is a Bayesian alternative to the classical p-value. It is used to calculate the tail-area probability corresponding to the observed value of the statistic.
p-value The probability, under the null hypothesis, of obtaining a result equal to or more extreme than the one observed. It shows whether a difference found between the groups being compared is due to chance.
Sensitivity (Se) proportion of patients with the target condition who are correctly identified as having the condition.
$$Se=(TP)/(TP+FN)$$
Specificity (Sp) proportion of patients without the target condition who are correctly identified as not having the condition.
$$Sp=(TN)/(TN+FP)$$
Below is the link to the electronic supplementary material.
Supplementary material 1 (DOCX 16 KB)
Baird, G., Charman, T., Baron-Cohen, S., Cox, A., Swettenham, J., Wheelwright, S., & Drew, A. (2000). A screening instrument for autism at 18 months of age: A 6-year follow-up study. Journal of the American Academy of Child and Adolescent Psychiatry, 39(6), 694–702. https://doi.org/10.1097/00004583-200006000-00007.
Baranek, G. T. (2015). Sensitivity/specificity of the FYI, MCHAT, and SRS for the North Carolina community sample for meta-analysis. Unpublished manuscript.
Barbaro, J., & Dissanayake, C. (2010). Prospective identification of autism spectrum disorders in infancy and toddlerhood using developmental surveillance: The social attention and communication study. Journal of Developmental & Behavioral Pediatrics, 31(5), 376–385. https://doi.org/10.1097/DBP.0b013e3181df7f3c.
Baron-Cohen, S., Cox, A., Baird, G., Swettenham, J., Nightingale, N., Morgan, K., et al. (1996). Psychological markers in the detection of autism in infancy in a large population. The British Journal of Psychiatry, 168(2), 158–163.
Bronsvoort, B. M. d. C., Wissmann, B. v., Fèvre, E. M., Handel, I. G., Picozzi, K., & Welburn, S. C. (2010). No gold standard estimation of the sensitivity and specificity of two molecular diagnostic protocols for Trypanosoma brucei spp. in western Kenya. PLoS ONE, 5(1), e8628. https://doi.org/10.1371/journal.pone.0008628.
Canal-Bedia, R., García-Primo, P., Martín-Cilleros, M. V., Santos-Borbujo, J., Guisuraga-Fernández, Z., Herráez-García, L., et al. (2011). Modified checklist for autism in toddlers: Cross-cultural adaptation and validation in Spain. Journal of Autism and Developmental Disorders, 41(10), 1342–1351. https://doi.org/10.1007/s10803-010-1163-z.
Chlebowski, C., Robins, D. L., Barton, M. L., & Fein, D. (2013). Large-scale use of the modified checklist for autism in low-risk toddlers. Pediatrics, 131(4), e1121. https://doi.org/10.1542/peds.2012-1525.
Cochran, W. G. (1954). The combination of estimates from different experiments. Biometrics, 10, 101–129. https://doi.org/10.2307/3001666.
Council on Children With Disabilities, Section on Developmental Behavioral Pediatrics, Bright Futures Steering Committee and Medical Home Initiatives for Children With Special Needs Project Advisory Committee. (2006). Identifying infants and young children with developmental disorders in the medical home: An algorithm for developmental surveillance and screening. Pediatrics, 118(4), 405–420. https://doi.org/10.1542/peds.2006-1231.
Daniels, A. M., Halladay, A. K., Shih, A., Elder, L. M., & Dawson, G. (2014). Approaches to enhancing the early detection of autism spectrum disorders: A systematic review of the literature. Journal of the American Academy of Child and Adolescent Psychiatry, 53(2), 141–152. https://doi.org/10.1016/j.jaac.2013.11.002.
Deeks, J. J. (2001). Systematic reviews in health care: Systematic reviews of evaluations of diagnostic and screening tests. British Medical Journal, 323(7305), 157–162.
Dendukuri, N., Schiller, I., Joseph, L., & Pai, M. (2012). Bayesian meta-analysis of the accuracy of a test for tuberculous pleuritis in the absence of a gold standard reference. Biometrics, 68(4), 1285–1293. https://doi.org/10.1111/j.1541-0420.2012.01773.x.
Dereu, M., Warreyn, P., Raymaekers, R., Meirsschaut, M., Pattyn, G., Schietecatte, I., & Roeyers, H. (2010). Screening for autism spectrum disorders in Flemish day-care centres with the checklist for early signs of developmental disorders. Journal of Autism and Developmental Disorders, 40(10), 1247–1258. https://doi.org/10.1007/s10803-010-0984-0.
DerSimonian, R., & Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7(3), 177–188.
Dietz, C., Swinkels, S., van Daalen, E., van Engeland, H., & Buitelaar, J. K. (2006). Screening for autistic spectrum disorder in children aged 14–15 months. II: Population screening with the early screening of autistic traits questionnaire (ESAT). Design and general findings. Journal of Autism and Developmental Disorders, 36(6), 713–722. https://doi.org/10.1007/s10803-006-0114-1.
Doebler, P. (2015). Mada: Meta-analysis of diagnostic accuracy. Retrieved from https://cran.r-project.org/web/packages/mada/index.html.
Doebler, P., Holling, H., & Böhning, D. (2012). A mixed model approach to meta-analysis of diagnostic studies with binary test outcome. Psychological Methods, 17(3), 418–436. https://doi.org/10.1037/a0028091.
Dukic, V., & Gatsonis, C. (2003). Meta-analysis of diagnostic test accuracy assessment studies with varying number of thresholds. Biometrics, 59(4), 936–946. https://doi.org/10.1111/j.0006-341X.2003.00108.x.
Gatsonis, C., & Paliwal, P. (2006). Meta-analysis of diagnostic and screening test accuracy evaluations: Methodologic primer. AJR: American Journal of Roentgenology, 187(2), 271–281. https://doi.org/10.2214/AJR.06.0226.
Harbord, R. M., & Whiting, P. (2009). metandi: Meta-analysis of diagnostic accuracy using hierarchical logistic regression. Stata Journal, 9(2), 211–229.
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539–1558. https://doi.org/10.1002/sim.1186.
Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. British Medical Journal, 327(7414), 557–560. https://doi.org/10.1136/bmj.327.7414.557.
Honda, H., Shimizu, Y., Imai, M., & Nitto, Y. (2005). Cumulative incidence of childhood autism: A total population study of better accuracy and precision. Developmental Medicine and Child Neurology, 47(1), 10–18. https://doi.org/10.1111/j.1469-8749.2005.tb01034.x.
Honda, H., Shimizu, Y., Nitto, Y., Imai, M., Ozawa, T., Iwasa, M., & Hira, T. (2009). Extraction and refinement strategy for detection of autism in 18-month-olds: A guarantee of higher sensitivity and specificity in the process of mass screening. Journal of Child Psychology and Psychiatry, 50(8), 972–981. https://doi.org/10.1111/j.1469-7610.2009.02055.x.
go back to reference Inada, N., Koyama, T., Inokuchi, E., Kuroda, M., & Kamio, Y. (2011). Reliability and validity of the Japanese version of the modified checklist for autism in toddlers (M-CHAT). Research in Autism Spectrum Disorders, 5(1), 330–336. https://doi.org/10.1016/j.rasd.2010.04.016. CrossRef Inada, N., Koyama, T., Inokuchi, E., Kuroda, M., & Kamio, Y. (2011). Reliability and validity of the Japanese version of the modified checklist for autism in toddlers (M-CHAT). Research in Autism Spectrum Disorders, 5(1), 330–336. https://doi.org/10.1016/j.rasd.2010.04.016. CrossRef
go back to reference Johnson, C. P., Myers, S. M., & Council on Children with Disabilities (2007). Identification and evaluation of children with autism spectrum disorders. Pediatrics, 120(5), 1183–1215. https://doi.org/10.1542/peds.2007-2361. CrossRef Johnson, C. P., Myers, S. M., & Council on Children with Disabilities (2007). Identification and evaluation of children with autism spectrum disorders. Pediatrics, 120(5), 1183–1215. https://doi.org/10.1542/peds.2007-2361. CrossRef
go back to reference Kamio, Y., Inada, N., Koyama, T., Inokuchi, E., Tsuchiya, K., & Kuroda, M. (2014). Effectiveness of using the modified checklist for autism in toddlers in two-stage screening of autism spectrum disorder at the 18-month health check-up in Japan. Journal of Autism and Developmental Disorders, 44(1), 194–203. https://doi.org/10.1007/s10803-013-1864-1. CrossRefPubMed Kamio, Y., Inada, N., Koyama, T., Inokuchi, E., Tsuchiya, K., & Kuroda, M. (2014). Effectiveness of using the modified checklist for autism in toddlers in two-stage screening of autism spectrum disorder at the 18-month health check-up in Japan. Journal of Autism and Developmental Disorders, 44(1), 194–203. https://doi.org/10.1007/s10803-013-1864-1. CrossRefPubMed
go back to reference Kleinman, J. M., Robins, D. L., Ventola, P. E., Pandey, J., Boorstein, H. C., Esser, E. L., et al. (2008). The modified checklist for autism in toddlers: A follow-up study investigating the early detection of autism spectrum disorders. Journal of Autism and Developmental Disorders, 38(5), 827–839. https://doi.org/10.1007/s10803-007-0450-9. CrossRefPubMedPubMedCentral Kleinman, J. M., Robins, D. L., Ventola, P. E., Pandey, J., Boorstein, H. C., Esser, E. L., et al. (2008). The modified checklist for autism in toddlers: A follow-up study investigating the early detection of autism spectrum disorders. Journal of Autism and Developmental Disorders, 38(5), 827–839. https://doi.org/10.1007/s10803-007-0450-9. CrossRefPubMedPubMedCentral
go back to reference Leeflang, M. M., Deeks, J. J., Takwoingi, Y., & Macaskill, P. (2013). Cochrane diagnostic test accuracy reviews. Systematic Reviews, 2(1), 1–6. https://doi.org/10.1186/2046-4053-2-82. CrossRef Leeflang, M. M., Deeks, J. J., Takwoingi, Y., & Macaskill, P. (2013). Cochrane diagnostic test accuracy reviews. Systematic Reviews, 2(1), 1–6. https://doi.org/10.1186/2046-4053-2-82. CrossRef
go back to reference Macaskill, P. (2004). Empirical Bayes estimates generated in a hierarchical summary ROC analysis agreed closely with those of a full Bayesian analysis. Journal of Clinical Epidemiology, 57(9), 925–932. https://doi.org/10.1016/j.jclinepi.2003.12.019. CrossRefPubMed Macaskill, P. (2004). Empirical Bayes estimates generated in a hierarchical summary ROC analysis agreed closely with those of a full Bayesian analysis. Journal of Clinical Epidemiology, 57(9), 925–932. https://doi.org/10.1016/j.jclinepi.2003.12.019. CrossRefPubMed
go back to reference Macaskill, P., Gatsonis, C., Deeks, J. J., Harbord, R. M., & Takwoingi, Y. (2010). Chapter 10: Analysing and presenting results. In J. J. Deeks, P. M. Bossuyt, & C. Gatsonis (Eds.), Cochrane handbook for systematic reviews of diagnostic test accuracy. Retrieved from http://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/public/uploads/Chapter%2010%20-%20Version%201.0.pdf. Macaskill, P., Gatsonis, C., Deeks, J. J., Harbord, R. M., & Takwoingi, Y. (2010). Chapter 10: Analysing and presenting results. In J. J. Deeks, P. M. Bossuyt, & C. Gatsonis (Eds.), Cochrane handbook for systematic reviews of diagnostic test accuracy. Retrieved from http://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/public/uploads/Chapter%2010%20-%20Version%201.0.pdf.
go back to reference McPheeters, M. L., Weitlauf, A., Vehorn, A., Taylor, C., Sathe, N. A., Krishnaswami, S., et al. (2016). Screening for autism spectrum disorder in young children: A systematic evidence review for the U.S. Preventive Services Task Force. Retrieved from http://www.ncbi.nlm.nih.gov/books/NBK349703/. McPheeters, M. L., Weitlauf, A., Vehorn, A., Taylor, C., Sathe, N. A., Krishnaswami, S., et al. (2016). Screening for autism spectrum disorder in young children: A systematic evidence review for the U.S. Preventive Services Task Force. Retrieved from http://www.ncbi.nlm.nih.gov/books/NBK349703/.
go back to reference Miller, J. S., Gabrielsen, T., Villalobos, M., Alleman, R., Wahmhoff, N., Carbone, P. S., & Segura, B. (2011). The each child study: Systematic screening for autism spectrum disorders in a pediatric setting. Pediatrics, 127(5), 866. https://doi.org/10.1542/peds.2010-0136. CrossRefPubMed Miller, J. S., Gabrielsen, T., Villalobos, M., Alleman, R., Wahmhoff, N., Carbone, P. S., & Segura, B. (2011). The each child study: Systematic screening for autism spectrum disorders in a pediatric setting. Pediatrics, 127(5), 866. https://doi.org/10.1542/peds.2010-0136. CrossRefPubMed
go back to reference Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G., The PRISMA Group (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medcine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097. CrossRef Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G., The PRISMA Group (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medcine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097. CrossRef
go back to reference Moses, L. E., Shapiro, D., & Littenberg, B. (1993). Combining independent studies of a diagnostic test into a summary ROC curve: Data-analytic approaches and some additional considerations. Statistics in Medicine, 12(14), 1293–1316. CrossRefPubMed Moses, L. E., Shapiro, D., & Littenberg, B. (1993). Combining independent studies of a diagnostic test into a summary ROC curve: Data-analytic approaches and some additional considerations. Statistics in Medicine, 12(14), 1293–1316. CrossRefPubMed
go back to reference Nygren, G., Sandberg, E., Gillstedt, F., Ekeroth, G., Arvidsson, T., & Gillberg, C. (2012). A new screening programme for autism in a general population of Swedish toddlers. Research in Developmental Disabilities, 33(4), 1200–1210. https://doi.org/10.1016/j.ridd.2012.02.018. CrossRefPubMed Nygren, G., Sandberg, E., Gillstedt, F., Ekeroth, G., Arvidsson, T., & Gillberg, C. (2012). A new screening programme for autism in a general population of Swedish toddlers. Research in Developmental Disabilities, 33(4), 1200–1210. https://doi.org/10.1016/j.ridd.2012.02.018. CrossRefPubMed
go back to reference Pierce, K., Carter, C., Weinfeld, M., Desmond, J., Hazin, R., Bjork, R., & Gallagher, N. (2011). Detecting, studying, and treating autism early: The one-year well-baby check-up approach. The Journal of Pedriatics, 159(3), 458–465.e6. https://doi.org/10.1016/j.jpeds.2011.02.036. CrossRef Pierce, K., Carter, C., Weinfeld, M., Desmond, J., Hazin, R., Bjork, R., & Gallagher, N. (2011). Detecting, studying, and treating autism early: The one-year well-baby check-up approach. The Journal of Pedriatics, 159(3), 458–465.e6. https://doi.org/10.1016/j.jpeds.2011.02.036. CrossRef
go back to reference Reitsma, J. B., Glas, A. S., Rutjes, A. W. S., Scholten, R. J. P. M., Bossuyt, P. M., & Zwinderman, A. H. (2005). Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. Journal of Clinical Epidemiology, 58(10), 982–990. https://doi.org/10.1016/j.jclinepi.2005.02.022. CrossRefPubMed Reitsma, J. B., Glas, A. S., Rutjes, A. W. S., Scholten, R. J. P. M., Bossuyt, P. M., & Zwinderman, A. H. (2005). Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. Journal of Clinical Epidemiology, 58(10), 982–990. https://doi.org/10.1016/j.jclinepi.2005.02.022. CrossRefPubMed
go back to reference Robins, D. L. (2008). Screening for autism spectrum disorders in primary care settings. Autism, 12(5), 537–556. https://doi.org/10.1177/1362361308094502. CrossRefPubMedPubMedCentral Robins, D. L. (2008). Screening for autism spectrum disorders in primary care settings. Autism, 12(5), 537–556. https://doi.org/10.1177/1362361308094502. CrossRefPubMedPubMedCentral
go back to reference Robins, D. L., Casagrande, K., Barton, M., Chen., C.-M. A., Dumont-Mathieu, T., & Fein, D. (2014). Validation of the modified checklist for autism in toddlers, revised with follow-up (M-CHAT-R/F). Pediatrics, 133(1), 37. https://doi.org/10.1542/peds.2013-1813. CrossRefPubMedPubMedCentral Robins, D. L., Casagrande, K., Barton, M., Chen., C.-M. A., Dumont-Mathieu, T., & Fein, D. (2014). Validation of the modified checklist for autism in toddlers, revised with follow-up (M-CHAT-R/F). Pediatrics, 133(1), 37. https://doi.org/10.1542/peds.2013-1813. CrossRefPubMedPubMedCentral
go back to reference Robins, D. L., Fein, D., & Barton, M. (1999). The modified checklist for autism in toddlers (M-CHAT) Storrs. CT: Self-published. Robins, D. L., Fein, D., & Barton, M. (1999). The modified checklist for autism in toddlers (M-CHAT) Storrs. CT: Self-published.
go back to reference Robins, D. L., Fein, D., & Barton, M. (2009). The modified checklist for autism in toddlers, revised with follow-up (M-CHAT-R/F). Self-published. Robins, D. L., Fein, D., & Barton, M. (2009). The modified checklist for autism in toddlers, revised with follow-up (M-CHAT-R/F). Self-published.
go back to reference Rutter, C. M., & Gatsonis, C. A. (2001). A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Statistics in Medicine, 20(19), 2865–2884. https://doi.org/10.1002/sim.942. CrossRefPubMed Rutter, C. M., & Gatsonis, C. A. (2001). A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Statistics in Medicine, 20(19), 2865–2884. https://doi.org/10.1002/sim.942. CrossRefPubMed
go back to reference Schiller, I., & Dendukuri, N. (2015). HSROC: Joint meta-analysis of diagnostic test sensitivity and specificity with or without a gold standard reference test. R package version 2.1.8. Retrieved from http://artax.karlin.mff.cuni.cz/r-help/library/HSROC/html/00Index.html. Schiller, I., & Dendukuri, N. (2015). HSROC: Joint meta-analysis of diagnostic test sensitivity and specificity with or without a gold standard reference test. R package version 2.1.8. Retrieved from http://artax.karlin.mff.cuni.cz/r-help/library/HSROC/html/00Index.html.
go back to reference Siu, A. L., & U.S. Preventive Services Task Force (USPSTF) (2016). Screening for autism spectrum disorder in young children: US Preventive Services Task Force recommendation statement. JAMA, 315(7), 691–696. https://doi.org/10.1001/jama.2016.0018. CrossRefPubMed Siu, A. L., & U.S. Preventive Services Task Force (USPSTF) (2016). Screening for autism spectrum disorder in young children: US Preventive Services Task Force recommendation statement. JAMA, 315(7), 691–696. https://doi.org/10.1001/jama.2016.0018. CrossRefPubMed
go back to reference Song, F., Khan, K. S., Dinnes, J., & Sutton, A. J. (2002). Asymmetric funnel plots and publication bias in meta-analyses of diagnostic accuracy. International Journal of Epidemiology, 31(1), 88–95. CrossRefPubMed Song, F., Khan, K. S., Dinnes, J., & Sutton, A. J. (2002). Asymmetric funnel plots and publication bias in meta-analyses of diagnostic accuracy. International Journal of Epidemiology, 31(1), 88–95. CrossRefPubMed
go back to reference Stenberg, N., Bresnahan, M., Gunnes, N., Hirtz, D., Hornig, M., Lie, K. K., et al. (2014). Identifying children with autism spectrum disorder at 18 months in a general population sample. Paediatric and Perinatal Epidemiology, 28(3), 255–262. https://doi.org/10.1111/ppe.12114. CrossRefPubMedPubMedCentral Stenberg, N., Bresnahan, M., Gunnes, N., Hirtz, D., Hornig, M., Lie, K. K., et al. (2014). Identifying children with autism spectrum disorder at 18 months in a general population sample. Paediatric and Perinatal Epidemiology, 28(3), 255–262. https://doi.org/10.1111/ppe.12114. CrossRefPubMedPubMedCentral
go back to reference Trikalinos, T. A., Balion, C. M., Coleman, C. I., Griffith, L., Santaguida, P. L., Vandermeer, B., & Fu, R. (2012). Chapter 8: Meta-analysis of test performance when there is a "gold standard. Journal of General Internal Medicine, 27(S1), 56–66. https://doi.org/10.1007/s11606-012-2029-1. CrossRefPubMedCentral Trikalinos, T. A., Balion, C. M., Coleman, C. I., Griffith, L., Santaguida, P. L., Vandermeer, B., & Fu, R. (2012). Chapter 8: Meta-analysis of test performance when there is a "gold standard. Journal of General Internal Medicine, 27(S1), 56–66. https://doi.org/10.1007/s11606-012-2029-1. CrossRefPubMedCentral
go back to reference VanDenHeuvel, A., Fitzgerald, M., Greiner, B. A., & Perry, I. J. (2007). Screening for autistic spectrum disorder at the 18-month developmental assessment: A population-based study. Irish Medical Journal, 100(8), 565–567. PubMed VanDenHeuvel, A., Fitzgerald, M., Greiner, B. A., & Perry, I. J. (2007). Screening for autistic spectrum disorder at the 18-month developmental assessment: A population-based study. Irish Medical Journal, 100(8), 565–567. PubMed
go back to reference Wetherby, A. M., Brosnan-Maddox, S., Peace, V., & Newton, L. (2008). Validation of the infant-toddler checklist as a broadband screener for autism spectrum disorders from 9 to 24 months of age. Autism, 12(5), 487–511. https://doi.org/10.1177/1362361308094501. CrossRefPubMedPubMedCentral Wetherby, A. M., Brosnan-Maddox, S., Peace, V., & Newton, L. (2008). Validation of the infant-toddler checklist as a broadband screener for autism spectrum disorders from 9 to 24 months of age. Autism, 12(5), 487–511. https://doi.org/10.1177/1362361308094501. CrossRefPubMedPubMedCentral
go back to reference Whiting, P. F. (2011). QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine, 155(8), 529. https://doi.org/10.7326/0003-4819-155-8-201110180-00009. CrossRefPubMed Whiting, P. F. (2011). QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Annals of Internal Medicine, 155(8), 529. https://doi.org/10.7326/0003-4819-155-8-201110180-00009. CrossRefPubMed
go back to reference Whiting, P. F., Rutjes, A. W. S., Dinnes, J., Reitsma, J., Bossuyt, P. M. M., & Kleijnen, J. (2004). Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technology Assessment, 8(25), iii, 1–234. CrossRef Whiting, P. F., Rutjes, A. W. S., Dinnes, J., Reitsma, J., Bossuyt, P. M. M., & Kleijnen, J. (2004). Development and validation of methods for assessing the quality of diagnostic accuracy studies. Health Technology Assessment, 8(25), iii, 1–234. CrossRef
go back to reference Whiting, P. F., Weswood, M. E., Rutjes, A. W., Reitsma, J. B., Bossuyt, P. N., & Kleijnen, J. (2006). Evaluation of QUADAS, a tool for the quality assessment of diagnostic accuracy studies. BMC Medical Research Methodology. https://doi.org/10.1186/1471-2288-6-9. CrossRefPubMedPubMedCentral Whiting, P. F., Weswood, M. E., Rutjes, A. W., Reitsma, J. B., Bossuyt, P. N., & Kleijnen, J. (2006). Evaluation of QUADAS, a tool for the quality assessment of diagnostic accuracy studies. BMC Medical Research Methodology. https://doi.org/10.1186/1471-2288-6-9. CrossRefPubMedPubMedCentral
go back to reference Wiggins, L. D., Piazza, V., & Robins, D. L. (2014). Comparison of a broad-based screen versus disorder-specific screen in detecting young children with an autism spectrum disorder. Autism, 18(2), 76–84. https://doi.org/10.1177/1362361312466962. CrossRefPubMed Wiggins, L. D., Piazza, V., & Robins, D. L. (2014). Comparison of a broad-based screen versus disorder-specific screen in detecting young children with an autism spectrum disorder. Autism, 18(2), 76–84. https://doi.org/10.1177/1362361312466962. CrossRefPubMed
Ana B. Sánchez-García
Purificación Galindo-Villardón
Ana B. Nieto-Librero
Helena Martín-Rodero
Diana L. Robins
Journal of Autism and Developmental Disorders
Uitgave 5/2019
Elektronisch ISSN: 1573-3432
Andere artikelen Uitgave 5/2019
Naar de uitgave
Understanding the Reasons, Contexts and Costs of Camouflaging for Autistic Adults
A Strength-Focused Parenting Intervention May Be a Valuable Augmentation to a Depression Prevention Focus for Adolescents with Autism
OriginalPaper
Psychometric Assessments of Three Self-Report Autism Scales (AQ, RBQ-2A, and SQ) for General Adult Populations
A Randomised Controlled Feasibility Trial of Immersive Virtual Reality Treatment with Cognitive Behaviour Therapy for Specific Phobias in Young People with Autism Spectrum Disorder
Course and Predictors of Sleep and Co-occurring Problems in Children with Autism Spectrum Disorder
Brief Report: Evaluation of the Short Quantitative Checklist for Autism in Toddlers (Q-CHAT-10) as a Brief Screen for Autism Spectrum Disorder in a High-Risk Sibling Cohort | CommonCrawl |
Preliminary Research on Visualization of S&T Policy
—A Case Study of China Innovation and Entrepreneurship Graphic Policies [PDF]
Qin Luo, Wei Song
Open Journal of Social Sciences (JSS), 2016, DOI: 10.4236/jss.2016.45017
Abstract: Based on the current status and trends of S&T (science and technology) policy communication, this paper establishes the concept of policy visualization and describes the main uses of visualized policy. Next, we study the procedural requirements of S&T policy visualization in four aspects: subject selection, content analysis and information processing, policy graphic design, and media promotion. Through this study, we find that the visualization of China's S&T policy faces several problems, such as the difficulty of establishing the authority of policy graphics, the homogeneity of graphic policies, and the lack of dynamism in their effect. Finally, the paper puts forward suggestions for improving the development of S&T policy visualization in light of these problems.
Research on One-Stop and Full-Support Mode of Intellectual Property Transaction [PDF]
Abstract: Intellectual property transactions are the key link in the successful transformation of scientific and technological achievements, and they play an important role in integrating S&T with the economy. To bridge the gap between S&T and the economy, the state vigorously advocates building comprehensive intellectual property service platforms covering the whole process of transforming S&T achievements, yet platforms offering genuinely full support remain few. Guided by innovation value chain theory, this paper expounds the connotation and essential characteristics of a one-stop, full-support intellectual property transaction platform, summarizes a five-stage model of intellectual property transactions and the basic service requirements across the whole transaction process, then builds a service system for the one-stop, full-support mode and analyzes its unique advantages. Finally, the paper constructs the internal and external support systems needed to improve the construction and development of this platform mode.
Study on Comprehensive Evaluation Model of Commercial Housing Price-Rationalization [PDF]
Yifei Lai, Yuanxin Wei, Haiyun Luo
Journal of Service Science and Management (JSSM), 2010, DOI: 10.4236/jssm.2010.33043
Abstract: In recent years, the prices of urban commercial housing have been soaring, attracting wide public attention and fierce discussion about whether housing prices in China are reasonable. This paper attempts to establish a method for measuring housing price rationalization. First, it builds a rationalization evaluation system for housing prices from three angles: commercial housing price formation, residents' affordability, and a coordination parity system. It then selects appropriate standards to build evaluation criteria for affordable housing products.
The Reaction Sequence and Dielectric Properties of BaAl2Ti5O14 Ceramics [PDF]
Xiaogang Yao, Wei Chen, Lan Luo
Advances in Materials Physics and Chemistry (AMPC), 2012, DOI: 10.4236/ampc.2012.24B008
Abstract: To investigate the correct reaction sequence of the BaO-Al2O3-5TiO2 system, powders calcined at different temperatures are analyzed by X-ray diffraction. The results show that the source phase BaCO3 decomposes below 800°C, while TiO2 and Al2O3 begin to be consumed at 900 and 1100°C, respectively. The BaTi4O9 phase appears at 1000°C, while the BaAl2Ti5O14 phase starts to form at 1200°C. As the sintering temperature increases, the density, dielectric constant and quality factor of the BaAl2Ti5O14 ceramic increase, leveling off at 1350°C. The dielectric properties of the BaAl2Ti5O14 ceramic sintered at 1350°C for 3 h are: εr = 35.8, Q×f = 5130 GHz, τf = −6.8 ppm/°C.
Pseudo DNA Sequence Generation of Non-Coding Distributions Using Variant Maps on Cellular Automata [PDF]
Jeffrey Zheng, Jin Luo, Wei Zhou
Applied Mathematics (AM), 2014, DOI: 10.4236/am.2014.51018
In the recent decade, many DNA sequencing projects on cells, plants and animals around the world have produced huge DNA databases. Researchers have noticed that mammalian genomes encode thousands of large noncoding RNAs (lncRNAs), which interact with chromatin regulatory complexes and are thought to play a role in localizing these complexes to target loci across the genome. It is a challenge to organize such complex interactive properties as visual maps using higher-dimensional tools. In this paper, a Pseudo DNA Variant Map (PDVM) is proposed, following Cellular Automata, to represent multiple maps that use four meta symbols, as in DNA or RNA representations. The system architecture of the key components and the core mechanism of the PDVM are described. Key modules, equations and their I/O parameters are discussed. Applying the PDVM, two sets of real DNA sequences from sample human (noncoding DNA) and corn (coding DNA) genomes are compared with two sets of pseudo DNA sequences generated by the stream cipher HC-256 under different modes, to show their intrinsic properties and higher-level similarity relationships among relevant DNA sequences on 2D maps. Sample 2D maps are listed and their characteristics illustrated under a controllable environment. Various distributions can be observed under both noncoding and coding conditions from their symmetric properties on 2D maps.
Mismatch negativity of ERP in cross-modal attention
Yuejia Luo, Jinghan Wei
Science China Life Sciences, 1997, DOI: 10.1007/BF02882690
Abstract: Event-related potentials were measured in 12 healthy young subjects aged 19–22 using the "cross-modal and delayed response" paradigm, which is able to improve unattended purity and to avoid the effect of the task target on the deviant components of the ERP. The experiment included two conditions: (i) attend visual modality, ignore auditory modality; (ii) attend auditory modality, ignore visual modality. The stimuli under the two conditions were the same. The difference wave was obtained by subtracting the ERPs of the standard stimuli from those of the deviant stimuli. The present results showed that mismatch negativity (MMN), N2b and P3 components can be produced in the auditory and visual modalities under the attention condition. However, only MMN was observed in the two modalities under the inattention condition. Auditory and visual MMN have some features in common: their largest MMN wave peaks were distributed over their respective primary sensory projection areas of the scalp under the attention condition, but over the fronto-central scalp under the inattention condition. There was no significant difference between the amplitudes of visual and auditory MMN. Their amplitudes and scalp distributions were unaffected by attention, suggesting that MMN amplitude is an important index reflecting automatic processing in the brain. However, the latencies of the auditory and visual MMN were affected by attention, showing that MMN not only reflects automatic processing but probably also relates to controlled processing.
Event-related potentials study on cross-modal discrimination of Chinese characters
Abstract: Event-related potentials (ERPs) were measured in 15 normal young subjects (18–22 years old) using the "cross-modal and delayed response" paradigm, which is able to improve inattention purity. The stimuli consisted of written and spoken single Chinese characters. The presentation probability of standard stimuli was 82.5% and that of deviant stimuli was 17.5%. The attention components were obtained by subtracting the ERPs of the inattention condition from those of the attention condition. The results of the N1 scalp distribution demonstrated a cross-modal difference. This result is in contrast to studies with non-verbal as well as with English verbal stimuli, and probably reflects a feature of the brain mechanisms of Chinese language processing. The processing locus of attention varied with verbal/non-verbal stimuli, auditory/visual modalities and standard/deviant stimuli, and thus shows plasticity. The early attention effects occurred before the exogenous components, providing evidence supporting the early selection theory of attention. Given the relationship of N1 and Nd1, the present result supported the viewpoint that the N1 enhancement was caused by endogenous components overlapping with exogenous ones rather than by a pure exogenous component.
Relationship between LTP and the nuclear protein induced by calcineurin
Jie Luo, Qun Wei
Chinese Science Bulletin, 1998, DOI: 10.1007/BF02883688
Abstract: Results demonstrate that calcineurin plays a role in long-term potentiation (LTP) in the hippocampus. A 45 ku enzyme exhibiting Ca2+/calmodulin-independent activity was administered to rats, and the protein components of nuclear extracts from a cell-free system of rat hippocampus were tested. It is found that the level of one component significantly increased in nuclear extracts following the activation of calcineurin. The concentration of the component in the nucleus is also markedly increased following hippocampal LTP elicited by ginsenosides. These results suggest that the activation of calcineurin induces a preexisting protein component to translocate to the nucleus in the hippocampus. The nuclear translocation of this component may be required for LTP in the hippocampus.
Room Temperature Quantum Spin Hall Insulators with a Buckled Square Lattice
Wei Luo, Hongjun Xiang
Abstract: Two-dimensional (2D) topological insulators (TIs), also known as quantum spin Hall (QSH) insulators, are excellent candidates for coherent spin transport related applications because the edge states of 2D TIs are robust against nonmagnetic impurities, since the only available backscattering channel is forbidden. Currently, most known 2D TIs are based on a hexagonal (specifically, honeycomb) lattice. Here, we propose that the quantum spin Hall effect (QSHE) can exist in a buckled square lattice. Through global structure optimization, we predict a new three-layer quasi-2D (Q2D) structure which has the lowest energy among all structures with thickness less than 6.0 Å for the BiF system. It is identified to be a Q2D TI with a large band gap (0.69 eV). The electronic states of the Q2D BiF system near the Fermi level are mainly contributed by the middle Bi square lattice, which is sandwiched by two inert BiF2 layers. This is beneficial since the interaction between a substrate and the Q2D material may not change the topological properties of the system, as we demonstrate in the case of the NaF substrate. Finally, we come up with a new tight-binding model for a two-orbital system with the buckled square lattice to explain the low-energy physics of the Q2D BiF material. Our study not only predicts a QSH insulator for realistic room temperature applications, but also provides a new lattice system for engineering topological states such as the quantum anomalous Hall effect.
Local well-posedness and blow-up criteria for a two-component Novikov system in the critical Besov space
Wei Luo, Zhaoyang Yin
Mathematics, 2015, DOI: 10.1016/j.na.2015.03.022
Abstract: In this paper we mainly investigate the Cauchy problem of a two-component Novikov system. We first prove the local well-posedness of the system in Besov spaces $B^{s-1}_{p,r}\times B^s_{p,r}$ with $p,r\in[1,\infty],~s>\max\{1+\frac{1}{p},\frac{3}{2}\}$ by using the Littlewood-Paley theory and transport equations theory. Then, by virtue of logarithmic interpolation inequalities and the Osgood lemma, we establish the local well-posedness of the system in the critical Besov space $B^{\frac{1}{2}}_{2,1}\times B^{\frac{3}{2}}_{2,1}$. Moreover, we present two blow-up criteria for the system by making use of the conservation laws.
Period-luminosity relations for Cepheid variables: from mid-infrared to multi-phase
@article{Ngeow2012PeriodluminosityRF,
  title={Period-luminosity relations for Cepheid variables: from mid-infrared to multi-phase},
  author={Chow-Choong Ngeow and Shashi Kanbur and Earl Patrick Bellinger and Marcella Marconi and Ilaria Musella and Michele Cignoni and Ya-Hong Lin},
  journal={Astrophysics and Space Science},
  year={2012},
  pages={105--113}
}
C. Ngeow, S. Kanbur, +4 authors Ya-Hong Lin
Published 1 February 2012
Astrophysics and Space Science
This paper discusses two aspects of current research on the Cepheid period-luminosity (P-L) relation: the derivation of mid-infrared (MIR) P-L relations and the investigation of multi-phase P-L relations. The MIR P-L relations for Cepheids are important in the James Webb Space Telescope era for the distance scale issue, as the relations have the potential to yield the Hubble constant to within ~2% accuracy, a critical constraint in precision cosmology. Consequently, we have derived the MIR P-L…
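As a toy illustration of the kind of fit such P-L work rests on (my own sketch with synthetic data; the slope and zero point below are made up for illustration, not the paper's calibration), a linear Leavitt law $m = a + b\log_{10}P$ can be fit by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
log_P = rng.uniform(0.4, 1.8, size=200)                # log10(period / days)
mag = 16.0 - 3.3 * log_P + rng.normal(0.0, 0.2, 200)   # synthetic magnitudes

# np.polyfit returns coefficients highest degree first: (slope, intercept)
slope, zero_point = np.polyfit(log_P, mag, deg=1)
print(f"m = {zero_point:.2f} + ({slope:.2f}) log P")
```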
Reclassification of Cepheids in the Gaia Data Release 2
V. Ripepi, R. Molinaro, I. Musella, M. Marconi, S. Leccia, L. Eyer
Astronomy & Astrophysics
Context. Classical Cepheids are the most important primary indicators for the extragalactic distance scale. Establishing the precise zero points of their period-luminosity and period-Wesenheit…
Extended envelopes around Galactic Cepheids IV. T Monocerotis and X Sagittarii from mid-infrared interferometry with VLTI/MIDI
A. Gallenne, A. Mérand, P. Kervella, O. Chesneau, J. Breitfelder, W. Gieren
Aims. We study the close environment of nearby Cepheids using high spatial resolution observations in the mid-infrared with the VLTI/MIDI instrument, a two-beam interferometric recombiner. Methods.…
Updated 24 μm Period-Luminosity Relation Derived from Galactic Cepheids
C. Ngeow, S. Sarkar, A. Bhardwaj, S. Kanbur, Harinder P. Singh
In this work, we updated the catalog of Galactic Cepheids with $24\mu\mathrm{m}$ photometry by cross-matching the positions of known Galactic Cepheids to the recently released MIPSGAL point source…
Gaia Data Release 1. Testing parallaxes with local Cepheids and RR Lyrae stars
G. Clementini, L. Eyer, +587 authors S. Zschocke
Parallaxes for 331 classical Cepheids, 31 Type II Cepheids and 364 RR Lyrae stars in common between Gaia and the Hipparcos and Tycho-2 catalogues are published in Gaia Data Release 1 (DR1) as part of…
On the Form of the Spitzer Leavitt Law and its Dependence on Metallicity
D. Majaess, D. Turner, W. Gieren
The form and metallicity-dependence of Spitzer mid-infrared Cepheid relations are a source of debate. Consequently, Spitzer 3.6 and 4.5 μm period-magnitude and period-color diagrams were re-examined…
A study on the universality and linearity of the Leavitt law in the LMC and SMC galaxies
A. García-Varela, B. Sabogal, M. Ramírez-Tannus
The universality and linearity of the Leavitt law are hypotheses commonly adopted in studies of galaxy distances using Cepheid variables as standard candles. In order to test these hypotheses, we…
The Distance to M51
K. McQuinn, E. Skillman, A. Dolphin, D. Berg, R. Kennicutt
Great investments of observing time have been dedicated to the study of nearby spiral galaxies with diverse goals ranging from understanding the star formation process to characterizing their dark…
The influential effect of blending, bump, changing period and eclipsing Cepheids on the Leavitt law
A. García-Varela, J. R. Muñoz, B. Sabogal, S. V. Domínguez, J. Martínez
The investigation of the non-linearity of the Leavitt law is a topic that began more than seven decades ago, when some of the studies in this field found that the Leavitt law has a break at about ten…
Inverse Problems in Asteroseismology
E. Bellinger
New techniques to measure the ages, masses, and radii of stars are presented, as well as a way to infer their internal structure.
THE DISTANCE TO M104
This is the author accepted manuscript. The final version is available from the Institute of Physics via https://doi.org/10.3847/0004-6256/152/5/144
THE PERIOD-LUMINOSITY RELATION FOR THE LARGE MAGELLANIC CLOUD CEPHEIDS DERIVED FROM SPITZER ARCHIVAL DATA
C. Ngeow, S. Kanbur
Using Spitzer archival data from the SAGE (Surveying the Agents of a Galaxy's Evolution) program, we derive the Cepheid period-luminosity (P-L) relation at 3.6, 4.5, 5.8, and 8.0 μm for Large…
Cepheid period–luminosity relation from the AKARI observations
C. Ngeow, Y. Ita, S. Kanbur, H. Neilson, T. Onaka, D. Kato
ABSTRACT In this paper, we derive the period-luminosity (P-L) relation for Large Magellanic Cloud (LMC) Cepheids based on mid-infrared AKARI observations. AKARI's IRC sources were matched to the…
THE MID-INFRARED PERIOD-LUMINOSITY RELATIONS FOR THE SMALL MAGELLANIC CLOUD CEPHEIDS DERIVED FROM SPITZER ARCHIVAL DATA
In this paper, we derive the Spitzer IRAC band period-luminosity (P-L) relations for the Small Magellanic Cloud (SMC) Cepheids, by matching the Spitzer archival SAGE-SMC data with the OGLE-III SMC…
THEORETICAL CEPHEID PERIOD-LUMINOSITY AND PERIOD-COLOR RELATIONS IN SPITZER IRAC BANDS
C. Ngeow, M. Marconi, I. Musella, M. Cignoni, S. Kanbur
In this paper, the synthetic period-luminosity (P-L) relations in Spitzer's IRAC bands, based on a series of theoretical pulsation models with varying metal and helium abundance, were investigated.…
PERIOD-LUMINOSITY RELATIONS DERIVED FROM THE OGLE-III FUNDAMENTAL MODE CEPHEIDS
C. Ngeow, S. Kanbur, H. Neilson, A. Nanthakumar, J. Buonaccorsi
In this Paper, we have derived Cepheid period-luminosity (P-L) relations for the Large Magellanic Cloud (LMC) fundamental mode Cepheids, based on the data released from OGLE-III. We have applied an…
A New Calibration Of Galactic Cepheid Period-Luminosity Relations From B To K Bands, And A Comparison To LMC Relations
P. Fouqué, P. Arriagada, +8 authors B. Mcarthur
Context. The universality of the Cepheid Period-Luminosity relations has been under discussion since metallicity effects have been assumed to play a role in the value of the intercept and, more…
The Cepheid Period-Luminosity Relation (The Leavitt Law) at Mid-Infrared Wavelengths. II. Second-Epoch LMC Data
B. Madore, W. Freedman, J. Rigby, S. E. Persson, L. Sturch, Violet Major
We present revised and improved mid-infrared (mid-IR) period-luminosity (PL) relations for Large Magellanic Cloud (LMC) Cepheids based on double-epoch data of 70 Cepheids observed by Spitzer at 3.6,…
Period–colour and amplitude–colour relations in classical Cepheid variables – IV. The multiphase relations
The superb phase resolution and quality of the Optical Gravitational Lensing Experiment (OGLE) data on the Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC) Cepheids, together with…
Galactic Cepheids with Spitzer: I. Leavitt Law and Colors
M. Marengo, N. Evans, P. Barmby, G. Bono, D. Welch, M. Romaniello
Classical Cepheid variable stars have been important indicators of extragalactic distance and Galactic evolution for over a century. The Spitzer Space Telescope has opened the possibility of…
New period-luminosity and period-color relations of classical Cepheids. II. Cepheids in LMC
A. Sandage, G. Tammann
Photometric data for 593 Cepheids in the LMC, measured by Udalski et al. in the OGLE survey, augmented by 97 longer period Cepheids from other sources, are analyzed for the period-color (P-C) and…
Adjoint of a Matrix: Properties
Let $A = [a_{ij}]$ be a square matrix of order $n$. The adjoint (or adjugate) of $A$, written $\operatorname{adj} A$, is the transpose of the cofactor matrix of $A$. The cofactors of the elements of a matrix are obtained by deleting the row and column of the element concerned, taking the determinant of the remaining elements, and attaching the sign $(-1)^{i+j}$: $C_{ij} = (-1)^{i+j}\det(M_{ij})$, where $M_{ij}$ is the minor of $a_{ij}$. Note the pattern of signs, beginning with positive in the upper-left corner of the matrix. (Trace of a matrix: if $A$ is a square matrix of order $n$, then its trace, denoted $\operatorname{tr} A$, is the sum of its diagonal entries.)

The fundamental relation between a matrix and its adjoint is

(1) $A\cdot\operatorname{adj}(A) = \operatorname{adj}(A)\cdot A = |A|\,I_n$,

where $I_n$ is the identity matrix of the same order as $A$ and $|A|$ is the determinant of $A$. From this relation it is clear that if $|A| \ne 0$, i.e. if $A$ is non-singular, then $A^{-1} = \operatorname{adj}(A)/|A|$. Two further consequences are worth recording. First, taking determinants on both sides of (1) gives $|A|\cdot|\operatorname{adj} A| = |A|^n$, hence $\det(\operatorname{adj} A) = (\det A)^{n-1}$; this answers the standard exercise of proving that $\det(\operatorname{adj} A) = (\det A)^{n-1}$. Second, applying (1) to $\operatorname{adj} A$ itself shows that for an $n\times n$ matrix with $n \ge 2$, $\operatorname{adj}(\operatorname{adj}(A)) = |A|^{n-2}\,A$; the relation $\operatorname{adj}(\operatorname{adj}(A)) = A$, sometimes quoted, holds only when $|A|^{n-2} = 1$, not in general.

Example 4: Let $A=\left[ \begin{matrix} 1 & 2 & 3 \\ 1 & 3 & 4 \\ 1 & 4 & 3 \end{matrix} \right]$; then the cofactors of the elements of $A$ are $C_{11}=-7$, $C_{12}=1$, $C_{13}=1$, $C_{21}=6$, $C_{22}=0$, $C_{23}=-2$, $C_{31}=-1$, $C_{32}=-1$, $C_{33}=1$, so $\operatorname{adj} A=\left[ \begin{matrix} -7 & 6 & -1 \\ 1 & 0 & -1 \\ 1 & -2 & 1 \end{matrix} \right]$.

Illustration 4: If $A=\left[ \begin{matrix} 0 & 2y & z \\ x & y & -z \\ x & -y & z \end{matrix} \right]$ satisfies $A' = A^{-1}$, then

(a) $x=\pm 1/\sqrt{6},\ y=\pm 1/\sqrt{6},\ z=\pm 1/\sqrt{3}$  (b) $x=\pm 1/\sqrt{2},\ y=\pm 1/\sqrt{6},\ z=\pm 1/\sqrt{3}$
(c) $x=\pm 1/\sqrt{6},\ y=\pm 1/\sqrt{2},\ z=\pm 1/\sqrt{3}$  (d) $x=\pm 1/\sqrt{2},\ y=\pm 1/3,\ z=\pm 1/\sqrt{2}$

Solution: $A'=A^{-1}$ means $A'A=I$, i.e. the columns of $A$ are orthonormal. Normalizing the three columns gives $2x^2=1$, $6y^2=1$ and $3z^2=1$, so $x=\pm 1/\sqrt{2}$, $y=\pm 1/\sqrt{6}$, $z=\pm 1/\sqrt{3}$; the answer is (b).
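To make the bookkeeping concrete, here is a minimal numerical sketch (my own, not from the original page; only numpy is assumed) of the adjugate and the consequences just derived, checked on Example 4's matrix:

```python
import numpy as np

def adjugate(A: np.ndarray) -> np.ndarray:
    """Transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # minor M_ij: delete row i and column j
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adj A = transpose of the cofactor matrix

A = np.array([[1.0, 2, 3], [1, 3, 4], [1, 4, 3]])  # Example 4's matrix
n = A.shape[0]
adjA = adjugate(A)
detA = np.linalg.det(A)

# (1) A . adj(A) = adj(A) . A = |A| I_n
assert np.allclose(A @ adjA, detA * np.eye(n))
assert np.allclose(adjA @ A, detA * np.eye(n))
# det(adj A) = det(A)^(n-1)
assert np.isclose(np.linalg.det(adjA), detA ** (n - 1))
# adj(adj A) = |A|^(n-2) A, which is A itself only when |A|^(n-2) = 1
assert np.allclose(adjugate(adjA), detA ** (n - 2) * A)
```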
To see where (1) comes from, write, for $n=3$,

$$A=\left[ \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right],\qquad \operatorname{adj} A=\left[ \begin{matrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{matrix} \right],$$

where $A_{ij}$ is the cofactor of $a_{ij}$; note the transpose built into $\operatorname{adj} A$. Multiplying,

$$A\cdot\operatorname{adj} A=\left[ \begin{matrix} \sum_k a_{1k}A_{1k} & \sum_k a_{1k}A_{2k} & \sum_k a_{1k}A_{3k} \\ \sum_k a_{2k}A_{1k} & \sum_k a_{2k}A_{2k} & \sum_k a_{2k}A_{3k} \\ \sum_k a_{3k}A_{1k} & \sum_k a_{3k}A_{2k} & \sum_k a_{3k}A_{3k} \end{matrix} \right].$$

Each diagonal entry $a_{i1}A_{i1}+a_{i2}A_{i2}+a_{i3}A_{i3}$ is the cofactor expansion of $|A|$ along row $i$; each off-diagonal entry is the expansion of a determinant with two identical rows and therefore vanishes. Hence $A\cdot\operatorname{adj} A = |A|\,I_3$, and expanding along columns instead gives $\operatorname{adj} A\cdot A = |A|\,I_3$ as well.
Explicitly, the cofactors appearing above are

$$A_{11}=(-1)^{1+1}\left| \begin{matrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{matrix} \right|=a_{22}a_{33}-a_{23}a_{32},\qquad A_{12}=(-1)^{1+2}\left| \begin{matrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{matrix} \right|=-a_{21}a_{33}+a_{23}a_{31},$$

$$A_{21}=(-1)^{2+1}\left| \begin{matrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{matrix} \right|=-a_{12}a_{33}+a_{13}a_{32},\qquad A_{22}=(-1)^{2+2}\left| \begin{matrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{matrix} \right|=a_{11}a_{33}-a_{13}a_{31},$$

$$A_{23}=(-1)^{2+3}\left| \begin{matrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{matrix} \right|=-a_{11}a_{32}+a_{12}a_{31},$$

and so on. In other words, $\operatorname{adj} A$ is the matrix formed by replacing each element of $A$ by its corresponding cofactor and then taking the transpose of the matrix so formed.
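The same identity can be checked once and for all symbolically; the sketch below (my own, using sympy's built-in Matrix.adjugate) verifies that the cofactor expansion really gives $A\cdot\operatorname{adj}A=\det(A)\,I$ for a general 3×3 matrix:

```python
import sympy as sp

# nine independent symbols a11 ... a33
a = sp.symbols('a11 a12 a13 a21 a22 a23 a31 a32 a33')
A = sp.Matrix(3, 3, a)

lhs = (A * A.adjugate()).expand()
rhs = (A.det() * sp.eye(3)).expand()
assert lhs == rhs  # diagonal entries equal |A|; off-diagonal entries vanish
```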
For complex matrices the word "adjoint" is also used in a second, different sense. Definition: the adjoint (conjugate transpose) of an $n\times m$ matrix $A$ is the $m\times n$ matrix $A^{*}=(b_{ij})$ such that $b_{ij}=\overline{a_{ji}}$; for real matrices this is just the transpose. Example: given $A=\left[ \begin{matrix} 1 & 2i \\ 3 & i \end{matrix} \right]$, note that $A^{*}=\left[ \begin{matrix} 1 & 3 \\ -2i & -i \end{matrix} \right]$.

Definition M.4 (Normal, Self-Adjoint, Unitary). i) An $n\times n$ matrix $A$ is normal if $AA^{*}=A^{*}A$. ii) An $n\times n$ matrix $A$ is self-adjoint if $A=A^{*}$. iii) An $n\times n$ matrix $U$ is unitary if $UU^{*}=\mathbb{1}$.

Also, the expectation value of a Hermitian (self-adjoint) operator is guaranteed to be a real number, not complex. In mathematics, the adjoint of an operator is a generalization of the notion of the Hermitian conjugate of a complex matrix to linear operators on complex Hilbert spaces: in functional analysis, each bounded linear operator on a complex Hilbert space has a corresponding Hermitian adjoint, written $M^{*}$ as is common in mathematics, while in physics the notation $M^{\dagger}$ is usual. For unbounded operators the familiar rules hold with appropriate clauses about domains and codomains; for instance, $(AB)^{*}$ is an extension of $B^{*}A^{*}$ if $A$, $B$ and $AB$ are densely defined operators. The matrix conjugate transpose is also called the matrix adjoint, and for this reason, in optimal control the multiplier vector is called the vector of adjoint variables and the associated linear equation is called the adjoint equation. Since adjugate and conjugate transpose share the name "adjoint", make sure you know the convention used in the text you are reading.
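A short sketch (mine, not the page's) of the conjugate-transpose sense of adjoint, using the 2×2 example above:

```python
import numpy as np

A = np.array([[1, 2j], [3, 1j]])
A_star = A.conj().T  # Hermitian adjoint A*
assert np.allclose(A_star, np.array([[1, 3], [-2j, -1j]]))

def is_normal(M):
    return np.allclose(M @ M.conj().T, M.conj().T @ M)

def is_self_adjoint(M):
    return np.allclose(M, M.conj().T)

def is_unitary(U):
    return np.allclose(U @ U.conj().T, np.eye(U.shape[0]))

# A self-adjoint (Hermitian) matrix is normal, and its eigenvalues
# (hence expectation values) are real:
H = np.array([[2, 1 - 1j], [1 + 1j, 3]])
assert is_self_adjoint(H) and is_normal(H)
assert np.allclose(np.linalg.eigvals(H).imag, 0)
```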
Illustration 3: Let $A=\left[ \begin{matrix} 2 & 1 & -1 \\ 0 & 1 & 0 \\ 1 & 3 & -1 \end{matrix} \right]$ and $B=\left[ \begin{matrix} 1 & 2 & 5 \\ 2 & 3 & 1 \\ -1 & 1 & 1 \end{matrix} \right]$; verify that $(AB)^{-1}=B^{-1}A^{-1}$ by computing $(AB)^{-1}=\dfrac{\operatorname{adj} AB}{\left| AB \right|}$.

Here $AB=\left[ \begin{matrix} 5 & 6 & 10 \\ 2 & 3 & 1 \\ 8 & 10 & 7 \end{matrix} \right]$, and

$$\left| AB \right|=\left| \begin{matrix} 5 & 6 & 10 \\ 2 & 3 & 1 \\ 8 & 10 & 7 \end{matrix} \right|=5(21-10)-6(14-8)+10(20-24)=55-36-40=-21.$$
The matrix of cofactors of | AB | is = [3(7)−1(10)−{2(7)−8(1)}a2(10)−3(8)−{6(7)−10(10)}5(7)−8(10)−{5(10)−6(8)}6(1)−10(3)−{5(1)−2(10)}5(3)−6(2)]=[11−6−458−45−2−24153]\left[ \begin{matrix} 3\left( 7 \right)-1\left( 10 \right) & -\left\{ 2\left( 7 \right)-8\left( 1 \right) \right\} &a 2\left( 10 \right)-3\left( 8 \right) \\ -\left\{ 6\left( 7 \right)-10\left( 10 \right) \right\} & 5\left( 7 \right)-8\left( 10 \right) & -\left\{ 5\left( 10 \right)-6\left( 8 \right) \right\} \\ 6\left( 1 \right)-10\left( 3 \right) & -\left\{ 5\left( 1 \right)-2\left( 10 \right) \right\} & 5\left( 3 \right)-6\left( 2 \right) \\ \end{matrix} \right]=\left[ \begin{matrix} 11 & -6 & -4 \\ 58 & -45 & -2 \\ -24 & 15 & 3 \\ \end{matrix} \right]⎣⎢⎡3(7)−1(10)−{6(7)−10(10)}6(1)−10(3)−{2(7)−8(1)}5(7)−8(10)−{5(1)−2(10)}a2(10)−3(8)−{5(10)−6(8)}5(3)−6(2)⎦⎥⎤=⎣⎢⎡1158−24−6−4515−4−23⎦⎥⎤, adj AB =[1158−24−6−4515−4−23]So, (AB)−1=adj AB∣AB∣=−121[1158−24−6−4515−4−23]\left[ \begin{matrix} 11 & 58 & -24 \\ -6 & -45 & 15 \\ -4 & -2 & 3 \\ \end{matrix} \right] So, \,\,{{\left( AB \right)}^{-1}}=\frac{adj\,AB}{\left| AB \right|}=\frac{-1}{21}\left[ \begin{matrix} 11 & 58 & -24 \\ -6 & -45 & 15 \\ -4 & -2 & 3 \\ \end{matrix} \right]⎣⎢⎡11−6−458−45−2−24153⎦⎥⎤So,(AB)−1=∣AB∣adjAB=21−1⎣⎢⎡11−6−458−45−2−24153⎦⎥⎤, Next, ∣B∣=∣125231−111∣=1(3−1)−2(2+1)+5(2+3)=21\left| B \right|=\left| \begin{matrix} 1 & 2 & 5 \\ 2 & 3 & 1 \\ -1 & 1 & 1 \\ \end{matrix} \right|=1\left( 3-1 \right)-2\left( 2+1 \right)+5\left( 2+3 \right)=21∣B∣=∣∣∣∣∣∣∣12−1231511∣∣∣∣∣∣∣=1(3−1)−2(2+1)+5(2+3)=21, ∴ B−1adj B∣B∣=121[23−13−3695−3−1]; ∣A∣=[21−101013−1]=1(−2+1)=−1{{B}^{-1}}\frac{adj\,B}{\left| B \right|}=\frac{1}{21}\left[ \begin{matrix} 2 & 3 & -13 \\ -3 & 6 & 9 \\ 5 & -3 & -1 \\ \end{matrix} \right]; \;\;\left| A \right|=\left[ \begin{matrix} 2 & 1 & -1 \\ 0 & 1 & 0 \\ 1 & 3 & -1 \\ \end{matrix} \right]=1\left( -2+1 \right)=-1B−1∣B∣adjB=211⎣⎢⎡2−3536−3−139−1⎦⎥⎤;∣A∣=⎣⎢⎡201113−10−1⎦⎥⎤=1(−2+1)=−1, ∴ A−1=adj A∣A∣=1−1[−1−210−10−1−52]\,\,{{A}^{-1}}=\frac{adj\,A}{\left| A \right|}=\frac{1}{-1}\left[ \begin{matrix} -1 & -2 & 1 \\ 0 & -1 & 0 \\ -1 & -5 & 2 \\ \end{matrix} \right]A−1=∣A∣adjA=−11⎣⎢⎡−10−1−2−1−5102⎦⎥⎤, ∴ B−1A−1=−121[23−13−3695−3−1][−1−210−10−1−52]{{B}^{-1}}{{A}^{-1}}=-\frac{1}{21}\left[ \begin{matrix} 2 & 3 & -13 \\ -3 & 6 & 9 \\ 5 & -3 & -1 \\ \end{matrix} \right]\left[ \begin{matrix} -1 & -2 & 1 \\ 0 & -1 & 0 \\ -1 & -5 & 2 \\ \end{matrix} \right]B−1A−1=−211⎣⎢⎡2−3536−3−139−1⎦⎥⎤⎣⎢⎡−10−1−2−1−5102⎦⎥⎤, =−121[1158−24−6−4515−4−23] Thus, (AB)−1=B−1A−1=-\frac{1}{21}\left[ \begin{matrix} 11 & 58 & -24 \\ -6 & -45 & 15 \\ -4 & -2 & 3 \\ \end{matrix} \right] \;\;Thus, \;\;\;{{\left( AB \right)}^{-1}}={{B}^{-1}}{{A}^{-1}}=−211⎣⎢⎡11−6−458−45−2−24153⎦⎥⎤Thus,(AB)−1=B−1A−1. Example: Below example and explanation are taken from here. If A is a square matrix and B is its inverse then AB = I. The adjoint of a matrix A is the transpose of the cofactor matrix of A . Adjoint definition, a square matrix obtained from a given square matrix and having the property that its product with the given matrix is equal to the determinant of the given matrix times the identity matrix… the matrix A is non-singular. The adjoint of A, ADJ (A) is the transpose of the matrix formed by taking the cofactor of each element of A. ADJ (A) A = det (A) I If det (A) != 0, then A-1 = ADJ (A) / det (A) but this is a numerically and computationally poor way of calculating the inverse. Play Matrices – Inverse of a 3x3 Matrix using Adjoint. Adjoing of the matrix A is denoted by adj A. 
Special line segments in triangles worksheet. The adjoint of square matrix A is defined as the transpose of the matrix of minors of A. The property of observability of the adjoint system (2.4) is equivalent to the inequality (2.5) because of the linear character of the system. Section 2.5 Hermitian Adjoint ¶ The Hermitian adjoint of a matrix is the same as its transpose except that along with switching row and column elements you also complex conjugate all the elements. Your email address will not be published. Find the adjoint of the matrix: Solution: We will first evaluate the cofactor of every element, In the end it studies the properties k-matrix of A, which extends the range of study into adjoint matrix, therefore the times of researching change from one time to several times based on needs. In terms of components, By using the formula A-1 =adj A∣A∣ we can obtain the value of A−1=\frac{adj\,A}{\left| A \right|}\; we\; can\; obtain\; the\; value\; of \;{{A}^{-1}}=∣A∣adjAwecanobtainthevalueofA−1, We have A11=[45−6−7]=2 A12=−[350−7]=21{{A}_{11}}=\left[ \begin{matrix} 4 & 5 \\ -6 & -7 \\ \end{matrix} \right]=2\,\,\,{{A}_{12}}=-\left[ \begin{matrix} 3 & 5 \\ 0 & -7 \\ \end{matrix} \right]=21A11=[4−65−7]=2A12=−[305−7]=21, And similarly A13=−18,A31=4,A32=−8,A33=4,A21=+6,A22=−7,A23=6{{A}_{13}}=-18,{{A}_{31}}=4,{{A}_{32}}=-8,{{A}_{33}}=4,{{A}_{21}}=+6,{{A}_{22}}=-7,{{A}_{23}}=6A13=−18,A31=4,A32=−8,A33=4,A21=+6,A22=−7,A23=6, adj A =[26421−7−8−1864]=\left[ \begin{matrix} 2 & 6 & 4 \\ 21 & -7 & -8 \\ -18 & 6 & 4 \\ \end{matrix} \right]=⎣⎢⎡221−186−764−84⎦⎥⎤, Also ∣A∣=∣10−13450−6−7∣={4×(−7)−(−6)×5−3×(−6)}\left| A \right|=\left| \begin{matrix} 1 & 0 & -1 \\ 3 & 4 & 5 \\ 0 & -6 & -7 \\ \end{matrix} \right|=\left\{ 4\times \left( -7 \right)-\left( -6 \right)\times 5-3\times \left( -6 \right) \right\}∣A∣=∣∣∣∣∣∣∣13004−6−15−7∣∣∣∣∣∣∣={4×(−7)−(−6)×5−3×(−6)}, =-28+30+18=20 A−1=adj A∣A∣=120[26421−7−8−1864]{{A}^{-1}}=\frac{adj\,A}{\left| A \right|}=\frac{1}{20}\left[ \begin{matrix} 2 & 6 & 4 \\ 21 & -7 & -8 \\ -18 & 6 & 4 \\ \end{matrix} \right]A−1=∣A∣adjA=201⎣⎢⎡221−186−764−84⎦⎥⎤. Download this lesson as PDF:-Adjoint and Inverse of a Matrix PDF, Let the determinant of a square matrix A be ∣A∣\left| A \right|∣A∣, IfA=[a11a12a13a21a22a23a31a32a33] Then ∣A∣=∣a11a12a13a21a22a23a31a32a33∣If A=\left[ \begin{matrix} {{a}_{11}} & {{a}_{12}} & {{a}_{13}} \\ {{a}_{21}} & {{a}_{22}} & {{a}_{23}} \\ {{a}_{31}} & {{a}_{32}} & {{a}_{33}} \\ \end{matrix} \right]\;\; Then \;\;\left| A \right|=\left| \begin{matrix} {{a}_{11}} & {{a}_{12}} & {{a}_{13}} \\ {{a}_{21}} & {{a}_{22}} & {{a}_{23}} \\ {{a}_{31}} & {{a}_{32}} & {{a}_{33}} \\ \end{matrix} \right|IfA=⎣⎢⎡a11a21a31a12a22a32a13a23a33⎦⎥⎤Then∣A∣=∣∣∣∣∣∣∣a11a21a31a12a22a32a13a23a33∣∣∣∣∣∣∣, The matrix formed by the cofactors of the elements in is [A11A12A13A21A22A23A31A32A33]\left[ \begin{matrix} {{A}_{11}} & {{A}_{12}} & {{A}_{13}} \\ {{A}_{21}} & {{A}_{22}} & {{A}_{23}} \\ {{A}_{31}} & {{A}_{32}} & {{A}_{33}} \\ \end{matrix} \right]⎣⎢⎡A11A21A31A12A22A32A13A23A33⎦⎥⎤, Where A11=(−1)1+1∣a22a23a32a33∣=a22a33−a23. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … A12=(−1)1+2∣a21a23a31a 3∣=−a21. Example 2: If A and B are two skew-symmetric matrices of order n, then, (a) AB is a skew-symmetric matrix (b) AB is a symmetric matrix, (c) AB is a symmetric matrix if A and B commute (d)None of these. 
\;Prove\; that \;{{\left( AB \right)}^{-1}}={{B}^{-1}}{{A}^{-1}}.A=⎣⎢⎡201113−10−1⎦⎥⎤andB=⎣⎢⎡12−1231511⎦⎥⎤.Provethat(AB)−1=B−1A−1. Adjoint of a Matrix. 2 The Adjoint of a Linear Transformation We will now look at the adjoint (in the inner-product sense) for a linear transformation. Here 1l is the n×n identity matrix. If all the elements of a matrix are real, its Hermitian adjoint and transpose are the same. An adjoint matrix is also called an adjugate matrix. ADJ (AT)= ADJ (A) T It is denoted by adj A . When a vector is multiplied by an identity matrix of the same dimension, the product is the vector itself, Inv = v. rref( )A = 1 0 0 0 1 0 0 0 1 LINEAR TRANSFORMATION Hermitian operators have special properties. The Hermitian adjoint of a matrix is the same as its transpose except that along with switching row and column elements you also complex conjugate all the elements. That is, A = At. In order to simplify the matrix operation it also discuss about some properties of operation performed in adjoint matrix of multiplicative and block matrix. To find the Hermitian adjoint, ... Hermitian operators have special properties. ... (3, 2)$, so we can construct the matrix $\mathcal M (T)$ with respect to the basis $\{ (1, 0), (0, 1) \}$ to be: (1) ... We will now look at some basic properties of self-adjoint matrices. In general, you can skip the multiplication sign, so `5x` is equivalent to `5*x`. The inverse of a Matrix A is denoted by A-1. For a matrix A, the adjoint is denoted as adj (A). Finding inverse of matrix using adjoint Let's learn how to find inverse of matrix using adjoint But first, let us define adjoint. Determinant of a Matrix. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share …
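The cofactor construction above is mechanical enough to check numerically. The following is a minimal sketch (assuming NumPy; not taken from any of the sources quoted above) that builds adj(A) from cofactors and verifies both A·adj(A) = |A|I and A⁻¹ = adj(A)/|A| on the worked example.

```python
import numpy as np

def adjoint(A):
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)              # cofactor matrix
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T                                     # adjoint = cofactors transposed

A = np.array([[1.0, 0.0, -1.0],
              [3.0, 4.0,  5.0],
              [0.0, -6.0, -7.0]])
adjA, detA = adjoint(A), np.linalg.det(A)
print(np.allclose(A @ adjA, detA * np.eye(3)))     # True: A adj(A) = |A| I
print(np.allclose(np.linalg.inv(A), adjA / detA))  # True: A^{-1} = adj(A)/|A|
```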
Object segmentation by saliency-seeded and spatial-weighted region merging
Junxia Li1,
Jundi Ding1,
Jian Yang1 &
Lingzheng Dai1
Applied Informatics volume 3, Article number: 9 (2016)
In this paper, we present a region merging-based method for object segmentation in natural images. The method consists of three separate steps: (1) initial over-segmentation such that pixels in each region are as homogeneous as possible and therefore likely to be from the same object; (2) saliency-seeded interaction to provide proper prior input to guide the segmentation; (3) region merging by an introduced maximal spatially weighted similarity (MSWS) criterion. Saliency-seeded interaction can well reflect the human intention but does not require any manual user editing, which makes our method applicable to increasingly large-scale image databases. The MSWS criterion takes into account both the color similarity and spatial distance of the candidate regions for merging, which allows the region merging-based method to achieve better performance. Extensive experiments show that our method can reliably and automatically segment the objects from a great variety of natural images.
Object segmentation is an important task in the field of image processing (Gollmer et al. 2014, Tavakoli and Amini 2013, Seo et al. 2006). In many applications such as object recognition (Russell et al. 2006) and content-aware image resizing (Avidan and Shamir 2007), one of the core issues is to segment the object(s) of interest out from an image. If the object(s) can be correctly segmented, better application performance can be achieved such as higher recognition rate or lower resizing deformation.
However, this segmentation task is in itself a difficult and still open problem. Over the last three decades, a plethora of methods have been proposed: mean shift (Comaniciu and Meer 2002), fuzzy c-means (Cai et al. 2007; Chen and Zhang 2004), normalized cuts (Shi and Malik 2000), the coherence-connected tree algorithm (Ding et al. 2006), etc. But, as reported, they all work well only when the assumption of homogeneity in one or more region attributes holds. In other words, these segmentation methods yield good results when the objects are piece-wise smooth or nearly constant in at least one attribute. In commonly encountered complex natural images, however, they often perform poorly, and the objects tend to be segmented into pieces. In recent years, interactive techniques such as graph cuts (Boykov and Jolly 2001), GrabCut (Rother et al. 2004), and those in Bai and Sapiro (2007), Peng et al. (2011), Xiang et al. (2009) and Li et al. (2004) have received considerable attention. The underlying idea is to utilize some prior user input to guide the segmentation.
Fig. 1 Segmentation results from the method proposed by Ning et al. (2010). 1st row input image (left) and initial mean-shift over-segmentation (right); 2nd row four different interactive inputs of the object (green) and the background (blue); 3rd row the corresponding segmentation results
Fig. 2 Segmentation results based on different merging rules. a Interactive inputs; b color similarity-based results; c MSWS-based results. In a the green lines are the object markers and the blue lines are the background markers
Fig. 3 An overview of the general schematic flowchart of our proposed method. The image shown in the red box is our object segmentation result
Fig. 4 Results of ten popular saliency detection algorithms. From left to right original image, IT, MZ, GB, SR, AC, CA, FT, LC, HC, and RC
Experiments have shown that if proper prior input is provided, most of the existing interactive methods can yield satisfactory results for natural images. But providing the proper input is not straightforward (Yang et al. 2010). Quite often the user, especially a non-expert, has to edit carefully and patiently among all possibly 'desired' locations in the image (Rother et al. 2004; Li et al. 2004). If the user fails to provide effective priors, more interactions are required to correct the segmentation. This is a tedious task, and it is especially difficult when the object and its background have low contrast (Ning et al. 2010), when the object is camouflaged, or when there is clutter in the image (Rother et al. 2004). In such cases, despite the interactive input, the segmentation may not always yield the desired output (see Fig. 1). This can be partly remedied by a second round of editing on the initial segmentation results (Li et al. 2004). Another option is to employ multiple types of prior user input, including object and background strokes, soft boundary brushes or boxes, hard edge scribbles, and any combination of these (Rother et al. 2004).
Although this effort results in improved segmentations, the whole process is tedious and is not at all practical in view of image databases of ever-increasing size (Liu et al. 2011). Manual annotation of these databases is out of the question. This is the main motivation behind our method. Our proposed scheme aims to (i) provide a segmentation method that is effective on a great variety of natural images, where regions are primed by a few background and object seed inputs; and (ii) acquire every seed input automatically, that is, free of any manual user effort.
Photographs of natural scenes reflect real-world variations and are characterized by large ranges of color, texture, shape, or similar attributes. Image objects are not necessarily homogeneous in their attributes, and consequently even the state-of-the-art methods can fail to segment an object in its entirety, and more often the segmentation yields fragmented objects. This gives us the following idea: we can first over-segment an image into regions that are as homogeneous as possible, and then try to merge the object regions that are adjacent and similar to each other. The rationale is that these regions in all likelihood belong to the same object. To this end, we present a merging-based segmentation method in this paper.
Fig. 5 An example of our saliency-seeded interaction. a Input image; b saliency map; c the histogram of the saliency map; d and e the obtained object and background seeds. Green lines denote object seeds and blue lines denote background seeds. The saliency values 39 and 238 in c correspond to P \(_{B}=0.5\) and P \(_{O}=0.05\), respectively
Fig. 6 The explanation of our MSWS criterion. a Regions A−D represent four different homogeneous regions (a and b are object regions, c and d are background regions); b a saliency map helps determine where the object of interest is (object region A denoted 'O,' background region D denoted 'B,' and unlabeled regions denoted 'N' for simplicity); d centers of four regions; e MCS similarity; f spatial distance between two regions; g MSWS similarity; h–j are the corresponding labeled results
Fig. 7 Segmentation results with two different similarity measures. a Saliency-seeded interactions; b MCS-based results; c MSWS-based results
Fig. 8 Accuracy-P\(_{O}\) and Accuracy-P\(_{B}\) curves on the MSRA1000 dataset of different saliency methods
We introduce a novel rule termed 'maximal spatially weighted similarity' (MSWS) to aggregate regions. Specifically, our proposed rule is to merge the regions that not only have the highest similarity in color, but that are also the nearest to each other. That is, the MSWS criterion takes into account both the color similarity and the spatial distance of the candidate regions for merging. Merging methods in the current literature focus on finding neighboring regions whose color similarity is above a threshold (Yang et al. 2010) or is the highest among all neighbors (Ning et al. 2010), without any distance weighting criterion. Disregarding the distance weighting criterion increases the risk that background regions with similar colors will be erroneously merged with object regions (see Fig. 2b).
Furthermore, we adopt an interactive merging strategy as recently proposed in Ning et al. (2010). That is, we first generate image clues to direct the merging, and these clues, in the form of simple strokes, roughly indicate the locations of the object and of the background. However, while in Ning et al. (2010), the object and background seeds are all drawn by the user, in our scheme they are automatically extracted. To generate segmentation priors, we have to take into account the following observations:
From the prior interaction point of view, the locations of pixels which have different attributes but belong to the same object are often good candidates for priors (see the toucan image in Fig. 1). As a case in point, the "toucan" object consists in majority of black pixels, and a minority of orange and white pixels. To be segmented into the same region as the black pixel, the minority pixels have to be marked as prior object seeds.
From the human attention point of view, the locations of pixels which have different attributes but belong to the same object are generally the salient places where human attention is attracted (see Fig. 3 for the toucan again). The orange and white pixels which are highly contrasted to the black pixels have the highest salience, shown as bright regions. At the same time, we can also observe that the pixels with the lowest salience are usually part of background.
From these two observations, we can conclude that the salient parts of an image, which attract more human attention are also likely to be the locations of prior interactions. Inspired by this conclusion, we build a saliency-seeded interactive scheme that can automatically find the good object (i.e., by highest salience) and background (i.e., by lowest salience) seed inputs. A typical result of our automatic interaction for the toucan image is shown in Fig. 3. Clearly, the object marks fall onto a small portion of locations where the pixels are largely orange and white, while the background marks are all located in the background.
A brief overview of our 'saliency-seeded and spatial-weighted' (SSaSW) region merging-based method is illustrated in Fig. 3. It consists of three main stages: (A) initial over-segmentation; (B) saliency-seeded interaction; and (C) MSWS-based region merging. First, we run an image segmentation algorithm to divide the input image into many small homogenous regions. Next, with the aid of a saliency detection method, the prior interactions are determined automatically. Finally, the object is extracted from the background when our MSWS-based merging process ends. Extensive experiments are conducted and results show that our method can reliably segment the objects from a wide variety of natural images.
In summary, the contributions of this paper mainly include the following:
We build a saliency-seeded interaction scheme that can well reflect the human intention but is free of any manual user editing effort. In addition, it is easy and flexible for our interactions embedded into many interactive methods.
We propose a novel rule MSWS to aggregate regions. It takes into account both the color similarity and spatial distance of the candidate regions for merging, which allows the region merging-based method to achieve better performance.
Our merging-based segmentation method
In this section, we will detail three stages of our method.
Initial over-segmentation
There are many low-level homogeneity-based methods which can be used for an initial over-segmentation, such as normalized cuts (Ncuts) (Shi and Malik 2000), k-means (Mignotte 2008), mean shift (Comaniciu and Meer 2002), Otsu's thresholding (Otsu 1979), and watershed (Vincent and Soille 1991). Our required initial segmentation should be such that pixels in each region are as homogeneous as possible, so that (i) they come from the same object and (ii) the object boundary is well preserved. The results produced by the mean-shift algorithm satisfy these two requirements. In contrast, methods like k-means, Ncuts, and Otsu's require a preset threshold on the number of regions, and their computational complexity rapidly increases with this threshold; the results produced by these three methods usually do not preserve the boundary well. Although the results produced by watershed also satisfy the two requirements, watershed tends to yield an excessive number of over-segmented regions, which increases the computational cost. For these reasons, we choose mean shift to produce our required initial over-segmentation. In particular, the EDISON system EDISON Software (http://www.caip.rutgers.edu/riul/research/code.html) implementation of mean shift is used here.
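For readers who want to reproduce this step without the EDISON system, the following is a rough Python sketch of a mean-shift over-segmentation, assuming OpenCV and scikit-image are available; the parameter values and the file name "input.png" are placeholders, and the coarse quantization is only one simple way to turn the filtered image into connected regions.

```python
import cv2
import numpy as np
from skimage.measure import label

def oversegment(bgr, spatial_radius=7, color_radius=10):
    # Mean-shift filtering smooths each homogeneous area toward a common color.
    filtered = cv2.pyrMeanShiftFiltering(bgr, sp=spatial_radius, sr=color_radius)
    # Coarsely quantize the filtered colors and label connected components of
    # equal color as the initial regions.
    q = (filtered // 8).astype(np.int32)
    key = q[..., 0] * 65536 + q[..., 1] * 256 + q[..., 2]
    return label(key, background=-1, connectivity=1)  # integer region map

regions = oversegment(cv2.imread("input.png"))  # "input.png" is a placeholder
print(regions.max(), "initial regions")
```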
Saliency-seeded automatic interaction
Most existing interactive segmentation methods can yield satisfactory results if proper user interaction is provided. However, image databases nowadays are becoming increasingly large, and manual annotation of them is entirely impractical. Thus, finding an automatic way to produce the prior interaction is very important.
Our motivation of automatic interaction
Saliency detection is one recently developed technique for object extraction (Cheng et al. 2011; Achanta et al. 2009). It seeks to identify the highly informative parts of a scene that attract more human attention. In an image, the regions that are strongly contrasted to their surroundings often tend to pop out being salient. To date, there are many popular salience detection methods proposed to identify these regions, such as IT (Itti et al. 1998), MZ (Ma and Zhang 2003), GB (Harel et al. 2007), SR (Hou and Zhang 2007), AC (Achanta et al. 2008), CA (Goferman et al. 2010), FT (Achanta et al. 2009), LC (Zhai and Shah 2006), HC (Cheng et al. 2011), and RC (Cheng et al. 2011). In all of them, the salience values of pixels are represented in gray and normalized to the range [0, 1]. The brighter a pixel is, the higher its salience value is. From their typical results shown in Fig. 4, we can observe that pixels with the higher salience (shown as brighter pixels) are near high-contrast positions (e.g., object boundaries), or within some high-contrast regions (e.g., a textured region). On the other hand, they are all related to the object of interest in one image. On the contrary, pixels in the background tend to have the lower salience, shown in black. Interactive methods such as GrabCut (Rother et al. 2004), graph cuts (Boykov and Jolly 2001), or MSRM (maximal similarity-based region merging) (Ning et al. 2010) yield good results when the locations of pixels with higher salience are marked as prior inputs. That is, the high-contrast positions or regions are always good candidate places for prior user interaction. Inspired by these, we will build a saliency-seeded automatic interaction scheme in the following:
Our way of automatic interaction
In particular, we intend to mark the pixels with the highest salience as 'object' (denoted 'O'), and the pixels with the lowest salience as 'background' (denoted 'B'). That is, we pick the pixels with salience above a threshold \(T_O\) as the prior object seeds, and the pixels with salience below a threshold \(T_B\) as the prior background seeds (\(T_O> T_B\)):
$$\begin{aligned} O=\{(x,y)\mid s(x,y)\ge {T}_{O}\} \end{aligned}$$
$$\begin{aligned} B=\{(x,y)\mid s(x,y)\le {T}_{B}\}, \end{aligned}$$
where s(x, y) is the salience value of the pixel (x, y). However, it is difficult to find a general-purpose value for such two thresholds. The objects and backgrounds in different images tend to have different salience values.
Thus we turn to specify other two alternative thresholds \(P_{O}\), \(P_{B}\) that represent the amount of prior object and background seeds in an image I:
$$\begin{aligned} {\rm Pr}(O)={\rm Pr}(s(x,y)\ge {T}_{O})={P}_{O} \end{aligned}$$
$$\begin{aligned} {\rm Pr}(B)={\rm Pr}(s(x,y)\le {T}_{B})= {P}_{B}, \end{aligned}$$
where \({\rm Pr}(\cdot )\) is a probability function and defined as
$$\begin{aligned} {\rm Pr}(O)=\frac{|O|}{|I|};\quad {\rm Pr}(B)=\frac{|B|}{|I|}. \end{aligned}$$
\(|\cdot |\) denotes the number of elements in a set. We observe that in each salience map, the proportion of pixels with the highest salience is about 2–5%, and the proportion of pixels with the lowest salience (shown in black) is near \(50\%\) (see Fig. 5c). We therefore select a value for \(P_{O}\) in the range [0.02, 0.05] and set \(P_{B}\) to 0.5. As shown in Fig. 5d, the object and background seed inputs are well determined.
However, with this approach, there are still too many marked inputs, especially in the background. To shrink them, we apply the morphological 'thin' operation to the marked object and background seeds, respectively. We use the function 'bwmorph' from the MATLAB R2010b function library in the form bwmorph(BW, operation, n), which applies a specific morphological operation to the binary image 'BW' n times. Specifically, we apply the operation 'thin' repeatedly until the image no longer changes, i.e., operation = 'thin' and n = inf. As a result, the 'thin' operation removes pixels so that seed regions without holes shrink to a minimally connected stroke, and regions with holes shrink to a ring halfway between the hole and the outer boundary (see Fig. 5e).
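A minimal sketch of this seed-selection step, assuming NumPy and scikit-image (whose morphology.thin plays the role of MATLAB's bwmorph 'thin' with n = inf) and a saliency map sal normalized to [0, 1]:

```python
import numpy as np
from skimage.morphology import thin

def saliency_seeds(sal, P_O=0.05, P_B=0.5):
    # Choose T_O and T_B so that Pr(O) = P_O and Pr(B) = P_B.
    T_O = np.quantile(sal, 1.0 - P_O)   # top P_O fraction -> object seeds
    T_B = np.quantile(sal, P_B)         # bottom P_B fraction -> background seeds
    # Thin each mask until it no longer changes, leaving stroke-like seeds.
    obj = thin(sal >= T_O)
    bkg = thin(sal <= T_B)
    return obj, bkg                     # boolean masks of seed pixels
```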
Fig. 9 Object segmentation of SSaSW based on different saliency maps. From left to right initial image, IT-seeded, MZ-seeded, GB-seeded, SR-seeded, AC-seeded, CA-seeded, FT-seeded, LC-seeded, HC-seeded, RC-seeded, and Ground truth
Fig. 10 Similarity measure comparison between MCS and MSWS. a Input images; b initial mean shift segmentation and the input markers; c and d segmentation results based on MCS and MSWS, respectively
Fig. 11 Segmentation results by RCC and SSaSW. a and d are original images. b and e show the segmentation results from RCC, and c and f are the segmentation results from SSaSW
Fig. 12 Segmentation results with different parameters. a Initial image and saliency map; b saliency-seeded interactions and segmentation results with \(P_{O}=0.05\), \(P_{B}=0.5\); c \(P_{O}=0.02\), \(P_{B}=0.5\)
Thus, only a small portion of the image pixels are marked as prior interaction inputs, and they reflect human attention well. More importantly, they are all obtained free of any manual user effort and adapt to the image content.
MSWS-based region merging
After the above interaction input, some over-segmented regions will contain both object seeds and background seeds. Before the merging step, we first label each region containing more prior object (or background) seeds as an object (or background) marker region, and label the regions with no prior seed input as non-marker regions. The merging aim of MSWS is to assign each non-marker region the correct label 'O' or 'B.' The whole merging process contains two stages, which are executed repeatedly until no new merging occurs: (i) merging non-marker regions with background marker regions: for each background marker region, if a non-marker region satisfies the MSWS criterion with it, the two regions are merged and the new region is labeled 'B'; (ii) merging the non-marker regions remaining from the first stage adaptively: for each non-marker region, if another non-marker region satisfies the MSWS criterion with it, the two non-marker regions are merged and form a new non-marker region. A schematic sketch of this loop is given below. In what follows, we then give a brief review of the principle of maximal color similarity (MCS) in MSRM and, based on it, provide our insight into why the spatial distance between regions is also important for the merging.
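Before reviewing MCS, the two-stage loop can be sketched as follows. This is a schematic Python sketch rather than the authors' code: labels maps each region to 'O', 'B', or 'N', adjacency maps each region to its set of neighbors, and msws(r, q) is assumed to implement the spatially weighted similarity defined later in this section (with region statistics updated inside merge as needed).

```python
def msws_merge(labels, adjacency, msws):
    """labels: dict region -> 'O' | 'B' | 'N'; adjacency: dict region ->
    set of adjacent regions; msws(r, q): spatially weighted similarity."""
    def merge(keep, drop):
        # Fold region `drop` into `keep`, rewiring adjacency both ways.
        for r in adjacency.pop(drop):
            adjacency[r].discard(drop)
            if r != keep:
                adjacency[r].add(keep)
                adjacency[keep].add(r)
        labels.pop(drop)

    changed = True
    while changed:
        changed = False
        # Stage 1: a non-marker region q merges into a background marker
        # region r when r is q's most similar neighbor (the MSWS criterion).
        for r in [k for k, v in list(labels.items()) if v == 'B']:
            if r not in labels:
                continue
            for q in [q for q in adjacency[r] if labels.get(q) == 'N']:
                if max(adjacency[q], key=lambda p: msws(q, p)) == r:
                    merge(r, q)
                    changed = True
        # Stage 2: remaining non-marker regions merge among themselves.
        for r in [k for k, v in list(labels.items()) if v == 'N']:
            if r not in labels or not adjacency[r]:
                continue
            best = max(adjacency[r], key=lambda p: msws(r, p))
            if labels.get(best) == 'N':
                merge(best, r)
                changed = True
    return labels  # regions still labeled 'N' at convergence join the object
```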
Overview of MCS
Color is a simple and effective low-level attribute that is commonly used for image segmentation. The idea is that regions from the same object are more similar in color than regions from different objects. Specifically, MCS is a very useful merging principle described in MSRM. It merges two neighboring regions that have the maximal similarity in color. That is, for one region R, let Q denote an adjacent region of R (i.e., a region with at least one pixel in common with R), if
$$\begin{aligned} \rho _{c} (R,Q^{*})=\max \limits _{Q\in N(R)}\rho _{c}(R,Q) \end{aligned}$$
\(Q^{*}\) is called the most similar region to R and is merged with R, where \(\rho _{c}(R,Q)\) denotes the color similarity between R and Q, and N(R) is the set of R's all adjacent regions. By this "max" operator, the merging process avoids a preset similarity threshold. However, the "max" operator may be somewhat sensitive to noise. To avoid this issue, MSRM uses an RGB histogram to represent each region. In the RGB space, each channel is uniformly quantized into 16 levels, and then a color space of \(16\times 16\times 16=4096\) bins is used to calculate the histogram of each region. MSRM computes the color similarity of regions as the Bhattacharyya coefficient between two histograms:
$$\begin{aligned} \rho _{c} (R,Q)=\sum _{u=1}^{4096}\sqrt{{\rm Hist}_R^u\cdot {\rm Hist}_Q^u}, \end{aligned}$$
where \({\rm Hist}_R\) and \({\rm Hist}_Q\) denote the normalized color histograms of R and Q respectively, and the superscript u represents the uth bin.
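As a concrete sketch of this color term (assuming NumPy), each region's pixels are binned into the 16×16×16 RGB histogram and two histograms are compared with the Bhattacharyya coefficient:

```python
import numpy as np

def region_histogram(rgb_pixels):
    # rgb_pixels: (n, 3) uint8 array holding one region's pixels.
    bins = (rgb_pixels // 16).astype(np.int32)     # 16 levels per channel
    idx = bins[:, 0] * 256 + bins[:, 1] * 16 + bins[:, 2]
    hist = np.bincount(idx, minlength=4096).astype(float)
    return hist / hist.sum()                       # normalized 4096-bin histogram

def color_similarity(hist_r, hist_q):
    return np.sum(np.sqrt(hist_r * hist_q))        # Bhattacharyya coefficient
```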
Our MSWS criterion
It is worthwhile to note that in MCS all neighboring regions are treated equally in the merging, and only color information is used to judge the similarity between regions. This has some limitations: the approach may fail when low-contrast edges or shadows occur, and it may also fail when part of the object region is slightly more similar in color to an adjacent background region than to the adjacent object regions, or vice versa.
Fig. 13 F-measure evaluations
Fig. 14 Result comparisons. From left to right graph cuts, GrabCut, and SSaSW. In the first row, the green and blue strokes are the corresponding object and background seeds in graph cuts. The red rectangle around the desired object is the interaction in GrabCut
Fig. 15 Segmentation results on some shadow images and medical vascular images
Fig. 16 Failure cases of SSaSW. 1st row initial images; 2nd row initial mean shift segmentations; 3rd row saliency maps; 4th row saliency-seeded interactions; 5th row segmentation results by our SSaSW; 6th row corresponding human segmentations
We take the yellow flower shown in Fig. 2 as an example. The flower consists of two parts: petals and stamen. Although both parts are yellow, the stamen is slightly darker than the surrounding petals. In Fig. 2b (first row), only parts of the petals are assigned to the object, and a small portion of the background is present in the segmented object. In Fig. 2b (second row), it can be seen that the prior interactions are well designed, but the segmentation problem remains. The object cannot be reliably extracted from the background by either of these two interaction inputs. This example illustrates that even if the prior interactions are well designed, a satisfactory result cannot be obtained for this image. This is mainly because the object of interest is not piece-wise smooth or nearly constant in color, and the contrast between the object and background is low. These problems are relatively common in natural images. Therefore, using only color information cannot ensure good segmentation performance for such natural images.
To solve this problem, we propose a novel rule termed maximal spatially weighted similarity (MSWS) to merge regions. It takes into account both the color similarity and the spatial distance of the candidate regions for merging. The implied idea is that regions of the same object are spatially adjacent and their colors are similar enough to each other. That is, one aims to merge the regions that not only have the highest similarity in color, but that also are the nearest to each other. Specifically, for two regions R and Q, we first define the spatial distance as
$$\begin{aligned} \rho _{s} (R,Q)={\Vert center_{R}-center_{Q}\Vert }_{2} \end{aligned}$$
where \(center_{R}\) and \(center_{Q}\) are the center pixel coordinates of the regions R and Q, respectively, and \({\Vert \cdot \Vert }_2\) denotes the Euclidean distance. The lower \(\rho _{s} (R,Q)\) is for a pair of regions, the higher the spatial similarity between them. Directly integrating spatial distance into the color similarity computation, the MSWS is defined as
$$\begin{aligned} \rho (R,Q)={\text{ exp }(-{\rho _{s} (R,Q)}/{\sigma ^{2}})}\cdot {\rho _{c} (R,Q)}, \end{aligned}$$
where \(\sigma\) controls the effect of spatial distance in the maximal spatially weighted similarity measure. In our experiments, we use \(\sigma ^{2}=1\) empirically. Note that, although we choose the RGB color space and Bhattacharyya coefficient to compute the color similarity as in Ning et al. (2010), other color spaces (e.g., HSI) and distance metrics (e.g., Euclidean distance) can also be used here.
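Putting the two terms together, the MSWS measure is a one-line weighting of the Bhattacharyya coefficient by the exponentiated centroid distance. A minimal sketch, reusing the histogram representation from above and assuming per-region centroid coordinates:

```python
import numpy as np

def msws(hist_r, center_r, hist_q, center_q, sigma2=1.0):
    rho_c = np.sum(np.sqrt(hist_r * hist_q))      # color similarity (Bhattacharyya)
    rho_s = np.linalg.norm(np.asarray(center_r, float) -
                           np.asarray(center_q, float))  # centroid distance
    return np.exp(-rho_s / sigma2) * rho_c        # spatially weighted similarity
```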
We use Fig. 6 as a toy example to explain the rationale behind our MSWS criterion. Figure 6a is the initial over-segmentation result. It contains four different homogeneous regions, denoted by A, B, C, and D (Fig. 6c). We assume that A and C are object regions, and B and D are background regions. In the MCS-based labeled result (Fig. 6h), region B and region C are labeled 'O.' The labeling result which uses only spatial information is shown in Fig. 6j; in this case, region B and region C are labeled 'B.' However, as shown in Fig. 6i, the corresponding MSWS-based result is consistent with the benchmark (Fig. 6a). This shows that our criterion can improve the performance of region merging-based methods by considering the color similarity and spatial distance of the candidate regions jointly.
Figure 7 shows the segmentation results based on MCS and MSWS criterion. In the MCS-based results, the objects of interest cannot be segmented accurately. In the person image, parts of the object are merged into the background; in the flower image, a small portion of the background regions are erroneously integrated into the object (see Fig. 7b). Figure 7c shows the segmentation results by our proposed method. Clearly, it can effectively and accurately extract the objects from their backgrounds.
Experiments and comparisons
Experiment setting
In this section, we evaluate the performance of our proposed algorithm from multiple perspectives. These extensive experiments are conducted on two public image databases. The first one is the Berkeley Segmentation Database denoted as BSDS300 (Martin et al. 2001). It is an information-rich dataset which contains 300 images along with the ground-truth segmentations. These images are of complex, natural scenes, and have five to ten human hand-labeled segmentations on each one of them. The second database MSRA1000 is provided by Achanta et al. (2009). It consists of 1000 images with obvious salient objects and clean backgrounds with a manually generated segmentation result for each image.
Parameters setting
P\(_{O}\) and P\(_{B}\) are two important parameters of our method for obtaining the object and background seeds. In order to determine the P\(_{O}\) and P\(_{B}\) values, we conducted an elaborate analysis on the MSRA1000 dataset.
We analyzed this problem mainly from two aspects:
From the aspect of the accuracy that the saliency detection algorithm brings to our prior interactions, we conducted extensive statistical experiments over ten saliency detection methods with different thresholds P\(_{O}\) and P\(_{B}\). For each saliency method, we compute the average accuracy-P \(_{O}\) curve and accuracy-P \(_{B}\) curve on the MSRA1000 dataset, and present all the curves in Fig. 8. From Fig. 8, we can see that when P \(_{O}\le 5 \%\), the accuracy is above \(50\%\) for all ten methods (SR has the worst performance when P \(_{O} = 5 \%\)), and when P \(_{O}> 5 \%\), the accuracy decreases gradually. In our view, an accuracy below \(50\%\) is not acceptable, so we choose \(5\%\) as the maximum value of P \(_{O}\). From Fig. 8, it can also be seen that when P \(_{B}\) is near \(50\%\), the accuracy is higher than \(95\%\) for most of the saliency models.
From the aspect of the foreground object size, we computed the proportion of the foreground object in the whole image for the entire MSRA1000 dataset. Among the 1000 images, there are only 21 in which this proportion is less than \(5\%\), and among those 21 images only two have a proportion below \(2\%\). So we choose \(2\%\) as the minimum value of P\(_{O}\). Besides, there are only several images whose background proportion is less than \(50\%\).
Taking these two aspects into consideration, and for fair comparison with other methods, we select a value for P\(_{O}\) in the range [0.02, 0.05] and set P\(_{B}\) to 0.5 in this paper.
Qualitative result comparisons
The two main stages of our proposed method are the saliency-seeded automatic interaction and the MSWS-based region merging. In order to verify their effectiveness, we conduct extensive experiments on the two test datasets.
Results based on different saliency detection methods
Figure 9 illustrates the corresponding segmentation results of SSaSW based on different saliency detection methods IT (Itti et al. 1998), MZ (Ma and Zhang 2003), GB (Harel et al. 2007), SR (Hou and Zhang 2007), AC (Achanta et al. 2008), CA (Goferman et al. 2010), FT (Achanta et al. 2009), LC (Zhai and Shah 2006), HC (Cheng et al. 2011), and RC (Cheng et al. 2011). These images are from the MSRA1000 database. We can clearly see that SSaSW yields satisfactory segmentation results from most of these methods, except for AC and SR. Therefore, most saliency detection methods except for AC and SR can provide the proper automatic interactions for SSaSW. In the following experiments, the RC saliency map is used to automatically determine prior interactions.
Effectiveness analysis of our MSWS
We compare the performance of our MSWS criterion with that of the MCS criterion. Note that MCS can be seamlessly embedded into our framework. All experiments are conducted on the BSDS300 database. Figure 10 shows the segmentation results of the MCS- and MSWS-based region merging methods. In these images, some objects contain low-contrast edges, or parts of the background are very similar in color to the adjacent object regions. It is difficult to achieve satisfying results in these cases with MCS. However, given the same marking, MSWS achieves much better results than MCS.
Quantitative result evaluations
Evaluations on the \(\mathbf BSDS300\) database
Until now, the effectiveness of MSWS has been evaluated visually. However, visual observation is subjective. In order to demonstrate the performance objectively, it is necessary to provide some performance measures for quantitative evaluation. We make use of the following performance measures: a probabilistic measure, PRI (Unnikrishnan et al. 2007), and two metrics, VoI (Meila 2005) and GCE (Martin et al. 2001), to demonstrate the effectiveness of our proposed MSWS. The three performance measures adopted here are described below:
Probabilistic Rand Index (PRI) (higher probability is better): The Rand index proposed in Unnikrishnan et al. (2005) calculates the fraction of pairs of pixels whose labels are consistent between the test segmentation S and the ground-truth segmentation G. PRI proposed in Unnikrishnan et al. (2007) is a simple extension of the Rand index. It allows the comparison of a segmentation algorithm to a set of ground-truth segmentations by averaging the results. Given a set of ground-truth segmentations \({\{G_k\}}\), the PRI is defined as
$$\begin{aligned} \text{ PRI }(S,\{G_k\})=\frac{1}{K}\sum \limits _{i<j}[c_{ij}p_{ij}+(1-c_{ij})(1-p_{ij})], \end{aligned}$$
where \(c_{ij}\) indicates that pixels i and j have the same label, \(p_{ij}\) denotes the probability of this event, and K is the number of ground-truth segmentations for an image. Thus, PRI is based on pair-wise relationships and is highly correlated with human hand-labeled segmentation results.
Variation of Information (VoI) (lower distance is better): In contrast to PRI, VoI (Meila 2005) is based on the relationship between a pixel and its own cluster. It views a clustering as an element of a lattice. As a metric, VoI uses conditional entropies to approximate the distance between two clusters, and is defined as
$$\begin{aligned} \text{ VoI }(R_1,R_2)=H(R_1)+H(R_2)-2I(R_1,R_2), \end{aligned}$$
where H and I represent, respectively, the entropies of and the mutual information between the two clusterings \(R_1\) and \(R_2\). It is a form of 'external evaluation,' and measures the amount of information that is lost or gained in changing from one clustering to another.
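A minimal NumPy sketch of VoI from two non-negative integer label maps, using the joint label histogram to estimate the entropies and the mutual information:

```python
import numpy as np

def variation_of_information(seg1, seg2):
    # Joint histogram of the two label maps (labels are non-negative ints).
    joint = np.zeros((seg1.max() + 1, seg2.max() + 1))
    np.add.at(joint, (seg1.ravel(), seg2.ravel()), 1)
    joint /= joint.sum()
    p1, p2 = joint.sum(axis=1), joint.sum(axis=0)

    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    I = H(p1) + H(p2) - H(joint.ravel())   # mutual information
    return H(p1) + H(p2) - 2 * I           # VoI = H(R1) + H(R2) - 2I(R1, R2)
```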
Global Consistency Error (GCE) (lower distance is better): A supervised evaluation method, GCE, was introduced by Martin et al. (2001) to quantify the consistency between segmentations. Let R(S, p) be the set of pixels which are in the same region as the pixel p in segmentation S, let \(|\cdot |\) denote the cardinality of a set, and let \(\cdot \setminus \cdot\) denote set difference. The local refinement error is
$$\begin{aligned} E (S_1, S_2, p) = \frac {| R\,(S_1, p) \setminus R\, (S_2, p)|}{|R\, (S_1, p)|}. \end{aligned}$$
Then the GCE is defined as
$$\begin{aligned} \text{ GCE }(S_1, S_2)=\frac{1}{n}\min \left\{ \sum \limits _{i}E(S_1, S_2, p_i),\; \sum \limits _{i}E(S_2, S_1, p_i) \right\}. \end{aligned}$$
Let n be the size of the image. Note that GCE forces all local refinements to be in the same direction, and it does not penalize over-segmentation.
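The minimum-of-sums form lends itself to a vectorized implementation. A minimal NumPy sketch, again over two non-negative integer label maps, accumulates the per-pixel refinement errors through the contingency table:

```python
import numpy as np

def gce(seg1, seg2):
    n = seg1.size
    joint = np.zeros((seg1.max() + 1, seg2.max() + 1))
    np.add.at(joint, (seg1.ravel(), seg2.ravel()), 1)
    r1 = joint.sum(axis=1, keepdims=True)   # region sizes in seg1
    r2 = joint.sum(axis=0, keepdims=True)   # region sizes in seg2
    # Summed local refinement errors: a pixel with labels (i, j) contributes
    # (r1_i - n_ij) / r1_i, and there are n_ij such pixels.
    e12 = np.sum(joint * (r1 - joint) / np.maximum(r1, 1))
    e21 = np.sum(joint * (r2 - joint) / np.maximum(r2, 1))
    return min(e12, e21) / n
```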
Table 1 compares model performance on the images presented in Fig. 10 using the PRI, VoI, and GCE metrics, where 'NO.' denotes the ID number of the images. The values of PRI, VoI, and GCE are given comparatively in the two columns. Obviously, MSWS outperforms MCS on all the indices. The average PRI value of MSWS over the 300 images of the BSDS300 dataset is 0.5551, which is higher than MCS of 0.5476. The average GCE and VoI values of MSWS on this database are 0.0561 and 2.0146, which are lower than the MCS averages of 0.0646 and 2.0519.
Table 1 Qualitative comparison of the results of our method based on MSWS and MCS on the ten images presented in Fig. 10
Evaluations on the MSRA1000 database
In order to demonstrate the effectiveness of our method, we run it based on six recently proposed saliency detection methods, LR (Shen and Wu 2012), SF (Perazzi et al. 2012), HS (Yan et al. 2013), MR (Yang et al. 2013), DS (Li et al. 2013), and AMC (Jiang et al. 2013), on the MSRA1000 database, and then compare our object segmentation results with their adaptive-thresholding segmentation results. The adaptive threshold, proposed by Achanta et al. (2009), is image-saliency dependent. Note that in the adaptive-thresholding segmentation, each saliency map is first over-segmented by mean shift. An average saliency is then calculated for each segment, and an overall mean saliency value over the entire image is obtained as well. If the saliency of a segment is larger than twice the overall mean saliency value, the segment is marked as foreground; otherwise it is marked as background. In this way, a binary segmentation map is produced.
F-measure is used to assess the consistency of each segmentation result with the ground truth, and is defined as
$$\begin{aligned} {\text F}{\text {-measure}} = \frac{(1+\beta ^{2})\times {\rm Precision}\times {\rm Recall}}{\beta ^{2}\times {\rm Precision} + {\rm Recall}}. \end{aligned}$$
We use \(\beta ^{2}=0.3\) in our method to weigh Precision more than Recall. Table 2 shows the F-measure scores of our SSaSW and the adaptive-thresholding segmentation. From the results, we can see that our method consistently performs better than the adaptive-thresholding segmentation. These comparison results also nicely demonstrate the effectiveness of our proposed saliency-seeded interaction strategy and maximal spatially weighted similarity criterion.
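A minimal sketch of this score for two binary masks pred and gt of the same shape, with the small guard terms added only to avoid division by zero:

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3):
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    return (1 + beta2) * precision * recall / denom if denom > 0 else 0.0
```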
Table 2 F-measure evaluations with different saliency methods on the MSRA1000 database
Furthermore, we compare the segmentation results of SSaSW and RCC (Cheng et al. 2011) with the human segmentation result for each image. RCC is an RC-based cut algorithm. It employs the RC saliency map to initialize the process of GrabCut instead of using human input. Figure 11 compares the segmentation results of SSaSW and RCC. From Fig. 11c and f, we can see that each object of interest is effectively extracted from the background by SSaSW, while RCC has difficulty handling images with cluttered and highly textured objects or backgrounds (see Fig. 11b, e). Table 3 presents the F-measure scores on the test images and shows that our results are very consistent with the ground truth. The averaged F-measure score of our SSaSW is 0.8749 on MSRA1000 database. These experiments are conducted using the parameters \(P_{O}=0.05\), \(P_{B}=0.5\), and \(\sigma ^{2}=1\) throughout.
Table 3 Precision (P), Recall (R), and F-measure values for test images
\(P_{O}\) and \(P_{B}\) are two important parameters of our method for obtaining the object and background seeds. In our experiments, a good result can generally be found with \(P_{O}\) in the range [0.02, 0.05] and \(P_{B}=0.5\). In some cases, SSaSW can obtain better results by adjusting the parameters \(P_{O}\) and \(P_{B}\). Such a case is shown in Fig. 12: with the default parameter (\(P_{O}=0.05\)), the background regions circled in red are merged into the object (see Fig. 12b), since several pixels in the background have high salience (see Fig. 12a) and the corresponding regions are erroneously assigned to the object marker regions. In Fig. 12c, SSaSW produces a relatively accurate result with \(P_{O}=0.02\).
To compare fairly with other methods, we further introduce an effective scheme. For each image, with different \(P_{O}\) values \(P_{O_i}\) (\(i=1, 2, \ldots , k\)), we obtain the corresponding segmentation results \(Z_{P_{O_i}}\). The average map \(\bar{Z}\) is then calculated for each pixel p as
$$\begin{aligned} \bar{Z}(p)=\frac{1}{k}\sum _{i=1}^{k}Z_{P_{O_i}}(p). \end{aligned}$$
Finally, the object segmentation result M can be obtained as (\(\bar{Z}\) is normalized to [0, 1])
$$\begin{aligned} M(p)=\left\{ \begin{array}{ll} 1,\quad &{}\hbox { if } \bar{Z}(p)\ge 0.5; \\ 0,\quad &{}\hbox {else.} \end{array} \right. \end{aligned}$$
In this result, \(M(p)=1\) indicates pixel p belonging to foreground object, and \(M(p)=0\) indicates pixel p belonging to background.
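A minimal sketch of this averaging scheme, where segment(image, P_O) is a hypothetical handle to the full SSaSW pipeline returning a binary mask:

```python
import numpy as np

def averaged_segmentation(image, segment, P_O_values=(0.02, 0.03, 0.04, 0.05)):
    # Run the pipeline once per P_O value and average the binary results.
    Z = [segment(image, p).astype(float) for p in P_O_values]
    Z_bar = np.mean(Z, axis=0)            # average map, already in [0, 1]
    return (Z_bar >= 0.5).astype(np.uint8)
```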
In the experiments, specifically, we vary \(P_{O}\) from 0.02 to 0.05 in steps of 0.01, obtaining four values \(P_{O_1}=0.02\), \(P_{O_2}=0.03\), \(P_{O_3}=0.04\), \(P_{O_4}=0.05\). In this way, all the results are obtained with a unified parameter setting. The F-measure obtained by the proposed strategy is 0.91, which is higher than the 0.90 obtained by RCC. Figure 13 shows the F-measure evaluations of SSaSW, RCC, and SSRMf (Li et al. 2011). SSRMf is also a saliency-based object segmentation method. Clearly, our SSaSW has the highest F-measure score. This confirms the effectiveness of our SSaSW.
In order to further illustrate the significance of the above comparisons, we give the results of statistical t tests. The corresponding p values are reported in Table 4. As expected, the p values are all below 0.05, indicating that our proposed method indeed outperforms RCC and SSRMf.
Table 4 p values of the statistical t tests for evaluations
Comparisons with graph cuts and grabcut
In this section, we compare our method with two interactive segmentation algorithms: graph cuts (Boykov and Jolly 2001) and GrabCut (Rother et al. 2004). Object segmentation is regarded as a minimal graph cut problem in these two methods. For a fair comparison with our region-based algorithm, we extend the classical pixel-based graph cuts and GrabCut segmentation methods to a region-based scheme: here, we take the regions segmented by mean shift as the nodes in the graph instead of the pixels. Both graph cuts (Boykov and Jolly 2001) and GrabCut (Rother et al. 2004) require some regions labeled as priors, i.e., seeds. In graph cuts, the user is required to mark a few strokes as object and background interactions, and in GrabCut the interaction is a rectangle around the desired object. In Fig. 14, the prior interactions for graph cuts and GrabCut appear to be well designed. Despite this, our method achieves segmentation performance comparable to these interactive object segmentation methods.
Results on domain specific images
In order to demonstrate the effectiveness of our proposed method more widely, in this subsection we conduct some experiments on domain-specific images, e.g., shadow images and medical images (here we use two vascular images). Figure 15 shows the segmentation results. From these results, we can see that our method works well on these specific images.
On the extension to more features
Our method can benefit from the integration of more feature information. Specifically, in this subsection, we add texture information into our model [three textural features, coarseness, contrast, and directionality (Tamura et al. 1978), are used to extract the texture information, as done in Dogra et al. (2012)]. That is, we use color similarity, spatial proximity, and texture similarity together to define our similarity measure. Table 5 shows the comparison results. It can be seen that our method yields better results when texture information is integrated.
Table 5 Average F-measure values on the MSRA1000 dataset based on our MSWS and MSWS with texture
Computational complexity of SSaSW
For a clear qualitative analysis of the proposed method, we discuss its computational complexity and compare it to that of RCC and SSRMf. The running time of our method mainly depends on two parts, the region merging process and the similarity measure. For the region merging process, the time complexity is \(O(N^2)\), where N is the number of regions after initial segmentation. The time complexity of the similarity measure is \(O(M_{k})\), where \(M_{k}\) is the number of pixels in the k-th region. So, the worst-case running time complexity of our SSaSW is \(O(N^2+MN)\), where \(M=\max _{k=1,\ldots ,N}{\{M_{k}\}}\). The running time complexity of SSRMf is approximately equal to that of SSaSW. The RCC method iteratively applies GrabCut (Rother et al. 2004) to refine the segmentation result, and the most time-consuming step is this GrabCut iteration. Thus, the time complexity of RCC is \(O(mn^2|C|)\), where n is the number of nodes, m is the number of edges, and |C| is the cost of the minimum cut in the graph. Clearly \(n \gg N\), since n is the total number of pixels in an image while N is the number of regions after over-segmentation. Table 6 shows the average time taken by RCC, SSRMf, and SSaSW on the MSRA1000 database. SSaSW and SSRMf are implemented in Matlab; for RCC, we use the authors' implementation in C++. Although SSaSW takes longer to run, it has a lower time complexity than RCC (and approximately the same as SSRMf). The difference in computation time is mainly due to the different execution environments.
Table 6 Average time required for object segmentation for images in the MSRA1000 database
Failure of SSaSW
Up until now, we have evaluated the effectiveness of SSaSW on a variety of images. However, it may fail in the cases summarized in Fig. 16. In the pencil image, the failure arises from a wrongly connected over-segmentation among the pencil regions: if there were no connection between the hole (from the blue sky) and the pencil regions, our region merging rule would not merge them into one region, even though they have a blue color similar to the nearby pencils. In the bottle image, the result would be better if the saliency-seeded interactions (i.e., the high-level semantics) were all accurate, e.g., if the bottle neck were not indicated as background. The third case is due to human ambiguity (i.e., subjective labeling): the pixels with the highest saliency values all come from the 'hand' and are therefore indicated as foreground interactions, whereas in the dataset the iron handle is the benchmarked foreground object.
This paper proposes a fully automatic framework of saliency-seeded and spatial-weighted region merging for natural object segmentation. With the aid of a saliency detection method, the proper prior inputs for the object of interest and the background region can be obtained automatically. This labeling reflects human intention without requiring any manual user editing effort. In addition, we present an effective maximal spatially weighted similarity criterion for region merging. It merges the regions that have the highest similarity in color and are also the nearest to each other. By incorporating both the color similarity and the spatial distance of the candidate regions for merging, the region merging-based method can achieve better performance. For a wide range of natural images, the salient objects can be reliably segmented from their complex backgrounds. SSaSW involves no user input and is a fully automatic segmentation framework. Experimental results show that our proposed scheme is comparable to current state-of-the-art automatic segmentation techniques and outperforms the conventional interactive methods. Our future work will focus on how to overcome the failure of SSaSW in some difficult situations and how to improve its speed.
MSWS:
maximal spatially weighted similarity
SSaSW:
saliency-seeded and spatial-weighted
MCS:
maximal color similarity
PRI:
probabilistic rand index
VoI:
variation of information
GCE:
global consistency error
Achanta R, Estrada F, Wils P, Susstrunk S (2008) Salient region detection and segmentation. In: IEEE international conference on computer vision systems. IEEE, New Jersey
Achanta R, Hemami S, Estrada F, Susstrunk S (2009) Frequency-tuned salient region detection. In: IEEE International conference on computer vision and pattern recognition. IEEE, New Jersey
Avidan S, Shamir A (2007) Seam carving for content-aware image resizing. ACM Trans Graphics 26:236–246
Bai X, Sapiro G (2007) A geodesic framework for fast interactive image and video segmentation and matting. In: IEEE international conference on computer vision, pp 1–8
Boykov VV, Jolly MP (2001) Interactive graph cuts for optimal boundary and region segmentation of objects in n-d images. IEEE Trans Pattern Anal Mach Intell 1:105–112
Cai W, Chen S, Zhang D (2007) Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation. Pattern Recognit 40:825–838
Chen S, Zhang D (2004) Robust image segmentation using fcm with spatial constraints based on new kernel-induced distance measure. IEEE Trans Syst Man Cybern 34:1907–1916
Cheng MM, Mitra NJ, Huang X, Torr PH, Hu SM (2011) Global contrast based salient region detection. IEEE Trans Pattern Anal Mach Intell 37:409–416
Comaniciu D, Meer P (2002) Mean shift: a robust approach toward feature space analysis. IEEE Trans Pattern Anal Mach Intell 24(5):603–619
Ding J, Chen S, Ma R, Wang B (2006) A fast directed tree based neighborhood clustering for image segmentation. In: International conference on neural information processing. Springer, Berlin, pp 369–378
Dogra DP, Majumdar AK, Sural S, Mukherjee J, Mukherjee S, Singh A (2012) Analysis of adductors angle measurement in hammersmith infant neurological examinations using mean shift segmentation and feature point based object tracking. Comput Biol Med 42:925–934
EDISON Software. http://www.caip.rutgers.edu/riul/research/code.html. Accessed 17 June 2013
Goferman S, Zelnik-Manor L, Tal A (2010) Context-aware saliency detection. IEEE Trans Conf Comp Vis Pattern Recogn 34:2376–2383
Gollmer ST, Kirschner M, Buzug TM, Wesarg S (2014) Using image segmentation for evaluating 3D statistical shape models built with groupwise correspondence optimization. Comp Vis Image Underst 125:283–303
Harel J, Koch C, Perona P (2007) Graph-based visual saliency. In: Advances in Neural Information Processing Systems. MIT Press, Cambridge, pp 545–552
Hou X, Zhang L (2007) Saliency detection: a spectral residual approach. In: IEEE international conference on computer vision and pattern recognition. IEEE, New Jersey, pp 1–8
Itti L, Kouch C, Niebur E (1998) A model of saliency-based visual attention for rapid scene analysis. IEEE Trans Pattern Anal Mach Intell 20:1254–1259
Jiang B, Zhang L, Lu H, Yang M (2013) Saliency detection via absorbing markov chain. In: IEEE international conference on computer vision. IEEE, New Jersey
Li X, Lu H, Zhang L, Ruan X, Yang M (2013) Saliency detection via dense and sparse reconstruction. In: IEEE international conference on computer vision. IEEE, New Jersey
Li J, Ma R, Ding J (2011) Saliency-seeded region merging: automatic object segmentation. In: Asian conference on pattern recognition. IEEE, New Jersey, p 691
Li Y, Sun JC, Tang SH (2004) Interactive natural image segmentation via spline regression. SIGGRAPH, Los Angeles, pp 303–308
Liu T, Yuan Z, Sun J, Wang J, Zheng N, Tang X, Shum H (2011) Learning to detect a salient object. IEEE Trans Pattern Anal Mach Intell 33:353–367
Ma Y, Zhang H (2003) Contrast-based image attention analysis by using fuzzy growing. ACM, New York, pp 374–381
Martin D, Fowlkes C, Tal D, Malik J (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: IEEE international conference on computer vision. IEEE, New Jersey, pp 416–423
Meila M (2005) Comparing clusterings-an axiomatic view. In: IEEE international conference on machine learning. ACM, Los Angeles
Mignotte M (2008) Segmentation by fusion of histogram-based k-means clusters in different color spaces. IEEE Trans Image Process 17:780–787
MathSciNet Article Google Scholar
Ning J, Zhang L, Zhang D, Wub C (2010) Interactive image segmentation by maximal similarity based region merging. Pattern Recogn 43:445–456
Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9:62–66
Peng B, Zhang L, Zhang D, Yang J (2011) Image segmentation by iterated region merging with localized graph cuts. Pattern Recogn 44:2527–2538
Perazzi F, Krahenbuhl P, Pritch Y, Hornung A (2012) Saliency filters: Contrast based filtering for salient object detection. In: IEEE international conference on computer vision and pattern recognition. IEEE, New Jersey, pp 733–740
Rother C, Kolmogorov V, Blake A (2004) grabcut: interactive foreground extraction using iterated graph cuts. SIGGRAPH, Los Angeles, pp 309–314
Russell BC, Freeman WT, Efros AA, Sivic J, Zisserman A (2006) Using multiple segmentations to discover objects and their extent in image collections. IEEE Comp Soc Conf Comp Vis Pattern Recognit 2:1605–1614
Seo K, Shin J, Kim W, Lee J (2006) Real-time object tracking and segmentation using adaptive color snake model. Int J Cont Autom Sys 4:236–246
Shen X, Wu Y (2012) A unified approach to salient object detection via low rank matrix recovery. In: IEEE international conference on computer vision and pattern recognition. IEEE, New Jersey, pp 853–860
Shi J, Malik J (2000) Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell 22:888–905
Tamura H, Mori S, Yamawaki T (1978) Textural features corresponding to visual perception. IEEE Trans Syst Man Cybern 8:460–472
Tavakoli V, Amini AA (2013) A survey of shaped-based registration and segmentation techniques for cardiac images. Comp Vis Image Underst 117:966–989
Unnikrishnan R, Pantofaru C, Hebert M (2005) A measure for objective evaluation of image segmentation algorithms. In: IEEE international conference on computer vision and pattern recognition workshop on empirical evaluation methods in computer vision. IEEE, New Jersey
Unnikrishnan R, Pantofaru C, Hebert M (2007) Toward objective evaluation of image segmentation algorithms. IEEE Trans Pattern Anal Mach Intell 29:929–944
Vincent L, Soille P (1991) Watersheds in digital spaces: an efficient algorithms based on immersion simulations. IEEE Trans Pattern Anal Mach Intell 13:583–598
Xiang S, Nie F, Zhang C, Zhang C (2009) Interactive natural image segmentation via spline regression. IEEE Trans Image Process 18:1623–1632
Yan Q, Xu L, Shi J, Jia J (2013) Hierarchical saliency detection. In: IEEE international conference on computer vision and pattern recognition. IEEE, New Jersey, pp 1155–1162
Yang W, Cai J, Zheng J, Luo J (2010) User-friendly interactive image segmentation through unified combinatorial user inputs. IEEE Trans Image Process 19:2470–2479
Yang C, Lu L, Ruan X, Yang M (2013) Saliency detection via graph-based manifold ranking. In: IEEE international conference on computer vision and pattern recognition. IEEE, New Jersey, pp 3166–3173
Zhai Y, Shah M (2006) Visual attention detection in video sequences using spatiotemporal cues. ACM Multimedia, New York
JL, JD, and JY conceived and designed the study. JL and LD performed the experiments. JD, JY, and LD reviewed and edited the manuscript. All authors read and approved the final manuscript.
The authors would like to thank the editor and the anonymous reviewers for their critical and constructive comments and suggestions. This work was supported in part by the National Science Fund of China under Grants 91420201, 61472187, 61502235, 61233011, and 61373063, in part by the Key Project of Chinese Ministry of Education under Grant 313030, the 973 Program under Grant 2014CB349303, and in part by the Program for Changjiang Scholars and Innovative Research Team in University Grant IRT13072.
The authors declare that they have no competing interests.
The funding includes the National Science Fund of China under Grants 91420201, 61472187, 61502235, 61233011, and 61373063, the Key Project of Chinese Ministry of Education under Grant 313030, the 973 Program under Grant 2014CB349303, and the Program for Changjiang Scholars and Innovative Research Team in University under Grant IRT13072. The above funding provided financial support for the design of the study and the conduct of the experiments.
School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei Street, Nanjing, 210094, China
Junxia Li, Jundi Ding, Jian Yang & Lingzheng Dai
Correspondence to Junxia Li.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Li, J., Ding, J., Yang, J. et al. Object segmentation by saliency-seeded and spatial-weighted region merging. Appl Inform 3, 9 (2016). https://doi.org/10.1186/s40535-016-0024-z
Saliency detection
Spatial neighbor
Region merging
How many non-collinear points determine an $n$-ellipse?
A $1$-ellipse is a circle with $1$ focus, and a $2$-ellipse is an ellipse with $2$ foci. An $n$-ellipse is the locus of all points of the plane whose sum of distances to the $n$ foci is a constant.
I know that $3$ non-collinear points determine a circle. $5$ non-collinear points on a plane determine an ellipse.
After that my question is: how many non-collinear points determine an $n$-ellipse on a plane?
Furthermore: is there a unique shape, a kind of generalization of the circle or ellipse, that is determined by $4$ given non-collinear points on a plane? What can we say in this case? Is there a special fitted unique closed curve for any points?
geometry conic-sections curves
$\begingroup$ I don't know, but see this presentation by Sturmfels: math.berkeley.edu/~bernd/feb19.pdf or this paper: math.ucsd.edu/~njw/PUBLICPAPERS/kellipse_imaproc_toappear.pdf $\endgroup$ – Michael Lugo Sep 27 '14 at 19:36
The number of points needed to identify an $n$-ellipse is $2n+1$. This follows directly from the general equation of an $n$-ellipse
$$\sum_{i=1}^n \sqrt{(x-u_i)^2+(y-v_i)^2}=k$$
where the number of parameters is $2n+1$. So, for a $1$-ellipse (circle) we need $3$ noncollinear points to identify $3$ parameters ($u_1,v_1,k$), for a $2$-ellipse we need $5$ noncollinear points to identify $5$ parameters ($u_1,v_1,u_2,v_2,k$), and so on.
As regards the "shape" identified by $4$ points, since these points allow us to define a $2$-ellipse with the exception of a single parameter that remains unknown, the resulting figure is a set of $2$-ellipses. For example, we could use the $4$ points to calculate $u_1,v_1,u_2,v_2$, leaving $k$ unknown. This would create a set of $2$-ellipses where the only variable parameter is $k$, that is to say the sum of the distances to the two foci.
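For a quick numerical illustration of the parameter count (a sketch of mine, not part of the argument above; the helper name is made up), one can solve the $2n+1$ equations for the foci $(u_i,v_i)$ and the constant $k$ from $2n+1$ sample points:

    import numpy as np
    from scipy.optimize import fsolve

    def fit_n_ellipse(points, n, guess):
        # points: (2n+1) sample points; guess: initial (u_1, v_1, ..., u_n, v_n, k)
        pts = np.asarray(points, dtype=float)

        def residuals(params):
            foci = params[:2 * n].reshape(n, 2)
            k = params[2 * n]
            # one equation per sample point: sum of distances to the foci minus k
            return [np.sum(np.linalg.norm(p - foci, axis=1)) - k for p in pts]

        return fsolve(residuals, guess)

    # recover a 1-ellipse (circle) of radius 1 centered at (1, 2) from 3 points
    pts = [(2.0, 2.0), (1.0, 3.0), (0.0, 2.0)]
    print(fit_n_ellipse(pts, n=1, guess=[0.0, 0.0, 1.0]))   # approx. [1. 2. 1.]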
Anatoly
Finding a limit
How would I find the limit of $\frac{1}{x^2} - \csc^2(x)$ as $x$ goes to 0?
There are several valid approaches to this; two that spring to mind are L'Hôpital's rule, which I like because it's got an accent and two apostrophes, and series expansions, which I like because I like them, ok?
In both cases, the first step is to turn the expression into a single fraction: $\frac{\sin^2(x) - x^2}{x^2 \sin^2(x)}$.
If you try putting $x=0$ into that fraction, you get… a problem. The top is 0 and the bottom is 0, which is an indeterminate form - you need to do more work.
L'Hôpital's Rule
L'Hôpital's Rule was discovered by Bernoulli, of course, and says "if you have an indeterminate limit of the form $\frac{f(x_0)}{g(x_0)}$, the limit is given by $\frac{f'(x_0)}{g'(x_0)}$."
Unfortunately, in this case, the first derivatives are both zero. As are the second derivatives. And the third. Not to mention the fourth. It's a real bear to get down to something you can work with – five pairs of derivatives, each nastier than the one before.
Let's do it the other way.
Series expansion
In your trusty formula book, you're told that $\sin(x) = x - \frac 16 x^3 + O(x^5)$ - the big O means "stuff that's this small or smaller" - when we're looking at small values of $x$, it saves us thinking about really really small ones!
If we square that, carefully, we get $\sin^2(x) = x^2 - \frac 13 x^4 + O(x^6)$.
The top of the fraction becomes $-\frac 13 x^4 + O(x^6)$. (Yes, I changed the sign of the big O thing. Doesn't matter. We'll treat it as 0 in a minute).
The bottom of the fraction becomes $x^4 + O(x^6)$.
Dividing both top and bottom by $x^4$ gives $-\frac 13 + O(x^2)$, which goes to $-\frac 13$ as $x$ goes to 0.
Calculator check
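A quick numerical sanity check (a minimal Python sketch of my own, standing in for the calculator): evaluate $\frac{1}{x^2} - \csc^2(x)$ at shrinking values of $x$ and watch it home in on $-\frac 13$.

    import math

    for x in [0.1, 0.01, 0.001]:
        print(x, 1 / x**2 - 1 / math.sin(x)**2)
    # 0.1    -0.3340...
    # 0.01   -0.33334...
    # 0.001  -0.333333...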
Prototypes: More serious questions about Taylor polynomials
Beyond just writing out Taylor expansions, we could actually use them to approximate things in a more serious way. There are roughly three different sorts of serious questions that one can ask in this context. They all use similar words, so a careful reading of such questions is necessary to be sure of answering the question asked.
(The word 'tolerance' is a synonym for 'error estimate', meaning that we know that the error is no worse than such-and-such)
Given a Taylor polynomial approximation to a function, expanded at some given point, and given a required tolerance, on how large an interval around the given point does the Taylor polynomial achieve that tolerance?
Given a Taylor polynomial approximation to a function, expanded at some given point, and given an interval around that given point, within what tolerance does the Taylor polynomial approximate the function on that interval?
Given a function, given a fixed point, given an interval around that fixed point, and given a required tolerance, find how many terms must be used in the Taylor expansion to approximate the function to within the required tolerance on the given interval.
As a special case of the last question, we can consider the question of approximating $f(x)$ to within a given tolerance/error in terms of $f(x_o), f'(x_o), f''(x_o)$ and higher derivatives of $f$ evaluated at a given point $x_o$.
In 'real life' this special case is not really as important as the third of the questions listed above, since evaluation at just one point can often be achieved more simply by some other means. Having a polynomial approximation that works all along an interval is a much more substantive thing than evaluation at a single point.
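For instance, here is a minimal sketch (an illustration of ours, not part of the original discussion) of the third question with $f(x)=e^x$ expanded at $x_o=0$: since $|f^{(n+1)}|\le e$ on $[-1,1]$, the Lagrange error bound $|R_n(x)|\le e\,|x|^{n+1}/(n+1)!$ tells us how many terms guarantee a required tolerance on that interval.

    import math

    def terms_needed(tolerance, radius=1.0):
        # smallest degree n with  e * radius**(n+1) / (n+1)!  <=  tolerance
        n = 0
        while math.e * radius**(n + 1) / math.factorial(n + 1) > tolerance:
            n += 1
        return n

    print(terms_needed(1e-6))   # 9: a degree-9 Taylor polynomial suffices on [-1, 1]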
It must be noted that there are also other ways to approach the issue of best approximation by a polynomial on an interval. And beyond worry over approximating the values of the function, we might also want the values of one or more of the derivatives to be close, as well. The theory of splines is one approach to approximation which is very important in practical applications.
Garrett P, "Prototypes: More serious questions about Taylor polynomials." From Math Insight. http://mathinsight.org/prototypes_more_serious_questions_taylor_polynomials_refresher
Keywords: ordinary derivative, Taylor polynomial
Prototypes: More serious questions about Taylor polynomials by Paul Garrett is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us.
$ \aleph $
The first letter of the Hebrew alphabet. As symbols, alephs were introduced by G. Cantor to denote the cardinal numbers (i.e., the cardinality) of infinite well-ordered sets. Each cardinal number is some aleph (a consequence of the axiom of choice). However, many theorems about alephs are demonstrated without recourse to the axiom of choice. For each ordinal number $ \alpha $, by $ \aleph_{\alpha} = w(\omega_{\alpha}) $ one denotes the cardinality of the set of all ordinal numbers smaller than $ \omega_{\alpha} $. In particular, $ \aleph_{0} $ is the cardinality of the set of all natural numbers, $ \aleph_{1} $ is the cardinality of the set of all countable ordinal numbers, etc. If $ \alpha < \beta $, then $ \aleph_{\alpha} < \aleph_{\beta} $. The cardinal number $ \aleph_{\alpha + 1} $ is the smallest cardinal number that follows $ \aleph_{\alpha} $. The generalized continuum hypothesis ($ \mathsf{GCH} $) states that $ 2^{\aleph_{\alpha}} = \aleph_{\alpha + 1} $ for each ordinal number $ \alpha $. When $ \alpha = 0 $, this equation assumes the form $ 2^{\aleph_{0}} = \aleph_{1} $, which is known as the continuum hypothesis ($ \mathsf{CH} $). The set of all alephs smaller than $ \aleph_{\alpha} $ is totally ordered according to magnitude, and its order type is $ \alpha $. The definitions of the sum, the product and a power of alephs are obvious. One has $$ \aleph_{\alpha} + \aleph_{\beta} = \aleph_{\alpha} \cdot \aleph_{\beta} = \aleph_{\max(\alpha,\beta)}. $$ The following formulas are most frequently encountered.
The recursive Hausdorff formula: $$ \aleph_{\alpha + n}^{\aleph_{\beta}} = \aleph_{\alpha}^{\aleph_{\beta}} \cdot \aleph_{\alpha + n}, $$ a particular case of which, for $ \alpha = 0 $, is the Bernshtein formula: $$ \aleph_{n}^{\aleph_{\beta}} = 2^{\aleph_{\beta}} \cdot \aleph_{n}. $$
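To see why the Bernshtein formula is the case $ \alpha = 0 $, note (a standard one-line computation, added here for completeness) that $$ 2^{\aleph_{\beta}} \leq \aleph_{0}^{\aleph_{\beta}} \leq \left( 2^{\aleph_{0}} \right)^{\aleph_{\beta}} = 2^{\aleph_{0} \cdot \aleph_{\beta}} = 2^{\aleph_{\beta}}, $$ so that $ \aleph_{0}^{\aleph_{\beta}} = 2^{\aleph_{\beta}} $.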
The recursive formula of Tarski: If an ordinal number $ \alpha $ is a limit ordinal, and if $ \beta < \mathsf{cf}(\alpha) $, then $$ \aleph_{\alpha}^{\aleph_{\beta}} = \sum_{\xi < \alpha} \aleph_{\xi}^{\aleph_{\beta}}. $$ Here, $ \mathsf{cf}(\alpha) $ denotes the cofinality of the ordinal number $ \alpha $. As in the case of cardinal numbers, one distinguishes between singular alephs, regular alephs, limit alephs, weakly inaccessible alephs, strongly inaccessible alephs, etc. For example, $ \aleph_{\alpha} $ is singular if $ \alpha $ is a limit ordinal and $ \mathsf{cf}(\alpha) < \alpha $.
There is no largest aleph among all alephs. It was shown by Cantor that the set of all alephs is meaningless, i.e., there is no such set. See also Totally well-ordered set; Continuum hypothesis; Set theory; Ordinal number; Cardinal number.
A more recent theorem on the exponentiation of alephs was proved by J. Silver in 1974 (cf. [a2]). A particular case says that if $$ 2^{\aleph_{\xi}} = \aleph_{\xi + 1} \quad \text{for all} \quad \xi < \omega_{1}, $$ then $$ 2^{\aleph_{\omega_{1}}} = \aleph_{\omega_{1} + 1}. $$ A reasonable up-to-date additional reference for this topic is [a1].
[a1] A. Levy, "Basic set theory", Springer (1979).
[a2] J. Silver, "On the singular cardinals problem", R. James (ed.), Proc. Internat. Congress Mathematicians (Vancouver, 1974), 1, Canad. Math. Congress (1975), pp. 265–268.
Aleph. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Aleph&oldid=51158
This article was adapted from an original article by B.A. Efimov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Rocky Mountain Journal of Mathematics
VOL. 49 · NO. 8 | 2019
Table of Contents, Rocky Mountain Journal of Mathematics, vol. 49, no. 8, (2019)
Rocky Mountain J. Math. 49 (8), (2019)
Further properties of Osler's generalized fractional integrals and derivatives with respect to another function
Ricardo Almeida
Rocky Mountain J. Math. 49 (8), 2459-2493, (2019) DOI: 10.1216/RMJ-2019-49-8-2459
KEYWORDS: fractional integral, fractional derivative, Taylor's Theorem, semigroup law, expansion formulas, 26A33, 26A24, 41A58
In this paper we discuss fractional integrals and fractional derivatives of a function with respect to another function. We present some fundamental properties for both types of fractional operators, such as Taylor's theorem, Leibniz and semigroup rules. We also provide a numerical tool to deal with these operators, by approximating them with a sum involving integer-order derivatives.
Existence results for a semipositone singular fractional differential equation
Rim Bourguiba, Faten Toumi
KEYWORDS: fractional differential equation, positive solution, integral boundary conditions, Green's function, dependence on a parameter, perturbed term, semipositone, 34A08, 34B27, 34B18, 35G60
This paper provides sufficient conditions that guarantee the existence and multiplicity of positive solutions for a singular semipositone fractional equation, subject to an integral boundary condition and depending on a parameter $\mu $. Our approach relies on Krasnoselskii's fixed point theorem. Some examples are presented to illustrate our results.
New series involving harmonic numbers and squared central binomial coefficients
John Maxwell Campbell
KEYWORDS: harmonic number, central binomial coefficient, infinite series, symbolic computation, Gamma function, 33E20, 33B15
Recently, there has been a variety of intriguing discoveries regarding the symbolic computation of series containing central binomial coefficients and harmonic-type numbers. In this article, we present a vast generalization of the recently discovered harmonic summation formula $$\sum_{n=1}^{\infty} \binom{2n}{n}^{2} \frac{H_{n}}{32^{n}} = \frac{\Gamma^2\bigl(\frac{1}{4}\bigr)}{4 \sqrt{\pi}} \Bigl( 1 - \frac{4 \ln 2}{\pi} \Bigr) $$ through creative applications of an integration method that we had previously introduced and applied to prove new Ramanujan-like formulas for $\frac{1}{\pi}$. We provide explicit closed-form expressions for natural variants of the above series that cannot be evaluated by state-of-the-art computer algebra systems, such as the elegant symbolic evaluation $$ \sum_{n=1}^{\infty} \frac{\binom{2n}{n}^2 H_n}{32^n (n + 1)} = 8-\frac{2 \Gamma^2\bigl(\frac{1}{4}\bigr)}{\pi^{3/2}}-\frac{4 \pi^{3/2}+16 \sqrt{\pi} \ln 2}{\Gamma^2\bigl(\frac{1}{4}\bigr)} $$ introduced in our present paper. We also discuss some related problems concerning binomial series containing alternating harmonic numbers, and we introduce a new class of harmonic summations for Catalan's constant $G$ and $\frac{1}{\pi}$ such as the series $$ \sum_{n=1}^{\infty} \frac{\binom{2n}{n}^2 H_n}{16^{n} (n+1)^2} = 16+\frac{32 G-64 \ln 2}{\pi}-16 \ln 2 $$ which we prove through a variation of our previous integration method for constructing $\frac{1}{\pi}$ series.
On a conjecture of Mordell
Debopam Chakraborty, Anupam Saikia
KEYWORDS: Continued fraction, period, fundamental unit, 11D09, 11A55, 11R11, 11R27
A conjecture of Mordell states that if $p$ is a prime and $p$ is congruent to $3$ modulo $4$, then $p$ does not divide $y$ where $(x,y)$ is the fundamental solution to $x^{2}-py^{2}=1$. The conjecture has been verified for primes not exceeding $10^{7}$. In this article, we show that Mordell's conjecture holds for four conjecturally infinite families of primes.
Cubic sums of $q$-binomial coefficients and the Fibonomial coefficients
Wenchang Chu, Emrah Kılıç
KEYWORDS: Fibonomial coefficient, Lucanomial coefficient, Basic hypergeometric series, $q$-binomial coefficient, well-poised series., 11B39, 05A30, 11B65
Triple product sums on the generalized Fibonomial and Lucanomial coefficients are evaluated in closed forms by means of Bailey's summation formulae for two terminating well-poised $_3\phi _2$-series.
A uniformly sharp convexity result for discrete fractional sequential differences
Rajendra Dahal, Christopher S. Goodrich
KEYWORDS: Discrete fractional calculus, convexity, sequential fractional delta difference., 26A51, 39A70, 39B62, 26A33, 39A12, 39A99.
We prove that a class of convexity-type results for sequential fractional delta differences is uniformly sharp. More precisely, we consider the sequential difference $\Delta _{1-\mu +a}^{\nu }\Delta _{a}^{\mu }f(t)$, for $t\in \mathbb {N}_{3+a-\mu -\nu }$, and demonstrate that there is a strong connection between the sign of this function and the convexity or concavity of $f$ if and only if the pair $(\mu ,\nu )$ lives in a particular subregion of the parameter space $(0,1)\times (1,2)$.
Convex Stone-Weierstrass theorems and invariant convex sets
Nathan S. Feldman, Paul J. McGuire
KEYWORDS: polynomial approximation, convex polynomial, convex polynomial approximation, Stone-Weierstrass, convex-cyclic, invariant convex set, 41A10, 47A16, 46E15
A "convex polynomial" is a convex combination of the monomials $\{1, x, x^2, \ldots \}$. This paper establishes that the convex polynomials on $\mathbb R$ are dense in $L^p(\mu )$ and weak$^*$ dense in $L^\infty (\mu )$ whenever $\mu $ is a compactly supported regular Borel measure on $\mathbb {R}$ and $\mu ([-1,\infty )) = 0$. It is also shown that the convex polynomials are norm dense in $C(K)$ precisely when $K \cap [-1, \infty ) = \emptyset $, where $K$ is a compact subset of the real line. Moreover, the closure of the convex polynomials on $[-1,b]$ is shown to be the functions that have a convex power series representation.
A continuous linear operator $T$ on a locally convex space $X$ is "convex-cyclic" if there is a vector $x \in X$ such that the convex hull of the orbit of $x$ is dense in $X$. The previous results are used to characterize which multiplication operators on various real Banach spaces are convex-cyclic. Also, it is shown for certain multiplication operators that every nonempty closed invariant convex set is a closed invariant subspace.
May modules of countable rank
Patrick W. Keef
KEYWORDS: module, complete discrete valuation ring, totally projective, balanced-projective, valuation, 20K30, 20K21, 16W20
In a 1990 paper, W. May studied the question of when isomorphisms of the endomorphism rings of mixed modules are necessarily induced by isomorphisms of the underlying modules. In so doing he introduced a class of mixed modules over a complete discrete valuation domain; we later renamed these modules after their inventor. The class of May modules of countable torsion-free rank is particularly important. A decomposition theorem is established for such modules. The modules in this class are characterized in several ways. Finally, an example is constructed showing that several of these ideas do not extend to May modules of uncountable torsion-free rank.
Uniqueness dictated by boundary orbit accumulation sets
Steven G. Krantz
KEYWORDS: boundary orbit accumulation point, boundary orbit accumulation set, pseudoconvex, 32H02, 32M99, 32E99
We study domains in which the set of boundary orbit accumulation points contains a relatively open subset of the boundary. The main result, in complex dimension 2, is that a domain with such a property must be either the ball or the bidisc.
Controllability results for a Volterra integro dynamic inclusion with impulsive condition on time scales
Vipin Kumar, Muslim Malik
KEYWORDS: Dynamical inclusion, Time scales, Controllability, impulses, 34A60, 34N05, 93B05, 34A37
The main aim here is to establish controllability results for a Volterra integro dynamic inclusion with an impulsive condition on time scales. A fixed point theorem for multivalued maps due to Dhage is used to establish the main results. Moreover, we give controllability results for the problem with a nonlocal effect. Also, an example for two different time scales is given to validate these analytical outcomes.
Approximation order of two-direction multiscaling functions
Soon-Geol Kwon
KEYWORDS: Two-direction multiwavelets, Condition E, basic regularity conditions, approximation order, sum rule order., 42C15
We investigate Condition E and the basic regularity conditions for the two-direction multiscaling functions $\boldsymbol {\phi }$, and then study the approximation order of $\boldsymbol {\phi }$. By investigating the structure of the approximation vectors, we are able to find simple and efficient criteria for Condition E, the basic regularity conditions, and the approximation order of $\boldsymbol {\phi }$. Examples illustrating the general theory are given.
Zalcman's lemma and normality concerning shared values of holomorphic functions and their total derivatives in several complex variables
Zhixue Liu, Tingbin Cao
KEYWORDS: several complex variables, normal family, total derivative, Shared Values, 32A19, 30D45
The paper is a continuation of a recent paper of ours, which considers the normality for a family of holomorphic functions concerning the total derivative in several complex variables. Here we extend the famous Zalcman's lemma concerning the total derivative from one complex variable to several complex variables, and obtain some normality criteria where complex values are shared by every function from the family and its $k$-th total derivative which may be seen as some generalizations of normality for holomorphic functions sharing values in one complex variable. In addition, the case of sharing analytic function by every function from the family and its $k$-th total derivative is also verified.
Colored graph homomorphisms
Colton Magnant, Chunwei Song, Suman Xia
KEYWORDS: graph homomorphisms, colored homomorphism, extremal family, 05C60, 05C15, 05A15, 05D99, 51A10
In this paper we investigate some colored notions of graph homomorphisms. We compare three different notions of colored homomorphisms and determine the number of such homomorphisms between several classes of graphs. More specifically, over all possible colorings of paths, we consider the colorings that yield the largest and smallest numbers of colored homomorphisms.
Convergence of Poincare series on Hecke groups of large width
Paul C. Pasles
KEYWORDS: modular forms, automorphic forms, Poincare series, 11F12, 40A05
In earlier work, we described a relation between the parameters associated with multiplier systems of complex weight on the discrete Hecke groups $G_{\lambda }$ when $1 \leq \lambda \lt 2$, and consequently showed that parabolic Poincare series of nonreal weight on the modular group are not absolutely convergent anywhere. In the current paper we establish an analogous divergence result for all Hecke groups with $\lambda > 2$.
Regularity-type properties of the boundary spectrum in Banach algebras
Heinrich Raubenheimer, Andre Swartz
KEYWORDS: regularities, semiregularities, boundary spectrum, MSC, 2010:, 46H05, 46H30, 47A05
We provide examples to shed some light on the regularity-type properties of the boundary spectrum of a Banach algebra element. In particular, we show that the boundary spectrum is generated by a set that is neither an upper nor a lower semiregularity.
Extension of Dunkl--Williams inequality and characterizations of inner product spaces
J. Rooin, S. Rajabi, M.S. Moslehian
KEYWORDS: inner product space, Dunkl--Williams inequality, $p$-angular distance, orthogonality, characterization of inner product space, 47A30, 46C15, 26D15
Given $ p, q \in \mathbb {R} $, we generalize the classical Dunkl--Williams inequality for $p$-angular and $q$-angular distances in inner product spaces. We extend the Hile inequality for arbitrary $p$-angular and $q$-angular distances and study some geometric aspects of a generalization of Dunkl--Williams inequality. Introducing power refinements, we show significant power refinements of the generalized Dunkl--Williams inequality under some mild conditions. Among other things, we give new characterizations of inner product spaces with regard to $p$-angular and $q$-angular distances. In particular, we prove that if $ p, q, r \in \mathbb {R} $, $ q \neq 0$ and $ 0\leq p/q\lt 1 $, then $ X $ is an inner product space if and only if for every $ x, y \in X \setminus \{0\} $, $$ \bigl \lVert \lVert x \rVert ^{p-1} x - \lVert y \rVert ^{p-1}y \bigr \rVert \leq \frac {2^{1/r}\bigl \lVert \lVert x \rVert ^{q-1} x - \lVert y \rVert ^{q-1}y \bigr \rVert }{\bigl [\lVert x \rVert ^{r(q-p)} + \lVert y \rVert ^{r(q-p)}\bigr ]^{\frac {1}{r}}}. $$
Pairs of Pythagorean triangles with given ratios between catheti
Mariusz Skalba, Maciej Ulas
KEYWORDS: Pythagorean triples, Pythagorean triangles, cathetus ratios, Elliptic curves, rational points, 11D25
We investigate the problem of finding pairs of Pythagorean triangles $(a, b, c), (A, B, C)$, with given cathetus ratios $A/a,\, B/b$. In particular, we prove that there are infinitely many essentially different (non-similar) pairs of Pythagorean triangles $(a, b, c), (A, B, C)$ satisfying given proportions, provided that $Aa\neq Bb$.
Three results for $\tau $-rigid modules
Zongzhen Xie, Libo Zan, Xiaojin Zhang
KEYWORDS: $\tau $-rigid module, projective dimension, Tilted algebra, 16G10, 16E10.
$\tau $-rigid modules are essential in the $\tau $-tilting theory introduced by Adachi, Iyama and Reiten. In this paper, we give equivalent conditions for Iwanaga-Gorenstein algebras with self-injective dimension at most one in terms of $\tau $-rigid modules. We show that every indecomposable module over iterated tilted algebras of Dynkin type is $\tau $-rigid. Finally, we give a $\tau $-tilting theorem on homological dimension which is an analog to that of classical tilting modules.
Volume Index to Volume 49 (2019)
Rocky Mountain J. Math. 49 (8), 2809-2819, (2019)
Almost synonymous terms used in various areas are Topological bundle, Locally trivial fibre bundle, Fibre space, Fibration, Skew product etc. Particular cases are Vector bundle, Tangent bundle, Principal fibre bundle, $\dots$
2010 Mathematics Subject Classification: Primary: 55Rxx Secondary: 14Dxx 32Lxx 53Cxx 55Sxx 57Rxx [MSN][ZBL]
A very flexible geometric construction aimed at representing a family of similar objects (fibres or fibers, depending on the preferred spelling) which are parametrized by an index set that itself carries an additional topological or geometric structure (topological space, smooth or holomorphic manifold etc.).
The best known examples are the tangent and cotangent bundles of a smooth manifold. Coverings are also a special form of topological bundle (one with discrete fibers).
Formal definition of a topological bundle
Let $\pi:E\to B$ be a continuous map between topological spaces, called the total space[1] and the base, and $F$ yet another topological space called the (generic) fiber, such that the preimage $F_b=\pi^{-1}(b)\subset E$ of every point of the base is homeomorphic to $F$. The latter condition means that $E$ is the disjoint union of "fibers", $E=\bigsqcup_{b\in B} F_b$ homeomorphic to each other.
The map $\pi$ is called a fibration[2] of $E$ over $B$ if the above representation is locally trivial: any point of the base admits an open neighborhood $U$ such that the restriction of $\pi$ to the preimage $\pi^{-1}(U)$ is topologically equivalent to the Cartesian projection $\pi_2$ of the product $F\times U$ onto the second component: $\pi_2(v,b)=b$. Formally this means that there exists a homeomorphism $H_U=H:\pi^{-1}(U)\to F\times U$ such that $\pi=\pi_2\circ H$.
The trivial bundle $E=F\times B$, $\pi=\pi_2: F\times B\to B$, $(v,b)\mapsto b$. In this case all trivializing homeomorphisms are globally defined on the entire total space (as the identity map).
Let $E=\R^n\smallsetminus\{0\}$ be the punctured Euclidean space, $B=\mathbb S^{n-1}$ the standard unit sphere and $\pi$ the radial projection $\pi(x)=\|x\|^{-1}\cdot x$. This is a topological bundle with the fiber $F=(0,+\infty)\simeq\R^1$; an explicit numerical sketch of its trivialization follows this list.
Let $E=\mathbb S^{n-1}$ as above, $B=\R P^{n-1}$ the real projective space (all lines in $\R^n$ passing through the origin) and $\pi$ the map taking a point $x$ on the sphere into the line $\ell_x$ passing through $x$. The preimage $\pi^{-1}(\ell)$ consists of two antipodal points $x$ and $-x\in\mathbb S^{n-1}$, thus $F$ is a discrete two-point set $\mathbb Z_2=\{-1,1\}$. This is a topological bundle, which cannot be trivial: indeed, if it were, then the total space $\mathbb S^{n-1}$ would consist of two connected components, while it is connected.
More generally, let $\pi:M^m\to N^n$ be a differentiable map between two smooth (connected) compact manifolds of dimensions $m\ge n$, which has the maximal rank (equal to $n$) everywhere. One can show then, using the implicit function theorem and a partition of unity, that $\pi$ is a topological bundle with a fiber $F$ which itself is a smooth compact manifold[3].
The Hopf fibration $\mathbb S^3\to\mathbb S^2$ with the generic fiber $\mathbb S^1$. It is best realized through the restriction to the sphere $\mathbb S^3=\{|z|^2+|w|^2=1\}\subseteq\C^2$ of the canonical map $(z,w)\mapsto [z:w]\in\C P^1=\C^1\cup\{\infty\}\simeq\mathbb S^2$. The preimage of each point of the projective line is a line in $\C^2$ which intersects the unit sphere $\mathbb S^3$ in a circle. This fibration can be spectacularly visualized if the sphere $\mathbb S^3$ is punctured (one of its points deleted) to become $\R^3$: the fibers are pairwise linked circles.
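For the radial-projection example above, the trivialization can be written down explicitly; the following minimal numerical sketch (the function names are illustrative, not from the literature) checks it:

    import numpy as np

    def H(x):
        # global trivialization of punctured R^n over the sphere:
        # x  |->  (fiber coordinate ||x|| in (0, inf), base point x/||x|| on S^{n-1})
        r = np.linalg.norm(x)
        return r, x / r

    def H_inv(r, b):
        # inverse trivialization: scale the base point back out
        return r * b

    x = np.array([3.0, 4.0])
    r, b = H(x)
    print(b)                               # the base point pi(x) = (0.6, 0.8)
    print(np.allclose(H_inv(r, b), x))     # True: H is a bijection fiber-by-fiber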
Cocycle of a bundle
On a nonvoid overlapping $U_{\alpha\beta}=U_\alpha\cap U_\beta$ of two different trivializing charts $U_\alpha$ and $U_\beta$ two homeomorphisms $H_\alpha,H_\beta: \pi^{-1}(U_{\alpha\beta})\to F\times U_{\alpha\beta}$ are defined. Since both $H_\alpha$ and $H_\beta$ conjugate $\pi$ with the Cartesian projection on $U_{\alpha\beta}$, they map each fiber $F_b=\pi^{-1}(b)$ into the same space $F\times\{b\}$. The composition $H_\alpha\circ H_\beta^{-1}$ keeps the $b$-component constant and hence takes the "triangular" form $$ H_\alpha\circ H_\beta^{-1}:(v,b)\mapsto (H_{\alpha\beta}(v,b),b),\qquad H_{\alpha\beta}(\cdot,b)\in\operatorname{Homeo}(F) $$ with the homeomorphisms $H_{\alpha\beta}(\cdot,b)$ continuously depending on $b\in U_{\alpha\beta}$. The collection of these "homeomorphism-valued" functions defined in the intersections $U_{\alpha\beta}$ is called the cocycle associated with a given trivialization of the bundle $\pi$ (or simply the cocycle of the bundle). These homeomorphisms $\{H_{\alpha\beta}\}$ satisfy the following identities, obvious from their construction: $$ H_{\alpha\beta}\circ H_{\beta\alpha}=\operatorname{id},\qquad H_{\alpha\beta}\circ H_{\beta\gamma}\circ H_{\gamma\alpha}=\operatorname{id}, \tag{HC} $$ the second being true on every nonvoid triple intersection $U_{\alpha\beta\gamma}=U_\alpha\cap U_\beta\cap U_\gamma$.
Bundles from cocycles: the abstract "patchwork" construction
Every bundle directly defined by the map $\pi$ implicitly assumes that a trivializing atlas can be produced, thus defining the corresponding cocycle. Conversely, starting from a cocycle (HC) one can explicitly construct an abstract topological space $E$ together with the projection $\pi$. Let $\widetilde E=\bigsqcup_\alpha F\times U_\alpha$ be the disjoint union of the "cylinders" $F\times U_\alpha$, on which the equivalence relation is defined: $$ (v_\alpha, b_\alpha)\sim(v_\beta,b_\beta) \iff b_\alpha=b_\beta\in U_\alpha\cap U_\beta,\quad v_\alpha=H_{\alpha\beta}(v_\beta,b_\beta). $$ The cocycle identities ensure that this is indeed a symmetric and transitive equivalence relation. The quotient space $E=\widetilde E/\sim$ admits the natural projection on the base $B$, which precisely corresponds to the specified cocycle.
Example. One can construct the "product" of any two bundles $\pi_1:E_1\to B$ and $\pi_2:E_2\to B$ over the same base by applying the above construction to the sets $(F_1\times F_2)\times U_\alpha$ and using the Cartesian product of the maps $\{H_{\alpha\beta}^i\}$, $i=1,2$, for the identification, $$ \begin{pmatrix} H^1_{\alpha\beta}&\\&H^2_{\alpha\beta}\end{pmatrix}:(F_1\times F_2)\times U_{\alpha\beta}\to (F_1\times F_2)\times U_{\alpha\beta}. $$
Vector bundles and other additional structures on the fibers
The general construction of a bundle easily allows various additional structures, both on the base space and (more importantly) on the fibers. By far the most important special case is that of vector bundles.
To define a vector bundle, one has, in addition to the principal definition, to assume the following:
The fiber $F$ is a vector space[4], and
The trivializing homeomorphisms must respect the linear structure of the fibers.
The second assumption means that, rather than being arbitrary homeomorphisms, the maps $\{H_{\alpha\beta}\}$ forming the bundle cocycle must be invertible linear maps on each "standard fiber" $F\times \{b\}$; if the fiber is identified with the canonical $n$-space $\Bbbk^n$ (over $\Bbbk=\R$ or $\Bbbk=\C$), then the cocycle will consist of invertible continuous matrix functions $M_{\alpha\beta}:U_{\alpha\beta}\to\operatorname{GL}(n,\Bbbk)$, so that $H_{\alpha\beta}(v,b)=(M_{\alpha\beta}(b)\, v, b)$, $v\in\Bbbk^n$. The cocycle identities then become identities relating the values of these matrix-valued functions, $$ M_{\alpha\beta}(b)\cdot M_{\beta\alpha}(b)\equiv E,\qquad M_{\alpha\beta}(b)\cdot M_{\beta\gamma}(b)\cdot M_{\gamma\alpha}(b)\equiv E, \tag{MC} $$ where $E$ is the $n\times n$-identity matrix.
For vector bundles all linear constructions become well defined on fibers.
In the same way, one may define vector bundles with extra algebraic structures on the fibers. For instance, if the cocycle defining the bundle consists of orthogonal matrices, $M_{\alpha\beta}:U_{\alpha\beta}\to\operatorname{SO}(n,\R)$, then the fibers of the bundle naturally acquire the structure of Euclidean spaces. Other natural examples are bundles whose fibers have a Hermitian structure (the cocycle should then consist of unitary matrix functions) or are symplectic spaces (with cocycle matrices preserving the canonical symplectic structure).
Tangent and cotangent bundle of a smooth manifold
If $M$ is a smooth manifold with the atlas of coordinate charts $\{U_\alpha\}$ and the maps $h_\alpha:U_\alpha\to\R^m$, then the differentials of these maps $\rd h_\alpha$ allow one to identify the tangent space $T_a M$ at $a\in U_{\alpha}$ with $\R^m$ and the union $\bigsqcup_{a\in U_\alpha}T_a M$ with $\R^m\times U_\alpha$ (we write the tangent vector first). For a point $a\in U_{\alpha\beta}$ there are two identifications, which differ by the Jacobian matrix of the transition map $h_{\alpha\beta}=h_\alpha\circ h_\beta^{-1}$. This shows that the tangent bundle $TM$ is indeed a vector bundle in the sense of the above definition.
The cotangent bundle is also trivialized by every atlas $\{h_\alpha:U_\alpha\to\R^m\}$ on $M$, yet in this case the direction of the arrows should be reversed[5]: the cotangent space $T_a^*M$ is identified with $\R^m$ by the linear map $(\rd h_\alpha^*)$, thus the corresponding cocycle will consist of the transposed inverse Jacobian matrices.
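As a small concrete check (an illustrative sketch, not part of the original article, with made-up variable names), take two charts whose transition map is $h(x)=1/x$, as for the projective line; the $1\times1$ Jacobian cocycle then satisfies the first identity (MC), and the cotangent cocycle is its transposed inverse:

    import sympy as sp

    x = sp.symbols('x', nonzero=True)
    h = 1 / x                                # transition map h_{alpha beta}
    M_ab = sp.diff(h, x)                     # tangent cocycle: the Jacobian -1/x**2
    M_ba = sp.diff(1 / x, x).subs(x, h)      # reverse Jacobian, evaluated at h(x): -x**2
    print(sp.simplify(M_ab * M_ba))          # 1, i.e. M_ab * M_ba = E
    print(sp.simplify(1 / M_ab))             # cotangent cocycle: -x**2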
Equivalence of cocycles
The trivializing maps defining the structure of a bundle (vector or topological) are by no means unique, even if the covering domains $U_\alpha$ remain the same. E.g., one can replace the collection of maps $\{H_{\alpha}\}$ trivializing a vector bundle by another collection $\{H'_{\alpha}\}$, post-composing them with the maps $F\times U_\alpha\to F\times U_\alpha$, $(v,b)\mapsto (C_\alpha(b)\,v, b)$ with invertible continuous matrix functions $C_\alpha:U_\alpha\to\operatorname{GL}(n,\Bbbk)$. The corresponding matrix cocycle $\{M_{\alpha\beta}\}$ will then be replaced by the new matrix cocycle $\{M'_{\alpha\beta}(b)\}$, $$ M'_{\alpha\beta}(b)=C_\alpha(b)M_{\alpha\beta}(b)C_\beta^{-1}(b),\qquad b\in U_{\alpha\beta}. \tag{CE} $$ Two matrix cocycles related by these identities are called equivalent and clearly define the same bundle.
Example. The trivial cocycle $\{M_{\alpha\beta}(b)\}=\{E\}$ which consists of identity matrices corresponds to the trivial bundle $F\times B$: the trivializing maps agree with each other on the intersections and hence define the global trivializing map $H:E\to F\times B$. A cocycle equivalent to the trivial one in the sense of (CE) is called solvable: its solution is a collection of invertible matrix functions $C_\alpha:U_\alpha\to\operatorname{GL}(n,\Bbbk)$ such that on the overlapping of the domains $U_{\alpha\beta}=U_\alpha\cap U_\beta$ the identities $$ M_{\alpha\beta}(b)=C_{\alpha}^{-1}(b)C_\beta(b),\qquad \forall\alpha,\beta,\ b\in U_{\alpha}\cap U_\beta $$ hold. Thus solvability of a cocycle is the analytic counterpart of topological triviality of the bundle.
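Conversely (a small sympy sketch, with arbitrarily chosen invertible matrix functions, added only as illustration), any cocycle built from a solution $\{C_\alpha\}$ via $M_{\alpha\beta}=C_\alpha^{-1}C_\beta$ automatically satisfies the identities (MC):

    import sympy as sp

    b = sp.symbols('b', positive=True)
    C_a = sp.Matrix([[1, b], [0, 1]])        # invertible on U_alpha
    C_b = sp.Matrix([[b, 0], [1, 1/b]])      # invertible on U_beta (det = 1)
    M_ab = sp.simplify(C_a.inv() * C_b)
    M_ba = sp.simplify(C_b.inv() * C_a)
    print(sp.simplify(M_ab * M_ba))          # the identity matrix E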
↑ The names fibre space or fibered space are also used.
↑ The terms bundle or fiber bundle are also used.
↑ This statement is also known as the Ehresmann theorem, see Ehresmann, C., Les connexions infinitésimales dans un espace fibré différentiable, Colloque de Topologie, Bruxelles (1950), 29-55. The compactness assumption can be relaxed by the requirement that the map $\pi$ is proper, i.e., preimage of any compact in $N$ is a compact in $M$.
↑ The fiber $F$ should be equipped with some topology, but often it is finite-dimensional, $F\simeq\R^n$ or $F\simeq\C^n$, thus leaving only the default option.
↑ Covectors form a covariant rather than contravariant tensor of rank $1$.
Special classes of bundles
Together with vector bundles, there are other special classes of bundles.
If all three spaces occurring in the definition of the topological bundle (the total space $E$, the base $B$ and the generic fiber $F$) are smooth manifolds and all the maps (the projection $\pi$ and all the trivializing maps $H_\alpha$) are differentiable, then the bundle is often called a fibration, or locally trivial fibre bundle.
For a fibration every tangent space $T_x E$ is mapped surjectively by the differential $\rd \pi:T_x E\to T_a B$, $a=\pi(x)$, with the kernel being the tangent space to the fiber $F_a$ at $x$: $\operatorname{Ker}\rd \pi(x)=T_x F_a$. The direction tangent to the fibers is often referred to as vertical, with the idea that the base is "horizontal". However, an accurate definition of the horizontal direction can be made only in terms of an appropriate connection on the bundle.
$G$-bundles and principal bundles
Assume that $\pi:E\to B$ is a fibration as above and the fiber $F$ has the structure of a homogeneous space on which a Lie group $G$ acts freely and transitively (say, by right multiplication)[1], thus generating a continuous action of $G$ on the total space $E$. Then this action should be consistent with the local trivializations $H_\alpha:\pi^{-1}(U_\alpha)\to G\times U_\alpha$: the corresponding transition maps $H_{\alpha\beta}(\cdot,b):G\to G$ must commute with the right action of $G$. This means that $$ \forall g\in G, \ b\in B,\qquad H_{\alpha\beta}(g,b)=H_{\alpha\beta}(e\cdot g,b)=H_{\alpha\beta}(e,b)\cdot g =g_{\alpha\beta}(b)\cdot g, \tag{T} $$ where $g_{\alpha\beta}=H_{\alpha\beta}(e)\in G$ is the uniquely defined group element (depending continuously on $b\in B$), and $e\in G$ is the unit of the group. Thus the $G$-bundle is completely determined by the cocycle $\{g_{\alpha\beta}:U_{\alpha\beta}\to G\}$ satisfying the cocycle identities, $$ g_{\alpha\beta}(\cdot)g_{\beta\alpha}(\cdot)\equiv e,\qquad g_{\alpha\beta}(\cdot)g_{\beta\gamma}(\cdot)g_{\gamma\alpha}(\cdot)\equiv e. \tag{GC} $$ Such a bundle (defined by the left $G$-action of multiplication by $g_{\alpha\beta}$ in the transition maps (T)) is called a principal $G$-bundle with the structural group $G$.
This construction allows one to associate (tautologically) with each vector bundle $\pi:E\to B$ with a fiber $\Bbbk^n$ a principal $G$-bundle $\varPi:\mathbf E\to B$ with the same base, where $G=\operatorname{GL}(n,\Bbbk)$. Analytically this is achieved by taking the same matrix cocycle (MC) and re-interpreting it as the $G$-valued cocycle (GC), $g_{\alpha\beta}=M_{\alpha\beta}$. This bundle is (not surprisingly) called the associated principal bundle. If the matrix cocycle $\{M_{\alpha\beta}\}$ takes values in a subgroup $G\subsetneq\operatorname{GL}(n,\Bbbk)$ (say, the orthogonal group), then the associated principal bundle may have a "smaller" fiber.
Example. The principal bundle associated with the tangent vector bundle $TM$ is the bundle whose fibers are frames (linearly independent ordered tuples of tangent vectors spanning the tangent space $T_aM$ at each point $a\in M$).
The importance of vector bundles and principal bundles is rooted in the fact that their fibers have the natural structure of parallelizable manifolds (homogeneous spaces), looking the same near each point.
Line bundles and the "genuine" cohomology
The case of vector bundles of rank $1$ is especially important: first, because in this case the vector bundle is indistinguishable from the associated principal bundle, but mainly because the corresponding group $G=\operatorname{GL}(1,\Bbbk)\simeq\Bbbk^*$ is commutative. This makes it possible to bring in the powerful machinery of the sheaf theory and the respective (Čech) cohomology theory.
Algebraic and analytic vector bundles
In the algebraic category it is natural to consider bundles over schemes (ringed spaces) rather than over smooth manifolds. This requires quite a number of changes in the construction, see Vector bundle, algebraic.
Similar difficulties arise in an attempt to define vector bundles over analytic spaces, see Vector bundle, analytic. In both versions one has to use the language and constructions from the sheaf theory.
Bundles with a discrete fiber and topological coverings
If the fiber $F$ is a topological space with the discrete topology, the corresponding bundle is generally referred to as a covering. Indeed, since $F$ is discrete (each point $v$ is both open and closed), the preimage $\pi^{-1}(U)=\bigsqcup_{v\in F} U_v$ is a disjoint union of sets $U_v$ homeomorphic to $U$.
In several areas of applications other types of fibers may be important, among them:
Sphere bundles $F\simeq\mathbb S^k$,
Projective bundles $F\simeq \mathbb P^k$ (real or complex projective spaces),
Quaternionic bundles.
The construction of the bundle is so flexible that almost any specific flavor can be incorporated into it. In particular, one can consider holomorphic fibrations with the total space, base and the generic fiber are complex analytic manifolds and the projection and the trivializations are holomorphic maps.
Finally, one can allow for singularities, assuming that the local structure of the Cartesian product holds only outside of a "small" subset $\varSigma$ of $B$, on which the "bundle" is singular. While formally one can simply omit the exceptional locus and consider the "genuine" bundle $\pi':E'\to B'$, where $E'=E\smallsetminus\pi^{-1}(\varSigma)$ and $B'=B\smallsetminus\varSigma$, the singularity very often carries the most important part of the information, encoded in the specific degeneracy of the cocycle automorphisms $\{H_{\alpha\beta}(\cdot,b)\}$.
Morphisms and sections
The "triangular" structure (fibers parametrized by points of the base) dictates necessarily restrictions on the morphisms in the category of bundles, but also the possible operations with bundles.
Fibered maps
If $\pi_i:E_i\to B_i$ are two bundles, $i=1,2$, then a morphism between the two bundles is a map between the total spaces, which sends fibers to fibers. Formally, such morphism is defined by a pair of maps $h:B_1\to B_2$ between the bases and $H:E_1\to E_2$ between the total spaces, such that $$ \pi_2\circ H=h\circ \pi_1, $$ and such that the restriction of $H$ on each fiber $F_b=\pi_1^{-1}(b_1)$ preserves the possible additional structures which may exist on the fibers $\pi_1^{-1}(b_1)$ and $\pi_2^{-1}(h(b_1))$. E.g., if $\pi_1,\pi_2$ are vector bundles, then the restriction of $H$ on each fiber should be a linear map.
Two bundles are equivalent (or isomorphic), if there exist two mutually inverse morphisms $(H,h)$ and $(H^{-1},h^{-1})$ between them in the two opposite directions.
Gauge transforms
A fibered self-map of the bundle which covers the identity map of the base is also called a gauge transformation, because of the connections with various field theories in Physics. In each trivializing chart $F\times U_\alpha$ it is defined by a map $G_\alpha:U_\alpha\to\operatorname{Hom}(F,F)$; two such local representations $G_\alpha,G_\beta$, both defined in a common domain $U_{\alpha\beta}$, are conjugated by the corresponding cocycle, $$ H_{\alpha\beta}\circ G_\beta=G_\alpha\circ H_{\alpha\beta}\qquad\forall \alpha,\beta. $$
Induced bundle
If $\pi:E\to B$ is a topological bundle and $h:B'\to B$ a continuous map, then one can construct the induced fibre bundle $\pi':E'\to B'$ with the same generic fiber $F$. By construction (pullback), the fibers of the new bundle, $\pi'^{-1}(b')$, coincide with the fibers $\pi^{-1}(h(b'))$ for all $b'\in B'$. Formally one defines the total space $E'$ as the subset of $E\times B'$ which consists of pairs $(x,b')$ such that $\pi(x)=h(b')$. Then the map $\pi':E'\to B'$ is well defined by the tautological identity $\pi'(x,b')=b'$. Simple checks show that this construction allows one to carry all additional fiber structures from one bundle to another[2].
If $B'\subseteq B$ and $h$ is the inclusion map, $h:B'\hookrightarrow B$, then the induced bundle is simply the restriction of $\pi$ on $B'$[3], usually denoted as $\pi|_{B'}$.
A section of a bundle $\pi:E\to B$ is a regular (continuous, smooth, analytic) selector map which chooses for each point $b\in B$ of the base a single element from the corresponding fiber $F_b=\pi^{-1}(b)$. Formally, a section is a map $s:B\to E$, such that $\pi\circ s:B\to B$ is the identity map. A frequent notation for the space of sections of the bundle $E$ is $\Gamma(E)$.
Examples. A "scalar" ($\Bbbk$-valued) function $f:B\to\Bbbk$ is a section of the trivial line bundle $\pi:\Bbbk\times B\to B$. A section of the tangent bundle of a manifold $M$ is called the vector field on $M$. A section of the cotangent bundle is a differential 1-form.
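A section of the trivial line bundle can be sketched directly (an illustration with arbitrary names, not from the article), checking the defining identity $\pi\circ s=\operatorname{id}_B$ pointwise:

    import math

    def pi(point):
        # projection of the trivial bundle R x B -> B
        v, b = point
        return b

    def s(b):
        # a section: chooses the value sin(b) in the fiber over b
        return (math.sin(b), b)

    print(all(pi(s(b)) == b for b in [0.0, 0.5, 1.0]))   # True: pi . s = id_B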
Not every bundle admits sections. For instance, the principal bundle associated with the tangent bundle $T\mathbb S^2$ of the 2-sphere admits no smooth sections (if it did, one would be able to construct a nonvanishing vector field on the 2-sphere, which is impossible).
The set of all sections forms a topological space with additional structures inherited from that on the generic fiber, e.g., sections of the vector bundle form a module over the ring of "scalar" functions.
↑ These conditions guarantee that $F$ is diffeomorphic to $G$: choosing any point $o\in F$ as the "origin", the map $G\to F$, $g\mapsto o\cdot g$ is a diffeomorphism.
↑ E.g., the pullback of a vector bundle is again a vector bundle etc.
↑ Formally it is more correct to say about restriction of $\pi$ on $E'=\pi^{-1}(B')\subseteq E$.
Fiberwise operations
For topological bundles with generic fibers having extra structure, almost every construction which makes sense for this structure can be implemented "fiberwise".
Example. Let $\pi:E\to B$ be a topological bundle with a generic fiber $F$, and let $A\subset F$ be a topological subspace. The map $\pi': E'\to B$ is a subbundle of $\pi$ if $E'\subset E$ is a subset and the trivializing maps $H_\alpha:\pi^{-1}(U_\alpha)\to F\times U_\alpha$ can be chosen in such a way that they map $\pi'^{-1}(U_\alpha)$ homeomorphically onto $A\times U_\alpha$. In other words, a subbundle of $\pi$ is a subspace $E'\subset E$ which is itself a bundle with respect to the restriction $\pi|_{E'}$.
A subbundle of the tangent bundle $TM$ of a smooth manifold is called a distribution of tangent subspaces.
Note. A subbundle of a trivial bundle may well be nontrivial.
Whitney sum of bundles
If $\pi_i:E_i\to B$, $i=1,2$, are two bundles with generic fibers $F_1,F_2$ over the same base, then one can construct a bundle $\pi$ with the generic fiber $F=F_1\times F_2$ over the same base. In the case of vector bundles one usually speaks of the direct sum, or Whitney sum, denoted by $\pi_1\oplus \pi_2$.
Intuitively this means that the fibers $\pi^{-1}(b)$ of the new bundle for all $b\in B$ are Cartesian products $\pi^{-1}_1(b)\times\pi^{-1}_2(b)\simeq F_1\times F_2=F$. Formally the construction goes through the intermediate step of the bundle $\pi'=\pi_1\times \pi_2$ with the total space $E'=E_1\times E_2$ and the base $B'=B\times B$: $$ \pi'(x_1,x_2)=(b_1,b_2),\qquad b_i=\pi_i(x_i)\in B,\quad x_i\in E_i. $$ The Whitney sum $\pi_1\oplus\pi_2$ is the restriction (see above) of the bundle $\pi'$ to the diagonal $B\simeq\{(b_1,b_2):\ b_1=b_2\}\subset B\times B=B'$.
Predictably, if both $\pi_1$ and $\pi_2$ are subbundles of some common ambient vector bundle $\varPi:\mathbf E\to B$, and the fibers $\pi_i^{-1}(b)\subset\varPi^{-1}(b)$ intersect only at the origin, then their sum $\pi_1\oplus\pi_2$ is isomorphic to the subbundle of $\varPi$ with the fibers $\pi_1^{-1}(b)+\pi_2^{-1}(b)$ for all $b\in B$.
In terms of the trivializing coordinates, if the matrix cocycles of the two vector bundles are $M^1_{\alpha\beta}(\cdot)$ and $M^2_{\alpha\beta}(\cdot)$, defined in the pairwise intersections $U_{\alpha\beta}=U_\alpha\cap U_\beta\subseteq B$, then the matrix cocycle associated with the Whitney sum is the cocycle of the block diagonal matrix functions $$ M_{\alpha\beta}(\cdot)=\begin{pmatrix}M^1_{\alpha\beta}(\cdot)&\\& M^2_{\alpha\beta}(\cdot)\end{pmatrix}:U_{\alpha\beta}\to \operatorname{GL}(d_1+d_2,\Bbbk),\tag{WS} $$ where $d_{1,2}$ are the dimensions (ranks) of the vector bundles $\pi_{1,2}$. Moreover, the Whitney sum can be directly built from the cocycle (WS) using the "patchworking" construction.
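For concreteness, a minimal numpy sketch (hypothetical transition matrices of ranks $d_1=1$ and $d_2=2$) verifying that the block-diagonal matrices (WS) again satisfy the cocycle identity $M_{\alpha\gamma}=M_{\alpha\beta}\circ M_{\beta\gamma}$:

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical transition matrices of two vector bundles (ranks d1=1, d2=2)
# on a triple intersection U_a ∩ U_b ∩ U_c, evaluated at one base point.
rng = np.random.default_rng(0)
M1_ab, M1_bc = rng.random((1, 1)) + 1, rng.random((1, 1)) + 1
M2_ab, M2_bc = rng.random((2, 2)) + np.eye(2), rng.random((2, 2)) + np.eye(2)
M1_ac, M2_ac = M1_ab @ M1_bc, M2_ab @ M2_bc   # cocycle identity on each summand

# Whitney-sum cocycle (WS): block-diagonal matrices of size d1 + d2.
W_ab = block_diag(M1_ab, M2_ab)
W_bc = block_diag(M1_bc, M2_bc)
W_ac = block_diag(M1_ac, M2_ac)

# The identity M_ab @ M_bc = M_ac is preserved blockwise.
assert np.allclose(W_ab @ W_bc, W_ac)
```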
Other constructions with bundles
Besides the Whitney sum, one can use most of the (linear algebraic) "continuous" functorial constructions to produce new bundles from existing ones. The formal way to do this is to apply the constructions in the trivializing charts and use the patchworking method to piece the results together. A partial list of such constructions is as follows:
Dual bundle $\pi^*:E^*\to B$ with the generic fiber being the dual vector space $F^*\simeq \R^{n*}$ and the matrix cocycle $\{(M_{\alpha\beta}^*)^{-1}\}$ (see the sketch after this list).
Tensor product $\pi_1\otimes\pi_2$ of two bundles $\pi_1,\pi_2$ (always over the same base) with the matrix cocycle $\{M^1_{\alpha\beta}\otimes M^2_{\alpha\beta}\}$;
The cocycle $\operatorname{Hom}(\pi_1,\pi_2)$[1] with the generic fiber being the space of linear operators from $F_1$ to $F_2$. The dual bundle is the particular case of this construction, $\pi^*=\operatorname{Hom}(\pi,\epsilon)$, where $\epsilon:\Bbbk\times B\to B$ is the trivial scalar bundle. As with the linear spaces, $\operatorname{Hom}(\pi_1,\pi_2)=\pi_1^*\otimes\pi_2$.
The exterior products, e.g., powers $\pi\land\cdots\land\pi$, including the determinant bundle (the highest exterior power). Especially important are wedge powers of the tangent and cotangent bundle, $T^pM=\bigwedge^pTM$, resp., ${T^*}^q M=\bigwedge^q T^*M$ of a smooth manifold $M$: sections of these bundles are $p$-polyvector fields, resp., exterior (differentiable) $q$-forms.
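Two of these cocycles admit an equally short numerical illustration (numpy, hypothetical matrices): the tensor-product cocycle is the Kronecker product in coordinates, and the inverse-transpose cocycle of the dual bundle is exactly what preserves the pairing between a fibre and its dual across charts:

```python
import numpy as np

rng = np.random.default_rng(1)
M_ab = rng.random((2, 2)) + np.eye(2)   # a hypothetical transition matrix
N_ab = rng.random((3, 3)) + np.eye(3)

# Tensor product bundle: cocycle is the Kronecker product M ⊗ N,
# acting on the 6-dimensional fibre F1 ⊗ F2.
T_ab = np.kron(M_ab, N_ab)

# Dual bundle: cocycle is the inverse transpose, so that the pairing
# <ξ, v> between a fibre and its dual is chart-independent.
D_ab = np.linalg.inv(M_ab).T
xi, v = rng.random(2), rng.random(2)
assert np.isclose(xi @ v, (D_ab @ xi) @ (M_ab @ v))
```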
Clearly, this approach works (with necessary minimal modifications) also in the categories of bundles with other structure of the generic fiber.
Vector bundles over differentiable manifolds may carry a special geometric structure, called a connection. In terms of these connections one can introduce certain cohomology classes of the base manifold, which in fact depend only on the bundle and not on the connection.
Connections on bundles
Although the fibers $F_b=\pi^{-1}(b)$ of a bundle vary "in a regular way" in the total space $E$ together with the base point $b\in B$, in general there is no canonical way to compare (identify) points on two (even close) fibers[2]. One can introduce an additional structure on the bundle which allows, for any two fibers $F_{b_0},F_{b_1}$ over two different points $b_0,b_1\in B$ connected by a piecewise-smooth curve $\gamma:[0,1]\to B$, $\gamma(0)=b_0$, $\gamma(1)=b_1$, to construct a linear[3] parallel transport map $T_\gamma:F_{b_0}\to F_{b_1}$ describing the way vectors from the fibers are moved along the curve $\gamma$. The infinitesimal generator of this parallel transport construction is called the covariant derivative and is formalized by a family of operators allowing one to differentiate sections of the vector bundle in the direction of the velocity vector $w=\dot \gamma(0)$. The result of parallel transport along a closed loop with $\gamma(0)=\gamma(1)$ may well differ from the identity, and its quantitative measure is the curvature of the connection. Flat connections (with zero curvature) are similar to coverings: they admit a special class of locally constant sections.
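For a concrete illustration (a minimal numerical sketch, assuming the round metric on the unit 2-sphere and its Levi-Civita connection), transporting a tangent vector around a circle of latitude returns it rotated by the enclosed solid angle $2\pi(1-\cos\theta_0)$; this holonomy is precisely what the curvature measures:

```python
import numpy as np

theta0 = np.pi / 3        # polar angle of the latitude circle
n = 200_000
dt = 1.0 / n

# Parallel transport along phi(t) = 2*pi*t at fixed theta = theta0, using the
# sphere's Christoffel symbols Gamma^th_{ph ph} = -sin th cos th, Gamma^ph_{th ph} = cot th:
#   dv^theta/dt =  2*pi*sin(theta0)*cos(theta0) * v^phi
#   dv^phi/dt   = -2*pi*(cos(theta0)/sin(theta0)) * v^theta
v = np.array([1.0, 0.0])  # components (v^theta, v^phi) of the transported vector
for _ in range(n):
    dv = np.array([2 * np.pi * np.sin(theta0) * np.cos(theta0) * v[1],
                   -2 * np.pi * (np.cos(theta0) / np.sin(theta0)) * v[0]])
    v = v + dt * dv       # explicit Euler step

# Compare start and end in the orthonormal frame (e_theta, sin(theta0)*e_phi);
# Gauss-Bonnet predicts a rotation by the enclosed area 2*pi*(1 - cos(theta0)).
w = np.array([v[0], np.sin(theta0) * v[1]])
holonomy = np.arctan2(w[1], w[0]) % (2 * np.pi)
print(holonomy, 2 * np.pi * (1 - np.cos(theta0)))  # both ≈ pi for theta0 = pi/3
```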
Riemannian geometry
This huge subject covers the local and global study of manifolds whose tangent bundle is equipped with a Euclidean structure (a positive definite quadratic form, also known as the metric tensor). This additional structure allows one to define the lengths of smooth arcs on the manifold, introduce the extremals of this length (geodesics), etc.
In Riemannian geometry (of manifolds whose tangent spaces are equipped with a scalar product) the isometric parallel transport can be introduced in a unique way[4], leading to the notion of the Levi-Civita connection. For this connection the abstract curvature is closely related to the Gaussian curvature.
Characteristic classes
Using connections on real and complex bundles, one can define special cohomology classes of the manifold $B$ (with coefficients in $\Z_2$ or $\Z$) which turn out to be independent of the specific connection used for their construction and which measure different aspects of the nontriviality of vector bundles. These classes behave naturally with respect to the pullback operation (induced connections) and obey simple rules for Whitney sums. They are called characteristic classes; there are four main types: the Stiefel-Whitney, Pontryagin, Euler, and Chern classes.
↑ Sometimes the notation $\operatorname{Hom}(E_1,E_2)$ is used.
↑ An important exception is the bundles with a discrete fiber, where continuity suffices to establish one-to-one correspondence between two fibers over two sufficiently close points $b_1,b_2\in B$ in the base.
↑ For vector bundles with special structure, e.g., Riemannian bundles, the parallel transport is usually assumed to be compatible with this structure, i.e., an isometry.
↑ M. Berger refers to this uniqueness as a "miracle" in [B].
The classical expositions are still the most popular sources for references.
[Sd] N. E. Steenrod, The topology of fibre bundles, Princeton Univ. Press (1951), reprinted in 1999. MR1688579.
[N] K. Nomizu, Lie groups and differential geometry, The Mathematical Society of Japan, 1956. MR0084166
[K] J.-L. Koszul, Lectures on fibre bundles and differential geometry. With notes by S. Ramanan. Tata Institute of Fundamental Research Lectures on Mathematics and Physics, 20, 1965. Reprinted by Springer-Verlag, Berlin, 1986. MR0901943
[H] D. Husemoller, Fibre bundles, McGraw-Hill (1966). Third edition. Graduate Texts in Mathematics, 20. Springer-Verlag, New York, 1994. MR1249482
[BC] R. L. Bishop, R. J. Crittenden, Geometry of manifolds, Acad. Press (1964), reprint: AMS Chelsea Publishing, Providence, RI, 2001. MR1852066.
[Sb] S. Sternberg, Lectures on differential geometry, Prentice-Hall (1964). Second edition, Chelsea Publishing Co., New York, 1983. MR0891190.
[KN] S. Kobayashi, K. Nomizu. Foundations of differential geometry, Vols. I, II. Reprint of the 1963/1969 original. John Wiley & Sons, Inc., New York, 1996. MR1393940, MR1393941.
[MS] J. W. Milnor, J. D. Stasheff, Characteristic classes, Annals of Mathematics Studies, No. 76. Princeton University Press, Princeton, N. J.; University of Tokyo Press, Tokyo, 1974. MR0440554.
[G] C. Godbillon, Géométrie différentielle et mécanique analytique. Hermann, Paris 1969, 183 pp. MR0242081
[AVL] D. V. Alekseevskij, A. M. Vinogradov, V. V. Lychagin, Basic ideas and concepts of differential geometry Geometry, I, 1–264, Encyclopaedia Math. Sci., 28, Springer, Berlin, 1991. MR1300019
[SCL] S. S. Chern, W. H. Chen, K. S. Lam, Lectures on differential geometry, Series on University Mathematics, 1. World Scientific Publishing Co., Inc., River Edge, NJ, 1999, MR1735502
[B] M. Berger, A panoramic view of Riemannian geometry, Springer-Verlag, Berlin, 2003. MR2002701
Bundle. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bundle&oldid=50960
A multilayer approach to multiplexity and link prediction in online geo-social networks
Desislava Hristova (ORCID: orcid.org/0000-0001-5618-7327), Anastasios Noulas, Chloë Brown, Mirco Musolesi & Cecilia Mascolo
Online social systems are multiplex in nature as multiple links may exist between the same two users across different social media. In this work, we study the geo-social properties of multiplex links, spanning more than one social network and apply their structural and interaction features to the problem of link prediction across social networking services. Exploring the intersection of two popular online platforms - Twitter and location-based social network Foursquare - we represent the two together as a composite multilayer online social network, where each platform represents a layer in the network. We find that pairs of users connected on both services, have greater neighbourhood similarity and are more similar in terms of their social and spatial properties on both platforms in comparison with pairs who are connected on just one of the social networks. Our evaluation, which aims to shed light on the implications of multiplexity for the link generation process, shows that we can successfully predict links across social networking services. In addition, we also show how combining information from multiple heterogeneous networks in a multilayer configuration can provide new insights into user interactions on online social networks, and can significantly improve link prediction systems with valuable applications to social bootstrapping and friend recommendations.
Online social media has become an ecosystem of overlapping and complementary social networking services, inherently multiplex in nature, as multiple links may exist between the same pair of users [1]. Multiplexity is a well studied property in the social sciences [2] and it has been explored in social networks from Renaissance Florence [3] to the Internet age [4]. Despite the broad contextual differences, multi-channel ties are consistently found to exhibit greater intensity of interactions across different communication channels, which is related to a stronger social bond [2, 5]. In this work, we explore how we can leverage multiplex tie strength through the geographic and social interactions of users and apply it to the classic networks problem of link prediction [6].
Link prediction systems are key components of social networking services due to their practical applicability to friend recommendations and social network bootstrapping, as well as to understanding the link generation process. Link prediction is a well-studied problem, explored in the context of both OSNs and location-based social networks (LBSNs) [6–9]. However, only very few link prediction works tackle multiple networks at a time [10–13], while most link prediction systems only employ features internal to the network under prediction, without considering additional link information from other OSNs.
Recently, empirical models of multilayer networks have emerged to address the multi-relational nature of social networks [1, 14]. In such models, interactions are considered as layers in a systemic view of the social network. Despite the observable multilayer nature of online social networks (OSNs) as a system [1, 15, 16], there is little empirical work exploiting data-driven applications in the domain of multilayer OSNs, especially with respect to how location-based and social interactions are coupled in the online social space [10, 17]. Most empirical multilayer social network literature considers multiple dimensions of the same platform [14, 18], whereas we are interested in interactions across different platforms. In the few exceptions where multiple platforms are considered [19], the same properties of social interactions are examined across platforms, whereas our interest lies in using heterogeneous interactions from different platforms (both social and geographic) and their multiplex properties.
Media multiplexity [4] is the principle that tie strength is observed to be greater when the number of media channels used to communicate between two people is greater (higher multiplexity). In [2] the authors studied the effects of media use on relationships in an academic organisation and found that those pairs of participants who utilised more types of media (including email and videoconferencing) interacted more frequently and therefore had a closer relationship, such as friendship. More recently, multiplexity has been studied in light of multilayer communication networks, where the intersection of the layers was found to indicate a strong tie, while single-layer links were found to denote a weaker relationship [5]. The strength of social ties is an important consideration in friend recommendations and link prediction [20], and we employ the previously understudied tie multiplexity properties of OSNs to such ends in this work.
In this work, we explore multilayer networks with heterogeneous layers and apply media multiplexity theory to study the social and geographical features of pairs of users and their application to link prediction across online social networks. Unlike previous work [12, 18, 19], we frame the multilayer link prediction problem across online social network platforms and apply media multiplexity as a measure of tie strength, showing its applicability to link prediction in the geo-social domain. We find that pairs of users with links on both Twitter and Foursquare exhibit significantly higher interactions on both social networks than those pairs of users with a link on just one or the other, in terms of number of mentions and colocations within the same venues, as well as a lower distance and higher number of common hashtags in their tweets. In our evaluation, we use these interaction features to predict Twitter links from Foursquare features and vice versa, and we achieve this with AUC scores up to 0.86 on the different datasets, which is just as good as predicting links internal to the network on Twitter and almost as good for Foursquare (\(\mathrm{AUC}=0.86\) for Twitter and \(\mathrm{AUC}=0.88\) for Foursquare). In predicting links which span both networks, we achieve the highest AUC score of 0.88 from our multilayer feature set, which is higher than the results for each single network, suggesting that multilayer frameworks can be a useful tool for social bootstrapping and friend recommendations due to their comprehensive perspective on the online social 'ecosystem'.
Multilayer online social network
The social network of human interactions is usually represented by a graph \(G(V,E)\), where the nodes in set V represent people and the edges E represent interactions between them. While this representation has been immensely helpful for uncovering many social phenomena, it is focused on a single-layer abstraction of human relations. In this section, we describe a model which represents link multiplexity by supporting multiple friendship and interaction links across heterogeneous online social network platforms.
We represent the parallel interactions between nodes across OSNs as a multilayer network \(\mathcal{M}\), or an ensemble of M graphs, each corresponding to a distinct layer as \({\mathcal{M}} = \{G^{1},\ldots,G^{\alpha},\ldots,G^{M}\}\). We indicate the α-th layer of the multilayer as \(G^{\alpha}(V^{\alpha}, E^{\alpha})\), where \(V^{\alpha}\) and \(E^{\alpha}\) are the sets of vertices and edges of the graph \(G^{\alpha}\). Figure 1(A) illustrates the concept by showing how two graphs \(G^{\alpha}\) and \(G^{\beta}\) are coupled by common neighbours, while some links may be present or absent across the two graphs. As this represents the general case of online social networks, members need not be present at all layers and the multilayer network is not limited to two layers. While each platform can be explored separately as a network in its own right, this does not capture the dimensionality of online social life, which spans across many different platforms.
Multilayer model. Multilayer model of OSNs (Panel Figure A) with different link types (Panel Figure B): I. Multiplex link; II. Single-layer link on \(G^{\alpha}\); and III. Single-layer link on \(G^{\beta}\).
Figure 1(B) illustrates three link types for the case of a two layer network. Firstly, we define a multiplex link between two nodes i and j as a link that exists between them at least in two layers \(\alpha, \beta\in\mathcal{M}\). Second, we consider that a single-layer link between two nodes i and j exists if the link appears only in one layer in the multilayer social network. In systems with more layers, multiplexity can take on a value depending on how many layers the link is present on [5]. Since our model is applied to online social media, the number of layers can be expected to remain in the single digits due to cognitive limits in human interaction [21]. This will ensure that with each additional layer, the value of link between two individuals increases and information is added to their tie strength [2].
The multilayer neighbourhood
Following our definition of a multilayer online social network, we can extend the ego network of a node to a multilayer neighbourhood. While the simple node neighbourhood is the collection of nodes one hop away from the ego, we define the multilayer global neighbourhood (denoted by GN) of a node i as the total number of unique neighbours across network layers:
$$ \Gamma_{GNi} = \bigl\{ j \in V^{\mathcal{M}} : e_{i,j} \in E^{\alpha\cup\beta}\bigr\} , $$
where given layer α and layer β, we denote the set of all links present in the multilayer network as \(E^{\alpha\cup\beta}\). This allows us to reason about the full global connectivity across layers of the system.
We can similarly define the core neighbourhood (denoted by CN) of a node i across layers of the multilayer network as:
$$ \Gamma_{CNi} = \bigl\{ j \in V^{\mathcal{M}} : e_{i,j} \in E^{\alpha\cap\beta}\bigr\} , $$
where we define the set of multiplex links as \(E^{\alpha\cap\beta}\). Although we weigh all edges equally, we could also take into account the level of multiplexity in geo-social systems with more layers, similarly to [5]. We can further consider the set of all single-layer links on layer α only as \(E^{\alpha\backslash\beta}\). This simple formulation allows for powerful extensions of existing metrics of neighbourhood similarity. We can consider the Jaccard similarity of two users i and j's global neighbourhoods as:
$$ \textit{sim}_{GNij} = \frac{|\Gamma_{GNi} \cap\Gamma_{GNj}|}{|\Gamma_{GNi} \cup\Gamma_{GNj}|}, $$
where the number of common friends is divided by the number of total friends of i and j. The same can be done for the core degree of two users.
We can further consider the multilayer Adamic/Adar index for link likelihood [22], which takes into account the overlap of two neighbourhoods based on the popularity of common friends (originally through web pages) in a single-layer network as:
$$ aa\_{\textit{sim}}_{GNij} = \sum_{z \in\Gamma_{GNi} \cap\Gamma_{GNj}} \frac{1}{\log(|\Gamma_{GNz}|)}, $$
where it is applied to the global common neighbours between two nodes but can equally be applied to their core neighbourhoods. Both the Jaccard similarity and the Adamic/Adar index have been shown to be effective in solving the link prediction problem in both social and location-based networks [6, 9]. In the present work, we aim to show their applicability to the multilayer space in predicting online social links across and between Twitter and Foursquare - two heterogeneous social networking platforms.
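These neighbourhood measures reduce to a handful of set operations; the following minimal Python sketch (hypothetical two-layer toy data) computes the global and core neighbourhoods together with the multilayer Jaccard and Adamic/Adar scores:

```python
import math

# Each layer is an adjacency dict: node -> set of neighbours (hypothetical toy data).
twitter = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
foursquare = {"a": {"b", "d"}, "b": {"a"}, "c": {"d"}, "d": {"a", "c"}}

def global_nbrs(layers, i):
    """Union of i's neighbourhoods across all layers (Gamma_GN)."""
    return set().union(*(g.get(i, set()) for g in layers))

def core_nbrs(layers, i):
    """Intersection of i's neighbourhoods across all layers (Gamma_CN)."""
    return set.intersection(*(g.get(i, set()) for g in layers))

def jaccard(layers, i, j, nbrs=global_nbrs):
    """Common neighbours over total neighbours of i and j."""
    ni, nj = nbrs(layers, i), nbrs(layers, j)
    return len(ni & nj) / len(ni | nj) if ni | nj else 0.0

def adamic_adar(layers, i, j, nbrs=global_nbrs):
    """Common neighbours weighted down by their (multilayer) popularity."""
    common = nbrs(layers, i) & nbrs(layers, j)
    return sum(1.0 / math.log(len(nbrs(layers, z)))
               for z in common if len(nbrs(layers, z)) > 1)

layers = [twitter, foursquare]
print(jaccard(layers, "a", "b"), adamic_adar(layers, "a", "b"))
```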
Twitter and Foursquare are two of the most popular social networks, both with respect to research efforts and user base. They have distinct broadcasting functionalities - microblogging and venue check-ins. While Twitter can reveal a lot about user interests and interactions, Foursquare check-ins provide a proxy for human mobility. In Foursquare, users check in to venues that they visit through their location-enabled devices, and share their visit of a place with their connections. Foursquare is two years younger than Twitter and its broadcasting functionality is exclusively for mobile users (50M to dateFootnote 1), while also 80% of Twitter's 284M users are active on mobile.Footnote 2 Twitter generally allows anyone to 'follow' and be 'followed', where followers and followed do not necessarily know one another. On the other hand, Foursquare supports undirected links, referred to as 'friendship' in the service. A similar undirected relationship can be constructed from Twitter, where a link can be considered between two users if they both follow each other reciprocally [23]. Since we are ultimately interested in predicting friendship, we consider only reciprocal Twitter links throughout this work.
Our dataset was crawled from the public Twitter and Foursquare APIs between May and September 2012 for three major US cities, where tweets and check-ins were downloaded for users who had checked in during that time, and where those check-ins were shared on Twitter. We initially identified Foursquare users on Twitter by hashtags that pertain to the Foursquare service and then continuously crawled their tweets over the four month period. Therefore, our dataset contains a subset of Foursquare users who publicly share their check-ins via the Twitter service, who are estimated to be 20-25% of the Foursquare user base [24]. This allows us to study the intersection of the two networks through users who have accounts and are active on both Twitter and Foursquare. Tweets were divided into check-ins and tweets depending on whether the content of the tweet was a Foursquare check-in or not. A tweet is in the form (userId, mentions, hashtags), where we do not consider the actual content of the tweet but only if it mentions another user or identifies with a topic through Twitter's hashtag (#) paradigm of topics. Check-ins are in the form (userId, venueId, coordinates, timestamp) where we consider the temporal and spatial aspects of the check-in and not its semantic properties. At the end of the period, we also crawled the social network of each user in our dataset on both platforms by obtaining the user ids of their followers and who they are following as well as Foursquare friends of up to one hop in the network. Our dataset does not contain bots or other automated accounts as only real users post content through Foursquare due to its mobile application context.
Table 1 shows the details for each city, in terms of activity and venues, multilayer edges and degrees for each network, where \(E^{T \cap F}\) denotes the set of edges, which exist on both Twitter and Foursquare, \(E^{T \backslash F}\) and \(E^{F \backslash T}\) are the sets of edges on Twitter only and Foursquare only respectively.
Table 1 Dataset properties: number of users (nodes); number of multiplex links (edges); number of Twitter and Foursquare only edges; average global and core degrees; activity and venues per city.
Properties of multiplex links
Our first goal is to gain insight into the geo-social structural and interaction properties of multiplex links in the multilayer social network and how they differ from other link types. We study the three types of links as described in our multilayer model above: multiplex links across both Twitter and Foursquare, which we denote as tf for simplicity; single-layer links on Foursquare only (denoted as fo); single-layer links on Twitter only (denoted as to), and compare these to unconnected pairs of users (denoted as na). We use the insight gained from the discriminative power of each feature to interpret the results of our link prediction tasks defined in the following section.
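In code, assigning a pair of users to one of these four groups amounts to a two-bit lookup over the undirected edge sets of the two layers; a minimal sketch with hypothetical toy edges:

```python
def link_type(i, j, twitter_edges, foursquare_edges):
    """Classify a pair by multiplexity: 'tf' (both layers), 'to' (Twitter only),
    'fo' (Foursquare only) or 'na' (unconnected). Edge sets hold frozensets."""
    pair = frozenset((i, j))
    on_t, on_f = pair in twitter_edges, pair in foursquare_edges
    return {(True, True): "tf", (True, False): "to",
            (False, True): "fo", (False, False): "na"}[(on_t, on_f)]

twitter_edges = {frozenset(("a", "b")), frozenset(("b", "c"))}   # hypothetical
foursquare_edges = {frozenset(("a", "b")), frozenset(("a", "d"))}
print(link_type("a", "b", twitter_edges, foursquare_edges))      # 'tf'
```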
Link multiplexity and structural similarity
The number of common friends between two individuals has been shown to be an important indicator of a link in social networks [6]. Moreover, the neighbourhood overlap weighted by the popularity of common neighbours has been shown to be a good predictor of friendship in online networks [22]. Figure 2 shows the cumulative distribution of the Adamic/Adar index of neighbourhood similarity across the various single and multilayer configurations of the networks at hand and each of the four link types. Figure 2(A) and (B) shows the cumulative distribution over the single-layer configurations of Twitter and Foursquare respectively, while Figure 2(C) and (D) shows the distribution over the core and global multilayer configurations. These plots allow us to reason about the fraction of pairs of users with an Adamic/Adar index greater than a certain threshold, which relates to the way that features are ranked in a machine learning framework.
CCDF of the Adamic/Adar overlap metric. Complementary cumulative distribution function of the log Adamic/Adar index for the different network configurations, grouped by link type - Twitter overlap (A), Foursquare overlap (B), Global overlap (C), Core overlap (D). Each figure shows the fraction of links with an \(aa_{\textit{sim}}\) value greater than x.
Each figure shows the fraction of Adamic/Adar indices greater than the given threshold. In Figure 2(A) we can see that 25% of Twitter user pairs (to) have an overlap of $10^{0.3}$ or greater, while 25% of multiplex tie pairs (tf) have $10^{1}$ or higher. Those pairs that are not connected (na) and those which are only connected on Foursquare (fo) have a similarly lower Adamic/Adar threshold of $10^{0}$. The results over different fractions of user pairs remain consistent, where multiplex tie pairs (tf) always have a higher Adamic/Adar index threshold than Twitter only (to), Foursquare only (fo) and no link (na) pairs, based on the CCDF curves. These results are analogous for the Foursquare network, where we have an Adamic/Adar index of approximately $10^{1}$ for 25% of multiplex pairs, closely followed by Foursquare only (fo) pairs and then Twitter only (to) and na user pairs with a value of $10^{0}$. From the two single-layer configurations, we can see that multiplex links always exhibit the highest structural similarity, followed by links native to the platform, then exogenous links, and finally unconnected user pairs.
With respect to our multilayer configurations, we can see that user pairs in Figure 2(C), where the global connectivity between the two services is considered, have a similar arrangement of curves to the single-layer configurations. The main differences, however, come from the greater distinction between non-present links (na) and single-layer links (fo and to) than in the single-network configurations, where exogenous links and non-existent links had similar distributions. In particular, we can see that in Figure 2(C) 50% of user pairs which are not connected have an Adamic/Adar index of $10^{-4}$ or greater, whereas 50% of single-layer links (fo and to) have $10^{-3}$ or higher, and finally multiplex link pairs (tf) have an index of $10^{-3.5}$ or greater. On the other hand, in the core configuration in Figure 2(D) we can see a division between multiplex links and all other link types, where 25% of all multiplex tie pairs (tf) have an index of approximately $10^{1}$ or greater while all other link types have a lower threshold of $10^{0}$ or higher. While this is somewhat expected, it shows that the core configuration is a good proxy for multiplex ties. In agreement with previous studies of tie strength [20], we observe that multiplex links share greater structural similarity than other link types across network configurations, and this will be a useful property in our link prediction problem.
Link multiplexity and interaction
The volume of interactions between users is often used as a measure of tie strength [25]. In this section we compare how the volume of geo-social interactions on Twitter and Foursquare discriminates between the various link types. We extract a number of interaction features from the two services, which we will examine in the following section in light of their predictive power, in addition to the structural features analysed above. These interaction features are:
Number of mentions: The number of instances in our dataset in which user i has mentioned user j on Twitter during the period. Mentions include direct tweets and retweets mentioning another user. Any user on Twitter can mention any other user and does not have to be following that user in the social network. This allows us to measure this feature across pairs which do not have a link on any network (na). Twitter users have been shown to exhibit favouritism for a small group of their contacts when it comes to mentions (retweets) [23].
Number of common hashtags: Similarity between users on Twitter can be captured through common interests. Topics are commonly expressed on Twitter with hashtags using the # symbol. We therefore measure the number of instances in which user i and user j have posted a tweet using the same hashtag. Similar individuals have been shown to have a greater likelihood of having a tie through the principles of homophily [26].
Number of colocations: The number of times two users have checked into the same venue within a given time window. In order to reduce false positives, we consider a shorter time window of 1 hour only. Two users who appear at the same place, at the same time, on multiple occasions have a higher likelihood of knowing each other (and therefore of having a link on social media). We weight each colocation by the popularity of a place in terms of total user visits, to reduce the probability that a colocation is by chance at a large hub venue such as an airport or train station. The importance of colocations has been highlighted in discovering social ties as well as place-focused communities [27]; a computational sketch of this feature follows this list.
Distance: Human mobility and distance play an important role in the formation of links, both online and offline, and have been shown to be highly indicative of social ties and informative for link prediction [28]. We calculate the distance between the geographic coordinates of two users' most frequent check-in locations as the Haversine distance, the most common measure of great-circle spherical distance: \(\textit{dist}_{ij} = \textit{haversine} (\textit{lat}_{i},\textit{lon}_{i}, \textit{lat}_{j},\textit{lon}_{j})\), where the coordinate pairs for \(i,j\) are those of the places where users with more than two check-ins have checked in most frequently, equivalent to the mode in the multiset of the venues where they have checked in. This allows us to minimise data loss motivated by the typical long-tail distribution of activities shown in empirical studies of Foursquare [24], while increasing the probability that a most frequent location will emerge, similar to previous related work in the field [29–31].
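A compact Python sketch of the two spatial features (haversine distance and time-windowed, popularity-weighted colocation counting, as described above; the check-in data is hypothetical):

```python
from collections import Counter
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Check-ins as (user, venue, unix_time) tuples; hypothetical toy data.
checkins = [("u1", "v1", 0), ("u2", "v1", 1800), ("u1", "v2", 90000),
            ("u2", "v2", 91000), ("u3", "v1", 5000)]

venue_popularity = Counter(v for _, v, _ in checkins)   # total visits per venue

def colocations(checkins, i, j, window=3600):
    """Co-occurrences of i and j at the same venue within `window` seconds,
    each weighted down by the venue's overall popularity."""
    ci = [(v, t) for u, v, t in checkins if u == i]
    cj = [(v, t) for u, v, t in checkins if u == j]
    return sum(1.0 / venue_popularity[vi]
               for vi, ti in ci for vj, tj in cj
               if vi == vj and abs(ti - tj) <= window)

print(colocations(checkins, "u1", "u2"))   # 1/3 (at v1) + 1/2 (at v2) ≈ 0.83
```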
We additionally consider two geo-social features, which merge information from the Twitter social network and the Foursquare location network. In order to capture the tie strength between a pair of users in the multilayer network, we consider their similarity based on the social layer, or the number of common hashtags, denoted by \({\textit{sim}}_{ij}\) and their spatial similarity, or the distance between their most frequented venues on Foursquare, denoted by \(\textit{dist}_{ij}\). We draw inspiration from gravity models in transportation studies where the attraction between two entities is proportional to the importance of their interaction over their distance [32]. In a similar manner, we aim to identify such an attraction force in the formation of links. Firstly, we define the global similarity as the Twitter similarity over Foursquare distance as:
$$ \textit{sim}_{GNij} = \frac{\textit{sim}_{ij}^{a}}{\textit{dist}_{ij}^{b}}, $$
where the exponents \(a,b\) are chosen based on the context at hand. In our case, a is the potential for the similarity measure to reflect a reciprocal link between two users, whereas b is a parameter related to how well connected the two venues are and therefore how significant the distance between them is, similar to the gravity model's original use in transportation [32]. In the present work, we set the exponents \(a=2, b=1\) after optimising for the exponents that maximise the difference between the median values of multiplex links (tf) and no link (na). Figure 3 shows how these results vary across different exponents a and b in the range \([1,2]\).
Exponent matrix for \(\pmb{sim_{GNij}}\) . Colour gradient indicates the optimal exponents in terms of difference maximisation between the medians of the multiplex and non-existent link types - \(|Md_{tf}-Md_{na}|\).
We additionally construct a feature which captures the complete interaction across layers of social networks:
$$ \textit{int}_{GNij} = \sum_{\alpha=1}^{M} k^{\alpha}\bigl|\textit{int}^{\alpha}_{ij}\bigr|, $$
where int can be any type of interaction between i and j in layer α; interactions are summed across layers and weighted by a constant k for each layer. This allows for adjustments based on the weighted importance of an interaction, specific to the context of the measurement. In our case we consider mentions and colocations as the interactions across layers and, after experimenting with a number of different coefficients, a coefficient \(k=1\) for both layers, as we would like to maintain the empirical properties of the interactions.
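Under these parameter choices (\(a=2\), \(b=1\), \(k=1\) per layer), both geo-social features reduce to one-liners; a hedged Python sketch (the zero-distance guard eps is our own assumption, not specified above):

```python
def gravity_similarity(common_hashtags, distance_km, a=2, b=1, eps=1e-3):
    """Twitter similarity over Foursquare distance, in the spirit of
    gravity models; eps guards against zero distance (an assumption)."""
    return (common_hashtags ** a) / (max(distance_km, eps) ** b)

def multilayer_interaction(interactions_per_layer, weights=None):
    """Sum of |interactions| across layers, weighted by per-layer k."""
    weights = weights or [1] * len(interactions_per_layer)
    return sum(k * abs(n) for k, n in zip(weights, interactions_per_layer))

# e.g. 5 common hashtags at 2.5 km; 12 mentions plus 3 weighted colocations:
print(gravity_similarity(5, 2.5), multilayer_interaction([12, 3]))
```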
In Figure 4, we observe the four types of spatial and social interaction on the two social networking services as well as the two geo-social features in the order in which they were presented. Each box-and-whiskers plot represents an interaction between multiplex links (tf), Twitter only (to), Foursquare only (fo), and unconnected pairs (na) on the x axis. On the y axis we can observe the distribution divided in four quartiles, representing 25% of values each. The dark line in the middle of the box represents the median of the distribution, while the dots are the outliers, where the definition for an outlier is a value which is less than the first quartile or greater than the third quartile by more than 1.5 times the interquartile range between quartile 3 and 1. The 'whiskers' represent the top and bottom quartiles, while the boxes are the middle quartiles of the distribution.
Interaction features' distribution for each link type. Panel Figure A-C show the distributions of Twitter mentions (A) Common hashtags (B), and Number of colocations (C) in log scale. Panel Figure D shows the distribution of distance in km between the home locations of users according to the type of link they have (top 10% of distances are excluded for figure readability), and Figure E and F show the distribution of the multilayer similarity and interaction features.
In terms of Twitter mentions (Figure 4(A)), multiplex ties (tf) exhibit higher values of mentions than any other group, including the Twitter only group (to), with a median value of $10^{1}$ and top-quartile values above $10^{4}$. Pairs of users connected only on Foursquare (fo) do not typically mention each other on Twitter although this is made possible by the service. On the other hand, mentions are just as common between users who are not connected on any network (na) as between those who are connected on both (tf), which may be a result of mentioning celebrities and other commercial accounts. This, however, is not the case for hashtags (Figure 4(B)), where we find that almost all unconnected users share 10 hashtags or fewer, with the exception of outliers. While mentions are more discriminative between multiplex links (tf) and single-layer connectivity (to and fo), hashtags are better at distinguishing between links and non-links (na) in terms of median values.
With regard to Foursquare spatial interaction in Figure 4(C) and (D), multiplex ties (tf) have the highest probability of multiple colocations with a median value of $10^{-3.8}$. Despite being weighted by the popularity of a venue, values in the top quartile of unconnected pairs (na) are relatively high with respect to other link types. However, in terms of median values there is still a distinction between the different levels of multiplexity which each link type represents. On the other hand, while distance (Figure 4(D)) does not vary much in terms of median values for the different link types, based on the top quartiles of the distributions across link types, it appears that Foursquare only pairs (fo) are more likely to frequent locations close to each other, closely followed by multiplex link pairs (tf), where distances for both are below 20 km. Twitter only (to) and unconnected pairs frequent locations similarly further away. This indicates that both Foursquare spatial features are better at distinguishing multiplex links and native Foursquare links than other link types, based on the distributions observed.
In Figure 4(E) and (F) we can compare the geo-social features we defined above to the single-layer social and geographic features observed. Firstly, we observe the distribution of the \(\textit{sim}_{GNij}\) measure integrating similarity and distance as factors of attraction between pairs of users. We can distinguish between link types mainly based on the maximum value in the top quartile of the distributions in Figure 4(E), where we observe that the maximum values for multiplex links are higher than any other link type (over 7.5), whereas the maximum value for unconnected pairs is approximately 4 while the median is 0. This shows that only values with low similarity and high distance fall below 0, whereas most pairs of users have less negligible similarity where values around 1 indicate a balance between distance and similarity.
In Figure 4(F) the distinction between different link types in the distributions of values is more striking than for any of the single-layer features. We can see that each median value is significantly different - multiplex links (tf) are the highest with a median of $10^{1.5}$, followed by to links ($10^{0}$), fo links ($10^{-1.5}$) and finally non-present links ($10^{-4}$). This satisfies two desirable properties for link prediction - distinct thresholds between link types, and a discriminative threshold between the non-existent links (na) and all other link types, on which to base binary decisions on the presence/absence of a link.
Multilayer approach to link prediction
The problem of link prediction in online social networks has been actively researched in the past decade, following its ignition by the seminal work of Liben-Nowell and Kleinberg [6]. Since then, it has been applied to various platforms and services. For instance, in [9] the authors exploit place features in location-based services to recommend friendships, and in a similar spirit the authors in [33] show how using both location and social information from the same network significantly improves link prediction, while in [34] a new model based on supervised random walks is proposed to predict new links in Facebook. Link prediction has also been approached in the multidimensional setting [12] and in multi-relational networks [13]; however, these works build on features that are endogenous to the system that hosts the network of users.
Drawing upon these works, we train and test on heterogeneous and fundamentally different network layers from two distinct platforms - social network Twitter and location-based social network Foursquare - by mining features from both. Our approach differs in that it frames the link prediction task across layers in the context of multilayer networks, rather than partitions of the same network. Having empirically shown the value of the different features in distinguishing between different link types above, here we approach the question of how this information can be used to predict links across layers of social networks. We evaluate the likelihood of forming a social tie as a process that depends on a union of factors, using the Foursquare, Twitter, and multilayer features we have defined up until now in a supervised learning approach, and comparing their predictive power in each feature set respectively.
Prediction space
The main motivation for considering multiple social networks in a multilayer construct is that each layer carries with it additional heterogeneous information about the links between the same users, which can potentially enhance the predictive model. In the context of our work we have two distinct layers of information - the spatial movements of users from Foursquare and their parallel social interactions on Twitter. We are interested in exploring whether by using spatial features from one network layer (Foursquare), we are able to predict links on the social network layer (Twitter), and vice versa. In light of the multilayer nature of OSNs, we are also interested in whether we can achieve better prediction by combining features from multiple networks.
Formally, for two users in the multilayer network \(i,j \in\mathcal{M}\), where \(V^{\mathcal{M}}\) are the nodes (users) that are present in any layer of the multilayer network, we employ a set of features in a supervised learning framework that outputs a score \(r_{ij}^{\alpha}\), so that all possible pairs of users \(V^{\mathcal{M}} \times V^{\mathcal{M}}\) are ranked according to their expectation of having a link \(e_{ij}^{\alpha}\) on a specific layer α in the network. We specify and evaluate two distinct prediction tasks:
(1) We rank pairs of users based on their interaction on one network layer in order to predict a link on the other. This entails (a) training on spatial mobility interactions to predict social links on Twitter, and (b) training on social interaction features on Twitter to test on Foursquare links.
(2) We rank pairs of users based on their interaction on both network layers in order to predict a link across both (a multiplex link). We train on three sets of features - spatial interactions, social interactions, and multilayer features which are summarised in Table 2.
Table 2 Summary of link features. We denote the Twitter neighbourhood as \(\pmb{\Gamma^{T}}\) and the Foursquare neighbourhood as \(\pmb{\Gamma^{F}}\)
We perform our evaluation on the three datasets described in Table 1 for the cities of San Francisco, Chicago, and New York to show performance on these tasks across urban geographies. In terms of algorithmic implementation, we have used public versions of the algorithms available in [35]. Supervised learning methodologies have been proposed as a better alternative to unsupervised models for link prediction [36]. We fit our data to a Random Forest classifier [37], which uses a sub-sampling and averaging technique across a number of tree estimators to improve the predictive accuracy and control over-fitting. Subsampling takes place with replacement and is equal to the training set size. We have optimised across two parameters in each prediction task: the number of tree estimators and the max depth allowed for each estimator.
We additionally use a 10-fold stratified cross-validation testing strategy: for each test we train on 90% of the data and test on the remaining 10%, and each fold contains approximately the same percentage of samples of each target class as the complete set, since the number of prediction items in the data is on the order of \(|V^{\mathcal{M}}|^{2}\). For every test case, the user pairs are ranked according to the scores returned by the classifier for the positive class label (i.e., for an existing link), and subsequently, all possible probability thresholds in terms of true positive (TP) and false positive (FP) rates are plotted against each other as Receiver Operating Characteristic (ROC) curves. We use Area Under the Curve (AUC) scores from these curves to report the relative performance of each task by averaging the results across all folds, where we are interested in the fraction of positive examples correctly classified as opposed to the fraction of negative examples incorrectly classified. ROC analysis can provide insight into how well the classifier can be expected to perform in general, at a variety of different class imbalance ratios and therefore against different random baselines that could correspond to these ratios.
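This evaluation pipeline can be sketched with scikit-learn [35]; the following minimal example substitutes synthetic features for our pairwise feature vectors, with hyperparameters mirroring those reported in the next section:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the pairwise feature vectors x_ij and labels y_ij,
# with a strong class imbalance as in real link prediction data.
X, y = make_classification(n_samples=5000, n_features=8, weights=[0.95],
                           random_state=0)

clf = RandomForestClassifier(n_estimators=45, max_depth=25, random_state=0)
aucs = []
for train, test in StratifiedKFold(n_splits=10, shuffle=True,
                                   random_state=0).split(X, y):
    clf.fit(X[train], y[train])
    scores = clf.predict_proba(X[test])[:, 1]   # P(link) for the positive class
    aucs.append(roc_auc_score(y[test], scores))

print(f"mean AUC over 10 folds: {np.mean(aucs):.2f}")
```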
Multilayer link prediction
We present our evaluation using ROC curves and the corresponding Area Under the Curve (AUC) scores across cities, shown in Figure 5. First, we train on the Twitter social interaction features summarised in Table 2 and test on the Foursquare target labels. Formally, for a pair of users i and j we define a feature vector \(\mathbf{x_{ij}}^{\boldsymbol {\alpha}}\) encoding the values of the users' feature scores on layer α in the multilayer network. We also specify a target label \(y_{ij}^{\beta}\in\{-1,+1\}\) representing whether the user pair is connected on the β layer under prediction. We use the supervised Random Forest classifier (best performance achieved with 45 tree estimators, allowing for a maximum tree depth = 25 each) to predict links from one layer using features from the other.
Link prediction results. ROC curves for the Random Forest classifier and Area Under the Curve (AUC) scores for each city dataset. Panel Figure A shows the results for predicting Foursquare links using Twitter features, while panel Figure B displays the results for the reverse task of predicting Twitter links using the Foursquare geographical features. Figures C-E focus on the second prediction task - predicting multiplex links using Twitter features (C), using Foursquare features (D) and using multilayer features (E).
Figure 5(A) shows the ROC curves and respective AUC scores for each city in predicting Foursquare links from Twitter features, ranging from 0.70 for the New York dataset to 0.81 for Chicago, with 0.73 for San Francisco. These results represent the probability that the classifier will rank a randomly chosen positive instance higher than a randomly chosen negative instance [38]. On the other hand, we consider the reverse task of predicting Twitter links using Foursquare features in Figure 5(B), where we obtain AUC scores of 0.86, 0.73, and 0.79 for the three cities respectively. We observe slightly higher results for Twitter links, and we note that this may be a result of the higher number of Twitter links in our dataset or of the greater difficulty of the inverse task. We compare these results to the traditional single-layer prediction task of Twitter links from Twitter features and Foursquare links from Foursquare features internal to the platform, where we achieve \(\mathrm{AUC}= 0.86\) and \(\mathrm{AUC}= 0.88\) on average between cities with the same Random Forest set-up. This shows that our performance across services is comparable to that within the service itself.
We have observed in our preceding analysis on link types that those pairs connected only on Foursquare do not exhibit strong interaction on Twitter, exchanging a low number of mentions and having low neighbourhood overlap; however, those pairs of users connected on both platforms exhibit high interaction across both. We can therefore expect that we have identified a large number of stronger multiplex ties in this task. In our second prediction task, we test this assumption by observing whether we are able to achieve higher predictive power across cities when testing on the presence or absence of a multiplex link. Formally, given a feature vector \(\mathbf{x_{ij}}\), we would like to predict a target label \(y_{ij} \in\{-1,+1\}\), where a link exists on both layers (+1) or not (−1). In Figure 5(C) and (D) we can observe that we are able to achieve greater predictive power using Twitter features in predicting multiplex links than Foursquare links in Figure 5(A), and in using Foursquare features in Figure 5(B), with the highest AUC scores of 0.82 and 0.84 for each set respectively. We also note that the Foursquare spatial features perform slightly better than the social interaction features for Twitter, which places importance on the discriminative power of spatial interactions, as also observed in the first part of our analysis. This confirms our assumption that multiplex links are easier to identify than single-layer links using the same algorithmic set-up, and shows that the strength of multiplex ties exhibited in the first part of our analysis can be used to predict links across networks.
Finally, we can see that using multilayer and geo-social features which employ both spatial and social interactions from the two heterogeneous platforms can outperform both single-layer sets in predicting multiplex links (highest \(\mathrm{AUC} = 0.88\) for Chicago). It is intuitive that the prediction of multiplex links becomes easier when using information from both layers, although such multilayer network data is often unavailable. However, we have also shown that we can achieve relatively good results using only social or only geographic information.
In order to evaluate the information added by our proposed features as compared to the previously widely used Adamic-Adar and overlap metrics, we compare our prediction results thus far with a simplified model using the Adamic-Adar and overlap features alone, while using the same predictive framework, and compare the change in average AUC scores between cities. For our first prediction task of using the Twitter social layer features to predict links on the spatial Foursquare layer, we achieve an AUC score of 0.68 when using \(aa\_{\textit{sim}}\) and \(overlap\) features alone as compared to \(\mathrm{AUC}=0.8\) when using the full feature set including interactions. For our second task of using the Foursquare spatial features to predict links on the Twitter social layer, we obtain an AUC score of 0.65 when using the two structural features alone as opposed to \(\mathrm{AUC}=0.75\) on average across cities when using the full model. This indicates that our additional interaction features add significantly to the predictive power of the model.
When predicting the presence of a multilayer link between pairs of users, using the structural Adamic-Adar and overlap features alone, we achieve an AUC of 0.7 for the social Twitter layer, 0.71 for the spatial Foursquare layer, and 0.69 for our multilayer configuration. When compared to our full feature model (\(\mathrm{AUC}=0.77,0.8, \mbox{and } 0.83\) respectively), we note a significant improvement in terms of predictive power. In conclusion, the information added by our multilayer interaction features results in a significant improvement over the existing methods based on popular structural features alone.
Discussion & conclusions
Recently, social media has been increasingly alluded to as an ecosystem. The allusion comes from the emergence of multiple OSNs, interacting as a system, while competing for the same resources - users and their attention. We have addressed this system aspect by modelling multiple social networks as a multilayer online social network in this work. Most new OSNs joining the 'ecosystem' use contact list integration with external existing networks, such as copying friendships from Facebook through the open graph protocol.Footnote 3 Copying links from pre-existing social networks to new ones results in higher social interaction between copied links than between links created natively in the platform [39]. We propose that augmenting this copied network with a rank of relevance of contacts using multiplexity can provide even further benefits for newly launched services.
In addition to fostering multiplexity, however, new OSNs, and especially interest-driven ones such as Pinterest, may benefit from similarity-based friend recommendations. In this work, we apply mobility features and neighbourhood similarity from Foursquare to predict links on Twitter and vice versa, highlighting the relationship between similar users across heterogeneous platforms. Similarly, in [11] the authors infer types of relationships across different domains such as mobile and co-author networks. Although they use a knowledge-transfer framework, and not exogenous interaction features as we do, the authors also agree that integrating social theory into the prediction framework can greatly improve results.
The strength of ties manifested through multiplexity is expressed through a greater intensity of interactions and greater similarity across attributes, both offline [4, 5] and, as we have seen in this work, online. We have explored a number of features which take into consideration the multilayer neighbourhood of users in OSNs. The Adamic/Adar coefficient of neighbourhood similarity, in its core-neighbourhood version, proved to be a strong indicator of multiplex ties. Additionally, we introduced combined features, such as the global interaction and similarity over distance, which reflect the type of link that exists between two users more distinctively than their single-layer counterparts. These features can be applied across multiple networks and can be flexible in their construction according to the context of the OSNs under consideration.
Media multiplexity is fascinating from the social networks perspective as it can reveal the strength and nature of a social tie given the full communication profile of people across all media they use [4]. Unfortunately, full online and offline communication profiles of individuals were not available and our analysis is limited to two social networks. Nevertheless, we have observed some evidence of media multiplexity manifested in the greater intensity and structural overlap of multiplex links and have gained insight into how we can utilise these properties for link prediction. Certainly, considering more OSNs and further relating media multiplexity to its offline manifestation is one of our future goals, and we believe that with the further integration of social media services and availability of data this will be possible in the near future.
https://foursquare.com/about
https://about.twitter.com/company
https://developers.facebook.com/docs/opengraph
Kivelä M, Arenas A, Barthelemy M, Gleeson JP, Moreno Y, Porter MA (2014) Multilayer networks. J. Complex Netw. 2(3):203-271
Haythornthwaite C, Wellman B (1998) Work, friendship, and media use for information exchange in a networked organization. J Am Soc Inf Sci 49(12):1101-1114
Padgett JF, Mclean PD (2006) Organizational invention and elite transformation: the birth of partnership systems in Renaissance Florence. Am J Sociol 111:1463-1568
Haythornthwaite C (2005) Social networks and Internet connectivity effects. Inf Commun Soc 8(2):125-147
Hristova D, Musolesi M, Mascolo C (2014) Keep your friends close and your Facebook friends closer: a multiplex network approach to the analysis of offline and online social ties. In: ICWSM
Liben-Nowell D, Kleinberg J (2007) The link-prediction problem for social networks. J Am Soc Inf Sci Technol 58(7):1019-1031
Menon AK, Elkan C (2011) Link prediction via matrix factorization. In: Machine learning and knowledge discovery in databases. Springer, Berlin, pp 437-452
Crandall DJ, Backstrom L, Cosley D, Suri S, Huttenlocher D, Kleinberg J (2010) Inferring social ties from geographic coincidences. In: PNAS, vol 107, pp 22436-22441
Scellato S, Noulas A, Mascolo C (2011) Exploiting place features in link prediction on location-based social networks. In: KDD
Lee K, Ganti RK, Srivatsa M, Liu L (2014) When Twitter meets foursquare: tweet location prediction using Foursquare. In: UbiComp
Tang J, Lou T, Kleinberg J (2012) Inferring social ties across heterogenous networks. In: WSDM
Rossetti G, Berlingerio M, Giannotti F (2011) Scalable link prediction on multidimensional networks. In: 2011 IEEE 11th international conference on data mining workshops (ICDMW). IEEE, New York, pp 979-986
Yang Y, Chawla N, Sun Y, Hani J (2012) Predicting links in multi-relational and heterogeneous networks. In: 2012 IEEE 12th international conference on data mining (ICDM). IEEE, New York, pp 755-764
Szell M, Lambiotte R, Thurner S (2010) Multirelational organization of large-scale social networks in an online world. Proc Natl Acad Sci 107(31):13636-13641
Kazienko P, Brodka P, Musial K, Gaworecki J (2010) Multi-layered social network creation based on bibliographic data. In: SocialCom
Bródka P, Kazienko P (2012) Multi-layered social networks. CoRR. arXiv:1212.2425
Ottoni R, Las Casas D, Pesce JP, Meira W Jr, Wilson C, Mislove A et al. (2014) Of pins and tweets: investigating how users behave across image- and text-based social networks. In: ICWSM
Berlingerio M, Coscia M, Giannotti F, Monreale A, Pedreschi D (2013) Multidimensional networks: foundations of structural analysis. World Wide Web 16(5–6):567-593
Pappalardo L, Rossetti G, Pedreschi D (2012) 'How well do we know each other?' detecting tie strength in multidimensional social networks. In: 2012 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, New York, pp 1040-1045
Gilbert E, Karahalios K (2009) Predicting tie strength with social media. In: CHI
Hill RA, Dunbar RI (2003) Social network size in humans. Hum Nat 14(1):53-72
Adamic L, Adar E (2003) Friends and neighbors on the web. Soc Netw 25(3):211-230
Kwak H, Lee C, Park H, Moon S (2010) What is Twitter, a social network or a news media? In: WWW, pp 591-600
Noulas A, Scellato S, Mascolo C, Pontil M (2011) An empirical study of geographic user activity patterns in Foursquare. In: ICWSM '11, pp 570-573
Onnela JP, Saramäki J, Hyvönen J, Szabó G, Lazer D, Kaski K et al. (2007) Structure and tie strengths in mobile communication networks. Proc Natl Acad Sci 104(18):7332-7336
McPherson M, Smith-Lovin L, Cook JM (2001) Birds of a feather: homophily in social networks. Annu Rev Sociol 27:415-444
Brown C, Nicosia V, Scellato S, Noulas A, Mascolo C (2012) The importance of being placefriends: discovering location-focused online communities. In: Proceedings of the 2012 ACM workshop on online social networks (WOSN). ACM, New York, pp 31-36
Wang D, Pedreschi D, Song C, Giannotti F, Barabasi AL (2011) Human mobility, social ties, and link prediction. In: KDD
Cho E, Myers SA, Leskovec J (2011) Friendship and mobility: user movement in location-based social networks. In: Proceedings of the 17th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, New York, pp 1082-1090
Scellato S, Noulas A, Lambiotte R, Mascolo C (2011) Socio-spatial properties of online location-based social networks. In: ICWSM '11, pp 329-336
Noulas A, Scellato S, Lathia N, Mascolo C (2012) A random walk around the city: new venue recommendation in location-based social networks. In: 2012 international conference on privacy, security, risk and trust (PASSAT) and 2012 international conference on social computing (SocialCom). IEEE, New York, pp 144-153
Rodrigue JP, Comtois C, Slack B (2013) The geography of transport systems. Routledge, London
Sadilek A, Kautz H, Bigham JP (2012) Finding your friends and following them to where you are. In: WSDM
Backstrom L, Leskovec J (2011) Supervised random walks: predicting and recommending links in social networks. In: WSDM. ACM, New York
Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O et al. (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825-2830
Lichtenwalter RN, Lussier JT, Chawla NV (2010) New perspectives and methods in link prediction. In: KDD, ACM, New York, pp 243-252
Breiman L (2001) Random forests. Mach Learn 45(1):5-32
Fawcett T (2006) An introduction to ROC analysis. Pattern Recognit Lett 27(8):861-874
Zhong C, Salehi M, Shah S, Cobzarenco M, Sastry N, Cha M (2014) Social bootstrapping: how Pinterest and Last.fm social communities benefit by borrowing links from Facebook. In: WWW
This work was supported by the Project LASAGNE, Contract No. 318132 (STREP), funded by the European Commission and EPSRC through Grant GALE (EP/K019392).
Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge, CB3 0FD, UK
Desislava Hristova, Chloë Brown & Cecilia Mascolo
Data Science Institute, Lancaster University, South Drive, Lancaster, LA1 4YW, UK
Anastasios Noulas
Department of Geography, University College London, Gower Street, London, WC1E 6BT, UK
Mirco Musolesi
Desislava Hristova
Chloë Brown
Cecilia Mascolo
Correspondence to Desislava Hristova.
DH and AN designed the study. CB collected and pre-processed the data. DH carried out the computational tasks, analysed the data and prepared the figures. DH and MM wrote the main text of the manuscript and CM edited it. All authors read and approved the final manuscript.
Hristova, D., Noulas, A., Brown, C. et al. A multilayer approach to multiplexity and link prediction in online geo-social networks. EPJ Data Sci. 5, 24 (2016). https://doi.org/10.1140/epjds/s13688-016-0087-z
Received: 28 September 2015
online social networks
media multiplexity
multilayer networks
link prediction