Local descriptive body weight and dietary norms, food availability, and 10-year change in glycosylated haemoglobin in an Australian population-based biomedical cohort Suzanne J. Carroll1, Catherine Paquet1,2, Natasha J. Howard1, Neil T. Coffee1, Robert J. Adams3, Anne W. Taylor3, Theo Niyonsenga1 & Mark Daniel1,4,5 Individual-level health outcomes are shaped by environmental risk conditions. Norms figure prominently in socio-behavioural theories yet spatial variations in health-related norms have rarely been investigated as environmental risk conditions. This study assessed: 1) the contributions of local descriptive norms for overweight/obesity and dietary behaviour to 10-year change in glycosylated haemoglobin (HbA1c), accounting for food resource availability; and 2) whether associations between local descriptive norms and HbA1c were moderated by food resource availability. HbA1c, representing cardiometabolic risk, was measured three times over 10 years for a population-based biomedical cohort of adults in Adelaide, South Australia. Residential environmental exposures were defined using 1600 m participant-centred road-network buffers. Local descriptive norms for overweight/obesity and insufficient fruit intake (proportion of residents with BMI ≥ 25 kg/m2 [n = 1890] or fruit intake of <2 serves/day [n = 1945], respectively) were aggregated from responses to a separate geocoded population survey. Fast-food and healthful food resource availability (counts) were extracted from a retail database. Separate sets of multilevel models included different predictors, one local descriptive norm and either fast-food or healthful food resource availability, with area-level education and individual-level covariates (age, sex, employment status, education, marital status, and smoking status). Interactions between local descriptive norms and food resource availability were tested. HbA1c concentration rose over time. Local descriptive norms for overweight/obesity and insufficient fruit intake predicted greater rates of increase in HbA1c. Neither fast-food nor healthful food resource availability were associated with change in HbA1c. Greater healthful food resource availability reduced the rate of increase in HbA1c concentration attributed to the overweight/obesity norm. Local descriptive health-related norms, not food resource availability, predicted 10-year change in HbA1c. Null findings for food resource availability may reflect a sufficiency or minimum threshold level of resources such that availability poses no barrier to obtaining healthful or unhealthful foods for this region. However, the influence of local descriptive norms varied according to food resource availability in effects on HbA1c. Local descriptive health-related norms have received little attention thus far but are important influences on individual cardiometabolic risk. Further research is needed to explore how local descriptive norms contribute to chronic disease risk and outcomes. Public health interventions commonly focus on modifiable individual-level risk factors such as dietary behaviour. However, individual-level risk factors are themselves shaped by environmental risk conditions, that is, properties of environmental living conditions that exacerbate a vulnerability to disease for the individuals exposed to those places [1]. Individual-level health behaviours, such as dietary choices, are one possible pathway through which local environments may influence health outcomes such as cardiometabolic risk [2]. 
For example, fast food intake may be influenced by the number of fast-food outlets in an individual's residential area [3]. Environmental features can be contextual (i.e., features of areas) or compositional (i.e., aggregated characteristics of people residing within areas) [1, 4]. Both contextual and compositional features are associated with cardiometabolic risk. A comprehensive review concluded there were reasonably consistent associations reported between accessibility to a supermarket and lower body weight, and between convenience store and fast-food outlet accessibility and higher body weight [5], higher body weight being a cardiometabolic risk factor. Some studies, however, have not observed any relationship between cardiometabolic risk and features of the food environment. Others have observed counterintuitive associations. One US study among low-income women reported no associations between body mass index (BMI) or cardiovascular disease (CVD) risk and the density of grocery stores, fast-food outlets, restaurants, or minimarts [6]. Similarly, a multi-ethnic study of pregnant women in the UK observed no associations between fast-food availability (count of outlets) or accessibility (distance to nearest outlet) and BMI or obesity for non-South Asian pregnant women [7]. For South Asian pregnant women, the same study reported an unexpected negative association between fast-food availability and accessibility and BMI and obesity [7]. Explanations for null or unexpected observations need to reach beyond demographic attributions such as ethnicity and socioeconomic status (SES). It is possible that additional, broader factors, not accounted for by statistical adjustments for SES, such as norms, could shape the nature of relationships between food resources and health outcomes. Numerous studies have investigated whether contextual features of local environments (e.g., fast-food outlets) are related to cardiometabolic risk, particularly body weight. Fewer studies have assessed the relationships between cardiometabolic risk and compositional features of local environments, beyond area-level SES. Associations between area-level SES and cardiometabolic risk are now very well established [5]. What remains to be far better investigated are the aggregated characteristics of people beyond area-level SES, for example, health-related norms, as they vary geographically. Local descriptive health-related norms may be important factors shaping cardiometabolic risk and disease through their effects on collective lifestyles and behaviour. Though norms feature prominently in behavioural theories, for example the Theory of Planned Behaviour [8], norms are not always well defined within research. Social norms can be differentiated into injunctive and descriptive norms [9]. Injunctive norms are 'shared rules of conduct', that is, what ought to be done, while descriptive norms are what most people actually do [9]. Injunctive and descriptive norms are likely to influence individuals through different motivational processes [9, 10]. Descriptive norms can be further differentiated into subjective and local descriptive norms [11, 12]. Subjective descriptive norms refers to what friends and family typically do. In contrast, local descriptive norms are what people sharing the same spatial setting, such as a work-place or residential area, typically do. This is regardless of any emotional connection, or lack thereof, between individuals within the setting [11–13]. 
Local descriptive norms have been associated with littering and recycling behaviours [9, 14]. While subjective descriptive norms, such as smoking behaviour, have been explored within social networks [15], local descriptive norms have rarely been examined in relation to health outcomes. A longitudinal study (involving 13 years of follow up) by Blok and colleagues [16], found neighbourhood prevalence of overweight/obesity predicted normal weight individuals becoming overweight/obese after accounting for individual factors and neighbourhood SES. Unfortunately, the study did not account for contextual features of the local environment, such as food availability, which may account for both prevalence of overweight/obesity and change in individual-level BMI. A recent longitudinal study using the same cohort reported on here accounted for contextual features of the physical activity environment, finding that local descriptive norms for overweight/obesity and physical inactivity predicted rising HbA1c concentrations over time [17]. Local descriptive health-related norms may be important influences on clinical outcomes by predisposing individuals towards or against particular health behaviours. It is important to empirically assess the influence of such norms on individual-level health outcomes, ideally while accounting for potential confounders such as availability of health-related resources. Furthermore, while local descriptive health-related norms may act as predisposing factors for health-related behaviours, the availability of contextual resources may enable (or inhibit) such behaviour. Thus the availability of health-related resources may modify associations between local descriptive health-related norms and health outcomes that are a function of behaviour. For example, associations between a local descriptive norm for overweight/obesity and the development of cardiometabolic risk in individuals may be more pronounced in areas with greater, as opposed to lesser, fast-food availability. Few studies have assessed contextual and compositional interaction effects in relation to important public health issues such as the rising level of cardiometabolic risk. Specifically, no study published thus far has investigated whether cardiometabolic risk is related to spatial variation in local-area norms for body weight and dietary behaviour while accounting for the built food environment, and whether any such relationship varies with food resource availability. This study assessed in a population-based biomedical cohort: 1) the influence of local descriptive norms for body weight and dietary behaviour on 10-year change in HbA1c (a marker of cardiometabolic risk); and 2) whether associations between change in HbA1c and local descriptive norms for body weight and dietary behaviour varied according to food resource availability. This study used an observational design incorporating data from a prospective biomedical cohort linked with other data sets utilising a Geographic Information System. The study was part of the Place and Metabolic Syndrome (PAMS) Project which aimed to assess the influence of social and built environmental factors on the evolution of cardiometabolic risk. The PAMS Project received ethical approval from the University of South Australia, Central Northern Adelaide Health Service, Queen Elizabeth Hospital, and South Australian Department for Health and Ageing Human Research Ethics Committees. The baseline study area consisted of the northern and western regions of Adelaide (Fig. 
1), the capital city of South Australia. These regions accounted for 38% of the city's 1.1 million population in 2001 [18, 19] and are of particular interest due to elevated cardiometabolic risk relative to other areas [20, 21]. Study area – North-western region of Adelaide (urban area) (Reprinted from Social Science & Medicine, Vol. 166, Carroll, SJ, Paquet, C, Howard, N, Coffee, NT, Taylor, AW, Niyonsenga, T & Daniel, M, Local descriptive norms for overweight/obesity and physical inactivity, features of the built environment, and 10-year change in glycosylated haemoglobin in an Australian population-based biomedical cohort, pp. 233–243, 2016, with permission from Elsevier) Associations between environments, health behaviours and outcomes may differ between urban and rural regions [22]. This study was therefore limited to urban areas only, defined as Census Collection Districts (CDs) with a population density of >200 persons per hectare [19]. Individual-level data were sourced from the North West Adelaide Health Study (NWAHS), a 10-year biomedical cohort incorporating three waves of data collection, Wave 1 (2000–03), Wave 2 (2005–06), and Wave 3 (2008–10). The NWAHS investigated the prevalence of chronic conditions, including diabetes and cardiovascular disease, and their associated risk factors [23]. Households identified as within the study region by postcode were randomly selected from the Electronic White Pages telephone directory, and the person aged 18 years or over with the most recent birthday invited to participate in the study. Each NWAHS wave involved the collection of standardised measures using Computer-Assisted Telephone Interviews, self-report paper questionnaires, and clinic visits. Fasting blood samples were collected during the clinic visits and used to assess glycosylated haemoglobin (HbA1c) concentration. Written informed consent was obtained prior to each wave of data collection. Georeference points, made from participant residential addresses at each wave, enabled data linkage with other spatial datasets. To retain cohort study participants, a multi-strategy approach was employed including consistent use of study promotional materials, newsletters and birthday cards, tracking via White Pages telephone directory and State Electoral Roll [23]. Of the 4056 Wave 1 participants, 3205 (79.0% of baseline sample) attended the Wave 2 clinic assessment and 2487 (77.6% of Wave 2 sample; 61.3% of baseline sample) attended the clinic at Wave 3. The baseline NWAHS sample was not statistically significantly different to the Adelaide metropolitan population [24] by sex, education or household income. However, older individuals (≥45 years) were over-represented in the baseline sample. Further information on recruitment and cohort profile has previously been published [23, 25]. Cardiometabolic risk (outcome measure) Glycosylated haemoglobin (HbA1c) concentration (%), assayed at each wave, was used to represent cardiometabolic risk. HbA1c is a stable marker of glycaemic control and thus risk, reflecting 2–3 month time-averaged blood glucose levels [26]. Concentrations 6.5% or greater are indicative of diabetes [27]. However, the relationship between HbA1c and cardiovascular disease (CVD) is continuous and lacking an obvious risk threshold [28]. Environmental measures Environmental exposures were expressed within spatial units defined as participant-centred road-network buffers set to 1600 m (1 mile). 
This distance can be covered by an average adult walking at a comfortable pace of around 5 km/hour for approximately 20 min [29]. The 1600 m buffer distance has previously been used in similar studies (e.g., [30–32]) allowing for comparison of findings across studies. Smaller buffers of 1000 m were also considered but dropped due to unstable estimates of local descriptive norms associated with small counts of survey participants within buffers (see below). Geocoded data for constructing local descriptive norms were not available prior to 2006. To temporally match data for local descriptive norms, other environmental exposures were expressed for the year 2007. Local descriptive health-related norms Local descriptive norms for overweight/obesity and insufficient fruit intake were respectively expressed as local prevalence of overweight/obesity (proportion of South Australian Monitoring and Surveillance System [SAMSS] participants per buffer classified as having a BMI ≥ 25 kg/m2) and insufficient fruit intake (proportion of SAMSS participants per buffer not meeting fruit intake recommendations), based on health recommendations of two or more serves per day [33, 34]. Local descriptive norms were aggregated from geocoded individual-level survey response data (adults 18 years and older), extracted from the SAMSS for the years 2006–2010. Processing of individual-level SAMSS data was performed by the data custodians to protect the confidentiality of SAMSS participants. The SAMSS survey for which details are published elsewhere, monitors population trends in chronic diseases and risk factors [35, 36]. SAMSS participants are recruited annually across all of South Australia by simple random sampling of households from the Electronic White Pages telephone directory. The individual, of any age, with the most recent birthday is invited to participate. Overall, the response rate for SAMSS contacts during 2006–2010 was 65% with 35,830 interviews conducted across South Australia. Of the 8355 SAMSS participants interviewed during 2006–2010, 18 years and over residing within the NWAHS region, 6860 participant records were geocoded (82%); 1439 participants did not provide consent (17%) and 56 (<1%) could not be geocoded. To maximise SAMSS participant representation within each NWAHS participant buffer, SAMSS data were pooled across survey years 2006 to 2010. To protect confidentiality and support the reliability of estimates, aggregated norms data for NWAHS buffers with fewer than 50 SAMSS participants, or less than five participants per measurement category, were not released by the data custodians. Consequently sample loss occurred which was particularly severe at the 1000 m buffer size and hence this unit was not considered further. Unstandardised prevalence rates were used following the precedent of Blok and colleagues [16]. Appropriate weightings for standardisation were unavailable at the level of the geographic buffers used, and the use of other weightings (e.g., for the Adelaide metropolitan region) may artificially reduce or inflate spatial variation. Contextual features Contextual data were extracted from the 2007 South Australian Retail Database [37]. The database catalogues shops, with information including shop location, retail activity type, and shop floor-space. Retail activity type is coded based on predominant retail activities [38]. Contextual food environment data were extracted according to these retail codes. 
Food resources were classified by the authors based on these retail codes, using classifications designed by a dietician for use in a previous Australian study [39]. Fast-food outlets were defined as major fast-food franchises (e.g., McDonalds©) and independent fast-food take-away stores (e.g., fish and chips). Healthful food resources were defined as greengrocers, butchers, supermarkets (with > 200 m2 floor space), and health food shops. Food outlets selling a mix of healthful and unhealthful foods, with neither food group being obviously predominant (e.g., sandwich and lunch bars, bakeries, and restaurants other than those identified as fast food), were excluded from classification. Road-network distance from NWAHS participants' residence to food resources was calculated using Network Spatial Analyst in ArcGIS (version 9.3.1, ESRI, Redlands, California). Healthful food resources and fast-food outlets identified within 1600 m of participant residences were then summed according to type. Density measures (count/area of buffer intersected parcels in km2) were calculated in addition to counts. Covariates Individual- and area-level covariates were included in analytic models. Predictors of NWAHS cohort attrition were assessed using logistic regression within the analytic sample (i.e., after application of inclusion criteria as listed in Table 1). The pattern of missingness did not meet the missing completely at random criterion. As participants who were younger, not in the work-force, currently a smoker, and not married (or de facto) were more likely to have missing HbA1c information at follow ups, these measures were included in statistical models to satisfy the analytic criterion of missing at random [40]. Therefore individual-level covariates included age, sex, employment status (full-time, part-time, or not in the work force), level of education (university graduate or not), marital status (married/de facto, or single), and smoking status (current smoker, ex-smoker, or never smoked). Covariates other than baseline age and sex were treated as time varying. Table 1 Loss of analytic sample due to application of inclusion criteria Area-level education (proportion with a university degree) was selected to represent area-level SES. The use of area-level education allows interpretation of specific area-level SES relations with health outcomes (i.e., change in HbA1c) and comparisons with studies similarly using education to express area-level SES. Education data were extracted from the 2006 Population and Housing Census [41] at the level of CDs and further aggregated using the weighted average of values from CDs intersected by the NWAHS participant buffers. CDs, the smallest unit for which census data are available, include an average of 220 dwellings [42]. Weights were defined based on the proportion of dwellings within a CD included within the NWAHS participant buffer: $$ BUFFER_{SES} = \sum \left[ CD_{SES} \times \frac{dwellings_a}{dwellings_b} \right] $$ where dwellings_a represents the number of dwellings included within a CD intersected with a buffer, and dwellings_b represents the total number of dwellings within a buffer. Though this method assumes that the characteristic of interest (area-level education) is evenly distributed across all dwellings within a CD, it is an improvement over assuming that the characteristic is evenly spread across the spatial unit with no recognition of the distribution of dwellings. 
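To illustrate the dwelling-weighted aggregation just described, here is a minimal Python sketch (not the authors' GIS workflow). Each CD intersected by a participant's buffer contributes its value, weighted by that CD's share of the buffer's dwellings; all numbers below are hypothetical.

```python
# Minimal sketch of the dwelling-weighted aggregation for one participant buffer.
# cd_values: CD-level characteristic (e.g., proportion with a university degree).
# cd_dwellings_in_buffer: number of each CD's dwellings falling inside the buffer.

def buffer_weighted_value(cd_values, cd_dwellings_in_buffer):
    """Weighted average of CD-level values for a single buffer."""
    total_dwellings = sum(cd_dwellings_in_buffer)            # dwellings_b
    return sum(value * dwellings / total_dwellings           # value * dwellings_a / dwellings_b
               for value, dwellings in zip(cd_values, cd_dwellings_in_buffer))

# Three hypothetical intersected CDs:
print(buffer_weighted_value([0.12, 0.25, 0.18], [150, 60, 220]))  # ~0.169
```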
Linear multilevel models (three levels) assessed associations between environmental features and 10-year change in HbA1c. Level one of the model (time) regressed time-specific HbA1c data on time of measurement (in years) from baseline data collection. As data collection between participant waves was unevenly spaced, with slightly different years possible within each wave, time was expressed in a continuous format from the participant's first clinic visit. Level two (participant level) modelled associations between environmental exposures and participant baseline values of, and changes in, HbA1c. Random effects were included to allow variation in baseline HbA1c (intercept) and HbA1c change (slope for time) between participants. Lastly, level three accounted for spatial clustering within State Suburbs, with a random intercept specified to allow for variations in baseline HbA1c across State Suburbs. State Suburbs are formed by aggregating CDs to align with the most recent gazetted suburb at the time of the Census [19]. Four separate sets of models were constructed, with individual-level covariates included in all models. Predictor variables were added sequentially: 1) compositional norm (prevalence of overweight/obesity or insufficient fruit intake), time, and the two-way interaction between these terms; 2) context (fast food or healthful food availability), and the two-way interaction term (context x time); and 3) area-level education (covariate). Interaction terms for predictors and time (e.g., compositional norm x time) assessed the influence of the predictor (compositional norm) on change in HbA1c over time. Additional two-way (compositional norm x context) and three-way (compositional norm x context x time) interaction terms were included in full models to test for interactions between environmental predictors in relation to baseline HbA1c and change in HbA1c respectively. Environmental measures were standardised prior to analyses to allow comparison of their relative effects. All analyses were conducted using SAS (version 9.4, SAS Institute Inc, Cary, North Carolina). Statistical significance was set at alpha = 0.05. Table 1 outlines sample loss due to analysis inclusion criteria. Participants who moved between waves 1 and 2 were excluded from analyses. The two analytic samples contained 1890 and 1945 eligible NWAHS participants with local descriptive norms data for overweight/obesity and insufficient fruit intake, respectively. Participant characteristics and environmental features are summarised in Table 2. There were no notable differences between the two analysis samples. The majority (90.2%) of eligible participants were born in Australia, New Zealand or Western Europe, and the median length of follow-up was 6.84 years for both samples. Table 2 Individual characteristics and environmental features for the analytic samples Intraclass correlations (ICC), describing the degree of similarity (or homogeneity) of the observed response within a given unit of analysis (i.e., HbA1c concentration across waves for a participant) or cluster (i.e., State Suburb), were calculated from covariance parameter estimates of the three-level model with no predictors [43]. These ICCs indicated moderate correlation of HbA1c at the individual level (repeated measures over time; ICC participants = 0.57) and relatively low correlation at the suburb level (ICC State Suburb = 0.01), consistent with previously reported levels of cardiometabolic risk clustering according to geographic area [44]. 
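Before turning to the results, the growth-model structure described above can be sketched in code. The example below is a simplified, hypothetical two-level version written in Python on simulated data (repeated HbA1c measures nested within participants, with a random intercept and a random slope for time, and a norm x time interaction); it omits the State Suburb level and most covariates and is not the authors' SAS implementation.

```python
# Simplified participant-level growth model on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, waves = 200, 3
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_participants), waves),
    "time": np.tile([0.0, 4.0, 7.0], n_participants),            # years since first clinic visit (hypothetical spacing)
    "norm_z": np.repeat(rng.normal(size=n_participants), waves),  # standardised local norm (hypothetical)
})
# Simulated outcome: baseline ~5.4% HbA1c rising ~0.03%/year, a norm x time
# interaction steepening the slope, participant-level intercepts, and noise.
df["hba1c"] = (5.4
               + np.repeat(rng.normal(0.0, 0.4, n_participants), waves)
               + 0.03 * df["time"]
               + 0.008 * df["norm_z"] * df["time"]
               + rng.normal(0.0, 0.3, size=len(df)))

# Random intercept and random slope for time, grouped by participant.
model = smf.mixedlm("hba1c ~ time * norm_z", df, groups="pid", re_formula="~time")
result = model.fit()
print(result.summary())  # the time:norm_z coefficient is the norm's effect on the rate of change
```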
Tables 3 and 4 present the results of the four sets of multilevel models and adjusted ICCs. As environmental exposure measures (including area-level education) were standardised prior to analyses, the reported beta coefficients reflect change in HbA1c concentration per one standard deviation (SD) change in the environmental exposure predictor. Means and SDs of environmental measures are provided in Table 2. Model 1 in each set included time, one local descriptive (either overweight/obesity or insufficient fruit intake) and individual-level covariates. Model fit (based on AIC and BIC) did not improve in any of the four sets of models with the inclusion of measures of food resource availability (neither fast food nor healthful food resources; Model 2). Similarly, the inclusion of area-level education at Model 3 did not improve model fit in sets of models including the overweight/obesity norm (Table 3). However, model fit did improve with inclusion of area-level education in models with the insufficient fruit intake norm (Table 4, Model 3), and area-level education was statistically significantly positively associated with baseline HbA1c. Lastly, the inclusion of the environmental exposures interaction term in model 4 did not improve model fit in three of the four sets of models. In the fourth set, the inclusion of an interaction term between the featured environmental predictors, namely overweight/obesity norm and healthful food resources, improved model fit. Table 3 Associations between local descriptive overweight/obesity norm, food resource availability, and 10-year change in HbA1c Table 4 Associations between local descriptive insufficient fruit intake norms, food resource availability, and 10-year change in HbA1c In Models 1–3, lesser overweight/obesity norm was statistically significantly associated with greater baseline HbA1c concentration (β = −0.03 to −0.04 depending on model; i.e., a 2.21% [1SD] increment in overweight/obesity prevalence was associated with a −0.03% to −0.04% lower HbA1c concentration). Insufficient fruit intake norm, fast-food outlets, and healthful food resources were not associated with baseline HbA1c. HbA1c increased over the 10-year follow-up period (time was statistically significantly positively associated with HbA1c concentration in all models) with an increase in HbA1c concentration of 0.03% per year. Statistically significant positive time x norm interactions indicate that greater prevalences for the overweight/obesity norm, and greater insufficient fruit intake norm, were each associated with greater rates of rising HbA1c over time (e.g., in model 3 with fast food availability: overweight/obesity norm x time β = 0.008 indicating that a 2.21% [1SD] increment in overweight/obesity prevalence was associated with a further 0.008% increase in HbA1c per year). Fast-food outlets and healthful food resources were not associated with change in HbA1c over time. There were no statistically significant two-way (local descriptive norm and food resource availability) interactions related to baseline HbA1c concentration. The three-way interaction of the overweight/obesity norm x healthful food resource x time was statistically significantly associated with HbA1c (β = −0.0057 [95% CI −0.0092 to −0.0022], p = 0.001). The effect of healthful food resource availability on the relationship between local descriptive overweight/obesity norm and the trajectory of HbA1c is shown graphically in Fig. 2. 
The figure shows that greater healthful food resource availability reduced the impact of the overweight/obesity norm on increasing HbA1c concentration. Models including the food environment measures as density rather than count measures found similar results (not reported here). Associations between local descriptive overweight/obesity norms and HbA1c trajectories according to healthful food resource availability Few studies have examined the influence of local descriptive health-related norms on trajectories of individual health outcomes. This study found that local descriptive norms, operationalised as the prevalence of local residents being overweight/obese or not meeting fruit intake recommendations, were each associated with the rate of increase in HbA1c levels over 10 years. These relationships were robust to the inclusion of contextual measures (fast-food outlet and healthful food resource availability), area-level education, and individual-level demographic and smoking information. Fast-food outlet and healthful food resource availability were not statistically significantly associated with change in HbA1c in this sample and region. However, greater healthful food resources reduced the unhealthful influence of the overweight/obesity norm on the rate of increase in HbA1c. This observation supports the premise that the availability of food resources can modify relationships between local descriptive health-related norms and health outcomes. Associations between subjective descriptive norms and individual health-related outcomes have previously been reported for social networks. A longitudinal (32 years) social network study found associations between norms for overweight and an individual becoming overweight [45]. Similarly, the dietary norms of peers are related to individuals' diet and dietary intentions [46, 47]. The influence of geographically defined (i.e. local) descriptive norms on individual cardiometabolic risk has rarely been evaluated. One study found that the odds of a Dutch adult becoming overweight/obese over 13-years of follow-up increased with greater prevalence of neighbourhood overweight/obesity in models adjusted for age, sex, education and neighbourhood deprivation [16]. Similarly, an analysis using the same cohort as reported on here, documented associations between greater local descriptive overweight/obesity and physical inactivity norms, and increasing HbA1c over time in models adjusted for walkability, availability of public open space, area-level education and individual-level covariates [17]. No study has thus far reported the influence of local descriptive dietary norms on individual health outcomes. The current study's findings, along with those of the two referenced studies, support the notion that local descriptive norms influence individual-level health outcomes. However, more research in different regions and populations is needed to replicate these results. Behavioural theory suggests we imitate the behaviours of others, whether informed by direct viewing or by other informational sources [48, 49]. It may be that by observing locality-based body weight (i.e., the local descriptive body weight norm) an individual determines what they consider to be a socially acceptable body weight, and that the norm overrides any known health consequences associated with a larger body size. Therefore, exposure to greater prevalence of overweight or obese persons may reduce motivation to follow health recommendations relating to diet and body weight. 
Interestingly, the associations between insufficient fruit intake norm and change in HbA1c were similar to those for the overweight/obesity norm. The fruit intake of other residents is unlikely to be easily observed, unlike local body weight norms. As such, it is difficult to understand how the eating behaviour of nearby residents may influence individuals. Norms for overweight/obesity and insufficient fruit intake were moderately correlated (rho = 0.37, p < 0.0001) which may partly explain these findings. It is also possible that the similarities in results reflect broader influences such as the formation of geographically defined collective lifestyles, the expression of a shared way of relating and acting in a given environment [50, 51]. Intervention strategies previously applied to reduce smoking behaviour could be adapted for use by initiatives to improve dietary behaviour. Smoking intervention has successfully changed attitudes to smoking, pushing the norm towards non-smoking due to policy intervention strategies such as increased pricing, reduced availability and limitations as to where one can smoke [52]. Similar manipulation of the food environment may assist in changing norms relating to diet behaviour and weight, particularly where norms are most unhealthful. Moreover, psychology research has shown that information on the eating behaviours of others can influence both the food selection and quantity of food consumed [53]. As such, descriptive norms information could be used to encourage increased fruit and vegetable intake [54]. This study found no association between fast-food outlet or healthful food resource availability and change in HbA1c. Findings from previous studies indicate mixed results in this regard [55]. Some studies report greater fast-food outlet availability as associated with: greater weight status [56, 57]; an increase in systolic and diastolic blood pressure over 1 year in low walkability neighbourhoods [58]; and mortality and hospital admissions for acute coronary syndromes [59] in models adjusted for individual and area-level covariates. Other studies of fast-food outlet availability have reported no significant association with weight status [31, 60, 61]. Still other research suggests that relationships between fast-food availability and fast-food consumption [62] and cardiometabolic risk [63] are complex, being moderated by individual psychological dispositions. Similarly, when regarding healthful food resources, some studies have found associations between healthful food availability and lower 5 year diabetes incidence [64], and greater supermarket availability and reduced odds of obesity [56]. Other studies, like ours, have not observed any association between the availability of healthful food resources and cardiometabolic risk, or have observed associations in an unexpected direction (e.g., [55, 61]). Food resource availability is largely viewed to function as an enabler (or conversely, a barrier, where unavailable) to obtaining and consuming desired foods. Whilst different local areas within our study region are likely to have different availabilities of food resources, all might nevertheless provide access sufficient as not to unduly limit individual dietary choices or a capacity to obtain desired foods. The median number of fast-food outlets in a buffer was five (IQR 3–8), suggesting that fast food was readily available across the region and lack of access would not generally be a barrier to obtaining fast food, if desired. 
Though food resource availability was not associated with change in HbA1c over time, healthful food resource availability modified the association between the overweight/obesity norm and change in HbA1c over time. In areas with a greater availability of healthful food resources, the impact of a greater overweight/obesity norm on rising HbA1c was reduced. Conversely, where there was a lesser availability of healthful food resources, the rate of increase in HbA1c due to a greater overweight/obesity norm was amplified. Interactions between environmental features in relation to cardiometabolic risk have rarely been studied [5]. No previous studies have reported on the presence or absence of interactions between local descriptive health-related norms and the contextual food environment in relation to HbA1c or other health outcomes. Further study of the interactive, joint effects of contextual and compositional risk conditions on chronic disease outcomes is required to inform strategies for intervention design and targeting. Local environments that predispose and enable individual health behaviours will be more supportive of health than environments that support only one or the other set of factors. Attention to local health-related norms (to predispose healthful behaviour) together with the provision of sufficient resources (to enable healthful behaviour) is needed to reduce chronic disease outcomes [65, 66]. It is necessary also to develop an understanding of how local descriptive norms are shaped. Intervention strategies intending to change local norms will need to be assessed for effectiveness. Appropriate intervention strategies may need to differ according to the level of the current descriptive norm. Applying Rogers' Diffusion of Innovations theory [67], where the local prevalence of a positive behavioural norm is low, intervention strategies might prioritise the targeting of early adopters. Conversely, where the local prevalence of a positive behavioural norm is moderate, targeting laggards may be more appropriate. Strategies might further differ depending on the target group. For example, strategies aimed at early adopters could involve health education campaigns that appeal to values, attitudes and beliefs which predispose behaviour, while those targeting the early majority could focus on enabling mechanisms such as the provision of healthful food at affordable cost. Further research is necessary to empirically measure and to implement interventions to shape and apply healthful norms. Strengths of this study include the use of a 10-year population-based cohort with three waves of data including clinical measures. The longitudinal design supports causal inference through temporality of measures [2]. However, the positive longitudinal associations between the local descriptive overweight/obesity norm and increasing HbA1c differed from the cross-sectional results which indicated an inverse association between the overweight/obesity norm and baseline HbA1c. Sub-analyses indicated that these unexpected cross-sectional findings were carried primarily by older participants and that age adjustment did not fully remove the impact of these influences. Emphasis should be on the longitudinal findings as the cross-sectional results are likely spurious. The outcome (HbA1c) was clinically measured, avoiding self-report bias. 
However, individual demographic and smoking information were self-reported, and the local descriptive norms were aggregated from self-report survey data with the consequent possibility of self-report bias. The methods and data sources used to operationalise the environmental exposures were strengths of this study. The contextual food environment was represented using objective measures extracted from a database constructed from data collected by field surveyors [37]. Local descriptive norms data were derived from a separate survey, thus avoiding same-sample bias [68]. Environmental exposures were defined using ego-centred road-network buffers, as has been previously recommended [69], with local-area education expressed using a spatial unit designed to closely match with these road-network buffers. The use of differently sized road-network buffers would have added to this research, however this was not possible, as previously outlined. It is also important to note the possibility that self-selection into neighbourhoods may have influenced this study's results. This, however, is of greater concern in cross-sectional studies than those with a longitudinal design [70]. Lastly, a basic premise of this study is that people are influenced by their local residential environments and their opportunities to access resources within these areas. This does not account for time spent proximal to their place of residence, and opportunities and exposures provided within the work-place and other destinations, or while commuting. The influence of local residential exposures may vary according to time spent in the local residential area, which may itself relate to individual-level sociodemographic factors and lifestyle choices. Older individuals, or those caring for young children at home, may spend more time close to home and thus be more strongly influenced by local environment exposures. Future research will use technologies such as GPS tracking to assess time spent within different geographies for individuals. Consideration of transport modes may also be important. Car ownership may modify relationships between residential exposures and health outcomes. In the current study region, cars are the predominant mode of transport though public transport options are available and streets are generally walkable with adequate footpaths provided. These may be important factors to consider in future studies. Local descriptive body weight and dietary norms reflect compositional population characteristics. Food resource availability reflects context. The assessment of compositional norms in relation to health outcomes has rarely been investigated. This longitudinal study found only compositional norms, not food resource availability, to be associated with 10-year change in HbA1c. However, the availability of healthful food resources modified the relationship between the local descriptive overweight/obesity norm and rate of change in HbA1c. Research in different populations and regions is recommended to replicate these results. It is also recommended that future research investigate how compositional norms may be shaped, and the mechanisms through which compositional norms influence individual health outcomes. The findings of this study suggest that compositional norms should be considered in intervention strategies targeting cardiometabolic risk. 
AIC: Akaike information criterion Bayesian information criterion BMI: Census Collection District CVD: HbA1c : Glycosylated haemoglobin ICC: Intraclass correlations IQR: NWAHS: North West Adelaide Health Study PAMS Project: Place and Metabolic Syndrome Project SAMSS: South Australian Monitoring and Surveillance System Daniel M, Lekkas P, Cargo M, Stankov I, Brown A. Environmental risk conditions and pathways to cardiometabolic diseases in Indigenous populations. Annu Rev Publ Health. 2011;32:327–47. Daniel M, Moore S, Kestens Y. Framing the biosocial pathways underlying associations between place and cardiometabolic disease. Health Place. 2008;14:117–32. Black C, Moon G, Baird J. Dietary inequalities: what is the evidence for the effect of the neighbourhood food environment? Health Place. 2014;27:229–42. Macintyre S, Ellaway A, Cummins S. Place effects on health: how can we conceptualise, operationalise and measure them? Soc Sci Med. 2002;55:125–39. Leal C, Chaix B. The influence of geographic life environments on cardiometabolic risk factors: a systematic review, a methodological assessment and a research agenda. Obes Rev. 2011;12:217–30. Mobley LR, Root ED, Finkelstein EA, Khavjou O, Farris RP, Will JC. Environment, obesity, and cardiovascular disease risk in low-income women. Am J Prev Med. 2006;30:327–32. Fraser LK, Edwards KL, Tominitz M, Clarke GP, Hill AJ. Food outlet availability, deprivation and obesity in a multi-ethnic sample of pregnant women in Bradford, UK. Soc Sci Med. 2012;75:1048–56. Ajzen I. The theory of planned behavior. Organ Behav Hum Dec. 1991;50:179–211. Cialdini RB, Reno RR, Kallgren CA. A focus theory of normative conduct: recycling the concept of norms to reduce littering in public places. J Pers Soc Psychol. 1990;58:1015–26. Deutsch M, Gerard HB. A study of normative and informational social influences upon individual judgment. J Abnorm Soc Psych. 1955;51:629–36. Carrus G, Bonnes M, Fornara F, Passafaro P, Tronu G. Planned behavior and "local" norms: an analysis of the space-based aspects of normative ecological behavior. Cogn Process. 2009;10:198–200. Fornara F, Carrus G, Passafaro P, Bonnes M. Distinguishing the sources of normative influence on proenvironmental behaviors: the role of local norms in household waste recycling. Group Process Intergroup Relat. 2011;14:623–35. Kormos C, Gifford R, Brown E. The influence of descriptive social norm information on sustainable transportation behavior: a field experiment. Environ Behav. 2015;47:479–501. Nigbur D, Lyons E, Uzzell D. Attitudes, norms, identity and environmental behaviour: using an expanded theory of planned behaviour to predict participation in a kerbside recycling programme. Brit J Soc Psychol. 2010;49:259–84. Christakis NA, Fowler JH. Social contagion theory: examining dynamic social networks and human behavior. Stat Med. 2013;32:556–77. Blok DJ, de Vlas SJ, van Empelen P, Richardus JH, van Lenthe FJ. Changes in smoking, sports participation and overweight: does neighborhood prevalence matter? Health Place. 2013;23:33–8. Carroll SJ, Paquet C, Howard NJ, Coffee NT, Taylor AW, Niyonsenga T, Daniel M. Local descriptive norms for overweight/obesity and physical inactivity, features of the built environment, and 10-year change in glycosylated haemoglobin in an Australian population-based biomedical cohort. Soc Sci Med. 2016;166:233–43. ABS. Usual Residents Profile 2001, cat. no. 2004.0 [Online]. vol. 2011. Canberra: Australian Bureau of Statistics; 2003. ABS. 
Statistical Geography Volume 2: Census Geographic Areas Australia. Canberra: Australian Bureau of Statistics; 2001. South Australian Department of Health. HOS: Self Reported Prevalence of Obesity in the SA Health Regions. Adelaide: Population Research and Outcome Studies Unit, South Australian Department of Health; 2005. Dal Grande E, Taylor A, Hurst B, Kenny B, Catcheside B. The Health Status of People Living in the South Australian Divisions of General Practice: South Australian Monitoring and Surveillance System July 2002 - December 2003. Adelaide: Population Research and Outcome Studies Unit, South Australian Department of Health; 2004. Wilcox S, Castro C, King AC, Housemann R, Brownson RC. Determinants of leisure time physical activity in rural compared with urban older and ethnically diverse women in the United States. J Epidemiol Commun H. 2000;54:667–72. Grant J, Chittleborough C, Taylor A, Dal Grande E, Wilson D, Phillips P, Adams R, Cheek J, Price K, Gill T, Ruffin R. The North West Adelaide Health Study: detailed methods and baseline segmentation of a cohort for chronic diseases. Epidemiological Perspectives and Innovations. 2006;3. ABS. In: Statistics ABo, editor. Census of Population and Housing: CDATA 2001 Datapack - Usual Residents Profile, 2001. Canberra: Australian Bureau of Statistics; 2001. Grant J, Taylor A, Ruffin R, Wilson D, Phillips P, Adams R, Price K. Cohort profile: The North West Adelaide Health Study (NWAHS). Int J Epidemiol. 2009;38:1479–86. Bennett CM, Guo M, Dharmage SC. HbA1c as a screening tool for detection of Type 2 diabetes: a systematic review. Diabetic Med. 2007;24:333–43. IEC. International Expert Committee Report on the role of the A1C assay in the diagnosis of diabetes. Diabetes Care. 2009;32:1327–34. Khaw K-T, Wareham N, Bingham S, Luben R, Welch A, Day N. Association of hemoglobin A1c with cardiovascular disease and mortality in adults: the European Prospective Investigation into Cancer in Norfolk. Ann Intern Med. 2004;141:413–20. Bohannon RW. Comfortable and maximum walking speed of adults aged 20—79 years: reference values and determinants. Age Ageing. 1997;26:15–9. Astell-Burt T, Feng X. Geographic inequity in healthy food environment and type 2 diabetes. MJA. 2015;203. Jeffery RW, Baxter J, McGuire M, Linde J. Are fast food restaurants an environmental risk factor for obesity? Int J Behav Nutr Phy. 2006;3. Reitzel LR, Regan SD, Nguyen N, Cromley EK, Strong LL, Wetter DW, McNeill LH. Density and proximity of fast food restaurants and body mass index among African Americans. Am J Public Health. 2014;104:110–6. National Health and Medical Research Council. Australian Dietary Guidelines. Canberra: National Health and Medical Research Council; 2013. WHO. Global Database on Body Mass Index: An Interactive Surveillance Tool for Monitoring Nutrition Transition [http://apps.who.int/bmi/index.jsp] Population Research and Outcome Studies. South Australian Monitoring and Surveillance System: Survey Methodology [http://health.adelaide.edu.au/pros/docs/reports/report_samss_tech_paper.pdf] Population Research and Outcome Studies. South Australian Monitoring and Surveillance System (SAMSS) [http://health.adelaide.edu.au/pros/data/samss/] South Australian Government. South Australian Retail Database. Adelaide: Department of Planning, Transport and Infrastructure (DPTI); 2007. Spatial Planning Analysis and Research Unit. Guide to using the 2007 Retail Database Adelaide Statistical Division and nearby large Country Centres. Adelaide: Planning SA; 2008. 
Turrell G, Giskes K. Socioeconomic disadvantage and the purchase of takeaway food: a multilevel analysis. Appetite. 2008;51:69–91. Murray GD, Findlay JG. Correcting for the bias caused by drop‐outs in hypertension trials. Stat Med. 1988;7:941–6. ABS. Basic Community Profile (BCP) DataPack, cat. no. 20069.0.30.001 (Second Release). Canberra: Australian Bureau of Statistics; 2006. ABS. Australian Standard Geographic Classification Vol. 2: Census Geographic Areas, Australia 2006. Canberra: Australian Bureau of Statistics; 2006. West BT, Welch KB, Galecki AT. Linear Mixed Models: A Practical Guide Using Statistical Software. Boca Raton: Chapman & Hall/CRC; 2007. Ukoumunne O, Gulliford M, Chinn S, Sterne J, Burney P. Methods for evaluating area-wide and organisation-based interventions in health and health care: a systematic review. Health Technol Asses. 1999;3:98. Christakis NA, Fowler JH. The spread of obesity in a large social network over 32 years. New Engl J Med. 2007;357:370–9. Pachucki MA, Jacques PF, Christakis NA. Social network concordance in food choice among spouses, friends, and siblings. Am J Public Health. 2011;101:2170–7. Tuu HH, Olsen SO, Thao DT, Anh NTK. The role of norms in explaining attitudes, intention and consumption of a common food (fish) in Vietnam. Appetite. 2008;51:546–51. Rivis A, Sheeran P. Descriptive norms as an additional predictor in the theory of planned behaviour: a meta-analysis. Curr Psychol. 2003;22:218–33. Bandura A. Social Learning Theory. New York: General Learning Press; 1971. Frohlich KL, Corin E, Potvin L. A theoretical proposal for the relationship between context and disease. Sociol Health Ill. 2001;23:776–97. Bernard P, Charafeddine R, Frohlich KL, Daniel M, Kestens Y, Potvin L. Health inequalities and place: a theoretical conception of neighbourhood. Soc Sci Med. 2007;65:1839–52. Ashe M, Graff S, Spector C. Changing places: policies to make a healthy choice the easy choice. Public Health. 2011;125:889–95. Robinson E, Thomas J, Aveyard P, Higgs S. What everyone else is eating: a systematic review and meta-analysis of the effect of informational eating norms on eating behavior. J Acad Nutr Diet. 2014;114:414–29. Robinson E, Fleming A, Higgs S. Prompting healthier eating: testing the use of health and social norm based messages. Health Psychol. 2014;33:1057–64. Daniel M, Paquet C, Auger N, Zang G, Kestens Y. Association of fast-food restaurant and fruit and vegetable store densities with cardiovascular mortality in a metropolitan population. Eur J Epidemiol. 2010;25:711–9. Bodor JN, Rice JC, Farley TA, Swalm CM, Rose D. The association between obesity and urban food environments. J Urban Health. 2010;87:771–81. Mehta NK, Chang VW. Weight status and restaurant availability: a multilevel analysis. Am J Prev Med. 2008;34:127–33. Li F, Harmer P, Cardinal BJ, Vongjaturapat N. Built environment and changes in blood pressure in middle aged and older adults. Prev Med. 2009;48:237–41. Alter DA, Eny K. The relationship between the supply of fast-food chains and cardiovascular outcomes. Can J Public Health. 2005;96:173–7. Simmons D, McKenzie A, Eaton S, Cox N, Khan MA, Shaw J, Zimmet P. Choice and availability of takeaway and restaurant food is not related to the prevalence of adult obesity in rural communities in Australia. Int J Obesity. 2005;29:703–10. Wang MC, Kim S, Gonzalez AA, MacLeod KE, Winkleby MA. Socioeconomic and food-related physical characteristics of the neighbourhood environment are associated with body mass index. J Epidemiol Commun H. 
2007;61:491–8. Paquet C, Daniel M, Knäuper B, Gauvin L, Kestens Y, Dubé L. Interactive effects of reward sensitivity and residential fast-food restaurant exposure on fast-food consumption. Am J Clin Nutr. 2010;91:771–6. Paquet C, Dube L, Gauvin L, Kestens Y, Daniel M. Sense of mastery and metabolic risk: moderating role of the local fast-food environment. Psychosocial Medicine. 2010;72:324–31. Auchincloss AH, Diez Roux AV, Mujahid MS, Shen M, Bertoni A, Carnethon M. Neighborhood resources for physical activity and healthy foods and incidence of type 2 diabetes mellitus: the multi-ethnic study of Atherosclerosis. Arch Intern Med. 2009;169:1698–704. Green LW, Richard L, Potvin L. Ecological foundations of health promotion. Am J Health Promot. 1996;10:270–81. Daniel M, Green LW. Health promotion and education. In: Breslow L, editor. Encyclopedia of Public Health, vol. 2. New York: Macmillan; 2002. p. 541–8. Rogers EM. Diffusion of Innovations, 5th edn. New York: Simon and Schuster; 2003. Diez Roux AV. Neighborhoods and health: where are we and where do we go from here? Rev Epidemiol Sante Publique. 2007;55:13–21. Chaix B, Merlo J, Evans D, Leal C, Havard S. Neighbourhoods in eco-epidemiologic research: delimiting personal exposure areas. A response to Riva, Gauvin, Apparicio and Brodeur. Soc Sci Med. 2009;69:1306–10. Boone-Heinonen J, Gordon-Larsen P, Guilkey DK, Jacobs Jr DR, Popkin BM. Environment and physical activity dynamics: the role of residential self-selection. Psychol Sport Exerc. 2011;12:54–60. The authors wish to acknowledge the contributions of Eleonora Dal Grande and Simon Fullerton in preparation of the SAMSS data. SAMSS is owned by SA Health, South Australia, Australia. All collected source data are maintained and managed by Population Research and Outcomes Studies, The University of Adelaide. The opinions expressed in this work are those of the authors and may not represent the position or policy of SA Health. We are grateful for the interest and commitment of NWAHS cohort participants. We appreciate the contributions of research support staff involved in NWAHS recruitment and clinical follow up. The Spatial Epidemiology and Evaluation Research Group at the University of South Australia in collaboration with the South Australian Department for Health and Ageing conducted this research under National Health and Medical Research Council (NHMRC) projects (#570150 and #631917) investigating associations between Place and Metabolic Syndrome (PAMS). Dr Catherine Paquet was funded by a NHMRC Post-doctoral Training Research Fellowship (#570139) and salary support from an NHMRC Program Grant (#0631947). The funding sources had no involvement with study design, data collection, analysis and interpretation of results, writing this manuscript or choice of journal. The ethics approvals granted for this research do not include consent for the sharing of the datasets supporting the conclusions of this article. The spatial nature of the data provides a risk to participant identifiability and confidentiality. Further, the provider of data from the longitudinal cohort, a branch of the state government, has not agreed to these data being made publicly available. SJC, MD and CP conceived and designed the study. SJC analysed the data with input from CP and TN. SJC, MD, CP, NJH and NTC contributed to interpretation of results. SJC wrote the manuscript, and MD, CP, NJH, NTC, TN, AWT and RJA revised it critically for important intellectual content. All authors approved the final manuscript. 
The research presented in this manuscript is not a case study, nor does it contain any individual person's data in any form. This section is, therefore, not applicable. This study was part of the Place and Metabolic Syndrome (PAMS) Project which aimed to assess the influence of social and built environmental factors on the evolution of cardiometabolic risk. The PAMS Project received ethical approval from the University of South Australia, Central Northern Adelaide Health Service, Queen Elizabeth Hospital, and South Australian Department for Health and Ageing Human Research Ethics Committees. Written informed consent of NWAHS cohort participants was obtained prior to each wave of data collection. Spatial Epidemiology and Evaluation Research Group, School of Health Sciences and Centre for Population Health Research, University of South Australia, IPC CWE-48, GPO Box 2471, Adelaide, South Australia, 5001, Australia Suzanne J. Carroll, Catherine Paquet, Natasha J. Howard, Neil T. Coffee, Theo Niyonsenga & Mark Daniel Research Centre of the Douglas Mental Health University Institute, Verdun, Québec, Canada Catherine Paquet Discipline of Medicine, The University of Adelaide, Adelaide, South Australia, Australia Robert J. Adams & Anne W. Taylor Department of Medicine, The University of Melbourne, St. Vincent's Hospital, Melbourne, VIC, Australia Mark Daniel South Australian Health & Medical Research Institute, Adelaide, South Australia, Australia Suzanne J. Carroll Natasha J. Howard Neil T. Coffee Robert J. Adams Anne W. Taylor Theo Niyonsenga Correspondence to Suzanne J. Carroll. Carroll, S.J., Paquet, C., Howard, N.J. et al. Local descriptive body weight and dietary norms, food availability, and 10-year change in glycosylated haemoglobin in an Australian population-based biomedical cohort. BMC Public Health 17, 149 (2017). https://doi.org/10.1186/s12889-017-4068-3 Cardiometabolic risk Food environment Descriptive norms Multilevel models
Electromagnetic Induction: Examples

Electromagnetic induction is the production of an electromotive force (EMF), that is, a voltage, across an electrical conductor that sits in a changing magnetic field or that moves through a stationary magnetic field. It is the complementary phenomenon to electromagnetism: instead of producing a magnetic field from electricity, we produce electricity from a magnetic field. The word "induced" distinguishes these currents and voltages from the current and voltage we get from a battery. Michael Faraday is generally credited with the discovery of induction in 1831 (Joseph Henry made similar observations independently), and James Clerk Maxwell later described it mathematically as Faraday's law of induction, one of the four Maxwell equations governing all electromagnetic phenomena. When Faraday presented his discovery, one person reportedly asked him, "So what?"; today, electromagnetic induction is used to power many electrical devices.

Induction experiments (Faraday and Henry). If the magnetic flux through a circuit changes, an EMF and a current are induced in the circuit. In Faraday's experiment on induction between coils of wire, a liquid battery provides a current that flows through a small coil (A), creating a magnetic field; when the coils are stationary no current is induced, but any change in the field produces one. Likewise, moving a permanent magnet into and out of a solenoid changes the flux through it and an induced current appears, and the faster the magnet moves, the larger the induced current. Simple demonstrations of the same idea: a bar magnet and a coil can be used to generate electricity and light a bulb, and if two coils of wire are held close together but not touching, with one attached to a music source such as a small radio and the other to an external speaker, students can hear the music through the speaker even though there is no direct connection. An accompanying video shows how Faraday's law is used to calculate the magnitude of the voltage induced in a coil of wire.

Faraday's laws of electromagnetic induction. Based on his studies of these phenomena, Faraday proposed two laws. First law: whenever the amount of magnetic flux linked with a closed circuit changes, an EMF is induced in the circuit, and the induced EMF lasts only as long as the change in magnetic flux continues. Second law: the magnitude of the induced EMF equals the rate of change of the magnetic flux linkage. This phenomenon of generating an induced EMF, and hence an induced current in a closed coil, because of a changing flux is what we call electromagnetic induction. More generally, a time-varying magnetic field can act as a source of electric field, and a time-varying electric field can act as a source of magnetic field. For an electric generator that rotates a coil of N turns and area A at angular frequency ω in a magnetic field B, the induced EMF is ε = NABω sin ωt.

Lenz's law gives the direction of the induced quantities: when an EMF is generated by a change in magnetic flux according to Faraday's law, the polarity of the induced EMF is such that it produces an induced current whose magnetic field opposes the initial changing magnetic field that produced it.

Recall the relationship between electricity and magnetism (the motor effect): when current flows through a wire, a magnetic field is created around the wire, and if the wire is coiled to form a solenoid the field is concentrated along the axis of the coil. Electromagnetism is the branch of physics that deals with the electromagnetic force between electrically charged particles. The electromagnetic force is one of the four fundamental forces; it gives rise to magnetic fields, electric fields and light, and it is the basic reason electrons stay bound to the nucleus. Many household appliances use electromagnetism as a basic working principle, for example the electromagnetic coils in an electric bell. An alternating current (AC) is the kind of electricity flowing through power lines and home wiring, as opposed to a direct current (DC), which we get from batteries.

Applications of electromagnetic induction:
- Generators. One of the most widely known uses is in electrical generators (such as hydroelectric dams), which convert mechanical energy into electrical energy. A wind turbine works the same way: wind pushes the blades, spinning a shaft attached to magnets, as in the turbines installed in the Thames Estuary in the UK. In small toy motors we use permanent magnets as the source of the magnetic field, but in large industrial motors we use field coils, which act as electromagnets when a current is provided.
- Transformers. Transformers are critical in electrical transmission because they can step voltage up or down as needed during its journey to consumers.
- Induction motors. An induction motor is an asynchronous AC motor in which power is transferred to the rotor by electromagnetic induction, much like transformer action; electric motors of this kind are used in everything from washing machines to trains.
- Induction cookers and eddy currents. Eddy currents are produced by subjecting a metal to a changing magnetic field. The induction cooker uses a magnetic field to produce eddy currents in a metal frying pan, and the flow of eddy currents in the pan produces heat.
- Electric and hybrid vehicles also take advantage of electromagnetic induction. One factor limiting wider acceptance of fully electric vehicles is that the lifetime of the battery is not as long as the driving time provided by a full tank of fuel.
- Data storage. Several applications have to do with data storage and magnetic fields; a very important one is audio and video recording tape.
- Metals processing and magnetohydrodynamics. Electromagnetic forces are not only of importance in solid materials: in metals processing using induction furnaces they must be understood, since molten metals are typically highly conductive, and the area that combines magnetics and fluid flow is known as magnetohydrodynamics.
- Shielding. A conducting sheet lying in a plane perpendicular to a magnetic field B below the sheet will, if B oscillates at high frequency and the sheet is made of a material of low resistivity, effectively shield the region above the sheet from B, because the induced eddy currents oppose the changing field.
- Other uses include electric hobs and cookers, welding, changing the voltage of an alternating current with a transformer, generating the high voltage necessary to ionise the vapour in a fluorescent tube, producing the spark needed to ignite the explosive mixture in a petrol engine, electric power transmission, industrial furnaces, electromagnetic flow sensors, medical equipment, and musical instruments such as the electric violin and electric guitar.

Electromagnetic interference (EMI) is a disruption that affects an electrical circuit because of either electromagnetic induction or externally emitted electromagnetic radiation; it is the interference from one electrical or electronic system to another caused by the electromagnetic fields generated by its operation. Familiar examples of its effects are pacemakers stopping, migrating birds getting lost and GPS receivers failing to work. Electromagnetic radiation itself is all around us in everyday life: the most visible type is visible light, which derives from what our eyes perceive as a clear, observable field of view, and some kinds of electromagnetic radiation even have the potential to kill cancer cells.

Solved examples on electromagnetic induction and alternating current:

Question 1. A copper disc 20 cm in diameter rotates with an angular velocity of 60 rev/s about its axis. The disc is placed in a magnetic field of induction 0.2 T acting parallel to the axis of rotation. As the disc rotates, each of its radii cuts the lines of force of the magnetic field. Calculate the magnitude of the EMF induced between the axis of rotation and the rim of the disc. From the calculation we conclude that this EMF is 0.377 V.

Question 2. A rectangular loop of N turns of area A and resistance R rotates at a uniform angular velocity ω about the Y-axis; the loop lies in a uniform magnetic field B in the direction of the X-axis. Assuming that at t = 0 the plane of the loop is normal to the lines of force, find an expression for the peak value of the EMF and the current induced in the loop, and the torque required to keep it rotating at constant ω. The magnitude of the induced current is (BAωN/R)|sin ωt|; equating the power input τω to the heat dissipated per second, I²R, gives the required torque τ = (N²A²B²ω/R) sin²ωt.

Question 3. A long solenoid of length 1 m and cross-sectional area 10 cm², having 1000 turns, has a small coil of 20 turns wound about its centre; let A1 and A2 be the respective areas of cross-section of the two coils. Compute the mutual inductance of the two circuits, and the EMF in the small coil when the current in the solenoid changes at the rate of 10 A/s. The flux φ2 through the coil created by the current i1 in the solenoid is φ2 = N2(B1A2), so the magnitude of the induced EMF is E2 = M|di1/dt|.

Question 4. An alternating EMF of 200 virtual (rms) volts at 50 Hz is connected to a circuit of resistance 1 Ω and inductance 0.01 H. What is the phase difference between the current and the EMF, and what is the virtual current in the circuit? In an AC circuit containing inductance the voltage leads the current in phase; from the calculation, the virtual current in the circuit is 60.67 A.

Question 5. A square loop of wire is placed in a uniform magnetic field perpendicular to the plane of the loop. The strength of the magnetic field is 0.5 T and the side of the loop is 0.2 m. What is the magnetic flux through the loop?

Question 6. A coil is replaced with another coil that has twice the initial number of loops, while the rate of change of magnetic flux stays constant (known: initial loops N = 1). Determine the ratio of the initial and final induced EMF.

Question 7. A diagram shows a segment of wire moving through a region of uniform magnetic field B; show the direction of the induced current through the wire. A related figure shows the direction of an induced current in a wire and asks in which direction the wire moved to produce it. Solution: using Fleming's right-hand rule, or the right-hand slap rule, the direction of motion of the wire is D.

Question 8. A circular antenna of area 3 m² is installed at a place in Madurai; the plane of the antenna is inclined at 47° to the direction of the Earth's magnetic field.

Further exercises sketched in the source include a square loop ACDE of area 20 cm² and resistance 5 Ω rotated through 180° in a magnetic field B = 2 T; a copper rod released so that it falls through a magnetic field; a bar magnet falling towards a solenoid; and a rectangular conductor placed in a magnetic field. Additional practice problems on magnetic induction and Lenz's law, with solutions, are available in College Physics, 2nd edition (Knight, Jones and Field) for algebra-based courses and in Fundamentals of Physics, 9th edition (Halliday, Resnick and Walker) for calculus-based courses.
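Two of the quoted answers above (0.377 V in Question 1 and 60.67 A in Question 4) are stated without the intermediate steps; the short derivations below are added here for convenience and are consistent with those quoted values.

Question 1: for a disc of radius $r = 0.1$ m spinning at $\omega = 2\pi \times 60$ rad/s in $B = 0.2$ T, the EMF between the axis and the rim is $\varepsilon = \tfrac{1}{2} B \omega r^{2} = \tfrac{1}{2}(0.2)(2\pi \times 60)(0.1)^{2} \approx 0.377\ \text{V}$.

Question 4: with $R = 1\ \Omega$, $L = 0.01$ H and $f = 50$ Hz, the inductive reactance is $X_L = 2\pi f L = \pi\ \Omega$, the impedance is $Z = \sqrt{R^{2} + X_L^{2}} = \sqrt{1 + \pi^{2}} \approx 3.30\ \Omega$, so $I_{\text{rms}} = 200 / 3.30 \approx 60.7$ A (the 60.67 A quoted above), and the voltage leads the current by $\phi = \tan^{-1}(X_L / R) = \tan^{-1}(\pi) \approx 72.3^{\circ}$.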
Plant Leaf Classification and Comparative Analysis of Combined Feature Set Using Machine Learning Techniques

Sujith Ariyapadath, Department of Computer Science, University of Kerala, Research Center, Thiruvananthapuram 695581, India. [email protected]

The main purpose of this research work is to apply machine learning and image processing techniques to classify plants efficiently. Conventional plant classification is time-consuming and requires expensive analytical instruments, whereas an automated plant classification system makes it easy to predict plant classes. The most challenging part of automated plant classification research is extracting unique features from leaves. This paper proposes a plant classification model built on an optimal feature set with combined features: features are extracted from leaf images and supplied to image classification algorithms. After evaluation, GIST, Local Binary Pattern and Pyramid Histogram of Oriented Gradients were found to give better results than the other descriptors in this particular application. These three feature extraction techniques are combined, and the optimal feature set is selected through Neighbourhood Component Analysis. The optimal feature set helps classify plants with maximum accuracy in minimal time. An extensive experimental comparison of the proposed optimal feature set and other feature extraction methods was performed using different classifiers and tested on different datasets (Swedish Leaves, Flavia, D-Leaf). The results confirm that the optimal feature set with NCA and an ANN classifier leads to better classification, achieving 98.99% accuracy in 353.39 seconds.

Keywords: plant classification, optimal feature set, GIST, local binary pattern, pyramid histogram oriented gradient, machine learning, neighbourhood component analysis, artificial neural network

1. Introduction

Plants are the backbone of all living organisms: they produce oxygen and help to control climate change. Plants are a good source of food and provide oxygen, shelter, medicine and fuel. Without plants, the environment and human life on this earth cannot exist. Plants are facing extinction due to deforestation, urbanization, global warming and overexploitation. Thus, creating a plant database for speedy and effective classification is necessary to protect the plants. In ancient times, taxonomists classified plant species based on the characteristics of the plant using the wet lab process. Different parts of a plant, such as leaves, flowers, seeds, barks, fruits, and roots, can be used for classification. Leaves are an essential part of a plant and vary in shape, color, texture, and size [1]. The structure of each plant leaf is different and can differentiate one variety of plants from another; thus, leaf-based classification is the most widely accepted approach. Classification of plant species using the wet lab process is time-consuming. An automated plant identification system helps researchers, botanists, and non-specialists to protect endangered plants. Several researchers have attempted to develop plant classification systems using digital image processing and machine learning techniques [2]. A number of feature extraction techniques, feature selection techniques and classifiers have been proposed for classification. The quality of the extracted features influences the performance of plant leaf classification.
However, acquiring meaningful and unique features from plant species with low variation is a complex task. Combining two or more feature extraction techniques (shape, texture, color, venation, etc.) gives better classification results than a single feature extraction technique [3], and optimal feature set selection helps to maintain the quality of the features [4]. This paper proposes an optimal feature set derived from multiple feature descriptors using feature selection techniques to build a classification model. The main highlights of this paper are summarized as follows: (1) an optimal feature set is proposed using a combination of feature extraction methods with three selection methods and is evaluated on three benchmark datasets; (2) a comparative analysis of the proposed model is carried out with three classifiers, a Feed Forward Back Propagation Neural Network, K Nearest Neighbour (KNN), and ensemble learning with Random Forest (RF), all of which obtain good results, with the model performing best using the Feed Forward Back Propagation Neural Network classifier on the Swedish leaf dataset; (3) the combined feature set is analyzed with the feature selection methods NCA, ReliefF and MRMR; (4) the proposed model is compared with recent existing works. More specifically, this paper focuses on an optimal feature set obtained from multiple combined feature extraction methods; such a feature set strongly influences plant image classification accuracy and the reduction of computational time. This paper consists of five main sections. Section 1 explains the importance of plants in human life, the relevance of automated plant classification systems, and the importance of combining multiple feature extraction methods and selecting an optimal feature set for the classification of plants. Section 2 reviews the methodology and description of existing plant classification models. Section 3 describes the proposed machine learning model for plant classification, incorporating the feature extraction and classification phases. Section 4 describes the evaluation of the proposed model using different performance measures and compares its results with state-of-the-art works. Finally, the conclusion and future enhancements are given in Section 5.

2. Preliminary Study

Several plant classification methods can be found in the literature. The performance of a classifier varies with respect to several factors, such as noise and the relevance and number of features, and the quality of the extracted features plays an important role in deciding the overall performance. Many plant leaf classification models are available in the literature. Zhao et al. [5] proposed an Independent-Inner-Distance Shape Context (I-IDSC) feature and an ANN classifier using the Swedish dataset. Munisami et al. [6] presented a model integrating shape, morphological information, color histograms and distance maps with a k-Nearest Neighbour classifier using the Flavia dataset. Naresh and Nagendraswamy [7] proposed a medicinal plant classification system using a modified LBP and a 1-nearest-neighbour classifier with the UoM medicinal plant dataset. Yang et al. [8] built a shape-based system using multi-scale triangular centroid distance and shape dissimilarity measurement with the Swedish leaves. Rzanny et al. [9] introduced the Elliptical Half Gabor and Maximum Gap Local Line Direction Pattern to obtain stable and independent local line responses from leaf contour, texture, and veins. Begue et al. [10] proposed a medicinal plant recognition system integrating shape and color features using a Random Forest classifier with ten-fold cross-validation. Kan et al.
[11] proposed an automatic plant classification method based on shape features, three geometrical features, and GLCM texture characteristics using an SVM classifier. Hewitt and Mahmoud [12] presented a novel feature set using shape, signal features and curvature maps with an SVM classifier with a radial basis function (RBF) kernel on the Swedish leaves dataset. Ali et al. [13] proposed a combined feature set using LBP and bag-of-features for classification using SVM. Salve et al. [14] proposed a multi-modal plant classification system integrating LBP and GIST features using feature-level fusion and score-level fusion techniques. Mostajer Kheirkhah and Asghari [15] proposed a model based on GIST texture features with the PCA feature selection method, classified with a cosine KNN classifier. Kuang et al. [16] proposed a defect detection method for bamboo strips using an SVM classifier by integrating LBP and GIST features. Pacifico et al. [17] proposed an automatic plant classification system based on color and texture features using a multi-layer perceptron with backpropagation (MLP-BP) classifier. Sujith and Neethu [18] proposed a feature combination method to classify plants with an ANN classifier by combining shape and texture features. Ahmed et al. [19] calculated six color features and twenty-two GLCM texture features and applied them to a one-vs-one SVM classifier. The research conducted on plant analysis models still has some challenges and limitations. It is essential to extract all relevant features so that classification accuracy increases with minimal computational time. Some feature descriptors suit particular types of datasets better than others. It is also found that combining various feature descriptors with effective feature selection and dimensionality reduction methods is beneficial for increasing overall classification performance.

3. Dataset

The introduced technique is trained and tested with three publicly available benchmark datasets: Swedish Leaves, Flavia, and D-Leaf.

Figure 1. Datasets: (a) Swedish leaves dataset, (b) Flavia dataset, (c) D-Leaf dataset

The Swedish Leaves dataset [20] consists of 15 tree species with 24-bit RGB images. Each class contains 75 pictures with a white background and various dimensions, giving a total of 1125 images in TIFF format. The proposed model resizes the images to 200×200, and the dataset is divided into 70% for training, 15% for validation, and 15% for testing. The Flavia dataset [21] consists of 32 plant classes with 1907 RGB plant images on a white background. Each class contains a varying number of images in JPEG format with 1600×1200 dimensions. In the proposed model, 1600 images are chosen, with 50 images per class used for the experiment, and the dataset is divided into 70% for training, 15% for validation, and 15% for testing. The D-Leaf dataset [22] consists of 43 plant classes with a white background and contains 1290 RGB plant leaf images. Each class contains 30 leaf images in TIFF format with 250×250 dimensions, and the dataset is divided into 70% for training, 15% for validation, and 15% for testing. Figure 1(a), 1(b) and 1(c) show sample images of each class from the Swedish leaves, Flavia and D-Leaf datasets, respectively.
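A minimal sketch of the 70/15/15 split described above, using scikit-learn; the original experiments were run in MATLAB, so the library choice, array names and random seed here are illustrative assumptions rather than the authors' code.

```python
from sklearn.model_selection import train_test_split

# X: feature matrix (one row per leaf image), y: class labels (assumed inputs).
# First hold out 30%, then split that portion half-and-half into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=0)
```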
4. Proposed Methodology

The proposed model consists of five phases: pre-processing, feature extraction, feature normalization and combination, feature selection and reduction, and classification, as shown in Figure 2. The pre-processing phase converts RGB images into grayscale and applies a median filter. Initially, five feature extraction methods were selected; based on the performance of these methods in Tables 1(a), 1(b) and 1(c), three of them, GIST, Local Binary Pattern (LBP), and Pyramid Histogram of Oriented Gradients (PHOG), were chosen for the optimal feature set combination. The feature normalization and combination stage normalizes the features using the mapminmax technique and combines the above features. The feature selection and reduction phase uses filter methods, namely Neighborhood Component Analysis (NCA), ReliefF, and Maximum Relevance Minimum Redundancy (MRMR), for optimal feature selection and reduction; this selects relevant features for better classification, reduces the feature vector size, and reduces the model's time complexity. The classification phase includes training, validation and testing: the optimal feature set is used in training, the parameters are selected using cross-validation, and the classification task is performed using an Artificial Neural Network with backpropagation (ANN), k-Nearest Neighbour (KNN) and Random Forest with decision trees (RF).

4.1 Pre-processing

Image pre-processing techniques significantly impact image quality and the classification performance of the model [23]. Different artifacts may occur during image acquisition, such as low contrast, brightness changes and image transformations. The proposed method uses the Swedish leaves dataset containing 15 plant species with 1125 RGB leaf images. These images are first converted from RGB to grayscale using Eq. (1); next, a median filter is applied using Eq. (2) to reduce noise while preserving image edges; finally, the filtered leaf images are resized from their variable sizes to 200×200.

$R\_to\_G = 0.2989R + 0.5870G + 0.1140B$ (1)

According to Eq. (1), red, green, and blue contribute about 30%, 59%, and 11%, respectively. The three colors (R, G, and B) have different wavelengths and contribute differently to image formation; the luminosity method is used here to convert RGB to grayscale properly. In median filtering, the pixel values in a 3×3 neighborhood window are sorted in ascending order and the median value is picked. This filter is efficient at reducing impulse noise with little blurring of edges [24].

$y[m, n] = \operatorname{median}\{x[i, j],\ (i, j) \in \omega\}$ (2)

where ω represents the neighborhood defined by the user, centered around location [m, n] in the image. The same operations are applied to the other datasets, Flavia and D-Leaf, for evaluating the model.
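A small illustration of the Section 4.1 pipeline (grayscale conversion, 3×3 median filter, resize). The paper's experiments used MATLAB, so this OpenCV/Python sketch is an assumed equivalent, and the file path shown is hypothetical.

```python
import cv2

def preprocess(path, size=(200, 200)):
    """Grayscale (Eq. 1), 3x3 median filter (Eq. 2), then resize to 200x200."""
    bgr = cv2.imread(path)                        # OpenCV reads images in BGR order
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # luminosity weights ~0.299/0.587/0.114
    filtered = cv2.medianBlur(gray, 3)            # 3x3 median window
    return cv2.resize(filtered, size)

leaf = preprocess("swedish_leaves/class01/leaf_001.tif")  # hypothetical path
```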
4.2 Feature extraction

In image processing, features are the characteristics that describe an image. In this section, LBP, GIST and PHOG features are extracted from the leaf images, and HOG and GLCM features are extracted for evaluation purposes. The following sub-sections describe all the feature extraction methods incorporated in this work.

Figure 2. The architecture of the proposed model

4.2.1 GIST

The GIST feature is a collection of Gabor filter responses from the image, and it can represent a region boundary of an object or the shape of the scene in the picture. The GIST descriptor is based on a low-dimensional representation of the scene called the spatial envelope. The spatial envelope properties of a scene are naturalness, openness, perspective, size, diagonal plane, depth, symmetry, contrast, roughness, expansion, and ruggedness [25]. The 2D Gabor filter is defined in Eq. (3) as follows:

$g(x, y ; \sigma, \theta, \lambda, \gamma, \psi)=\exp\left[-\frac{x'^{2}+\gamma^{2} y'^{2}}{2 \sigma^{2}}\right] \cdot \exp\left[i\left(2 \pi \frac{x'}{\lambda}+\psi\right)\right], \quad x'=a^{-m}(x \cos \theta+y \sin \theta), \quad y'=-x \sin \theta+y \cos \theta, \quad \theta=\frac{n \pi}{n+1}$ (3)

where g(x, y) is the normalized frequency response of the Gabor filter, σ is the standard deviation, θ is the rotation angle, λ is the wavelength, γ is the aspect ratio, ψ is the phase offset, a⁻ᵐ denotes the scale factor, m denotes the number of scales, and n indexes the orientation. The Gabor filter can extract information separately from two different regions with similar gray levels. First, the pre-processed image is split into a 4×4 grid; with four scales and eight orientations, the filter response of each cell is computed using the series of Gabor filters in Eq. (3), and all the cell responses are concatenated to form the GIST feature vector. Intuitively, GIST summarizes the gradient information (scales = 4 and orientations = 8) for different parts of an image, which provides a rough description (the "gist") of the scene.

4.2.2 Local binary pattern (LBP)

LBP [26] is a texture feature descriptor used to describe the texture characteristics of an image. The grayscale image is divided into cells of 16×16 pixels, and within each cell a 3×3 window is considered around every pixel. Each neighboring pixel value is compared with the center pixel: the value is set to 0 for a neighbor that is less than the center pixel value and to 1 for a neighbor that is greater than or equal to it. The eight binary bits are read in a clockwise direction and converted into decimal form, and this decimal value is assigned to the center pixel. After completing this process, the LBP code matrix of the given image is obtained. A 256-bin histogram of the codes is computed for each cell, and the concatenation of the histograms of all cells provides the feature vector of the entire image. The LBP code of a pixel is calculated using Eq. (4) and Eq. (5):

$\operatorname{LBP}(g_{px}, g_{py})=\sum_{p=0}^{P-1} S(g_{p}-g_{c})\, 2^{p}$ (4)

$S(x)=\begin{cases}1, & \text{if } x \geq 0 \\ 0, & \text{if } x<0\end{cases}$ (5)

where g_c is the intensity value of the center pixel, g_p is the intensity of the p-th neighboring pixel, P is the number of sampling points on a circle of radius R, p controls the quantization of the method, and S(x) is the threshold step function. Here P = 8 is chosen.
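A brief sketch of the LBP histogram of Eqs. (4)-(5) using scikit-image. For brevity it pools a single 256-bin histogram over the whole image, whereas the description above pools per 16×16-pixel cell and concatenates the cell histograms; this Python fragment is an illustrative assumption, not the paper's MATLAB implementation.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """LBP codes with P = 8 neighbours on a circle of radius R,
    pooled into a normalised 256-bin histogram."""
    codes = local_binary_pattern(gray, P, R, method="default")
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```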
4.2.3 Pyramid histogram of oriented gradients (PHOG)

The Pyramid Histogram of Oriented Gradients is an extended version of the HOG feature descriptor; PHOG [27] is a spatial shape descriptor. First, the RGB image is read and converted to grayscale, and the edge contours are extracted using the Canny edge detection algorithm. HOG is then computed for each grid cell at each pyramid level; here the number of pyramid levels is 3 (l = 0, 1, 2). The local shape is represented by a histogram of edge orientations within each image sub-region, quantized into eight bins, computed from the edge contours and orientation gradients of the original image. The final PHOG descriptor of the entire image is the concatenation of all the HOG vectors at each pyramid resolution, so the vector contains the spatial information of the image. The PHOG vector size is 1×680 and is calculated using Eq. (6):

$N=\eta\left(\frac{4^{l+1}-1}{3}\right)$ (6)

where N is the total number of PHOG features, l is the number of pyramid levels, and η is the number of orientation bins.

GLCM is a second-order statistical texture analysis measure of the probability of finding a pair of pixels at a particular offset and orientation in an image [28]. The second-order measure captures how one gray-level pixel value relates to its neighboring pixel value: GLCM is a tabulation of how often different combinations of gray levels co-occur in an image or image section. GLCM can be calculated at eight angles for any offset; calculating GLCM at 0° means finding the relationship between pairs of pixel values lying horizontally to the right of each other, and the 0° direction is chosen in this case. One counts how often a pixel with quantization level 0 appears next to another pixel with level 0, how often level 0 appears next to level 1, and so on. The diagonal values of the resulting matrix indicate homogeneous areas, and heterogeneity increases away from the diagonal elements.

The Histogram of Oriented Gradients (HOG) is a shape descriptor [29]. First, the image is cropped into small patches and resized to 64×128, then divided into an 8×16 grid in which each block is 8×8 pixels. For one block of the grid, the gradient magnitude and direction at each pixel position are calculated using Eq. (7) and Eq. (8) from the differences of pixel intensity in the x and y directions (the diagonal neighbours of the grid are not considered when calculating the intensity differences):

$\text{Gradient}_{\text{Magnitude}} = \sqrt{(x_{\text{direction}})^{2}+(y_{\text{direction}})^{2}}$ (7)

$\text{Gradient}_{\text{Direction}} = \tan^{-1}\left(\frac{x_{\text{direction}}}{y_{\text{direction}}}\right)$ (8)

Similarly, the gradient magnitude and direction are calculated at all pixel positions. The angles all lie between 0° and 180°, and this range is divided into nine bins to build the histogram. For each pixel position the gradient direction and its corresponding magnitude are checked, and the magnitude is placed and distributed into the corresponding bin of the histogram. Doing this for all gradient directions and magnitudes gives the binned magnitude values, which are placed in vector form; the size of the feature vector follows from the distribution of the magnitudes over the bins of the histogram.
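The HOG computation described above (nine orientation bins over 8×8-pixel cells) is available off the shelf in scikit-image; the sketch below is an assumed Python stand-in for the paper's MATLAB feature extraction. Scikit-image has no built-in PHOG, so the pyramid concatenation of Eq. (6) would have to be added on top of this.

```python
from skimage.feature import hog

def hog_descriptor(gray):
    """HOG per Eqs. (7)-(8): 9 orientation bins, 8x8-pixel cells.
    PHOG would repeat this over grid cells at pyramid levels l = 0, 1, 2
    and concatenate the resulting histograms."""
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
```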
4.3 Feature normalization and combination

Feature normalization scales all feature values to lie between 0 and 1. Features have two basic properties, units and magnitude: the unit is how a particular feature is measured and the magnitude is its value. The min-max normalization method is used here, which helps the classifier learn the weights quickly and improves the performance of the model by reducing bias. The inter-class variation between plant leaves cannot be captured correctly using a single feature [30]; to overcome high inter-class variation, two or more feature sets are concatenated. The LBP texture feature reflects the intensity relation between a pixel and its neighboring pixels, but spatial information is lost when LBP captures local patterns; PHOG is a spatial shape descriptor that adds this spatial information, and GIST provides a low-dimensional representation of the scene. Therefore, combining these complementary feature extraction methods (LBP, PHOG, and GIST) makes each class easier to distinguish. Feature appending (concatenation) is used to combine the feature extraction methods, and the concatenation of these features increases classification accuracy. However, the size of the feature vector increases after concatenation, so the NCA technique is used for feature selection and reduction to reduce the size of the feature vector; this reduction improves classification performance in terms of accuracy and computational time.

4.4 Feature selection and reduction

In machine learning, selecting relevant features is essential. Feature selection enables the proposed algorithm to train faster, reduces computational complexity, and minimizes overfitting of the model. Selection algorithms from the category of filter methods, namely NCA, ReliefF, and MRMR, are chosen here; filter methods are fast, do not involve training the model, and are computationally inexpensive. The following subsections describe the feature selection methods used in the proposed model and in its evaluation.

4.4.1 Neighborhood component analysis (NCA)

NCA is used with the k-nearest-neighbour classification algorithm to learn a low-dimensional representation of labeled data with a Mahalanobis distance measure [31]. NCA is a supervised metric learning algorithm that transforms the data points into a new space in which the distance between two data points of the same class is small compared to the distance between points of different classes. The metric learner learns this distance with the help of the Mahalanobis distance measure, and the weights of unwanted features should be close to zero. With NCA, linear dimensionality reduction with a low-rank distance is possible; in that case the learned metric will be of low rank. The selected features are ranked by their weights, and the 1000 highest-weighted elements are chosen to form the new feature vector. The probability p_ij that a point i selects another point j as its neighbor is defined using a softmax over Euclidean distances in the transformed space, as in Eq. (9):

$p_{ij}= \begin{cases}\dfrac{e^{-\left\|A x_{i}-A x_{j}\right\|^{2}}}{\sum_{k \neq i} e^{-\left\|A x_{i}-A x_{k}\right\|^{2}}}, & \text{if } j \neq i \\ 0, & \text{if } j=i\end{cases}$ (9)

The objective of NCA is to maximize the expected number of points that are correctly classified under this rule, Eq. (10):

$f(A)=\sum_{i} \sum_{j \in C_{i}} p_{ij}=\sum_{i} p_{i}$ (10)

where p_i is the probability that point i will be correctly classified and C_i = {j | c_i = c_j}.
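The normalization, concatenation and NCA steps of Sections 4.3-4.4.1 can be sketched as follows. The paper ranks per-feature weights (as MATLAB's fscnca does) and keeps the 1000 largest, whereas scikit-learn's NeighborhoodComponentsAnalysis learns a linear projection; this Python fragment is therefore only an analogous illustration, and the array names are assumptions.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import NeighborhoodComponentsAnalysis

def combine_and_reduce(X_lbp, X_phog, X_gist, y, n_components=100):
    """Min-max normalise each descriptor block, concatenate them,
    then learn a supervised NCA projection of the combined vector."""
    blocks = [MinMaxScaler().fit_transform(X) for X in (X_lbp, X_phog, X_gist)]
    X_combined = np.hstack(blocks)
    nca = NeighborhoodComponentsAnalysis(n_components=n_components, random_state=0)
    return nca.fit_transform(X_combined, y)
```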
4.4.2 ReliefF

The ReliefF algorithm is an extension of the Relief feature selection algorithm and is used for multi-class feature selection. It is an instance-based attribute selection algorithm: instead of finding one near miss, the algorithm finds one near miss for each class and averages their contributions when updating the feature weights [32].

4.4.3 MRMR

MRMR [33] is a sequential forward feature selection algorithm that finds an optimal set of mutually and maximally dissimilar features that can represent the dependent variable effectively. The algorithm minimizes the redundancy of the feature set and maximizes its relevance to the dependent variable, quantifying redundancy and relevance using the mutual information of the features with the dependent variable and the pairwise mutual information between features.

4.5 Classification

The final phase of the proposed model is classification, which deals with matching target labels to predicted labels; in this research work the model classifies multiple classes. The following subsections describe the classifiers used in the proposed model and in its evaluation. The best model in this particular application was obtained using the Artificial Neural Network with backpropagation.

4.5.1 Artificial neural network (ANN)

The ANN [34] with backpropagation is structured into an input layer, hidden layers and an output layer. The input layer contains 1000 features and the output layer has 15 classes. Two hidden layers are chosen, each containing 400 neurons, fixed through a trial-and-error mechanism. The conjugate gradient algorithm is chosen as the training function, as its convergence speed is higher than that of other training functions; cross-entropy is calculated to measure network performance during training, and the sigmoid activation function is used to forecast the class probabilities as output. The network learns from the training samples, is validated using a cross-validation technique, and is tested with the given test images. The cross-entropy used to evaluate the network during training is given by Eq. (11):

$\text{Cross-entropy loss} = -\sum_{c=1}^{M} y_{o, c} \log(P_{o, c})$ (11)

where M is the number of classes, log is the natural logarithm, y_{o,c} is a binary indicator (0 or 1) of whether class label c is the correct classification for observation o, and P_{o,c} is the predicted probability that observation o is of class c. Figure 3 shows the network topology diagram for the feed-forward backpropagation neural network containing the input layer, two hidden layers, and the output layer; the green circle indicates the difference from the desired values, and the red lines show the errors being backpropagated.

Figure 3. ANN topology diagram

4.5.2 K-Nearest neighbour (KNN)

The K-Nearest Neighbour is a supervised machine learning algorithm that predicts the class of each test sample based on its feature similarity to the trained samples. The KNN classifier works better with a small number of input variables and with data on the same scale [35]. Here, the sample images are loaded and the number of neighbours K is initialized to 5. The Euclidean distance of Eq. (12) is then used to find the distance between the test data point and each row of training data; the resulting distances are sorted in ascending order, the top K rows of the sorted array are chosen, and the test point is assigned the class that is most frequent among these K nearest training samples.

$d=\sqrt{(x_{1}-x_{2})^{2}+(y_{1}-y_{2})^{2}}$ (12)

where (x_i, y_i) are data points and d (the Euclidean distance) is the shortest distance between two data points.
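For illustration, the two classifiers of Sections 4.5.1-4.5.2 can be approximated in scikit-learn as below. The paper trains its ANN in MATLAB with a conjugate-gradient trainer and a cross-entropy criterion; scikit-learn's MLPClassifier uses different solvers (L-BFGS, SGD or Adam), so this configuration is only an assumed analogue of the 1000-input, two-by-400-neuron, 15-class network.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

# Two hidden layers of 400 logistic (sigmoid) neurons, mirroring Figure 3.
ann = MLPClassifier(hidden_layer_sizes=(400, 400), activation="logistic",
                    max_iter=500, random_state=0)

# K = 5 neighbours with Euclidean distance, as in Eq. (12).
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")

for model in (ann, knn):
    model.fit(X_train, y_train)       # X_train: reduced 1000-feature vectors (assumed)
    print(model.score(X_val, y_val))  # validation accuracy
```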
4.5.3 Random Forest (RF)

Random Forest is a classifier based on the ensemble learning approach [36], in which multiple machine learning models are combined to produce the output for a particular problem. A random subset of the feature set is selected for each model, and a random sample of the observations is drawn; this process of choosing a random sample of the observations is known as bagging, and the bagging technique is designed to improve stability and accuracy and to reduce the variance of the model. Multiple decision tree models are used (the number of decision trees is set to 200), and each decision tree has its own role. Each node, from the root node down through the subsequent splits of the decision tree, is chosen for an optimal split based on the Gini impurity value. The Gini impurity is the probability of incorrectly classifying a randomly chosen element of the dataset if it were randomly labeled according to the class distribution of the dataset, and is calculated using Eq. (13):

$G=\sum_{i=1}^{c} P(i)\,(1-P(i))$ (13)

where c represents the number of classes and P(i) is the probability of randomly picking an element of class i. Once training of the 200 decision trees is complete, the training accuracies of all the trees are combined and their mean is calculated. Whenever test data are given to these decision trees, each one predicts a label; the most frequent output across the trees is selected by majority voting, and this result is aggregated as the model's output.

5. Results and Discussions

This section analyzes the various feature extraction techniques and their combinations. Features are extracted and analyzed using five feature extraction methods, LBP, PHOG, GIST, GLCM, and HOG, with three classifiers (ANN, KNN, RF) and three datasets (Swedish Leaves, Flavia, D-Leaf). The results are summarized in Tables 1(a), (b) and (c). Tables 2(a), (b) and (c) show that a combination of features provides better classification results than a single feature. Tables 3(a), (b) and (c) evaluate all the feature selection methods and show that NCA gives a better optimal feature set than ReliefF and MRMR. It is found that combining two or more feature extraction techniques provides high accuracy, which helps identify plants easily; the three feature extraction techniques LBP, PHOG, and GIST are chosen for the combination. The proposed model is analyzed with three classifiers, ANN, KNN, and RF, and performs best using the ANN classifier with the Swedish Leaves dataset. The proposed model was implemented with the following specifications: Windows 10, 64-bit, Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz, 4 GB RAM and MATLAB R2019b. The model's accuracy, precision, recall and F1 score are calculated from the parameters True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN), which are obtained from the confusion matrix:

$\text{Accuracy} =\frac{\text{No. of correctly recognized samples}}{\text{Total no. of samples}} \times 100\%$ (14)

$\text{Precision} =\frac{\sum \text{True Positive}}{\sum \text{Predicted Condition Positive}}$ (15)

$\text{Recall} =\frac{\sum \text{True Positive}}{\sum \text{Condition Positive}}$ (16)

$F_{1}\ \text{Score} =2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision}+\text{Recall}}$ (17)
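A sketch that trains the 200-tree random forest of Section 4.5.3 and reports the metrics of Eqs. (14)-(17) with scikit-learn. Macro averaging over the plant classes is an assumption, since the paper reports mean per-class values without naming an averaging scheme, and the train/test arrays are those assumed in the earlier sketches.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rf = RandomForestClassifier(n_estimators=200, criterion="gini", random_state=0)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)

print("Accuracy :", accuracy_score(y_test, y_pred))                    # Eq. (14)
print("Precision:", precision_score(y_test, y_pred, average="macro"))  # Eq. (15)
print("Recall   :", recall_score(y_test, y_pred, average="macro"))     # Eq. (16)
print("F1 score :", f1_score(y_test, y_pred, average="macro"))         # Eq. (17)
```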
Table 1(a). Analysis of feature extraction methods using Swedish Leaves
Table 1(b). Analysis of feature extraction methods using Flavia
Table 1(c). Analysis of feature extraction methods using D-Leaf

Table 1(b) shows the evaluation of the feature extraction methods on the Flavia dataset: LBP gives the best result with 95.79% accuracy, ahead of the GIST and GLCM features, using the ANN classifier; PHOG achieves 92.71% classification accuracy using the RF classifier, and HOG achieves 84.38% accuracy using the KNN classifier. Here the LBP feature performs better with the ANN and KNN classifiers, and the GIST feature performs better with the RF classifier. Table 1(c) shows the evaluation of the feature extraction methods on the D-Leaf dataset: LBP gives the best result with 91.39% accuracy and GIST the second highest with 89.12% using the ANN classifier, while PHOG achieves 83.15%, GLCM 54.78% and HOG 87.60% accuracy using the RF classifier. Here LBP performs best with all three classifiers. Based on this evaluation, the LBP, GIST and PHOG feature extraction methods were chosen for their higher accuracy and lower computational time.

5.2 Evaluation of combined feature sets

Table 2(a) analyses different combinations of the LBP, GIST and PHOG feature extraction techniques, evaluated with the ANN, KNN and RF classifiers and tested on the Swedish Leaves dataset. The combined feature set LBP+PHOG+GIST gives the best classification result, 98.99% in 353.4 seconds, and the combination of LBP and GIST the next best, 98.22% accuracy in 157.99 seconds, using ANN.

Table 2(a). Analysis of combined feature sets using Swedish Leaves (combinations: LBP+GIST, LBP+PHOG, GIST+PHOG, LBP+GIST+PHOG)
Table 2(b). Analysis of combined feature sets using Flavia

Table 2(b) analyses the combinations of feature extraction techniques evaluated with the ANN, KNN and RF classifiers and tested on the Flavia dataset; the best result, 97.50% accuracy in 321.32 seconds, is achieved using ANN. Table 2(c) analyses the combinations with the ANN, KNN and RF classifiers tested on the D-Leaf dataset; the best result, 94.84% accuracy in 242.16 seconds, is achieved using ANN.

Table 2(c). Analysis of combined feature sets using D-Leaf

5.3 Evaluation of feature selection methods for an optimal feature set

Table 3(a) analyses the combined feature set with the NCA feature selection method, tested on the Swedish Leaves, Flavia and D-Leaf datasets. The results show that the optimal feature set from NCA with the ANN classifier provides the best performance, with 98.99% accuracy in 353.4 seconds on the Swedish Leaves data. Table 3(b) analyses the combined feature set with the ReliefF feature selection method, tested on the Swedish Leaves, Flavia and D-Leaf datasets; the optimal feature set from ReliefF with the ANN classifier reaches 98.22% accuracy in 284.27 seconds on Swedish Leaves. Table 3(c) analyses the combined feature set with the MRMR feature selection method, tested on the Swedish Leaves, Flavia and D-Leaf datasets; the optimal feature set from MRMR with the ANN classifier reaches 97.63% accuracy in 392.01 seconds on Swedish Leaves. From Table 3(a), (b), (c) it is found that NCA provides the best optimal feature set, and applying it with the ANN classifier on the Swedish Leaves dataset gives the best accuracy.
Table 4 shows how the number of selected features, varied from 600 to 1400, affects the classification accuracy. Increasing the number of features increases the accuracy but also the computational time; beyond a certain number of features (here set to 1000, determined by a trial-and-error mechanism) the results change little. Table 4 shows that the average accuracy of the proposed model is 98.99% in 353.39 seconds. The cross-entropy of the proposed classification model, 0.0007, is a measure of model performance: while the network is learning, the model aims for the lowest possible cross-entropy, and a value approaching 0 indicates a better model. The best validation performance is 0.002 at epoch 54.

Figure 4 shows the accuracy of the proposed model with the different classifiers and datasets. This analysis shows that the proposed model reaches its highest accuracy (98.99%) with the ANN classifier on the Swedish Leaves dataset. The bar chart compares the proposed model's performance on the Swedish Leaves, Flavia and D-Leaf datasets for all three classifiers (ANN, KNN and RF).

Table 3(a). Analysis of the combined feature set using NCA
Table 3(b). Analysis of the combined feature set using ReliefF
Table 3(c). Analysis of the combined feature set using MRMR
Table 4. Effect of increasing the number of features on the accuracy for the Swedish Leaves dataset (columns: no. of features, F1 score, accuracy %, time (s), cross-entropy, best validation, epoch; mean values for LBP+PHOG+GIST)

Figure 4. Comparison of the proposed model across classifiers and datasets

The confusion matrix is used to measure how many predicted classes were correctly predicted. In Figure 5, the target class indicates the actual class of the plant leaf and the output class is the predicted class corresponding to the target class. Class 7 is correctly predicted for 12 plants and incorrectly predicted once; class 9 is correctly predicted 16 times and incorrectly predicted once. The diagonal values indicate the correctly predicted results for each class.

Figure 5. Confusion matrix

Four parameters are used: TP, TN, FP and FN. In the multi-class case, the diagonal value of the confusion matrix is the TP of the corresponding class. The total number of TN for a specific class is the sum of all columns and rows excluding that class's column and row. The total number of FP for a class is the sum of the values in the corresponding column, excluding the TP value, and FN is the sum of the values in the corresponding row, excluding the TP value.
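A minimal sketch of how these per-class counts can be read off a multi-class confusion matrix (illustrative Python with a made-up three-class matrix, not the matrix of Figure 5):

```python
import numpy as np

def per_class_counts(cm):
    # rows = target (actual) class, columns = output (predicted) class
    cm = np.asarray(cm, dtype=int)
    tp = np.diag(cm)                  # diagonal value of each class
    fp = cm.sum(axis=0) - tp          # column sum excluding the TP value
    fn = cm.sum(axis=1) - tp          # row sum excluding the TP value
    tn = cm.sum() - (tp + fp + fn)    # everything outside that class's row and column
    return tp, tn, fp, fn

cm_example = np.array([[12, 1, 0],
                       [0, 16, 1],
                       [1, 0, 14]])
tp, tn, fp, fn = per_class_counts(cm_example)
```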
Figure 6 shows the Receiver Operating Characteristic (ROC) curves used to analyse and visualise the performance of the classifier output. The X-axis represents the false positive rate (1 − specificity) and the Y-axis the true positive rate (sensitivity). The one-versus-all method is used to obtain a ROC curve for each class. The top left corner of the ROC plot is the ideal point, where the false positive rate is zero and the true positive rate is one. The curve for class 9 does not exactly reach this point, while all the others approximately do; from this analysis the model is found to classify well.

Figure 6. Receiver operating characteristic curves

Figure 7 shows the network performance of the proposed model with three curves for the training, validation and test performance. The X-axis represents the number of epochs and the Y-axis the loss; the cross-entropy loss function is used, and a loss value of 0.0 corresponds to a perfect model. All three curves decrease, so the model does not have an overfitting problem. The best validation performance of this model is 0.0021326 at epoch 65, and the total number of epochs is 75.

Figure 8 is a box plot, a standardized way of displaying the data distribution based on the minimum, first quartile, median, third quartile and maximum values; it shows the range of model accuracy and how tightly the classification accuracies are grouped. Over ten executions, the minimum accuracy for LBP is 96.45% and the maximum 99.41%; the minimum for GIST is 95.86% and the maximum 98.82%; the minimum for PHOG is 89.35% and the maximum 94.08%; and the minimum for the proposed model is 97.63% and the maximum 100%. The red line indicates the median value of the classification accuracy.

Figure 7. Network training, validation and test performance
Figure 8. Box plot of the accuracy range for each feature method

In Figure 9, the X-axis represents the combinations of feature sets and the Y-axis the classification accuracy. The bar graph shows that the Swedish Leaves dataset performs slightly better than the Flavia and D-Leaf datasets for all combined feature sets.

Figure 9. Dataset comparison using the proposed model (NCA)

In Figure 10, the X-axis represents the classifiers and the Y-axis the classification accuracy. The filter feature selection methods NCA, ReliefF and MRMR are compared using the combined feature set with each classifier; the NCA feature selection method achieves better performance with every classifier than ReliefF and MRMR in this particular application.

Figure 10. Comparison of feature selection methods using Swedish Leaves

Table 5 compares the proposed classification model with existing methods; the proposed method has the highest average classification accuracy, 98.99%. The average precision, recall and F1 score obtained are 0.9908, 0.9906 and 0.9907, respectively (Table 4).

Table 5. Comparison of the proposed model with recent models: 1-NN with MTD+LBP(HF) features [37]; SVM with colour features [38]; DT with PBPSO using GIST, LBP and geometric features [39]; MLP with features discriminable using the Fisher vector [40]; OMNCNN [41]; CNN with GLCM and Canny edge detection [42]; CNN with ResNet18, ResNet34 and ResNet50 [43]; CNN with ResNet50 [44]; proposed technique.

This paper experimentally analysed five feature extraction techniques, namely LBP, GIST, PHOG, GLCM and HOG, to extract features from plant leaf images in the Swedish Leaves, Flavia and D-Leaf datasets. In this experiment, LBP achieves 97.22% accuracy with a run time of 140.49 seconds, GIST 97.22% with 222.92 seconds and GLCM 80.47% with 33.25 seconds using the ANN classifier on the Swedish Leaves dataset, while PHOG achieves 93.62% with 39.81 seconds and HOG 90.21% with 847.36 seconds using the RF classifier on the same dataset. Based on the highest accuracy values in Table 1(a), (b), (c), the LBP, GIST and PHOG feature extraction techniques were chosen for finding the optimal feature set. The combined feature extraction method achieved better results than any single feature extraction method. NCA is used to select the optimal feature set and to reduce its size; after evaluating the NCA, ReliefF and MRMR results, NCA proved to be the better feature selection technique for this model. The model achieved 98.99% classification accuracy in 353.39 seconds (Table 4) for the optimal feature set using ANN with the Swedish Leaves dataset.
Future work will explore plant species classification with deep learning techniques on a newly created dataset.

[1] Ellis, B., Daly, D.C., Hickey, L.J., Johnson, K.R., Mitchell, J.D., Wilf, P., Wing, S.L. (2009). Manual of Leaf Architecture. Published in Association with the New York Botanical Garden. [2] Wäldchen, J., Rzanny, M., Seeland, M., Mäder, P. (2018). Automated plant species identification—Trends and future directions. PLoS Computational Biology, 14(4): e1005993. https://doi.org/10.1371/journal.pcbi.1005993 [3] Le, V.N.T., Apopei, B., Alameh, K. (2019). Effective plant discrimination based on the combination of local binary pattern operators and multiclass support vector machine methods. Information Processing in Agriculture, 6(1): 116-131. https://doi.org/10.1016/j.inpa.2018.08.002 [4] Narayan, V., Subbarayan, G. (2014). An optimal feature subset selection using GA for leaf classification. The International Arab Journal of Information Technology, 11(5): 447-451. https://ccis2k.org/iajit/PDF/vol.11,no.5/4924.pdf. [5] Zhao, C., Chan, S.S., Cham, W.K., Chu, L.M. (2015). Plant identification using leaf shapes—A pattern counting approach. Pattern Recognition, 48(10): 3203-3215. https://doi.org/10.1016/j.patcog.2015.04.004 [6] Munisami, T., Ramsurn, M., Kishnah, S., Pudaruth, S. (2015). Plant leaf recognition using shape features and colour histogram with K-nearest neighbour classifiers. Procedia Computer Science, 58: 740-747. https://doi.org/10.1016/j.procs.2015.08.095 [7] Naresh, Y.G., Nagendraswamy, H.S. (2016). Classification of medicinal plants: An approach using modified LBP with symbolic representation. Neurocomputing, 173: 1789-1797. https://doi.org/10.1016/j.neucom.2015.08.090 [8] Yang, C., Wei, H., Yu, Q. (2016). Multiscale triangular centroid distance for shape-based plant leaf recognition. In ECAI, pp. 269-276. https://doi.org/10.3233/978-1-61499-672-9-269 [9] Rzanny, M., Seeland, M., Wäldchen, J., Mäder, P. (2017). Acquiring and preprocessing leaf images for automated plant identification: Understanding the tradeoff between effort and information gain. Plant Methods, 13(1): 1-11. https://doi.org/10.1186/s13007-017-0245-8 [10] Begue, A., Kowlessur, V., Mahomoodally, F., Singh, U., Pudaruth, S. (2017). Automatic recognition of medicinal plants using machine learning techniques. International Journal of Advanced Computer Science and Applications, 8(4): 166-175. [11] Kan, H.X., Jin, L., Zhou, F.L. (2017). Classification of medicinal plant leaf image based on multi-feature extraction. Pattern Recognition and Image Analysis, 27(3): 581-587. https://doi.org/10.1134/S105466181703018X [12] Hewitt, C., Mahmoud, M. (2018). Shape-only features for plant leaf identification. arXiv preprint arXiv:1811.08398. [13] Ali, R., Hardie, R., Essa, A. (2018). A leaf recognition approach to plant classification using machine learning. In NAECON 2018-IEEE National Aerospace and Electronics Conference, Dayton, OH, USA, pp. 431-434. https://doi.org/10.1109/NAECON.2018.8556785 [14] Salve, P., Sardesai, M., Yannawar, P. (2018). Classification of plants using GIST and LBP score level fusion. In International Symposium on Signal Processing and Intelligent Recognition Systems, pp. 15-29. https://doi.org/10.1007/978-981-13-5758-9_2 [15] Mostajer Kheirkhah, F., Asghari, H. (2019). Plant leaf classification using GIST texture features. IET Computer Vision, 13(4): 369-375. https://doi.org/10.1049/iet-cvi.2018.5028 [16] Kuang, H., Ding, Y., Li, R., Liu, X. (2018).
Defect detection of bamboo strips based on LBP and GLCM features by using SVM classifier. In 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, pp. 3341-3345. https://doi.org/10.1109/CCDC.2018.8407701 [17] Pacifico, L.D., Britto, L.F., Oliveira, E.G., Ludermir, T.B. (2019). Automatic classification of medicinal plant species based on color and texture features. In 2019 8th Brazilian Conference on Intelligent Systems (BRACIS), Salvador, Brazil, pp. 741-746. https://doi.org/10.1109/BRACIS.2019.00133 [18] Sujith, A., Neethu, R. (2021). Classification of plant leaf using shape and texture features. In Inventive Communication and Computational Technologies, pp. 269-282. https://doi.org/10.1007/978-981-15-7345-3_22 [19] Ahmed, N., Asif, H.M.S., Saleem, G. (2021). Leaf Image-based Plant Disease Identification using Color and Texture Features. arXiv preprint arXiv:2102.04515. https://arxiv.org/abs/2102.04515. [20] Söderkvist, O. (2001). Computer vision classification of leaves from Swedish trees. https://www.cvl.isy.liu.se/en/research/datasets/swedish-leaf/, accessed on 12 July 2021. [21] Wu, S.G., Bao, F.S., Xu, E.Y., Wang, Y.X., Chang, Y.F., Xiang, Q.L. (2007). A leaf recognition algorithm for plant classification using probabilistic neural network. In 2007 IEEE International Symposium on Signal Processing and Information Technology, Giza, Egypt, pp. 11-16. https://doi.org/10.1109/ISSPIT.2007.4458016 [22] Tan, J.W., Chang, S.W. (2018). D-Leaf Dataset. https://doi.org/10.6084/m9.figshare.5732955.v1 [23] Hamuda, E., Glavin, M., Jones, E. (2016). A survey of image processing techniques for plant extraction and segmentation in the field. Computers and Electronics in Agriculture, 125: 184-199. https://doi.org/10.1016/j.compag.2016.04.024 [24] Mythili, C., Kavitha, V. (2011). Efficient technique for color image noise reduction. The Research Bulletin of Jordan ACM, 2(3): 41-44. [25] Oliva, A., Torralba, A. (2001). Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3): 145-175. https://doi.org/10.1023/A:1011139631724 [26] Prakasa, E. (2016). Texture feature extraction by using local binary pattern. INKOM Journal, 9(2): 45-48. [27] Amirshahi, S.A., Koch, M., Denzler, J., Redies, C. (2012). PHOG analysis of self-similarity in aesthetic images. In Human Vision and Electronic Imaging XVII, 8291: 82911J. https://doi.org/10.1117/12.911973 [28] Ehsanirad, A., YH, S.K. (2010). Leaf recognition for plant classification using GLCM and PCA methods. Oriental Journal of Computer Science and Technology, 3(1): 31-36. http://www.computerscijournal.org/?p=2180. [29] Dalal, N., Triggs, B. (2005). Histograms of oriented gradients for human detection. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, Vol. 1, pp. 886-893. https://doi.org/10.1109/CVPR.2005.177 [30] Soofivand, M.A., Amirkhani, A., Daliri, M.R., Rezaeirad, G. (2014). Feature level combination for object recognition. In 2014 4th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, pp. 559-563. https://doi.org/10.1109/ICCKE.2014.6993395 [31] Manit, J., Youngkong, P. (2011). Neighborhood components analysis in sEMG signal dimensionality reduction for gait phase pattern recognition. In 7th International Conference on Broadband Communications and Biomedical Applications, Melbourne, VIC, Australia, pp. 86-90. 
https://doi.org/10.1109/IB2Com.2011.6217897 [32] Robnik-Šikonja, M., Kononenko, I. (2003). Theoretical and empirical analysis of ReliefF and RReliefF. Machine Learning, 53(1): 23-69. https://doi.org/10.1023/A:1025667309714 [33] Ding, C., Peng, H. (2005). Minimum redundancy feature selection from microarray gene expression data. Journal of Bioinformatics and Computational Biology, 3(02): 185-205. https://doi.org/10.1142/S0219720005001004 [34] Şekeroğlu, B., İnan, Y. (2016). Leaves recognition system using a neural network. Procedia Computer Science, 102: 578-582. https://doi.org/10.1016/j.procs.2016.09.445 [35] Zhang, S., Li, X., Zong, M., Zhu, X., Wang, R. (2017). Efficient kNN classification with different numbers of nearest neighbors. IEEE Transactions on Neural Networks and Learning Systems, 29(5): 1774-1785. https://doi.org/10.1109/TNNLS.2017.2673241 [36] Pal, M. (2003). Random forests for land cover classification. IGARSS 2003. 2003 IEEE International Geoscience and Remote Sensing Symposium. Proceedings (IEEE Cat. No.03CH37477), pp. 3510-3512. https://doi.org/10.1109/IGARSS.2003.1294837 [37] Yang, C. (2021). Plant leaf recognition by integrating shape and texture features. Pattern Recognition, 112: 107809. https://doi.org/10.1016/j.patcog.2020.107809 [38] Shrivastava, V.K., Pradhan, M.K. (2021). Rice plant disease classification using color features: A machine learning paradigm. Journal of Plant Pathology, 103(1): 17-26. https://doi.org/10.1007/s42161-020-00683-3 [39] Keivani, M., Mazloum, J., Sedaghatfar, E., Tavakoli, M.B. (2020). Automated analysis of leaf shape, texture, and color features for plant classification. Traitement du Signal, 37(1): 17-28. https://doi.org/10.18280/ts.370103 [40] Kurmi, Y., Gangwar, S., Agrawal, D., Kumar, S., Srivastava, H.S. (2021). Leaf image analysis-based crop diseases classification. Signal, Image and Video Processing, 15(3): 589-597. https://doi.org/10.1007/s11760-020-01780-7 [41] Ashwinkumar, S., Rajagopal, S., Manimaran, V., Jegajothi, B. (2021). Automated plant leaf disease detection and classification using optimal MobileNet based convolutional neural networks. Materials Today: Proceedings. https://doi.org/10.1016/j.matpr.2021.05.584 [42] Nigam, S., Jain, R. (2020). Plant disease identification using deep learning: A review. Indian Journal of Agricultural Sciences, 90: 249-257. [43] Afifi, A., Alhumam, A., Abdelwahab, A. (2021). Convolutional neural network for automatic identification of plant diseases with limited data. Plants, 10(1): 28. https://doi.org/10.3390/plants10010028 [44] Taslim, A., Saon, S., Muladi, M., Hidayat, W.N. (2021). Plant leaf identification system using convolutional neural network. Bulletin of Electrical Engineering and Informatics, 10(6): 3341-3352. https://doi.org/10.11591/eei.v10i6.2332
CommonCrawl
Permutations excluding repeated characters

I'm working on a Free Code Camp problem - http://www.freecodecamp.com/challenges/bonfire-no-repeats-please The problem description is as follows - Return the number of total permutations of the provided string that don't have repeated consecutive letters. For example, 'aab' should return 2 because it has 6 total permutations, but only 2 of them don't have the same letter (in this case 'a') repeating. I know I can solve this by writing a program that creates every permutation and then filters out the ones with repeated characters. But I have this gnawing feeling that I can solve this mathematically. First question then - Can I? Second question - If yes, what formula can I use? To elaborate further - The example given in the problem is "aab", which the site says has six possible permutations, with only two meeting the non-repeated character criteria: aab, aba, baa, aab, aba, baa (the two occurrences of aba are the valid ones). The problem sees each character as unique, so maybe "aab" could better be described as "a1a2b". The tests for this problem are as follows (returning the number of permutations that meet the criteria): "aab" should return 2, "aaa" should return 0, "abcdefa" should return 3600, "abfdefa" should return 2640, "zzzzzzzz" should return 0. I have read through the following posts, which have been really helpful in that they have convinced me I should be able to do this mathematically, but I'm still in the dark as to how to apply their ideas to my particular problem: Number of permutations in a word ignoring the consecutive repeated characters, and https://answers.yahoo.com/question/index?qid=20110916091008AACH7vf – John Behan

Hint: Let's assume you have 15 positions and you need to distribute the letter "a" 5 times. Then you can achieve this in the following way: distribute the letter "a" 5 times on 11 positions. Now set an empty space after every "a" except the last. A specific example: the first distribution might yield -a-aaa--a--. Adding the spaces [here denoted by an underscore]: -a_-a_a_a_--a--. In the next step you can distribute the next letter on the remaining 10 positions, etc. – Dominik

Comments: "OK, I'll ponder this one and get back tomorrow :) Thanks" – John Behan. "I figure you're pushing me towards considering the blank spaces between the letters. However I can't see a way to use this in a formula for counting the permutations without consecutive letters. When I try to think of how to count these using anything but the most trivial example ('aab') I can't see a pattern." "Have you already understood how this construction yields all arrangements? Counting them is simple combinatorics." – Dominik. "I've come up with this solution for the problem 'abcdefa': in total there are 7! possible permutations = 5040. But with the two 'a' characters beside each other, there are 6! possible permutations = 720. As both 'a' characters are seen as unique, I then double this result, which gives me 1440. 5040 - 1440 gives a result of 3600. I've started working on the other problems 'aab' and 'abfdefa' to see if I can find a pattern. Am I on the right track?"

Interesting problem indeed: I am keen to try and see what is possible to pin down.
So we are considering the words, formed from the alphabet $$ \alpha = \left\{ {1,2,\, \cdots ,\,n} \right\} $$ with total number of repetitions of each character strictly correspondending to the vector $$ {\bf r} = \left( {r_{\,1} ,\,r_{\,2} ,\, \cdots ,\,r_{\,n} } \right) $$ and thus having length equal to $$ R = \sum\limits_{1 \le \,k\, \le \,n} {r_{\,k} } $$ With $r_k=0$ we will indicate that there is no occurence of the character $k$, which is a case useful to consider in the following. Of course, there is no loss of generality in permuting the vector $\bf r$, so that we may assume that its components are arranged ,e.g., as non-increasing; then $a$ and $b$ shall also be permuted accordingly. So $$ {\bf r}_{\,{\bf n}} = \left( {r_{\,1} ,r_{\,2} ,\, \cdots ,\,r_{\,m} ,0,\, \cdots ,0} \right)\quad \Rightarrow \quad {\bf r}_{\,{\bf m}} = \left( {r_{\,1} ,r_{\,2} ,\, \cdots ,\,r_{\,m} } \right) $$ The total number of words that can be formed from a given $\bf r$ (the alphabet is implicit in its dimension), by definition of Multinomial is: $$ N_w ({\bf r}) = \left( \matrix{ R \cr r_{\,1} ,r_{\,2} ,\, \cdots ,\,r_{\,n} \cr} \right) = {{R!} \over {r_{\,1} !r_{\,2} !\, \cdots \,r_{\,n} !}} $$ Our focus is on the words that do not have equal contiguous characters. The Multinomial approach Consider the words in which one specific character (wlog, the first) does not appear contiguously, i.e. in runs of not more than $1$ in length, while the others can be placed in whatever way. Then, as explained in this post, their number will be: $$ \bbox[lightyellow] { \eqalign{ & N_{\,w1} (r_{\,1} ,R) = \left( \matrix{ R - r_{\,1} \cr r_{\,2} ,\, \cdots ,\,r_{\,n} \cr} \right)N_b (r_{\,1} ,1,R - r_{\,1} + 1) = \left( \matrix{ R - r_{\,1} \cr r_{\,2} ,\, \cdots ,\,r_{\,n} \cr} \right)\sum\limits_{\left( {0\, \le } \right)\,\,j\,\,\left( { \le \,r_{\,1} \,/2} \right)} {\left( { - 1} \right)^j \left( \matrix{ R - r_{\,1} + 1 \cr j \cr} \right)\left( \matrix{ R - 2j \cr r_{\,1} - 2\,j \cr} \right)} = \cr & = \left( \matrix{ R - r_{\,1} \cr r_{\,2} ,\, \cdots ,\,r_{\,n} \cr} \right)\left( \matrix{ R - r_{\,1} + 1 \cr r_{\,1} \cr} \right) = \left( \matrix{ R \cr r_{\,1} ,\,r_{\,2} ,\, \cdots ,\,r_{\,n} \cr} \right) \left( \matrix{ R - r_{\,1} + 1 \cr r_{\,1} \cr} \right)\;\mathop /\limits_{} \; \left( \matrix{ R \cr r_{\,1} \cr} \right) \cr} \tag{1} }$$ If the other characters have repetition $1$, then clearly that is the number of words with no equal contiguous character. In this case $R=n+r_{1}-1$ $$ \bbox[lightyellow] { \eqalign{ & N_{\,w1} \left( {r_{\,1} ,\,1,\, \cdots ,1} \right) = \left( \matrix{ R \cr r_{\,1} ,1,\, \cdots ,\,1 \cr} \right)\left( \matrix{ R - r_{\,1} + 1 \cr r_{\,1} \cr} \right)\;\mathop /\limits_{} \;\left( \matrix{ R \cr r_{\,1} \cr} \right) = \cr & = \left( {R - r_{\,1} } \right)!\left( \matrix{ R - r_{\,1} + 1 \cr r_{\,1} \cr} \right) = \left( {n - 1} \right)!\left( \matrix{ n \cr r_{\,1} \cr} \right) \cr} \tag{2} }$$ For the examples you give $$ \eqalign{ & a,a,b\quad \Rightarrow \;\quad N_{\,w1} = 1!\left( \matrix{ 2 \cr 2 \cr} \right) = 1 \cr & a,a,b,c,d,e,f\quad \Rightarrow \;\quad N_{\,w1} = 5!\left( \matrix{ 6 \cr 2 \cr} \right) = 1800 \cr} $$ so, with your method of counting, you just have to multiply by $r_{1}!$. For the case of a general $\bf {r}$, we could develop forward identity (1) and proceed by inclusion-exclusion, but looks cumbersome. Let's try instead to find an adequate recurrence. The recursive approach Let's call $$ N_{\,0} ({\bf r},a,b) $$ the N. 
of words with no equal contiguous character, with repetition vector $\bf r$ (the alphabet is implicit in it) and with starting character $a$ and ending character $b$. We consider empty, and thus their number null, the words for which: - the starting and ending character do not pertain to the alphabet : $a \notin \alpha \,\; \vee \;b \notin \alpha $ - any component of the repetition vector is negative : $\exists k:r_{\,k} < 0$ - the start or the end character have null repetition : $r_{\,a} = 0\,\; \vee \;r_{\,b} = 0$ - the dimension of $\bf r$, i.e. of the alphabet, is null : $n<1$ In this way, it is clear that the words with a given starting and ending character constitute a partition of the set of the considered words. Let's then indicate $$ N_{\,0} ({\bf r}) = \sum\limits_{1 \le \,k,\,j\, \le \,n} {N_{\,0} ({\bf r},k,j)} $$ and put by definition that the number of empty words is 1 $$ N_{\,0} ({\bf 0}) = 1 $$ Also, we define the following vectors $$ \eqalign{ & {\bf u}_{\,n} (a) = \left( {\delta _{\,k,\,a} \;\left| {\,1 \le k \le n} \right.} \right) = \left( {\left[ {a = k} \right]\;\;\left| {\,1 \le k \le n} \right.} \right) \cr & {\bf u}_{\,n} (a,b) = \left( {\delta _{\,k,\,a} + \delta _{\,k,\,b} \;\left| {\,1 \le k \le n} \right.} \right) = \left( {\left[ {a = k} \right] + \left[ {b = k} \right]\;\;\left| {\,1 \le k \le n} \right.} \right) \cr & {\bf u}_{\,n} = \left( {1\;\;\left| {\,1 \le k \le n} \right.} \right) \cr} $$ where $\delta _{\,k,\,a} $ denotes the Kronecker delta and $[P]$ denotes the Iverson bracket. The starting cases Then, considering the simplest case of $R=1$, we have $$ N_{\,0} ({\bf u}_{\,n} (c),a,b) = \left[ {a = b = c} \right] = \left[ {a = c} \right]\left[ {b = c} \right] $$ while for $R=2$ $$ N_{\,0} ({\bf u}_{\,n} (c,d),a,b) = \left[ {c \ne d} \right]\left( {\left[ {a = c} \right]\left[ {b = d} \right] + \left[ {a = d} \right]\left[ {b = c} \right]} \right) $$ and for $R=3$ $$ N_{\,0} ({\bf u}_{\,n} (c,d,e),a,b) = \left[ {a \ne b} \right]\left[ {\left( {a,b} \right) \in 2{\rm - permutation}\,{\rm from}\,(c,d,e)} \right] $$ and finally for ${\bf u}_{\,n} $ ($R=n$) $$ N_{\,0} ({\bf u}_{\,n} ,a,b) = \left[ {1 = n} \right]\left[ {a = b} \right] + \left[ {2 \le n} \right]\left[ {a \ne b} \right]\left( {n - 2} \right)! $$ The recursion Take a word (with no equal contiguous character) of length $2 \le R$. We can divide it into two subwords of fixed length $R-S$ and $1 \le S < R$. The end character of the first shall be different from the starting character of the second. Then we can part the repetition vector between them in all the ways, such that the sum of the components of the first is $R-S$ and that of the second is $S$. That means that $\bf{r}$ is reparted into $\bf{r-s}$ and $\bf{s}$. Both vectors shall have at least one positive component. 
However, with the conditions established above $N_{\,0} (\mathbf{0},a,b) = 0\quad \left| {\;1 \leqslant n} \right.$ We can therefore write $N_{\,0} (\mathbf{r},a,b)$ as the following sum in $j,k,\bf{s}$ $$ \bbox[lightyellow] { \begin{gathered} N_{\,0} (\mathbf{r},a,b) = \left[ {1 = R} \right]\left[ {a = b = k:r_{\,k} \ne 0} \right] + \hfill \\ + \left[ {2 \leqslant R} \right]\sum\limits_{\left\{ \begin{subarray}{l} 1 \leqslant \,k \ne \,j\, \leqslant \,n \\ \mathbf{0}\, \leqslant \,\mathbf{s}\, \leqslant \,\mathbf{r}\, \\ 0 < \,\sum\limits_l {s_{\,l} } = S\, < \,R \end{subarray} \right.} {N_{\,0} (\mathbf{r} - \mathbf{s},a,k)\;N_{\,0} (\mathbf{s},j,b)} \hfill \\ \end{gathered} \tag{3} }$$ Then, for $S=1$ (and $2 \le R$), in particular, we have $$ \bbox[lightyellow] { \begin{gathered} N_{\,0} (\mathbf{r},a,b) = \sum\limits_{\left\{ \begin{subarray}{l} 1 \leqslant \,k \ne \,j\, \leqslant \,n \\ \mathbf{0} \leqslant \,\mathbf{s}\, \leqslant \,\mathbf{r} \\ 0 < \sum\limits_l {s_{\,l} } = 1\, < \,R \end{subarray} \right.} {N_{\,0} (\mathbf{r} - \mathbf{s},a,k)N_{\,0} (\mathbf{s},j,b)} = \hfill \\ = \sum\limits_{\left\{ \begin{subarray}{l} 1 \leqslant \,k \ne \,j\, \leqslant \,n \\ 1 \leqslant \,l\, \leqslant \,n \\ 1 < \,R \end{subarray} \right.} {N_{\,0} \left( {\left( {r_{\,1} ,\, \cdots ,r_{\,l} - 1,\, \cdots ,\,r_{\,n} } \right),a,k} \right)\;N_{\,0} \left( {\left( {0,\, \cdots ,1_{\,\left( l \right)} ,\, \cdots ,\,0} \right),j,b} \right)} = \hfill \\ = \sum\limits_{\left\{ \begin{subarray}{l} 1 \leqslant \,k \ne \,j\, \leqslant \,n \\ 1 \leqslant \,l\, \leqslant \,n \\ 1 < \,R \end{subarray} \right.} {N_{\,0} (\left( {r_{\,1} ,\, \cdots ,r_{\,l} - 1,\, \cdots ,\,r_{\,n} } \right),a,k)\;\left[ {l = j = b} \right]} = \hfill \\ = \sum\limits_{\left\{ \begin{subarray}{l} 1 \leqslant \,k \ne \,b\, \leqslant \,n \\ 1 < \,R \end{subarray} \right.} {N_{\,0} (\left( {r_{\,1} ,\, \cdots ,r_{\,b} - 1,\, \cdots ,\,r_{\,n} } \right),a,k)} = \hfill \\ = \sum\limits_{1 \leqslant \,k \ne \,b\, \leqslant \,n} {N_{\,0} (\mathbf{r} - \mathbf{u}(b),a,k)} = \hfill \\ = \sum\limits_{1 \leqslant \,k\, \leqslant \,n} {N_{\,0} (\mathbf{r} - \mathbf{u}(b),a,k)} - N_{\,0} (\mathbf{r} - \mathbf{u}(b),a,b) \hfill \\ \end{gathered} \tag{4} }$$ which I checked to be correct against a dozen cases. – G Cab
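As a cross-check of the counts discussed in this thread, the following sketch (illustrative Python, not part of the original posts) verifies the question's test cases both by brute force and with a memoised form of the end-character recursion in Eq. (4); the factorial scaling reflects that the challenge treats repeated letters as distinct:

```python
from functools import lru_cache
from itertools import permutations
from math import factorial

def brute_force(s):
    # treat every letter as distinct (as the challenge does) and keep only
    # arrangements with no equal adjacent characters
    return sum(all(p[i] != p[i + 1] for i in range(len(p) - 1))
               for p in permutations(s))

def by_recursion(s):
    # N0(r, a, b): arrangements of the multiset r that start with character a,
    # end with character b, and have no equal adjacent characters; per Eq. (4),
    # strip the final character b and sum over admissible previous endings k != b
    chars = sorted(set(s))
    n = len(chars)
    r0 = tuple(s.count(c) for c in chars)

    @lru_cache(maxsize=None)
    def N0(r, a, b):
        if r[a] == 0 or r[b] == 0:
            return 0
        if sum(r) == 1:
            return 1 if a == b else 0
        r_minus_b = tuple(x - (i == b) for i, x in enumerate(r))
        return sum(N0(r_minus_b, a, k) for k in range(n) if k != b)

    multiset_count = sum(N0(r0, a, b) for a in range(n) for b in range(n))
    scale = 1
    for c in r0:
        scale *= factorial(c)          # repeated letters counted as distinct
    return multiset_count * scale

for word, expected in [("aab", 2), ("aaa", 0), ("abcdefa", 3600),
                       ("abfdefa", 2640), ("zzzzzzzz", 0)]:
    assert brute_force(word) == by_recursion(word) == expected
```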
CommonCrawl
Post-glacial biogeography of trembling aspen inferred from habitat models and genetic variance in quantitative traits

Chen Ding1, Stefan G. Schreiber1, David R. Roberts1, Andreas Hamann1 & Jean S. Brouard2

Boreal ecology Ecological modelling Palaeoecology

Using species distribution models and information on genetic structure and within-population variance observed in a series of common garden trials, we reconstructed a historical biogeography of trembling aspen in North America. We used an ensemble classifier modelling approach (RandomForest) to reconstruct palaeoclimatic habitat for the periods 21,000, 14,000, 11,000 and 6,000 years before present. Genetic structure and diversity in quantitative traits was evaluated in common garden trials with 43 aspen collections ranging from Minnesota to northern British Columbia. Our main goals were to examine potential recolonisation routes for aspen from southwestern, eastern and Beringian glacial refugia. We further examined if any refugium had stable habitat conditions where aspen clones may have survived multiple glaciations. Our palaeoclimatic habitat reconstructions indicate that aspen may have recolonised boreal Canada and Alaska from refugia in the eastern United States, with separate southwestern refugia for the Rocky Mountain regions. This is further supported by a southeast to northwest gradient of decreasing genetic variance in quantitative traits, a likely result of repeated founder effects. Stable habitat where aspen clones may have survived multiple glaciations was predicted in Mexico and the eastern United States, but not in the west where some of the largest aspen clones have been documented. Trembling aspen (Populus tremuloides Michx.)
is the most frequent and genetically diverse forest tree in North America, occupying many ecological site types from Mexico to Alaska in the west, and across Canada and the United States to the Atlantic ocean in the east1,2,3. Aspen is capable of colonizing newly available habitat, yet differs from typical pioneer species in that it can persist in colonised environments for thousands of years through clonal reproduction. Due to its life history, large range, and wide ecological amplitude, aspen is an interesting organism to address questions concerning ecological genetics, physiology, and biogeography. Aspen can colonise marginal habitat and survive disturbance events by root suckering4. Seed production is commonly observed but seedling establishment is less common than suckering, especially in the semiarid areas of western North America2, 5. In moister habitat of the northern Rocky Mountains and eastern North America, seedling establishment occurs more frequently5, 6. Once an individual is established, it will send out lateral roots from which hundreds of ramets can originate. The clone increases in size as each ramet also contributes distally to the expanding root system from which new stems can be formed4. The largest confirmed aspen clone to date (known as "Pando") covers 43 ha and comprises 47,000 stems with an estimated biomass of 6,000 t5, 7, 8. In the eastern United States, the average clone size has been estimated to be approximately 0.04 ha, with exceptional individuals reaching 14 ha4, 9, 10. In central Canada, average clone sizes were reported around 0.08 ha, with the largest clones reaching 1.5 ha11. Several genetic studies have also shown that trembling aspen shows exceptionally high levels of genetic diversity, but little among-population genetic differentiation in neutral genetic markers, such as isozymes, microsatellites or other molecular markers7, 12,13,14,15,16,17,18,19,20,21. The highest levels of genetic diversity with an expected heterozygosity (He) of 0.42 were reported for Alberta by12, but other studies in the region showed more typical levels of genetic diversity for the species with an He of 0.2922. Electrophoretic surveys of aspen in eastern populations (e.g. Minnesota and Ontario) were lower with He rates of 0.22 and 0.25, respectively15, 16. In a recent range-wide study of genetic structure and diversity based on microsatellite markers, Callahan, et al.20 identified a pronounced geographic differentiation into a genetically more diverse northern cluster (Alaska, Canada, northeastern US) and a slightly less diverse southwestern cluster (western US and Mexico) while showing no evidence of higher genetic diversity in Alberta. However, due to different rates and mechanisms of mutations in isozyme versus microsatellite marker systems the outlined results may not be contradictory14, 23. A different approach to investigate genetic structure and diversity is to assess genetic differences of quantitative traits in common garden experiments. Such data usually does not provide insight into the biogeographic history of a species, because the traits evolve too quickly in response to current environments. In the case of aspen, however, where clones may have persisted for thousands of years, genetic variance in quantitative traits or adaptational lag may provide additional clues regarding the migration history of the species. 
In a reciprocal transplant experiment, Schreiber, et al.24 showed evidence for strong suboptimality in adaptive traits, and suboptimality in quantitative traits could potentially be explained by considering aspen's clonal life history, with populations being adapted to fossil climate conditions25. In fact, it has been speculated that aspen clones may be millions of years old and have survived dozens or hundreds of glacial cycles2, 4, 5. Although precise dating of aspen clones remains an elusive task, recent studies have drawn some boundaries. Ally, et al.26 found that the upper boundary for the age of aspen clones at two study sites in British Columbia is approximately 4,000 and 10,000 years, which corresponds well with the timing of glacial retreat at those two sites. Relating clone size with age, however, proved not possible. Speculations about very large clones that may have persisted through repeated glacial cycles are also not supported by Mock, et al.17, who studied the largest known clone "Pando" and concluded that it has a low frequency of somatic mutations at microsatellite loci and is not likely to be more than several thousand years old. Another valuable approach to address questions concerning biogeography and species migration are species distribution models. These models use observed species range data in combination with environmental predictors (typically climate) to generate statistical relationships, which can be used to project probabilities of species presence from new environmental data27. Although more typically used as a risk assessment tool for future climate change e.g. ref. 28, they are also employed to reconstruct biogeographical histories of species e.g. refs 29,30,31,32,33. In this study, we contribute reconstructions of glacial refugia and post-glacial migration histories for aspen by means of species distribution models. Our primary goal is to use habitat reconstructions to augment inferences of putative glacial refugia that are based on geographic patterns in neutral genetic markers. Callahan et al.20's more diverse northern cluster and a less diverse southwestern cluster suggests two refugia for the species south of the ice sheet, but others have proposed that boreal species may also had refugia in ice-free Beringia, allowing for southward post-glacial recolonisation routes34, 35. Secondly, we investigate whether habitat reconstructions support the possibility of very old clones that may have persisted through multiple glaciations by climate conditions staying within their environmental tolerances. Last, we contribute an analysis of genetic diversity and adaptational lag in quantitative traits based on field trials. Since aspen clones may have persisted for thousands of years in many current locations, their adaptational lag would provide additional insight as to what climate conditions they have experienced in the past and what migration paths would be consistent with observed lags and gradients in genetic diversity. Genetic differentiation and adaptation Because aspen clones may have persisted for thousands of years, any adaptational lag relative to current environments may provide additional clues as to what climate conditions they have experienced in the past and what migration direction would be consistent with the observed lag. To concisely summarize multi-trait measurements at five test sites as well as climate conditions at seed source locations (Fig. 1), we use a principal component analysis (Fig. 2). 
The vectors represent components loadings, which are the correlations of the principal components with the original variables. The strength of the correlation is indicated by the vector length, and the direction indicates which seed sources have high values for the original variables. Climate conditions of seed source locations show a number of distinct groups (Fig. 2a). Minnesota sample site climates are characterised by warm and long summers (MWMT, DD > 5), Saskatchewan sources have the longest and harshest winter conditions (DD < 0, opposite MCMT), the Alberta Foothills sources have the strongest maritime influence with mild winters (MCMT, opposite DD < 0 and TD) and high precipitation (MAT, MSP), whereas the boreal forest locations (cAB, nAB, BC) are characterised by cool summers and short growing seasons, as well as dry growing season conditions (opposite MWMT, MAP, DD > 5). Collection locations and test sites of the aspen provenance trial series that was used to quantify within-population genetic diversity and adaptational lag of aspen populations. The map was created with ArcGIS v9.3 (http://esri.com). Principal components analyses for climate conditions at seed source locations (a) and multi-trait measurements in common garden trials (b). Vector labels represent the input variables. Symbols in (a) represent the geographic location of provenances; symbols in (b) represent the provenance collections. Vector labels in (a): PAS = precipitation as snow (mm), MWP = mean winter precipitation (°C), MCMT = mean coldest month temperature (°C), NFFD = number of frost free days, MAT = mean annual temperature (°C), MSP = mean summer precipitation (mm), MAP = mean annual precipitation (mm), DD > 5 = degree-days above 5 °C (growing degree-days), MWMT = mean warmest month temperature (°C), TD = temperature difference between MCMT and MWMT (or continentality, °C), DD < 0 = degree-days below 0 °C (chilling degree-days); Vector labels in (b): BC = height at British Columbia test site, nAB = height at northern Alberta test site, ABf = height at Alberta Foothills test site, SK = height at Saskatchewan test site, cAB = height at central Alberta test site, Bud break = timing of bud break at central Alberta test site, Leaf senescence = timing of leaf senescence at central Alberta test site. Regarding genetic structure of populations (Fig. 2b), only two groups of samples are clearly differentiated from the other groups based on the measured traits. In this figure, symbols represent provenance collections, and vectors represent height measurements at five test sites (arrow labels BC, nAB, cAB, ABf, SK), plus bud break and leaf senescence measurements at one site (cAB). Provenances from British Columbia (BC) are characterised by poor relative performance at most test sites, particularly under the mild climates of the Alberta Foothills test site (opposite to most height vectors, particularly ABf). BC provenances are also characterised by early bud break. The other group that is clearly separated comprises the Minnesota (MN) provenances, which grow well at most test sites, particularly the central Alberta site (cAB). They are also characterised by late leaf senescence. The remaining groups of samples are not genetically differentiated in the measured traits, although the climate conditions of their origins is quite distinct, particularly for the Saskatchewan (SK) and Alberta Foothills (ABf) source climates (cf. Fig. 2a). 
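As a sketch of how such loadings are typically obtained (illustrative Python with random stand-in data; the study's provenance climate data and results are not reproduced here), the component loadings can be computed as the correlations between each original variable and the principal component scores:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in matrix: 43 provenance collections x 11 climate variables
X = rng.normal(size=(43, 11))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)        # standardise each variable

# principal components via singular value decomposition
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt.T                               # PC scores per provenance

# loadings: correlation of each original variable with the first two components;
# vector length in a biplot corresponds to the strength of this correlation
loadings = np.array([[np.corrcoef(Xs[:, j], scores[:, k])[0, 1] for k in (0, 1)]
                     for j in range(Xs.shape[1])])
```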
Regional within-population genetic variation Residual variance components of growth and adaptive traits by region of origin reveal the Alberta Foothills and Minnesota as the most genetically diverse regions in quantitative traits (Table 1). If we ignore the sub-boreal Foothills location, a trend toward decreasing genetic diversity across aspen's main boreal distribution from southeastern Minnesota to northwestern British Columbia is apparent in all measured traits (cf. Fig. 1). The gradient is most pronounced for height measurements (0.94 in MN to 0.61 in BC), and height measurements also have the highest accuracy of within-population diversity estimates, because they were evaluated at five sites. With respect to timing of bud break, all western Canadian provenances are fairly homogenous only contrasting with the Minnesota provenances with much higher within-population diversity. The Alberta Foothills region has the highest residual variation for the timing of leaf senescence followed by Minnesota. Table 1 Measured adaptive traits and residual variance components summarised by region. Palaeoclimatic habitat reconstructions While an out-of-bag validation indicated excellent model fit to modern plot data with an AUC of 0.91, a model validation against pollen and fossil data yielded an AUC of only 0.67. Although truly independent model validation statistics are always much lower than out-of-bag validations e.g. refs 36 and 37, the model used in this study fits fossil pollen data for aspen poorly. High error rates for fossil data are expected due to model limitations, inaccurate palaeoclimate reconstructions, but also because of the nature of the palaeoecological validation data itself. For example, pollen deposits are restricted to certain landscape features and topographic positions, such as bogs or lakes, where the sources of pollen are different from ecological habitats in the broader surroundings, leading to false positive sediment records. Particularly low AUC values for aspen compared to other western North American tree species were previously observed38, 39. The reason may be that pollen identification for poplar is difficult beyond the genus level and that poplar pollen is also fragile and prone to disintegration, which may also lead to false negatives in sediment records40. Our historical projections of aspen habitat for 6,000, 11,000, 14,000 and 21,000 years before present (BP) show three potential glacial refugia in which aspen may have found suitable habitat during the last glacial maximum (Fig. 3). The predicted 21,000 years BP refugia are found in present-day Alaska, although small and with a low probability of presence, and in the southwestern and eastern United States (Fig. 3a). The maps highlight a potential contact zone located in the prairie provinces of western Canada in which populations from these three refugia may have merged after the retreat of the Wisconsin glaciers at around 11,000 years BP (Fig. 3c). The largest glacial refugium was predicted in the eastern United States, which may have contributed the highest genetic influx during recolonisation of the North American continent. Palaeoclimatic habitat projections for trembling aspen based on the CCM1 general circulation model for (a) 21,000 years before present, (b) 14,000 years before present, (c) 11,000 years before present and (d) 6,000 years before present. The maps were created with ArcGIS v9.3 (http://esri.com). 
Figure 4 shows a higher resolution image of the same projections of aspen habitat for the Fish Lake National Forest in south central Utah, where the largest confirmed aspen clone "Pando" has been documented. The model predicts suitable habitat to emerge at the earliest at 14,000 years BP, and no suitable habitat is predicted in the vicinity of today's location of the clone at the last glacial maximum at 21,000 years BP. At a larger scale, the overlap of suitable aspen habitat between the present and the last glacial maximum was obtained by multiplying probabilities of presence between the model outputs for the 1961–1990 baseline period and for 21,000 years BP (Fig. 5). The analysis reveals only a few locations in which aspen populations had a moderate or high probability of surviving multiple glaciations. These areas are located in eastern United States (southeastern Ohio) and the Sierra Madre mountain range in northeastern Mexico. (a) Topographic map of south-central Utah highlighting the approximate location of the aspen clone "Pando". Palaeoclimatic habitat projections for trembling aspen in south-central Utah (b) present day, (c) 14,000 years before present, (d) 21,000 years before present. The maps were created with ArcGIS v9.3 (http://esri.com). Probabilities of geographic locations in which aspen clones may have persisted through multiple glaciations. Data points were derived by multiplying the probability of presence estimates of the 1961–1990 reference climate with the 21,000 years before present period. The map was created with ArcGIS v9.3 (http://esri.com). Palaeoclimatic habitat reconstructions suggest three potential glacial refugia for trembling aspen from which recolonisation may have occurred. The eastern United States represents the largest refugium with the highest probabilities of presence, followed by the low elevation areas of the southwestern United States, and Alaska. Although the modelled Alaska refugium was very small with a low-probability of presence, the possibility of aspen recolonisation from the north should not be excluded based on habitat reconstructions alone. This leaves three conceivable recolonisation scenarios for aspen: (Scenario 1) recolonisation of the boreal north almost exclusively from the southeast to northwest up into Alaska; (Scenario 2) recolonisation predominantly from the east but with contributions from either the southwestern or Alaskan refugia, and (Scenario 3) simultaneous recolonisation from all three glacial refugia with a contact zone in Alberta, Canada, potentially explaining high levels of genetic diversity documented by one study for this region. Southwestern coastal and interior refugia are well documented for many western North American species e.g. reviewed by41, 42, and many interior plant species show genetic clusters that can be attributed to a further sub-structure of southwestern refugia. For example, Godbout et al.43 propose two well separated refugia in the Columbia River basin and the eastern Rocky Mountains to explain genetic structure within the interior variety of Pinus contorta. Northeastern refugia, just south of the ice sheet have also been well documented for several boreal tree species reviewed by42, implying either exclusive northwestern recolonisation paths (e.g. Pinus banksiana), or recolonisation from both the southwest and east (e.g. Picea mariana). 
In addition, there is evidence from both genetic data and fossil pollen records that several boreal species may also have found refuge in ice-free Beringia, allowing for southward post-glacial recolonisation routes34, 35. For aspen, two main genetic clusters in today's populations have been identified: a northern group comprised of the Alaskan, Canadian and eastern United States populations, and a second group of southern Rocky Mountain populations presumably originating from a separate southwestern refugium20. Recolonisation from southwestern and eastern refugia Our habitat reconstructions conform well to the genetic clusters described by Callahan, et al.20. In fact, they also confirm that an observed outlier in the southern cluster (a Yellowstone population), which according to the microsatellite data grouped into the northern cluster could have plausibly been recolonised from the east. Although westward extending habitat from eastern refugia at 14,000 years BP did not reach all the way to the Yellowstone region, it does extend well into Montana (Fig. 3b). It seems therefore likely that eastern population stretched all the way to the Rocky Mountain foothills at some point in time between 11,000 and 14,000 years BP, providing a complete southern front along the entire length of the Laurentian ice sheet. From here, northward recolonisation of boreal Canada and Alaska could have proceeded with little opportunity for genetic contributions from southwestern refugia. This provides an explanation for the large northern genetic cluster described by Callahan et al.20. The hypothesis of northern recolonisation after the retreat of the ice sheet from the east (Scenario 1) is further supported by our finding of a southeast to northwest gradient of decreasing genetic variance in quantitative traits (Table 1). Such a gradient would be expected because of repeated founder effects during post-glacial migration westwards and northwards44. Patterns of adaptational lag in quantitative traits also fit well with this migration history. Aspen provenances from northeastern British Columbia are the least well-adapted populations in terms of growth, survival, phenology and frost hardiness compared to populations from Alberta and Minnesota24. Decreasing genetic diversity in combination with aspen's clonal life history slow the process of adaptation to new environmental conditions, with current populations essentially being adapted to fossil climates25. Model reconstructions for 21,000, 14,000, 11,000 and 6,000 years BP (Fig. 3) further suggest that suitable climate habitat for aspen was consistently available since the last glacial maximum for aspen populations in the southwestern Rocky Mountains. These areas were not covered by continuous ice sheets and the complex landscape would have supported ample refugia for the southwestern genetic cluster of aspen populations identified by Callahan et al.20. Only minimal migration, primarily along elevational gradients, would have been required to maintain suitable habitat conditions for aspen populations in montane areas of Wyoming, Utah, Colorado, Arizona and New Mexico (Fig. 3). Alternate recolonisation patterns We should note that a southwards expansion from an isolated and genetically depauperate refugial population in Alaska could also explain the observed high degree of suboptimality in the British Columbia populations. However, data reported by ref. 
20 does not support a strong influence of genetic material from Beringian refugia, even if such refugia existed for aspen as indicated by pollen data34. Our habitat reconstructions are ambivalent in this respect, with low probabilities of presence indicated for very restricted areas in Alaska at the last glacial maximum. The southward recolonisation hypothesis (Scenario 2) therefore seems unlikely based on molecular genetic information and habitat reconstructions. Consequently, while a Beringian refugium for aspen should not be excluded, it does not appear to be the origin of today's boreal aspen populations. The post-glacial migration scenario (Scenario 3), with populations from three refugia making contact in Alberta, could potentially explain the relatively high levels of genetic diversity observed in one study in this region12. While we did find high levels of genetic diversity in quantitative traits in the Alberta Foothills region (Table 1), alternative explanations have previously been proposed for this observation17, 22. Under the more favourable environmental conditions in the foothills, sexual reproduction and successful seedling establishment are more common6, and become a driver for generating and maintaining genetic diversity through recombination.

Stable habitat and ancient clones
The apparent continuity of suitable habitat conditions for aspen populations in montane areas of Wyoming, Utah, Colorado, Arizona and New Mexico (Fig. 3) raises the possibility that habitat conditions within the climatic tolerances of aspen were available at a single location, potentially supporting ancient clones that survived one or more glacial cycles. The largest and putatively oldest aspen clone, known as "Pando", occupies 43 ha in the Fish Lake National Forest in south central Utah. The age of this clone has been subject to speculation that it could be several million years old, having survived multiple glaciations2, 4, 5. On the other hand, molecular studies suggest that Pando may in fact be of relatively young age17. Our climate habitat reconstructions support the view that ancient aspen clones are unlikely to be found in the southwestern Rocky Mountains. While suitable climate habitat for aspen was available in the general area at all times since the last glaciation, our model hindcasts suggest that the difference between today's climate conditions and those of the last glacial maximum was too large to stay within the climatic tolerances of aspen at any single location (and without any migration response along elevational gradients). This is illustrated in Fig. 4 at a small scale for the area of the "Pando" clone, and in Fig. 5, which shows the lack of overlapping habitat between 21,000 years BP and the current reference climate. Our model predicts only a few patches of stable habitat in northern Mexico and the eastern United States, where no exceptionally large clones have been documented. It should be noted that species distribution models are generally not considered reliable enough to reconstruct species distributions at small scales, for various reasons that are discussed in depth elsewhere (e.g.45, 46). In our reconstructions shown in Fig. 4, false positive habitat predictions would primarily be caused by not incorporating other important habitat parameters, such as soils. False negative projections would primarily be caused by the inability to model microclimate conditions that allow aspen to persist in complex terrain, for both current and past climates.
Both false positives and negatives would also be caused by the coarse scale of general circulation models, which prevents the reconstruction of changes to small-scale weather patterns that determine local climate conditions. Our inferences, however, do not rely on precise spatial reconstructions of past aspen distributions. Rather, Figs 4 and 5 should be more generally interpreted to imply that the magnitude of climate change between the last glacial maximum and current conditions exceeds the full climate envelope of the species' realised niche. In the case of aspen, a pioneer species often found in marginal environments, the realised niche is likely a good proxy for the species' climatic tolerances. Further, climatic tolerances of individual populations within the species range and of individual clones within populations will be substantially narrower than for the species as a whole. Therefore, stable habitat conditions that fall within the environmental tolerances of individual clones and allow them to persist at the same location through full glacial cycles appear unlikely for the southwestern Rocky Mountains.

Climate data were generated according to Hamann et al.47, available for anonymous download at http://tinyurl.com/ClimateNA. We use a 1961–1990 climate normal baseline dataset generated with the Parameter-elevation Regressions on Independent Slopes Model (PRISM) for monthly average minimum temperature, monthly average maximum temperature and monthly precipitation48. From 36 monthly variables, six biologically relevant climate variables were derived that account for most of the variance in climate data while avoiding multicollinearity: the number of growing degree days above 5 °C, mean maximum temperature of the warmest month, temperature difference between mean January and mean July temperatures, mean annual precipitation, April to September growing season precipitation, and November to February winter precipitation. The procedure of selecting these climate variables is described in more detail in Worrall et al.49, Supplement 1. The algorithms to estimate biologically relevant variables from monthly temperature and precipitation surfaces are explained in detail by Rehfeldt50. To represent palaeoclimatic conditions, we overlaid the 1961–1990 baseline climate with temperature and precipitation anomalies for 6,000, 11,000, 14,000 and 21,000 years before present, generated by the Community Climate Model (CCM1) developed by the National Center for Atmospheric Research (NCAR)51. Subsequently, the same derived variables were generated as above.

Species distribution modelling
Past aspen habitat was reconstructed using a species distribution model for aspen based on more than 600,000 presence/absence data points from forest inventory plots, ecology plots and herbarium accessions throughout North America49. This model employs a regression tree ensemble classifier to relate climate variables to aspen census data, implemented by the randomForest package52 for the R programming environment53. Model hindcasts were validated, using the area under the curve (AUC) of the receiver operating characteristic54, against 9,568 records of combined fossil pollen, macrofossil, and pack rat midden (Neotoma species) data, drawn from the Neotoma Palaeoecology Database (www.neotomadb.com) for the time periods considered. Further details on fossil data sources and validation methods can be found in Roberts & Hamann38.
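The model itself was built with the randomForest package in R, as noted above. A minimal sketch of the same workflow (train a random-forest classifier on presence/absence plots with climate predictors, hindcast, and validate with AUC) is given below in Python; the file names and column names are assumptions for illustration, not the authors' actual data structure.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Assumed layout: one row per plot, six derived climate variables and a 0/1
# aspen presence column (names below are hypothetical).
plots = pd.read_csv("aspen_plots.csv")
climate_vars = ["dd5", "tmax_warm", "continentality", "map", "gsp", "winter_ppt"]

model = RandomForestClassifier(n_estimators=500, random_state=1)
model.fit(plots[climate_vars], plots["present"])

# Hindcast: apply the fitted model to a palaeoclimate surface (e.g. 21,000 BP)
# and validate the predicted probabilities against fossil presence records.
palaeo = pd.read_csv("climate_21000BP_with_fossils.csv")
p_lgm = model.predict_proba(palaeo[climate_vars])[:, 1]
print("AUC vs. fossil record:", roc_auc_score(palaeo["fossil_presence"], p_lgm))
```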
To identify areas where aspen clones may have found continuously suitable habitat throughout glacial cycles, we multiplied projected aspen probability of presence layers for the 1961–1990 baseline period with projected aspen probability of presence for 21,000 years BP.

Common garden experiments
In this paper, we also reanalyse data from a large-scale common garden trial in a different context. We use a standard statistical design widely used in provenance testing, implementing a randomised complete block (RCB) design with 43 provenances planted in 5-tree row plots, in 6 blocks, at each of 5 sites. Provenances are open-pollinated single-tree seed collections from six ecological regions in the northern portion of the species range (Fig. 1); for further details refer to ref. 24. The measured traits were tree height, timing of bud break and timing of leaf senescence. Tree height was measured for 6,450 trees after nine growing seasons in the field in autumn of 2006 for all five test sites. Phenological measurements, i.e. timing of bud break and timing of leaf senescence, were taken on 1,290 trees at the central Alberta test site. To visualise multi-trait genetic differentiation of the 43 seed sources, as well as the multivariate differences in the climate conditions of the seed source locations, we use principal components analysis implemented with the FactoMineR package55 for the R programming environment53. For the genetic ordination, traits summarised into principal components were height at five sites plus bud break and leaf abscission measured at one site (7 variables). For the climatic ordination, nine variables were used to describe the multivariate climate space more completely (Fig. 2).

Within-population variance
The common garden trial was primarily meant as a provenance experiment to investigate genetic differentiation in adaptive traits among populations. However, it can also be used to estimate regional within-population phenotypic variation by calculating residual variance components. Since all provenances experience the same environmental conditions at a given test site, differences in the residual phenotypic variance components can be attributed to different levels of genetic variance (including dominance and epistatic genetic effects that we cannot quantify). We therefore refer to differences in the residual variance components as differences in within-population genetic variation hereafter. Strictly speaking, they are differences in within-population phenotypic variation with the environmental variance component held constant (although we cannot quantify its absolute value). To estimate variance components, we use a random-term linear model implemented with PROC MIXED of the SAS statistical software package56:

$$Y_{ijkl} = \mu + P_i + S_j + (P \times S)_{ij} + B(S)_{(j)k} + (P \times B(S))_{(i(j)k)} + e_{l(ijk)}$$

where $Y_{ijkl}$ is the phenotypic observation of a trait made for the l-th tree of a row plot, belonging to the i-th provenance (P) grown in the j-th test site (S), in the k-th block (B) within a test site. A genotype × environment effect is given by the interaction between provenance and test site (P × S) as well as provenance and block within test site (P × B(S)). The overall mean is indicated by μ, and $e_{l(ijk)}$ represents the residual environmental error plus the within-family variation in each plot. The model was run separately for each region and each trait for a total of 18 model implementations.
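The paper fits this as a random-term model in SAS PROC MIXED. As a rough, simplified analogue (not the authors' implementation), the regional residual variance can also be inspected by removing provenance, site, block-within-site and provenance-by-site effects with a fixed-effects fit and taking the residual mean square; the data frame and column names below are assumed.

```python
import pandas as pd
import statsmodels.formula.api as smf

trial = pd.read_csv("aspen_common_garden.csv")   # hypothetical columns:
                                                 # region, prov, site, block, height

within_pop_var = {}
for region, sub in trial.groupby("region"):
    # Fixed-effects analogue of the random-term model above; the provenance x
    # block-within-site term is omitted here and folded into the residual.
    fit = smf.ols("height ~ C(prov) * C(site) + C(site):C(block)", data=sub).fit()
    within_pop_var[region] = fit.mse_resid       # proxy for the residual variance component

print(within_pop_var)
```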
Bud break and leaf senescence were only measured at one test site (central Alberta), and in this case the test site effect does not apply and the block within site effect becomes a simple block effect. Standard errors of variance components were generated with the COVTEST option of PROC MIXED56.

References
1. Little, E. L. Atlas of United States trees. Volume 1. Conifers and important hardwoods. Miscellaneous Publication No. 1146. (USDA, Forest Service, Washington, DC, 1971).
2. Mitton, J. B. & Grant, M. C. Genetic variation and the natural history of quaking aspen. Bioscience 46, 25–31 (1996).
3. Peterson, E. B. & Peterson, N. M. Ecology, management, and use of aspen and balsam poplar in the prairie provinces, Canada. Special Report 1. (Northern Forestry Centre, Edmonton, AB, 1992).
4. Barnes, B. V. The clonal growth habit of American aspens. Ecology 47, 439–447 (1966).
5. Kemperman, J. A. & Barnes, B. V. Clone size in American aspens. Can. J. Bot. 54, 2603–2607 (1976).
6. Landhäusser, S. M., Deshaies, D. & Lieffers, V. J. Disturbance facilitates rapid range expansion of aspen into higher elevations of the Rocky Mountains under a warming climate. J. Biogeogr. 37, 68–76 (2010).
7. DeWoody, J., Rowe, C. A., Hipkins, V. D. & Mock, K. E. "Pando" lives: molecular genetic evidence of a giant aspen clone in Central Utah. West. N. Am. Naturalist 68, 493–497 (2008).
8. Grant, M. C., Mitton, J. B. & Linhart, Y. B. Even larger organisms. Nature 360, 216–216 (1992).
9. Barnes, B. V. Natural variation and delineation of clones of Populus tremuloides and P. grandidentata in northern Lower Michigan. Silvae Genet. 18, 130–142 (1969).
10. Zahner, R. & Crawford, N. A. The clonal concept in aspen site relations in Forest soil relationships in North America (ed. Youngberg, C. T.) 229–243. (Oregon State University Press, Corvallis, OR, 1965).
11. Steneker, G. The size of trembling aspen (Populus tremuloides Michx.) clones in Manitoba. Can. J. Forest Res. 3, 472–478 (1973).
12. Cheliak, W. M. & Dancik, B. P. Genic diversity of natural populations of a clone-forming tree Populus tremuloides. Can. J. Genet. Cytol. 24, 611–616 (1982).
13. Cole, C. T. Allelic and population variation of microsatellite loci in aspen (Populus tremuloides). New Phytol. 167, 155–164 (2005).
14. De Woody, J., Rickman, T. H., Jones, B. E. & Hipkins, V. D. Allozyme and microsatellite data reveal small clone size and high genetic diversity in aspen in the southern Cascade Mountains. Forest Ecol. Manag. 258, 687–696 (2009).
15. Hyun, J. O., Rajora, O. P. & Zsuffa, L. Genetic variation in trembling aspen in Ontario based on isozyme studies. Can. J. Forest Res. 17 (1987).
16. Lund, S. T., Furnier, G. R. & Mohn, C. A. Isozyme variation in quaking aspen in Minnesota. Can. J. Forest Res. 22, 521–524 (1992).
17. Mock, K. E., Rowe, C. A., Hooten, M. B., Dewoody, J. & Hipkins, V. D. Clonal dynamics in western North American aspen (Populus tremuloides). Mol. Ecol. 17, 4827–4844 (2008).
18. Namroud, M.-C., Park, A., Tremblay, F. & Bergeron, Y. Clonal and spatial genetic structures of aspen (Populus tremuloides Michx.). Mol. Ecol. 14, 2969–2980 (2005).
19. Wyman, J., Bruneau, A. & Tremblay, M. F. Microsatellite analysis of genetic diversity in four populations of Populus tremuloides in Quebec. Can. J. Bot. 81, 360–367 (2003).
20. Callahan, C. M. et al. Continental-scale assessment of genetic diversity and population structure in quaking aspen (Populus tremuloides). J. Biogeogr. 40, 1780–1791 (2013).
21. Yeh, F. C., Chong, K. X. & Yang, R. C. RAPD variation within and among natural populations of trembling aspen (Populus tremuloides) from Alberta. J. Hered. 86, 454–460 (1995).
22. Jelinski, D. E. & Cheliak, W. M. Genetic diversity and spatial subdivision of Populus tremuloides (Salicaceae) in a heterogeneous landscape. Am. J. Bot. 79 (1992).
23. Arnaud-Haond, S., Duarte, C. M., Alberto, F. & Serrão, E. A. Standardizing methods to address clonality in population studies. Mol. Ecol. 16, 5115–5139 (2007).
24. Schreiber, S. G. et al. Frost hardiness vs. growth performance in trembling aspen: an experimental test of assisted migration. J. Appl. Ecol. 50, 939–949 (2013).
25. Brouard, J. S. Aspen, adaptation, and climate change. Is Alberta aspen adapted to a fossil climate? in Proceedings Joint Convention of the Society of American Foresters and Canadian Institute of Forestry, 3–5 October 2004 (Edmonton, AB, 2004).
26. Ally, D., Ritland, K. & Otto, S. Can clone size serve as a proxy for clone age? An exploration using microsatellite divergence in Populus tremuloides. Mol. Ecol. 17, 4897–4911 (2008).
27. Elith, J. & Leathwick, J. R. Species distribution models: ecological explanation and prediction across space and time. Annu. Rev. Ecol. Evol. S. 40, 677–697 (2009).
28. Thomas, C. D. et al. Extinction risk from climate change. Nature 427, 145–148 (2004).
29. Rissler, L. J. & Apodaca, J. J. Adding more ecology into species delimitation: ecological niche models and phylogeography help define cryptic species in the black salamander (Aneides flavipunctatus). Syst. Biol. 56, 924–942 (2007).
30. Gugger, P. F., González-Rodríguez, A., Rodríguez-Correa, H., Sugita, S. & Cavender-Bares, J. Southward Pleistocene migration of Douglas-fir into Mexico: phylogeography, ecological niche modeling, and conservation of 'rear edge' populations. New Phytol. 189 (2011).
31. Gugger, P. F., Ikegami, M. & Sork, V. L. Influence of late Quaternary climate change on present patterns of genetic variation in valley oak, Quercus lobata Née. Mol. Ecol. 22 (2013).
32. Roberts, D. R. & Hamann, A. Climate refugia and migration requirements in complex landscapes. Ecography 39, 1238–1246 (2016).
33. Maguire, K. C., Nieto-Lugilde, D., Fitzpatrick, M. C., Williams, J. W. & Blois, J. L. Modeling species and community responses to past, present, and future episodes of climatic and ecological change. Annu. Rev. Ecol. Evol. S. 46, 343–368 (2015).
34. Brubaker, L. B., Anderson, P. M., Edwards, M. E. & Lozhkin, A. V. Beringia as a glacial refugium for boreal trees and shrubs: new perspectives from mapped pollen data. J. Biogeogr. 32, 833–848 (2005).
35. Anderson, L. L., Hu, F. S., Nelson, D. M., Petit, R. J. & Paige, K. N. Ice-age endurance: DNA evidence of a white spruce refugium in Alaska. P. Natl. Acad. Sci. USA 103 (2006).
36. Eskildsen, A. et al. Testing species distribution models across space and time: high latitude butterflies and recent warming. Global Ecol. Biogeogr. 22, 1293–1303 (2013).
37. Heikkinen, R. K., Marmion, M. & Luoto, M. Does the interpolation accuracy of species distribution models come at the expense of transferability? Ecography 35, 276–288 (2012).
38. Roberts, D. R. & Hamann, A. Method selection for species distribution modelling: are temporally or spatially independent evaluations necessary? Ecography 35, 792–802 (2012).
39. Roberts, D. R. & Hamann, A. Glacial refugia and modern genetic diversity of 22 western North American tree species. P. R. Soc. B-Biol. Sci. 282 (2015).
40. Davis, M. B. Redeposition of pollen grains in lake sediment. Limnol. Oceanogr. 18, 44–52 (1973).
41. Shafer, A. B. A., Cullingham, C. I., Cote, S. D. & Coltman, D. W. Of glaciers and refugia: a decade of study sheds new light on the phylogeography of northwestern North America. Mol. Ecol. 19, 4589–4621 (2010).
42. Jaramillo-Correa, J. P., Beaulieu, J., Khasa, D. P. & Bousquet, J. Inferring the past from the present phylogeographic structure of North American forest trees: seeing the forest for the genes. Can. J. Forest Res. 39, 286–307 (2009).
43. Godbout, J., Fazekas, A., Newton, C. H., Yeh, F. C. & Bousquet, J. Glacial vicariance in the Pacific Northwest: evidence from a lodgepole pine mitochondrial DNA minisatellite for multiple genetically distinct and widely separated refugia. Mol. Ecol. 17, 2463–2475 (2008).
44. Davis, M. B. & Shaw, R. G. Range shifts and adaptive responses to Quaternary climate change. Science 292, 673–679 (2001).
45. Hampe, A. Bioclimate envelope models: what they detect and what they hide. Global Ecol. Biogeogr. 13, 469–471 (2004).
46. Austin, M. Species distribution models and ecological theory: a critical assessment and some possible new approaches. Ecol. Model. 200, 1–19 (2007).
47. Hamann, A., Wang, T. L., Spittlehouse, D. L. & Murdock, T. Q. A comprehensive, high-resolution database of historical and projected climate surfaces for western North America. B. Am. Meteorol. Soc. 94, 1307–1309 (2013).
48. Daly, C. et al. Physiographically sensitive mapping of climatological temperature and precipitation across the conterminous United States. Int. J. Climatol. 28, 2031–2064 (2008).
49. Worrall, J. J. et al. Recent declines of Populus tremuloides in North America linked to climate. Forest Ecol. Manag. 299, 35–51 (2013).
50. Rehfeldt, G. E. A spline model of climate for the western United States. General Technical Report RMRS-GTR-165 (USDA, Forest Service, Rocky Mountain Research Station, Fort Collins, CO, 2006).
51. Kutzbach, J. et al. Climate and biome simulations for the past 21,000 years. Quaternary Sci. Rev. 17, 473–506 (1998).
52. Liaw, A. & Wiener, M. Classification and regression by randomForest. R News 2, 18–22 (2002).
53. R Development Core Team. R: A language and environment for statistical computing. (R Foundation for Statistical Computing, Vienna, Austria, 2013).
54. Fawcett, T. An introduction to ROC analysis. Pattern Recogn. Lett. 27, 861–874 (2006).
55. Husson, F., Josse, J., Le, S. & Mazet, J. FactoMineR: Multivariate Exploratory Data Analysis and Data Mining with R v1.16. http://cran.r-project.org/package=FactoMineR (2013).
56. SAS Institute. SAS/STAT 9.2 User's Guide (SAS Institute Inc., Cary, NC, 2008).

Funding was provided by the NSERC Discovery Grant RGPIN-330527-07 and the NSERC/Industry Collaborative Research and Development Grant CRDPJ 349100-06. We thank Alberta-Pacific Forest Industries Inc., Ainsworth Engineered Canada LP (now: Norbord Inc.), Daishowa-Marubeni International Ltd., the Western Boreal Aspen Corporation, and Weyerhaeuser Company, Ltd. for their financial and in-kind support. The aspen species distribution model was developed by G. R. Rehfeldt and used with permission.

University of Alberta, Department of Renewable Resources, 751 General Services Building, Edmonton, AB, T6G 2H1, Canada: Chen Ding, Stefan G. Schreiber, David R. Roberts & Andreas Hamann
Isabella Point Forestry Ltd., 331 Roland Road, Salt Spring Island, BC, V8K 1V1, Canada: Jean S. Brouard

Author contributions: A.H. and J.S.B. conceived and designed the study, S.G.S. and D.R.R. performed the palaeoecological modelling, C.D. performed the genetic analysis and led the writing.
Correspondence to Andreas Hamann. Ding, C., Schreiber, S.G., Roberts, D.R. et al. Post-glacial biogeography of trembling aspen inferred from habitat models and genetic variance in quantitative traits. Sci Rep 7, 4672 (2017). https://doi.org/10.1038/s41598-017-04871-7
inradius and circumradius of equilateral triangle formula

A circumscribed circle or circumcircle of a triangle is a circle which passes through all the vertices of the triangle. The center of this circle is called the circumcenter and its radius is called the circumradius. Since every triangle is cyclic, every triangle has a circumscribed circle, or a circumcircle. More generally, the circumradius of a cyclic polygon is a radius of the circle inside which the polygon can be inscribed; for a triangle, it is the measure of the radius of the circle that circumscribes the triangle. The incircle or inscribed circle of a triangle is the largest circle contained in the triangle; it touches (is tangent to) the three sides. The center of the incircle is called the triangle's incenter, and its radius is named the inradius. The incenter is the intersection of the three angle bisectors. The distance between the incenter and circumcenter of a triangle can be expressed in terms of the inradius and circumradius (Euler's theorem).

The product of the inradius and semiperimeter (half the perimeter) of a triangle is its area: $A = r \times s$, where r is the inradius and s is the semiperimeter. To prove this, note that the lines joining the angles to the incentre divide the triangle into three smaller triangles, with bases a, b and c respectively and each with height r. Equivalently, the inradius (usually denoted by r) is given by $r = K/s$, where K is the area of the triangle and $s = (a+b+c)/2$ is the semiperimeter (a, b and c being the sides). Heron's formula gives the area of a triangle when the lengths of all three sides are known: the area is $\sqrt{s(s-a)(s-b)(s-c)}$. Combining this with $r = K/s$ yields the inradius of the incircle of a triangle with sides a, b, c as $r = \sqrt{\frac{(s-a)(s-b)(s-c)}{s}}$, where $s = (a+b+c)/2$. The area can also be written in terms of the circumradius as $A = \frac{abc}{4R}$; equivalently, the circumradius of any triangle is $R = \frac{abc}{4A}$. Here R is the circumradius, r the inradius, a, b and c the respective sides of the triangle, and s the semiperimeter. The radius of the incircle of a right triangle with legs a and b and hypotenuse c is $r = \frac{a+b-c}{2}$. The hypotenuse of a right triangle is the diameter of its circumcircle, and the circumcenter is its midpoint, so the circumradius is equal to half of the hypotenuse of the right triangle. By the triangle inequality, the longest side length of a triangle is less than the semiperimeter. If you know just one side and its opposite angle, the circumradius can also be found; see https://artofproblemsolving.com/wiki/index.php?title=Circumradius&oldid=128765. There is also a relation between circumradius and inradius for other polygons, but the relation depends on the type of polygon; if the polygon is a square, for example, the relation is different than for the triangle. The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral. [14]: p.198

An equilateral triangle is a triangle in which all three sides are equal. In traditional or Euclidean geometry, equilateral triangles are also equiangular; that is, all three internal angles are also congruent to each other and are each 60°. In an equilateral triangle all three sides are of the same length; let the length of each side be 'a' units. Then:
- The semiperimeter is $s = \frac{3a}{2}$.
- The circumradius is $R = \frac{abc}{4A} = \frac{a}{\sqrt{3}}$ (written with the side denoted s, $R = \frac{s}{\sqrt{3}}$).
- The inradius is $r = \frac{A}{s} = \frac{a}{2\sqrt{3}}$ (equivalently, the inradius of an equilateral triangle with side s is $\frac{s\sqrt{3}}{6}$).
- Letting one of the ex-radii be $r_1$, we have $r_1 = \frac{A}{s-a} = \frac{\sqrt{3}\,a}{2}$.
- Therefore the ratio of the in-radius, circum-radius and one of the ex-radii of an equilateral triangle is $\frac{a}{2\sqrt{3}} : \frac{a}{\sqrt{3}} : \frac{\sqrt{3}\,a}{2}$, or 1 : 2 : 3. In particular, in an equilateral triangle the inradius and circumradius are connected by R = 2r; the ratio of circumradius (R) to inradius (r) is 2 : 1. By Euler's inequality, the equilateral triangle has the smallest ratio R/r of the circumradius to the inradius of any triangle: specifically, R/r = 2.

Examples and related questions:
- Q: In an equilateral triangle of side 2√3, the circum-radius is (1/3) × √3 × 2√3 = 2.
- If the inradius is r = 7 cm, then R = 2r = 2 × 7 = 14 cm, and the circumference of the circumcircle is 2πR = 2 × 22/7 × 14 = 88 cm.
- For an equilateral triangle with sides a = b = c = 2, by Heron's formula the area is √[s(s−a)(s−b)(s−c)] with s = (a+b+c)/2 = 3, so the area is √3.
- We are given an equilateral triangle of side 8 cm; find its area.
- Circumradius of an equilateral triangle = side of triangle/√3; for a side of 12 this is 12/√3.
- Find the length of one side of an equilateral triangle inscribed in a circle of radius 10√3.
- Calculate the distance of a side of the triangle from the centre of the circle.
- "I need to find the inradius of a triangle with side lengths of $20$, $26$, and $24$. I know the semiperimeter is $35$, but how do I find the area without knowing the height?" Letting the sides of the triangle be $a$, $b$, and $c$, the area follows from Heron's formula without the height, and the inradius is then A/s. A general formula relating the side lengths to the radius can be found, but it is a little tricky: express the sides via the Pythagorean theorem and substitute into the radius formulas, or use the angle-based formulas.
- Or: if the sides of the triangles are 10 cm, 8 …
- If, in a triangle, R and r are the circumradius and inradius respectively and r1, r2 and r3 are in H.P., then the H.M. of the exradii of the triangle is?
- Let ABC be an acute triangle and A'B'C' be its orthic triangle (the triangle formed by the endpoints of the altitudes of ABC).
- The octahedron is one of the five Platonic solids (the other ones are the tetrahedron, cube, dodecahedron and icosahedron); it has 8 faces, 12 edges and 6 vertices, and its circumradius is the radius of the circumscribed sphere.
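A short numerical check of the relationships above, covering both the equilateral-triangle radii (and their 1 : 2 : 3 ratio) and the Heron-based inradius asked about for the 20, 26, 24 triangle:

```python
from math import sqrt

# Equilateral triangle of side a: inradius, circumradius, and one exradius.
a = 2 * sqrt(3)
r = a / (2 * sqrt(3))        # inradius      r  = a / (2*sqrt(3))
R = a / sqrt(3)              # circumradius  R  = a / sqrt(3)
r1 = sqrt(3) * a / 2         # exradius      r1 = sqrt(3) * a / 2
print(r, R, r1)              # 1.0 2.0 3.0  ->  ratio 1 : 2 : 3, and R = 2r

# General triangle with sides 20, 26, 24: Heron's formula gives the area
# without the height, and the inradius is then A / s.
sides = (20, 26, 24)
s = sum(sides) / 2           # semiperimeter = 35
A = sqrt(s * (s - sides[0]) * (s - sides[1]) * (s - sides[2]))
print(s, A, A / s)           # semiperimeter, area, inradius
```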
Image: Illustration of a microburst. The air moves in a downward motion until it hits ground level. It then spreads outward in all directions. The wind regime in a microburst is opposite to that of a tornado.
Image: Tree damage from a downburst.
Image: Downburst seen from the ARMOR Doppler Weather Radar in Huntsville, Alabama in 2012. Note the winds in green going towards the radar, and the winds in red going away from the radar.

Downbursts are created by an area of significantly rain-cooled air that, after reaching ground level, spreads out in all directions producing strong winds. Dry downbursts are associated with thunderstorms with very little rain, while wet downbursts are created by thunderstorms with high amounts of rainfall. Microbursts and macrobursts are downbursts at very small and larger scales, respectively. Another variety, the heat burst, is created by vertical currents on the backside of old outflow boundaries and squall lines where rainfall is lacking. Heat bursts generate significantly higher temperatures due to the lack of rain-cooled air in their formation. Downbursts create vertical wind shear or microbursts, which are dangerous to aviation, especially during landing, due to the wind shear caused by the gust front. Several fatal and historic crashes have been attributed to the phenomenon over the past several decades, and flight crew training goes to great lengths on how to properly recognize and recover from a microburst/wind shear event. Downbursts usually last for seconds to minutes, and go through three stages in their cycle: the downburst, outburst, and cushion stages. [1]

Image: Downburst damage in a straight line. (Source: NOAA)

A downburst is created by a column of sinking air that, after hitting ground level, spreads out in all directions and is capable of producing damaging straight-line winds of over 240 km/h (150 mph), often producing damage similar to, but distinguishable from, that caused by tornadoes. This is because the physical properties of a downburst are completely different from those of a tornado. Downburst damage will radiate from a central point as the descending column spreads out when hitting the surface, whereas tornado damage tends towards convergent damage consistent with rotating winds. To differentiate between tornado damage and damage from a downburst, the term straight-line winds is applied to damage from microbursts. Downbursts are particularly strong downdrafts from thunderstorms. Downbursts in air that is precipitation free or contains virga are known as dry downbursts; [2] those accompanied with precipitation are known as wet downbursts. Most downbursts are less than 4 km (2.5 mi) in extent: these are called microbursts. [3] Downbursts larger than 4 km (2.5 mi) in extent are sometimes called macrobursts. [3] Downbursts can occur over large areas.
In the extreme case, a derecho can cover a huge area more than 320 km (200 mi) wide and over 1,600 km (1,000 mi) long, lasting up to 12 hours or more, and is associated with some of the most intense straight-line winds, [4] but the generative process is somewhat different from that of most downbursts. The term microburst was defined by mesoscale meteorology expert Ted Fujita as affecting an area 4 km (2.5 mi) in diameter or less, distinguishing microbursts as a type of downburst, apart from common wind shear which can encompass greater areas. [5] Fujita also coined the term macroburst for downbursts larger than 4 km (2.5 mi). [6] A distinction can be made between a wet microburst, which consists of precipitation, and a dry microburst, which typically consists of virga. [2] They generally are formed by precipitation-cooled air rushing to the surface, but they perhaps also could be powered by strong winds aloft being deflected toward the surface by dynamical processes in a thunderstorm (see rear flank downdraft).

Dry microbursts
Image: Dry microburst schematic.
When rain falls below the cloud base or is mixed with dry air, it begins to evaporate and this evaporation process cools the air. The cool air descends and accelerates as it approaches the ground. When the cool air approaches the ground, it spreads out in all directions. High winds that spread out in this type of pattern, showing little or no curvature, are known as straight-line winds. [7] Dry microbursts, produced by high-based thunderstorms that generate little to no surface rainfall, occur in environments characterized by an inverted-V thermal and moisture profile, as viewed on a Skew-T log-P thermodynamic diagram. Wakimoto (1985) developed a conceptual model (over the High Plains of the United States) of a dry microburst environment that comprised three important variables: mid-level moisture, a deep and dry adiabatic lapse rate in the sub-cloud layer, and low surface relative humidity.

Wet microbursts
Wet microbursts are downbursts accompanied by significant precipitation at the surface. [8] These downbursts rely more on the drag of precipitation for downward acceleration of parcels, as well as the negative buoyancy which tends to drive "dry" microbursts. As a result, higher mixing ratios are necessary for these downbursts to form (hence the name "wet" microbursts). Melting of ice, particularly hail, appears to play an important role in downburst formation (Wakimoto and Bringi, 1988), especially in the lowest 1 km (0.62 mi) above ground level (Proctor, 1989). These factors, among others, make forecasting wet microbursts difficult.

Dry microburst versus wet microburst:
- Location of highest probability within the United States: Midwest / West (dry); Southeast (wet)
- Precipitation: little or none (dry); moderate or heavy (wet)
- Cloud bases: as high as 500 mb (hPa) (dry); as high as 850 mb (hPa) (wet)
- Features below cloud base: virga (dry); precipitation shaft (wet)
- Primary catalyst: evaporative cooling (dry); precipitation loading and evaporative cooling (wet)
- Environment below cloud base: deep dry layer / low relative humidity / dry adiabatic lapse rate (dry); shallow dry layer / high relative humidity / moist adiabatic lapse rate (wet)

Straight-line winds
See also: Derecho
Straight-line winds (also known as plough winds, thundergusts and hurricanes of the prairie) are very strong winds that can produce damage, demonstrating a lack of the rotational damage pattern associated with tornadoes. [9] Straight-line winds are common with the gust front of a thunderstorm or originate with a downburst from a thunderstorm.
These events can cause considerable damage, even in the absence of a tornado. The winds can gust to 210 km/h (130 mph) [10] and winds of 93 km/h (58 mph) or more can last for more than twenty minutes. [11] In the United States, such straight-line wind events are most common during the spring, when instability is highest and weather fronts routinely cross the country. Straight-line wind events in the form of derechos can take place throughout the eastern half of the U.S. [12] Straight-line winds may be damaging to marine interests. Small ships, cutters and sailboats are at risk from this meteorological phenomenon.

The formation of a downburst starts with hail or large raindrops falling through drier air. Hailstones melt and raindrops evaporate, pulling latent heat from surrounding air and cooling it considerably. Cooler air has a higher density than the warmer air around it, so it sinks to the ground. As the cold air hits the ground it spreads out, and a mesoscale front can be observed as a gust front. Areas under and immediately adjacent to the downburst are the areas which receive the highest winds and rainfall, if any is present. Also, because the rain-cooled air is descending from the middle troposphere, a significant drop in temperatures is noticed. Due to interaction with the ground, the downburst quickly loses strength as it fans out and forms the distinctive "curl shape" that is commonly seen at the periphery of the microburst (see image). Downbursts usually last only a few minutes and then dissipate, except in the case of squall lines and derecho events. However, despite their short lifespan, microbursts are a serious hazard to aviation and property and can result in substantial damage to the area.

Heat bursts
Main article: Heat burst
A special, and much rarer, kind of downburst is a heat burst, which results from precipitation-evaporated air compressionally heating as it descends from very high altitude, usually on the backside of a dying squall line or outflow boundary. [13] Heat bursts are chiefly a nocturnal occurrence, can produce winds over 160 km/h (100 mph), are characterized by exceptionally dry air, can suddenly raise the surface temperature to 38 °C (100 °F) or more, and sometimes persist for several hours.

Development stages of microbursts
The evolution of microbursts is broken down into three stages: the contact stage, the outburst stage, and the cushion stage. A downburst initially develops as the downdraft begins its descent from the cloud base. The downdraft accelerates, and within minutes reaches the ground (contact stage). During the outburst stage, the wind "curls" as the cold air of the downburst moves away from the point of impact with the ground. During the cushion stage, winds about the curl continue to accelerate, while the winds at the surface slow due to friction. [14]
Physical processes of dry and wet microbursts
Basic physical processes using simplified buoyancy equations
Start by using the vertical momentum equation:

$$\frac{dw}{dt} = -\frac{1}{\rho}\frac{\partial p}{\partial z} - g$$

By decomposing the variables into a basic state and a perturbation, defining the basic states, and using the ideal gas law ($p = \rho R T_v$), the equation can be written in the form

$$B \equiv -\frac{\rho'}{\bar{\rho}}\,g = g\,\frac{T_v' - \bar{T}_v}{\bar{T}_v}$$

where B is buoyancy. The virtual temperature correction usually is rather small and, to a good approximation, it can be ignored when computing buoyancy. Finally, the effects of precipitation loading on the vertical motion are parametrized by including a term that decreases buoyancy as the liquid water mixing ratio ($\ell$) increases, leading to the final form of the parcel's momentum equation:

$$\frac{dw'}{dt} = \frac{1}{\bar{\rho}}\frac{\partial p'}{\partial z} + B - g\ell$$

The first term is the effect of perturbation pressure gradients on vertical motion. In some storms this term has a large effect on updrafts (Rotunno and Klemp, 1982), but there is not much reason to believe it has much of an impact on downdrafts (at least to a first approximation) and therefore it will be ignored. The second term is the effect of buoyancy on vertical motion. Clearly, in the case of microbursts, one expects to find that B is negative, meaning the parcel is cooler than its environment. This cooling typically takes place as a result of phase changes (evaporation, melting, and sublimation). Precipitation particles that are small, but are in great quantity, promote a maximum contribution to cooling and, hence, to creation of negative buoyancy. The major contribution to this process is from evaporation. The last term is the effect of water loading. Whereas evaporation is promoted by large numbers of small droplets, it only requires a few large drops to contribute substantially to the downward acceleration of air parcels. This term is associated with storms having high precipitation rates. Comparing the effects of water loading to those associated with buoyancy, if a parcel has a liquid water mixing ratio of 1.0 g kg−1, this is roughly equivalent to about 0.3 K of negative buoyancy; the latter is a large (but not extreme) value. Therefore, in general terms, negative buoyancy is typically the major contributor to downdrafts. [15]

Negative vertical motion associated only with buoyancy
Using pure "parcel theory" results in a prediction of the maximum downdraft of

$$-w_{\max} = \sqrt{2 \times \mathrm{NAPE}}$$

where NAPE is the negative available potential energy,

$$\mathrm{NAPE} = -\int_{\mathrm{SFC}}^{\mathrm{LFS}} B\, dz$$

and where LFS denotes the level of free sink for a descending parcel and SFC denotes the surface. This means that the maximum downward motion is associated with the integrated negative buoyancy. Even a relatively modest negative buoyancy can result in a substantial downdraft if it is maintained over a relatively large depth. A downward speed of 25 m/s (56 mph; 90 km/h) results from the relatively modest NAPE value of 312.5 m² s⁻². To a first approximation, the maximum gust is roughly equal to the maximum downdraft speed. [15]
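A minimal numerical illustration of the parcel-theory estimate above, reproducing the worked value of 25 m/s for NAPE = 312.5 m² s⁻²; the buoyancy profile used for the integral is purely illustrative:

```python
import numpy as np

def max_downdraft(nape):
    """Parcel-theory estimate: w_max = sqrt(2 * NAPE), in m/s for NAPE in m^2/s^2."""
    return np.sqrt(2.0 * nape)

print(max_downdraft(312.5))          # 25.0 m/s, as in the text

# NAPE as the integral of negative buoyancy from the level of free sink (LFS)
# down to the surface. Illustrative profile: a constant buoyancy of -0.1 m/s^2
# maintained over a 3,125 m deep layer gives NAPE = 312.5 m^2/s^2.
z = np.linspace(0.0, 3125.0, 200)    # height above ground (m)
B = np.full_like(z, -0.1)            # negative buoyancy (m/s^2)
nape = -np.trapz(B, z)               # NAPE = -integral(B dz)
print(max_downdraft(nape))           # ~25 m/s again
```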
Danger to aviation
Further information: Downburst, Wind shear, and Cumulonimbus and aviation
Image: A series of photographs of the surface curl soon after a microburst impacted the surface.
Downbursts, particularly microbursts, are exceedingly dangerous to aircraft which are taking off or landing due to the strong vertical wind shear caused by these events. A number of fatal crashes have been attributed to downbursts. [16] The following are some fatal crashes and/or aircraft incidents that have been attributed to microbursts in the vicinity of airports:
- 1956 Kano Airport BOAC Argonaut crash, BOAC Canadair C-4 Argonaut (G-ALHE), Kano Airport – 24 June 1956.
- Malév Flight 731, Ilyushin Il-18 (HA-MOC), Copenhagen Airport – 28 August 1971.
- Eastern Air Lines Flight 66, Boeing 727-225 (N8845E), John F. Kennedy International Airport – 24 June 1975. [16]
- Pan Am Flight 759, Boeing 727-235 (N4737), New Orleans International Airport – 9 July 1982. [16]
- Delta Air Lines Flight 191, Lockheed L-1011 TriStar (N726DA), Dallas/Fort Worth International Airport – 2 August 1985. [16]
- Martinair Flight 495, McDonnell Douglas DC-10 (PH-MBN), Faro Airport – 21 December 1992. [17]
- USAir Flight 1016, Douglas DC-9 (N954VJ), Charlotte/Douglas International Airport – 2 July 1994.
- Goodyear Blimp GZ-20A (N1A, "Stars and Stripes"), Coral Springs, Florida – 16 June 2005.
- Bhoja Air Flight 213, Boeing 737-200 (AP-BKC), Islamabad International Airport, Islamabad, Pakistan – 20 April 2012.
A microburst often causes aircraft to crash when they are attempting to land (the above-mentioned BOAC and Pan Am flights are notable exceptions). The microburst is an extremely powerful gust of air that, once hitting the ground, spreads in all directions. As the aircraft is coming in to land, the pilots try to slow the plane to an appropriate speed. When the microburst hits, the pilots will see a large spike in their airspeed, caused by the force of the headwind created by the microburst. A pilot inexperienced with microbursts would try to decrease the speed. The plane would then travel through the microburst, and fly into the tailwind, causing a sudden decrease in the amount of air flowing across the wings. The decrease in airflow over the wings of the aircraft causes a drop in the amount of lift produced. This decrease in lift, combined with a strong downward flow of air, can cause the thrust required to remain at altitude to exceed what is available, thus causing the aircraft to stall. [16] If the plane is at a low altitude shortly after takeoff or during landing, it will not have sufficient altitude to recover. The strongest microburst recorded thus far occurred at Andrews Field, Maryland on 1 August 1983, with wind speeds reaching 240.5 km/h (149.5 mi/h). [18]

Danger to buildings
- On 9 June 2019, a wet microburst in Dallas, Texas killed one and injured several when a crane collapsed on an apartment building.
Image: Strong microburst winds flip a several-ton shipping container up the side of a hill, Vaughan, Ontario, Canada.
- On 15 May 2018, an extremely powerful front moved through the northeastern United States, specifically New York and Connecticut, causing significant damage. Nearly a half million people lost power and 5 people were killed. Winds were recorded in excess of 100 mph and several tornadoes and macrobursts were confirmed by the NWS.
- On 3 April 2018, a wet microburst struck William P. Hobby Airport, Texas at 11:53 PM, causing an aircraft hangar to partially collapse. Six business jets (four stored in the hangar and two outside) were damaged.
A severe thunderstorm warning was issued just seconds before the microburst struck.
- On August 9, 2016, a wet microburst struck the city of Cleveland Heights, Ohio, an eastern suburb of Cleveland. [19] [20] The storm developed very quickly. Thunderstorms developed west of Cleveland at 9 PM, and the National Weather Service issued a severe thunderstorm warning at 9:55 PM. The storm had passed over Cuyahoga County by 10:20 PM. [21] Lightning struck 10 times per minute over Cleveland Heights, [21] and 80 miles per hour (130 km/h) winds knocked down hundreds of trees and utility poles. [20] [22] More than 45,000 people lost power, with damage so severe that nearly 6,000 homes remained without power two days later. [22]
- On July 22, 2016, a wet microburst hit portions of Kent and Providence Counties in Rhode Island, causing wind damage in the cities of Cranston, Rhode Island and West Warwick, Rhode Island. Numerous fallen trees were reported, as well as downed powerlines and minimal property damage. Thousands of people were without power for several days, in some cases more than 4 days. The storm occurred late at night, and no injuries were reported.
- On June 23, 2015, a macroburst hit portions of Gloucester and Camden Counties in New Jersey, causing widespread damage mostly due to falling trees. Electrical utilities were affected for several days, causing protracted traffic signal disruption and closed businesses.
- On August 23, 2014, a dry microburst hit Mesa, Arizona. It ripped the roof off of half a building and a shed, nearly damaging the surrounding buildings. No serious injuries were reported.
- On December 21, 2013, a wet microburst hit Brunswick, Ohio. The roof was ripped off of a local business; the debris damaged several houses and cars near the business. Due to the time, between 1 am and 2 am, there were no injuries.
- On July 9, 2012, a wet microburst hit an area of Spotsylvania County, Virginia near the border of the city of Fredericksburg, causing severe damage to two buildings. One of the buildings was a children's cheerleading center. Two serious injuries were reported.
- On July 1, 2012, a wet microburst hit DuPage County, Illinois, a county 15 to 30 mi (24 to 48 km) west of Chicago. The microburst left 250,000 Commonwealth Edison users without power. Many homes did not recover power for one week. Several roads were closed due to 200 reported fallen trees. [23]
- On June 22, 2012, a wet microburst hit the town of Bladensburg, Maryland, causing severe damage to trees, apartment buildings, and local roads. The storm caused an outage in which 40,000 customers lost power.
- On September 8, 2011, at 5:01 PM, a dry microburst hit Nellis Air Force Base, Nevada, causing several aircraft shelters to collapse. Multiple aircraft were damaged and eight people were injured. [24]
- On September 22, 2010, in the Hegewisch neighborhood of Chicago, a wet microburst hit, causing severe localized damage and localized power outages, including fallen-tree impacts into at least four homes. No fatalities were reported. [25]
- On September 16, 2010, just after 5:30 PM, a wet macroburst with winds of 125 mph (201 km/h) hit parts of Central Queens in New York City, causing extensive damage to trees, buildings, and vehicles in an area 8 miles long and 5 miles wide. Approximately 3,000 trees were knocked down by some reports. There was one fatality when a tree fell onto a car on the Grand Central Parkway. [26] [27]
- On June 24, 2010, shortly after 4:30 PM, a wet microburst hit the city of Charlottesville, Virginia. Field reports and damage assessments show that Charlottesville experienced numerous downbursts during the storm, with wind estimates at over 75 mph (121 km/h). In a matter of minutes, trees and downed power lines littered the roadways. A number of houses were hit by trees. Immediately after the storm, up to 60,000 Dominion Power customers in Charlottesville and surrounding Albemarle County were without power. [28]
- On June 11, 2010, around 3:00 AM, a wet microburst hit a neighborhood in southwestern Sioux Falls, South Dakota. It caused major damage to four homes, all of which were occupied. No injuries were reported. Roofs were blown off of garages and walls were flattened by the estimated 100 mph (160 km/h) winds. The cost of repairs was thought to be $500,000 or more. [29]
- On May 2, 2009, the lightweight steel and mesh building in Irving, Texas used for practice by the Dallas Cowboys football team was flattened by a microburst, according to the National Weather Service. [30]
- On March 12, 2006, a microburst hit Lawrence, Kansas. 60 percent of the University of Kansas campus buildings sustained some form of damage from the storm. Preliminary estimates put the cost of repairs at between $6 million and $7 million. [31]
- On May 13, 1989, a microburst with winds over 95 mph hit Fort Hood, Texas. Over 200 U.S. Army helicopters were damaged. The storm damaged at least 20 percent of the fort's buildings, forcing 25 military families from their quarters. In a preliminary damage estimate, the Army said repairs to almost 200 helicopters would cost $585 million and repairs to buildings and other facilities about $15 million. https://www.nytimes.com/1989/05/20/us/storm-wrecks-new-copters.html

See also: Bow echo, Convective storm detection, Line echo wave pattern, List of derecho events, List of microbursts, Low level windshear alert system (LLWAS), Mesovortex, Planetary boundary layer (PBL), Vertical draft, Windthrow.
Although he is best known for creating the Fujita scale of tornado intensity and damage, he also discovered downbursts and microbursts, and was an instrumental figure in advancing the modern understanding of many severe weather phenomena and how they affect people and communities, especially through his work exploring the relationship between wind speed and damage.

The heat wave of 1995 derecho series was a series of derechos that occurred from July 11 through July 15, 1995, in the U.S. and Ontario, Canada. It is among the least known, but still notable, weather events of the 20th century.

The Independence Day Derecho of 1977 was a derecho, or long-lived windstorm associated with a fast-moving band of thunderstorms, that occurred in the northern Great Plains of the U.S. on July 4, 1977. It lasted around 15½ hours. The derecho formed in Minnesota around 10 a.m. CDT on July 4 and became more intense around noon in the central part of the state. The derecho produced winds of 80–100 mph (130–160 km/h) in northern Wisconsin, felling thousands of trees in the northern part of the state.

The Late-May 1998 tornado outbreak and derecho was a historic tornado outbreak and derecho that began on the afternoon of May 30 and extended throughout May 31, 1998, across a large portion of the northern half of the United States and southern Ontario, from southeastern Montana east and southeastward to the Atlantic Ocean. The initial tornado outbreak, including the devastating Spencer tornado, hit southeast South Dakota on the evening of May 30. The Spencer tornado was the most destructive and the second-deadliest tornado in South Dakota history. Thirteen people were killed: 7 by tornadoes and 6 by the derecho. Over two million people lost electrical power, some for up to 10 days.

A gustnado is a short-lived, shallow surface-based vortex which forms within the downburst emanating from a thunderstorm. The name is a portmanteau by elision of "gust front tornado", as gustnadoes form due to non-tornadic straight-line wind features in the downdraft (outflow), specifically within the gust front of strong thunderstorms. Gustnadoes tend to be noticed when the vortices loft sufficient debris or form a condensation cloud that makes them visible, although, as with tornadoes, it is the wind that makes the gustnado. As these eddies very rarely connect from the surface to the cloud base, they are very rarely considered tornadoes. The gustnado has little in common with tornadoes structurally or dynamically in regard to vertical development, intensity, longevity, or formative process, as classic tornadoes are associated with mesocyclones within the inflow (updraft) of the storm, not the outflow.

An updraft is a small-scale current of rising air, often within a cloud.

The Canton, Illinois Tornadoes of 1975 were a destructive summer tornado event which occurred as part of a significant severe thunderstorm outbreak concentrated from eastern Iowa across northern and central Illinois on the afternoon and evening of July 23, 1975.

The rear flank downdraft or RFD is a region of dry air wrapping around the back of a mesocyclone in a supercell thunderstorm. These areas of descending air are thought to be essential in the production of many supercellular tornadoes. Large hail within the rear flank downdraft often shows up brightly as a hook on weather radar images, producing the characteristic hook echo, which often indicates the presence of a tornado.
Severe weather refers to any dangerous meteorological phenomena with the potential to cause damage, serious social disruption, or loss of human life. Types of severe weather phenomena vary, depending on the latitude, altitude, topography, and atmospheric conditions. High winds, hail, excessive precipitation, and wildfires are forms and effects of severe weather, as are thunderstorms, downbursts, tornadoes, waterspouts, tropical cyclones, and extratropical cyclones. Regional and seasonal severe weather phenomena include blizzards (snowstorms), ice storms, and duststorms.

Vertically integrated liquid (VIL) is an estimate of the total mass of precipitation in the clouds. The measurement is obtained by observing the reflectivity of the air, which is obtained with weather radar.

References:
1. "What is a Microburst?". National Weather Service. n.d. Retrieved 10 March 2018.
2. Fernando Caracena, Ronald L. Holle, and Charles A. Doswell III. Microbursts: A Handbook for Visual Identification. Retrieved 9 July 2008.
3. Glossary of Meteorology. Macroburst. Retrieved 30 July 2008.
4. Peter S. Parke and Norvan J. Larson. Boundary Waters Windstorm. Retrieved 30 July 2008.
5. Glossary of Meteorology. Microburst. Archived 12 December 2008 at the Wayback Machine. Retrieved 30 July 2008.
6. Glossary of Meteorology. Macroburst. Retrieved 30 July 2008.
7. Glossary of Meteorology. Straight-line wind. Archived 15 April 2008 at the Wayback Machine. Retrieved 1 August 2008.
8. Fujita, T.T. (1985). "The Downburst, microburst and macroburst". SMRP Research Paper 210, 122 pp.
9. Glossary of Meteorology. Straight-line wind. Archived 15 April 2008 at the Wayback Machine. Retrieved 1 August 2008.
10. http://www.spc.noaa.gov/misc/AbtDerechos/derechofacts.htm#strength
11. http://www.spc.noaa.gov/misc/AbtDerechos/casepages/jun291998page.htm
12. http://www.spc.noaa.gov/misc/AbtDerechos/derechofacts.htm#climatology
13. "Oklahoma "heat burst" sends temperatures soaring". USA Today. 8 July 1999. Archived from the original on 25 December 1996. Retrieved 9 May 2007.
14. University of Illinois – Urbana Champaign. Microbursts. Retrieved 4 August 2008.
15. Charles A. Doswell III. Extreme Convective Windstorms: Current Understanding and Research. Retrieved 4 August 2008.
16. NASA Langley Air Force Base. Making the Skies Safer From Windshear. Archived 29 March 2010 at the Wayback Machine. Retrieved 22 October 2006.
17. Aviation Safety Network. Damage Report. Retrieved 1 August 2008.
18. Glenday, Craig (2013). Guinness Book of World Records 2014. The Jim Pattinson Group. p. 20. ISBN 978-1-908843-15-9.
19. Roberts, Samantha (10 August 2016). "What happened in Cleveland Heights Tuesday night?". KLTV. Retrieved 15 August 2016.
20. Steer, Jen; Wright, Matt (10 August 2016). "Damage in Cleveland Heights caused by microburst". Fox8.com. Retrieved 15 August 2016.
21. Reardon, Kelly (10 August 2016). "Wind gusts reached 58 mph, lightning struck 10 times a minute in Tuesday's storms". The Plain Dealer. Retrieved 15 August 2016.
22. Higgs, Robert (11 August 2016). "About 4,000 customers, mostly in Cleveland Heights, still without power from Tuesday's storms". The Plain Dealer. Retrieved 15 August 2016.
23. Evbouma, Andrei (12 July 2012). "Storm Knocks Out Power to 206,000 in Chicago Area". Chicago Sun-Times.
24. Gorman, Tom. "8 injured at Nellis AFB when aircraft shelters collapse in windstorm – Thursday, Sept. 8, 2011 | 9 p.m." Las Vegas Sun. Retrieved 30 November 2011.
25. "Microbursts reported in Hegewisch, Wheeling". Chicago Breaking News. 22 September 2010. Retrieved 30 November 2011.
26. "New York News, Local Video, Traffic, Weather, NY City Schools and Photos – Homepage – NY Daily News". Daily News. New York.
27. "Power Restored to Tornado Slammed Residents: Officials". NBC New York. 20 September 2010. Retrieved 30 November 2011.
28. "Archived copy". Archived from the original on 3 September 2012. Retrieved 26 June 2010; and http://www.nbc29.com/Global/story.asp?S=12705577
29. Brian Kushida (11 June 2010). "Strong Winds Rip Through SF Neighborhood – News for Sioux Falls, South Dakota, Minnesota and Iowa". Keloland.com. Archived from the original on 27 September 2011. Retrieved 30 November 2011.
30. Gasper, Christopher L. (6 May 2009). "Their view on matter: Patriots checking practice facility". The Boston Globe. Retrieved 12 May 2009.
31. "One year after microburst, recovery progresses". KU.edu. Retrieved 21 July 2009.

Further reading:
Fujita, T.T. (1981). "Tornadoes and Downbursts in the Context of Generalized Planetary Scales". Journal of the Atmospheric Sciences, 38 (8).
Wilson, James W. and Roger M. Wakimoto (2001). "The Discovery of the Downburst – TT Fujita's Contribution". Bulletin of the American Meteorological Society, 82 (1).
National Weather Service. "Downbursts". National Weather Service Forecast Office, Columbia, SC. 5 May 2010. Retrieved 4 December 2010. http://www.erh.noaa.gov/cae/svrwx/downburst.htm

External links: University of Illinois WW2010 Project; NWS JetStream Project Online Weather School; Downburst event ~ Denton County, Texas; Downburst event ~ Northern Wisconsin, 4 July 1977; Dry downburst event ~ North Carolina statewide, 7 March 2004; The Semi-official Microburst Handbook Homepage (NOAA); Taming the Microburst Windshear (NASA); Microbursts (University of Wyoming); Forecasting Microbursts & Downbursts (Forecast Systems Laboratory).
A novel phytocannabinoid isolated from Cannabis sativa L. with an in vivo cannabimimetic activity higher than Δ9-tetrahydrocannabinol: Δ9-Tetrahydrocannabiphorol

Cinzia Citti, Pasquale Linciano, Fabiana Russo, Livio Luongo, Monica Iannotta, Sabatino Maione, Aldo Laganà, Anna Laura Capriotti, Flavio Forni, Maria Angela Vandelli, Giuseppe Gigli & Giuseppe Cannazza (ORCID: orcid.org/0000-0002-7347-7315)

(-)-Trans-Δ9-tetrahydrocannabinol (Δ9-THC) is the main compound responsible for the intoxicant activity of Cannabis sativa L. The length of the side alkyl chain influences the biological activity of this cannabinoid. In particular, synthetic analogues of Δ9-THC with a longer side chain have shown cannabimimetic properties far higher than Δ9-THC itself. In the attempt to define the phytocannabinoid profile that characterizes a medicinal cannabis variety, a new phytocannabinoid with the same structure as Δ9-THC but with a seven-term alkyl side chain was identified. The natural compound was isolated and fully characterized, and its stereochemical configuration was assigned by match with the same compound obtained by a stereoselective synthesis. This new phytocannabinoid has been called (-)-trans-Δ9-tetrahydrocannabiphorol (Δ9-THCP). Along with Δ9-THCP, the corresponding cannabidiol (CBD) homolog with a seven-term side alkyl chain (CBDP) was also isolated and unambiguously identified by match with its synthetic counterpart. The binding activity of Δ9-THCP against the human CB1 receptor in vitro (Ki = 1.2 nM) was similar to that of CP55940 (Ki = 0.9 nM), a potent full CB1 agonist. In the cannabinoid tetrad pharmacological test, Δ9-THCP induced hypomotility, analgesia, catalepsy and decreased rectal temperature, indicating a THC-like cannabimimetic activity. The presence of this new phytocannabinoid could account for the pharmacological properties of some cannabis varieties that are difficult to explain by the presence of Δ9-THC alone.
Cannabis sativa has always been a controversial plant, as it can be considered a lifesaver for several pathologies including glaucoma1 and epilepsy2, an invaluable source of nutrients3, and an environmentally friendly raw material for manufacturing4 and textiles5, but it is also the most widely spread illicit drug in the world, especially among young adults6. Its peculiarity is its ability to produce a class of organic molecules called phytocannabinoids, which derive from an enzymatic reaction between a resorcinol and an isoprenoid group. The modularity of these two parts is the key to the extreme variability of the resulting products, which has led to almost 150 different known phytocannabinoids7. The precursors of the most commonly occurring natural phytocannabinoids are olivetolic acid and geranyl pyrophosphate, which take part in a condensation reaction leading to the formation of cannabigerolic acid (CBGA). CBGA can then be converted into either tetrahydrocannabinolic acid (THCA), cannabidiolic acid (CBDA) or cannabichromenic acid (CBCA) by the action of a specific cyclase enzyme7. All phytocannabinoids are biosynthesized in the carboxylated form, which can be converted into the corresponding decarboxylated (or neutral) form by heat8. The best known neutral cannabinoids are undoubtedly Δ9-tetrahydrocannabinol (Δ9-THC) and cannabidiol (CBD), the former being responsible for the intoxicant properties of the cannabis plant, and the latter being active as an antioxidant, anti-inflammatory and anti-convulsant agent, but also as an antagonist of the negative effects of THC9. All these cannabinoids are characterized by the presence of an alkyl side chain on the resorcinyl moiety made of five carbon atoms. However, other phytocannabinoids with a different number of carbon atoms on the side chain are known: they have been called varinoids (with three carbon atoms), such as cannabidivarin (CBDV) and Δ9-tetrahydrocannabivarin (Δ9-THCV), and orcinoids (with one carbon atom), such as cannabidiorcol (CBD-C1) and tetrahydrocannabiorcol (THC-C1)7. Both series are biosynthesized in the plant, as the specific ketide synthases have been identified10. Our research group has recently reported the presence of a butyl phytocannabinoid series with a four-term alkyl chain, in particular cannabidibutol (CBDB) and Δ9-tetrahydrocannabutol (Δ9-THCB), in CBD samples derived from hemp and in a medicinal cannabis variety11,12. Since no evidence has been provided for the presence of plant enzymes responsible for the biosynthesis of these butyl phytocannabinoids, it has been suggested that they might derive from microbial ω-oxidation and decarboxylation of their corresponding five-term homologs13. The length of the alkyl side chain has indeed proved to be the key parameter, the pharmacophore, for the biological activity exerted by Δ9-THC on the human cannabinoid receptor CB1, as evidenced by the structure-activity relationship (SAR) studies collected by Bow and Rimoldi14. In particular, a minimum of three carbons is necessary to bind the receptor; the highest activity has been registered with an eight-carbon side chain, and activity then decreases with a higher number of carbon atoms14. Δ8-THC homologs with more than five carbon atoms on the side chain have been synthetically produced and tested, yielding molecules several times more potent than Δ9-THC15,16. To the best of our knowledge, a phytocannabinoid with a linear alkyl side chain containing more than five carbon atoms has never been reported as naturally occurring.
However, our research group disclosed for the first time the presence of seven-term homologs of CBD and Δ9-THC in a medicinal cannabis variety, the Italian FM2, provided by the Military Chemical Pharmaceutical Institute in Florence. The two new phytocannabinoids were isolated and fully characterized, and their absolute configuration was confirmed by a stereoselective synthesis. According to the International Non-proprietary Name (INN) convention, we suggested the names "cannabidiphorol" (CBDP) and "tetrahydrocannabiphorol" (THCP) for these CBD and THC analogues, respectively. The suffix "-phorol" comes from "sphaerophorol", the common name for 5-heptylbenzene-1,3-diol, which constitutes the resorcinyl moiety of these two new phytocannabinoids. A number of clinical trials17,18,19 and a growing body of literature provide evidence of the pharmacological potential of cannabis and cannabinoids in a wide range of disorders, from sleep and anxiety to multiple sclerosis, autism and neuropathic pain20,21,22,23. In particular, being the most potent psychotropic cannabinoid, Δ9-THC is the main focus of such studies. In light of the above and of the results of the SAR studies14,15,16, we expected THCP to be endowed with an even higher binding affinity for the CB1 receptor and a greater cannabimimetic activity than THC itself. In order to investigate these pharmacological aspects of THCP, its binding affinity for the CB1 receptor was tested by an in vitro radioligand assay and its cannabimimetic activity was assessed by the tetrad behavioural tests in mice.

Identification of cannabidiphorol (CBDP) and Δ9-tetrahydrocannabiphorol (Δ9-THCP) by liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS)

The FM2 ethanolic extract was analyzed by an analytical method recently developed for the cannabinoid profiling of this medicinal cannabis variety12,24. As the native extract contains mainly the carboxylated forms of the phytocannabinoids as a consequence of the cold extraction25, part of the plant material was heated to achieve decarboxylation, so that the predominant forms became the neutral phytocannabinoids. The advanced analytical platform of ultra-high performance liquid chromatography coupled to high-resolution Orbitrap mass spectrometry was employed to analyze the FM2 extracts and study the fragmentation spectra of the analytes under investigation. The precursor ions of the neutral derivatives cannabidiphorol (CBDP) and Δ9-tetrahydrocannabiphorol (Δ9-THCP), at m/z 341.2486 for [M-H]− and m/z 343.2632 for [M + H]+, showed elution times of 19.4 min for CBDP and 21.3 min for Δ9-THCP (Fig. 1a). Their identification was confirmed by the injection of a mixture (5 ng/mL) of the two chemically synthesized CBDP and Δ9-THCP (Fig. 1b), as will be described later. As with their carboxylated counterparts, the precursor ions of the neutral forms CBDP and Δ9-THCP fragment in the same way in ESI+ mode, but they show different fragmentation patterns in ESI− mode. Whilst Δ9-THCP shows only the precursor ion [M-H]− (Fig. 1d), the CBDP molecule generates fragments at m/z 273.1858, corresponding to a retro Diels-Alder reaction, and m/z 207.1381, corresponding to the resorcinyl moiety after cleavage of the bond with the terpenoid group (Fig. 1c). It is noteworthy that for both molecules, CBDP and Δ9-THCP, each fragment in both ionization modes differs exactly by an ethylene unit, (CH2)2, from the corresponding fragment of the five-term homologs CBD and THC.
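The exact-mass bookkeeping behind these assignments can be reproduced in a few lines. The following Python sketch is purely illustrative (it is not the software used in this work): it computes the monoisotopic masses of the heptyl homologs (C23H34O2) and of the pentyl homologs (C21H30O2) from standard isotope masses, together with the [M + H]+ and [M-H]− ions and the (CH2)2 spacing discussed above.

# Illustrative sketch: monoisotopic mass check for the heptyl vs pentyl homologs.
MASS = {"C": 12.0, "H": 1.00782503, "O": 15.99491462}   # monoisotopic atomic masses (Da)
ELECTRON = 0.00054858                                    # electron mass (Da)

def mono_mass(counts):
    # counts: element -> number of atoms, e.g. {"C": 23, "H": 34, "O": 2}
    return sum(MASS[el] * n for el, n in counts.items())

heptyl = mono_mass({"C": 23, "H": 34, "O": 2})   # CBDP / Δ9-THCP, ~342.2559 Da
pentyl = mono_mass({"C": 21, "H": 30, "O": 2})   # CBD / Δ9-THC,   ~314.2246 Da

mz_pos = heptyl + MASS["H"] - ELECTRON   # [M + H]+ -> ~343.2632
mz_neg = heptyl - MASS["H"] + ELECTRON   # [M - H]- -> ~341.2486
spacing = heptyl - pentyl                # ~28.0313 Da, one (CH2)2 unit
print(round(mz_pos, 4), round(mz_neg, 4), round(spacing, 4))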
Moreover, the longer elution times corroborate the assignment as seven-term phytocannabinoids, given their higher lipophilicity.

Figure 1. UHPLC-HRMS identification of (-)-trans-CBDP and (-)-trans-Δ9-THCP. Extracted ion chromatograms (EIC) of CBDP and Δ9-THCP from a standard mixture at 25 and 10 ng/mL respectively (a) and from the native (red plot) and decarboxylated (black plot) FM2 (b). (c,d) Comparison of the high-resolution fragmentation spectra of synthetic and natural CBDP and Δ9-THCP in both positive (ESI+) and negative (ESI−) mode.

Isolation and characterization of natural CBDP and Δ9-THCP

In order to selectively obtain a cannabinoid-rich fraction of FM2, n-hexane was used to extract the raw material instead of ethanol, which carries other contaminants such as flavonoids and chlorophylls along with the cannabinoids26. An additional dewaxing step at −20 °C for 48 h, with removal of the precipitated wax, was necessary to obtain a pure cannabinoid extract. Semi-preparative liquid chromatography with a C18 stationary phase allowed the separation of 80 fractions, which were analyzed by LC-HRMS with the previously described method. In this way, the fractions containing predominantly cannabidiphorolic acid (CBDPA) and tetrahydrocannabiphorolic acid (THCPA) were separately subjected to heating at 120 °C for 2 h in order to obtain their corresponding neutral counterparts CBDP and Δ9-THCP as clear oils with >95% purity. The material obtained was sufficient for a full characterization by 1H and 13C NMR, circular dichroism (CD) and UV absorption.

Stereoselective synthesis of CBDP and Δ9-THCP

(-)-trans-Cannabidiphorol ((-)-trans-CBDP) and (-)-trans-Δ9-tetrahydrocannabiphorol ((-)-trans-Δ9-THCP) were stereoselectively synthesized as previously reported for the synthesis of the (-)-trans-CBDB and (-)-trans-Δ9-THCB homologs11,12,24. Accordingly, (-)-trans-CBDP was prepared by condensation of 5-heptylbenzene-1,3-diol with (1S,4R)-1-methyl-4-(prop-1-en-2-yl)cyclohex-2-enol, using pTSA as catalyst, for 90 min. Longer reaction times did not improve the yield of (-)-trans-CBDP because cyclization of (-)-trans-CBDP to (-)-trans-Δ9-THCP and then to (-)-trans-Δ8-THCP occurred. 5-Heptylbenzene-1,3-diol was synthesized first, as reported in the Supporting Information (Supplementary Fig. SI-1). The conversion of (-)-trans-CBDP to (-)-trans-Δ9-THCP using various Lewis acids, as already reported in the literature for the synthesis of the homolog Δ9-THC27,28,29, led to a complex mixture of isomers, which made the isolation of (-)-trans-Δ9-THCP by standard chromatographic techniques arduous and low-yielding. Therefore, for the synthesis of (-)-trans-Δ9-THCP, its regioisomer (-)-trans-Δ8-THCP was synthesized first by condensation of 5-heptylbenzene-1,3-diol with (1S,4R)-1-methyl-4-(prop-1-en-2-yl)cyclohex-2-enol, as described above, but the reaction was left stirring for 48 hours. Alternatively, (-)-trans-CBDP could also be quantitatively converted to (-)-trans-Δ8-THCP under the same conditions. Hydrochlorination of the Δ8 double bond of (-)-trans-Δ8-THCP, using ZnCl2 as catalyst, gave (-)-trans-HCl-THCP, which was subsequently converted to (-)-trans-Δ9-THCP in 87% yield by selective elimination at position 2 of the terpene moiety using potassium t-amylate as base (Fig. 2a).

Figure 2. Synthesis and spectroscopic characterization of (-)-trans-CBDP and (-)-trans-Δ9-THCP.
(a) Reagents and conditions: (a) 5-heptylbenzene-1,3-diol (1.1 eq.), pTSA (0.1 eq.), CH2Cl2, r.t., 90 min.; (b) 5-heptylbenzene-1,3-diol (1.1 eq.), pTSA (0.1 eq.), DCM, r.t., 48 h; (c) pTSA (0.1 eq.), DCM, r.t., 48 h; (d) ZnCl2 (0.5 eq.), 4 N HCl in dioxane (1 mL per 100 mg of Δ8-THCP), dry DCM, argon, 0 °C to r.t., 2 h. (e) 1.75 M potassium t-amylate in toluene (2.5 eq.), dry toluene, argon, −15 °C, 1 h. (b–g) Superimposition of 1H, 13C NMR and CD spectra for natural (red line) and synthesized (blue line) (-)-trans-CBDP (b–d) and (-)-trans-Δ9-THCP (e–g). The chemical identification of synthetic (-)-trans-CBDP and (-)-trans-Δ9-THCP, and their unambiguous 1H and 13C assignments were achieved by NMR spectroscopy (Supplementary Table SI-1,2 and Supplementary Fig. SI-2,3). Since (-)-trans-CBDP and (-)-trans-Δ9-THCP differ from the respective homologs (CBD, CBDB, CBDV, Δ9-THC, Δ9-THCB and Δ9-THCV) solely for the length of the alkyl chain on the resorcinyl moiety, no significant differences in the proton chemical shifts of the terpene and aromatic moieties were observed for CBD and Δ9-THC homologs. The perfect match in the chemical shift of the terpene and aromatic moieties between the synthesized (-)-trans-CBDP and (-)-trans-Δ9-THCP and the respective homologues11,24,30, combined with the mass spectra and fragmentation pattern, allowed us to unambiguously confirm the chemical structures of the two new synthetic cannabinoids. The trans (1R,6R) configuration at the terpene moiety was confirmed by optical rotatory power. The new cannabinoids (-)-trans-CBDP and (-)-trans-Δ9-THCP showed an [α]D20 of −145° and −166°, respectively, in chloroform. The [α]D20 values were in line with those of the homologs11,31, suggesting a (1R,6R) configuration for both CBDP and Δ9-THCP. A perfect superimposition between the 1H (Fig. 2b,e) and 13C NMR spectra (Fig. 2c,f) and the circular dichroism absorption (Fig. 2d,g) of both synthetic and extracted (-)-trans-CBDP and (-)-trans-Δ9-THCP was observed, confirming the identity of the two new cannabinoids identified in the FM2 cannabis variety. Binding affinity at human CB1 and CB2 receptors The binding affinity of (-)-trans-Δ9-THCP against purified human CB1 and CB2 receptors was determined in a radioligand binding assay, using [3H]CP55940 or [3H]WIN 55212-2 as reference compounds, and dose-response curves were determined (Fig. 3a,b). (-)-trans-Δ9-THCP binds with high affinity to both human CB1 and CB2 receptors with a Ki of 1.2 and 6.2 nM, respectively. (-)-trans-Δ9-THCP resulted 33-times more active than (-)-trans-Δ9-THC (Ki = 40 nM), 63-times more active than (-)-trans-Δ9-THCV (Ki = 75.4 nM) and 13-times more active than the newly discovered (-)-trans-Δ9-THCB (Ki = 15 nM) against CB1 receptor12,14. Moreover, the new identified (-)-trans-Δ9-THCP resulted about 5- to 10-times more active against CB2 receptor (Ki = 6.2 nM), in contrast with (-)-trans-Δ9-THC, (-)-trans-Δ9-THCB and (-)-trans-Δ9-THCV, which instead showed a comparable binding affinity with a Ki ranging from 36 to 63 nM (Fig. 3a)12,14. In vitro activity and docking calculation of Δ9-THCP. (a) Binding affinity (Ki) of the four homologues of Δ9-THC against human CB1 and CB2 receptors. (b) Dose-response studies of Δ9-THCP against hCB1 (in blue) and hCB2 (in grey). All experiments were performed in duplicate and error bars denote s.e.m. of measurements. (c) Docking pose of (-)-trans-Δ9-THCP (blue sticks), in complex with hCB1 receptor (PDB ID: 5XRA, orange cartoon). 
Key amino acid residues are shown as orange sticks. H-bonds are shown as yellow dotted lines. Heteroatoms are color-coded: oxygen in red, nitrogen in blue and sulphur in yellow. (d) Binding pocket of the hCB1 receptor, highlighting the positioning of the heptyl chain within the long hydrophobic channel of the receptor (yellow dashed line). The side hydrophobic pocket is bordered in magenta. Panels c and d were built using Maestro 10.3 of the Schrödinger Suite.

The higher activity of (-)-trans-Δ9-THCP compared to the shorter homologues was investigated by docking calculations. The X-ray structure of the active conformation of the hCB1 receptor in complex with the agonist AM11542 (PDB ID: 5XRA) was used as the reference for docking, since marked structural changes in the orthosteric ligand-binding site are observed in comparison with the conformation of the receptor bound to an antagonist32,33. AM11542 is a synthetic Δ8 cannabinoid with high affinity for the hCB1 receptor (Ki = 0.11 nM), possessing a 7′-bromo-1′,1′-dimethyl-heptyl aliphatic chain at C3 of the resorcinyl moiety. As expected, due to the close chemical similarity, the predicted binding mode of (-)-trans-Δ9-THCP (Fig. 3c) reflected that of AM11542 in the CB1 crystal structure (Fig. SI-6a,b)18. (-)-trans-Δ9-THCP bound in the active conformation of CB1 in an L-shaped pose. The tetrahydro-6H-benzo[c]chromene ring system is located within the main hydrophobic pocket delimited by Phe174, Phe177, Phe189, Lys193, Pro269, Phe170, and Phe268. In particular, the aromatic ring of the resorcinyl moiety is involved in two edge-to-face π-π interactions with Phe170 and Phe268, whereas the phenolic hydroxyl group at C1 is engaged in an H-bond with Ser383 (Fig. 3c). Interestingly, the heptyl chain at C3 extended into a long hydrophobic tunnel formed by Leu193, Val196, Tyr275, Ile271, Leu276, Trp279, Leu359, Phe379, and Met363 (Fig. 3c,d). Because the predicted pose of the tricyclic tetrahydrocannabinol ring system is conserved among the four THC homologs (Supplementary Fig. SI-7a–c), the length of the alkyl chain at C3 of the resorcinyl moiety could account for the different binding affinities observed among the four cannabinoids. (-)-trans-Δ9-THCP (Fig. 3c) and (-)-trans-Δ9-THC (Supplementary Fig. SI-7a) share the same positioning of the alkyl 'tail' within the hydrophobic channel12,33,34. However, the long heptyl chain of Δ9-THCP is able to extend into the tunnel along its entire length, maximizing the hydrophobic interactions with the residues of the side channel. In contrast, the tunnel is only partially occupied by the shorter pentyl chain of (-)-trans-Δ9-THC, accounting for the higher affinity of Δ9-THCP (Ki = 1.2 nM) compared to Δ9-THC (Ki = 40 nM). A different positioning of the 'tail' was instead predicted for the shorter alkyl chain homologues, Δ9-THCV and Δ9-THCB. The propyl and butyl chains of Δ9-THCV and Δ9-THCB, respectively, are too short to extend effectively within the hydrophobic channel. As stated in our previous work12, these shorter chains accommodate within a small hydrophobic pocket delimited by Phe200, Leu359, and Met363 (Supplementary Fig. SI-7b,c). This side pocket is located at the junction between the main hydrophobic pocket and the long channel (Fig. 3d) and seems to accommodate small hydrophobic substituents (e.g. gem-dimethyl or cycloalkyl) introduced at the C1′ position of the side chain of several synthetic cannabinoids, rationalizing the notable enhancement in potency and affinity of these derivatives35,36,37,38,39.
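As a quick back-of-the-envelope check on the affinity gaps quoted above, the Ki values can be converted into apparent binding free energies via ΔG = RT ln Ki. The short Python sketch below is purely illustrative (the Ki values are those reported in the text; the thermodynamic conversion itself is not part of the original analysis): it shows that the roughly 33-fold difference between Δ9-THCP and Δ9-THC corresponds to about 2 kcal/mol, a magnitude compatible with a few additional hydrophobic contacts of the longer chain in the channel.

import math

# hCB1 Ki values (nM) quoted in the text for the four homologs
ki_nm = {"THCP": 1.2, "THCB": 15.0, "THC": 40.0, "THCV": 75.4}
RT = 0.593  # kcal/mol at ~298 K

for name, ki in ki_nm.items():
    dg = RT * math.log(ki * 1e-9)      # apparent binding free energy (kcal/mol)
    fold = ki / ki_nm["THCP"]          # fold difference in Ki relative to THCP
    print(f"{name}: Ki = {ki} nM, {fold:.0f}x THCP, dG ~ {dg:.1f} kcal/mol")
# THC vs THCP: RT * ln(40/1.2) ~ 2.1 kcal/mol weaker binding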
In vivo determination of the cannabinoid profile of Δ9-THCP

The cannabinoid activity of Δ9-THCP was evaluated by the tetrad of behavioural tests on mice. The tetrad includes the assessment of spontaneous activity, immobility index (catalepsy), analgesia and changes in rectal temperature. Decreased locomotor activity, catalepsy, analgesia and hypothermia are well-known physiological manifestations of cannabinoid activity40. After intraperitoneal (i.p.) administration, Δ9-THCP at 2.5 mg/kg markedly reduced the spontaneous activity of mice in the open field, while at 5 and 10 mg/kg it almost completely suppressed locomotion compared with vehicle-treated mice (Fig. 4b,c) (0 mg/kg: 6888 cm ± 474.8, 10 mg/kg: 166.8 cm ± 20.50, 5 mg/kg: 127.5 cm ± 31.32, 2.5 mg/kg: 4072 cm ± 350.8, p = 0.0009). Moreover, Δ9-THCP administration induced a significant increase, at 10 and 5 mg/kg, in the latency for moving from the catalepsy bar (Fig. 4e) (0 mg/kg: 15.20 sec ± 4.33, 10 mg/kg: 484.5 sec ± 51.58, 5 mg/kg: 493.4 sec ± 35.68, 2.5 mg/kg: 346.1 sec ± 35.24, p = 0.0051). In the hot plate test (Fig. 4f), Δ9-THCP (10 and 5 mg/kg) induced an antinociceptive effect, whereas at 2.5 mg/kg there was a trend towards antinociception that was not statistically significant compared with vehicle-treated mice (0 mg/kg: 19.20 sec ± 2.65, 10 mg/kg: 57.0 sec ± 2.0, 5 mg/kg: 54.38 sec ± 2.86, 2.5 mg/kg: 40.22 sec ± 5.8, p = 0.0044). Δ9-THCP administration induced a dose-dependent decrease in body temperature, which reached significance only at 10 mg/kg as compared to vehicle (0 mg/kg: 0.40 °C ± 0.25, 10 mg/kg: −7.10 °C ± 0.43, 5 mg/kg: −5.28 °C ± 0.36, 2.5 mg/kg: −4.12 °C ± 0.38, p = 0.0009) (Fig. 4d).

Figure 4. Dose-dependent effects of Δ9-THCP administration (2.5, 5, or 10 mg/kg, i.p.) on the tetrad phenotypes in mice in comparison to vehicle. (a) Time schedule of the tetrad tests in minutes from Δ9-THCP or vehicle administration. (b,c) Locomotion decrease induced by Δ9-THCP administration in the open field test. (d) Decrease of body temperature after Δ9-THCP administration; the values are expressed as the difference between the basal temperature (i.e., taken before Δ9-THCP or vehicle administration) and the temperature measured after Δ9-THCP or vehicle administration. (e) Increase in the latency for moving from the catalepsy bar after Δ9-THCP administration. (f) Increase in the latency after the first sign of pain shown by the mouse in the hot plate test following Δ9-THCP administration. Data are represented as mean ± SEM of 5 mice per group. Asterisks indicate significant differences compared to vehicle (0 mg/kg): *p < 0.05, **p < 0.01, ***p < 0.001 versus Δ9-THCP 0 mg/kg (vehicle). The Kruskall-Wallis test followed by Dunn's post hoc tests were performed for statistical analysis.

Semi-quantification of CBDP and Δ9-THCP in the FM2 extract

A semi-quantification method based on LC-HRMS provided an approximate amount of the two new phytocannabinoids in the FM2 ethanol extract. Their pentyl homologues, CBD and Δ9-THC, showed concentrations of 56 and 39 mg/g respectively, in accordance with the values provided by the Military Chemical Pharmaceutical Institute (59 mg/g and 42 mg/g for CBD and Δ9-THC respectively), obtained by the official GC-FID quantitative method. The same semi-quantitative method provided an amount of about 243 and 29 µg/g for CBDP and Δ9-THCP respectively.
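For readers who wish to reproduce this kind of external-standard semi-quantification, the Python sketch below illustrates the underlying calculation under simple assumptions (a linear response over the 1-50 ng/mL calibration range given in the Methods, with responses normalized to the Δ9-THC-d3 internal standard). The response values and the unknown-sample response are hypothetical placeholders, not measured data from this study.

import numpy as np

# Calibration levels for CBDP/Δ9-THCP reported in the Methods (ng/mL)
conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0])
# Hypothetical responses (peak area of analyte / peak area of THC-d3 internal standard)
resp = np.array([0.021, 0.104, 0.212, 0.531, 1.058])

slope, intercept = np.polyfit(conc, resp, 1)   # least-squares calibration line
r2 = np.corrcoef(conc, resp)[0, 1] ** 2        # acceptance criterion in the text: R2 > 0.993

sample_resp = 0.260                            # hypothetical response of a diluted extract
sample_conc = (sample_resp - intercept) / slope
print(f"slope = {slope:.4f}, R2 = {r2:.4f}, sample ~ {sample_conc:.1f} ng/mL (before dilution correction)")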
Up to now, almost 150 phytocannabinoids have been detected in cannabis plant7,41,42, though most of them have neither been isolated nor characterized. The well-known CBD and Δ9-THC have been extensively characterized and proved to possess interesting pharmacological profiles43,44,45,46,47, thus the attention towards the biological activity of their known homologs like CBDV and Δ9-THCV has recently grown as evidenced by the increasing number of publications per year appearing on Scopus. Other homologs like those belonging to the orcinoid series are scarcely investigated likely due to their very low amount in the plant that makes their isolation very challenging. In recent years, the agricultural genetics research has made great progresses on the selection of rare strains that produce high amounts of CBDV, CBG and Δ9-THCV48,49,50, thus it would not be surprising to see in the near future cannabis varieties rich in other minor phytocannabinoids. This genetic selection would enable the production of extracts rich in a specific phytocannabinoid with a characteristic pharmacological profile. For this reason, it is important to carry out a comprehensive chemical profiling of a medicinal cannabis variety and a thorough investigation of the pharmacological activity of minor and less known phytocannabinoids. As the pharmacological activity of Δ9-THC is particularly ascribed to its affinity for CB1 receptor, the literature suggests that the latter can be increased by elongating the alkyl side chain, which represents the main cannabinoid pharmacophoric driving force14. Therefore, taking THC as the lead compound, a series of cannabinoids have been chemically synthesized and their biological potency resulted several times higher than Δ9-THC itself15. To the best of our knowledge, naturally occurring cannabinoids with a linear alkyl side chain longer than five terms have never been detected or even putatively identified in cannabis plant. However, the cutting-edge technological platform of the Orbitrap mass spectrometry and the use of advanced analytical techniques like metabolomics can enable the discovery and identification of new compounds with a high degree of confidence even when present in traces in complex matrices42,51. In the present work, we report for the first time the isolation and full characterization of two new CBD and Δ9-THC heptyl homologs, which we named cannabidiphorol (CBDP) and Δ9-tetrahydrocannabiphorol (Δ9-THCP), respectively. These common names were derived from the traditional naming of phytocannabinoids based on the resorcinyl residue, in this case corresponding to sphaerophorol. The biological results obtained in the in vitro binding assay indicated an affinity for CB1 receptor more than thirty-fold higher compared to the one reported for Δ9-THC in the literature14. Also, this encouraging data was supported by in vivo evaluation of the cannabimimetic activity by the tetrad test, where Δ9-THCP decreased locomotor activity and rectal temperature, induced catalepsy and produced analgesia miming the properties of full CB1 receptor agonists (Fig. 4). In particular, Δ9-THCP proved to be as active as Δ9-THC but at lower doses. In fact, the minimum THC dose used in this kind of test is 10 mg/kg, whereas Δ9-THCP resulted active at 5 mg/kg in three of the four tetrad tests. 
These results, accompanied by the docking data, are in line with the extensive structure-activity relationship (SAR) studies performed through the years on synthetic cannabinoids, revealing the importance of the length of the alkyl chain in position 3 on the resorcinyl moiety in modulating the ligand affinity at CB1 receptor. Although the amount of the heptyl homologues of CBD and Δ9-THC in the FM2 variety could appear trifling, both in vitro and in vivo preliminary studies reported herein on Δ9-THCP showed a cannabimimetic activity several times higher than its pentyl homolog Δ9-THC. Moreover, it is reasonable to suppose that other cannabis varieties may contain even higher percentages of Δ9-THCP. It is also important to point out that there exists an astonishing variability of subject response to a cannabis-based therapy even with an equal Δ9-THC dose52,53,54. It is therefore possible that the psychotropic effects are due to other extremely active phytocannabinoids such as Δ9-THCP. However, up to now nobody has ever searched for this potent phytocannabinoid in medicinal cannabis varieties. In our opinion, this compound should be included in the list of the main phytocannabinoids to be determined for a correct evaluation of the pharmacological effect of the cannabis extracts administered to patients. In fact, we believe that the discovery of an extremely potent THC-like phytocannabinoid may shed light on several pharmacological effects not ascribable solely to Δ9-THC. Ongoing studies are devoted to the investigation of the pharmacological activity of CBDP and to expand that of Δ9-THCP. It is known that CBD binds with poor affinity to both CB1 and CB2 receptors55. Therefore, the evaluation of the cannabimimetic activity of CBDP does not appear to be a high priority, although science can hold great surprises. Our current work is rather focused on testing its anti-inflammatory, anti-oxidant and anti-epileptic activity, which are typical of CBD46. FM2 cannabis variety is obtained from the strain CIN-RO produced by the Council for Agricultural Research and Economics (CREA) in Rovigo (Italy) and provided to the Military Chemical Pharmaceutical Institute (MCPI, Firenze, Italy) for breeding. FM2 inflorescence (batch n. 6A32/1) was supplied by the MCPI with the authorization of the Italian Ministry of Health (prot. n. SP/062). The raw plant material (10 g) was finely grinded and divided into two batches: one batch (500 mg) was extracted with 50 mL of ethanol 96% according to the procedure indicated by the monograph of Cannabis Flos of the German Pharmacopoeia56 and was analyzed by UHPLC-HESI-Orbitrap after proper dilution with acetonitrile (×100). The remaining 9.5 g were extracted following the protocol of Pellati et al. with some modifications26. Briefly, freeze-dried plant material was extracted with 400 mL of n-hexane for 15 min under sonication in an ice bath. Samples were centrifuged for 10 min at 2000 × g and the supernatants collected. The procedure was repeated twice more on the pellets. The combined supernatants were then dried under reduced pressure and resuspended in 10 mL of acetonitrile, filtered and used for the isolation of CBDPA and THCPA by semi-preparative liquid chromatography. Isolation of natural CBDP and Δ9-THCP Aliquots (1 mL) of the solution obtained as described in the 'Plant Material' section were injected in a semi-preparative LC system (Octave 10 Semba Bioscience, Madison, USA). The chromatographic conditions used are reported in the paper by Citti et al.11. 
The column employed was a Luna C18 with a fully porous silica stationary phase (Luna 5 µm C18(2) 100 Å, 250 × 10 mm) (Phenomenex, Bologna, Italy) and a mixture of acetronitrile:0.1% aqueous formic acid 70:30 (v/v) was used as mobile phase at a flow rate of 5 mL/min. CBDPA and THCPA (retention time 19.0 min and 75.5 min respectively) were isolated as reported in our previous work11. The fractions containing CBDPA and THCPA were analyzed by UHPLC-HESI-Orbitrap. The fractions containing predominantly either one or the other cannabinoid were separately combined and dried on the rotavapor at 70 °C. Each residue was subject to decarboxylation at 120 °C for two hours in oven. An amount of about 0.6 mg of CBDP and about 0.3 mg of Δ9-THCP was obtained. UHPLC-HESI-Orbitrap metabolomic analysis FM2 extracts were analyzed on a Thermo Fisher Scientific Ultimate 3000 system equipped with a vacuum degasser, a binary pump, a thermostated autosampler, a thermostated column compartment and interfaced to a heated electrospray ionization source and a Q-Exactive Orbitrap mass spectrometer (UHPLC-HESI-Orbitrap). The parameters of the HESI source were set according to Citti et al.11: capillary temperature, 320 °C; vaporizer temperature, 280 °C; electrospray voltage, 4.2 kV (positive mode) and 3.8 kV (negative mode); sheath gas, 55 arbitrary units; auxiliary gas, 30 arbitrary units; S lens RF level, 45. Analyses were acquired using the Xcalibur 3.0 software (Thermo Fisher Scientific, San Jose, CA, USA) in full scan data-dependent acquisition (FS-dd-MS2) in positive (ESI+) and negative (ESI−) mode at a resolving power of 70,000 FWHM at m/z 200. A scan range of m/z 250–400, an AGC of 3e6, an injection time of 100 ms and an isolation window for the filtration of the precursor ions of m/z 0.7 were chosen as the optimal parameters for the mass analyzer. A normalized collision energy (NCE) of 20 was used to fragment the precursor ions. Extracted ion chromatograms (EIC) of the [M + H]+ and [M-H]− molecular ions were derived from the total ion chromatogram (TIC) of the FM2 extracts and matched with pure analytical standards for accuracy of the exact mass (5 ppm), retention time and MS/MS spectrum. The chromatographic separation was carried out on a Poroshell 120 SB-C18 (3.0 × 100 mm, 2.7 µm, Agilent, Milan, Italy) following the conditions employed for our previous work11. A semi-quantitative analysis of Δ9-THC and CBD and their heptyl analogs CBDP and Δ9-THCP was achieved using a calibration curve with an external standard. A stock solution of CBD and Δ9-THC, CBDP and Δ9-THCP (1 mg/mL) was properly diluted to obtain five non-zero calibration points at the final concentrations of 50, 100, 250, 500, and 1000 ng/mL for CBD and Δ9-THC and of 1, 5, 10, 25, and 50 ng/mL for CBDP and Δ9-THCP. A standard solution of Δ9-THC-d3 was added at each calibration standard at a final concentration of 50 ng/mL. The linearity was assessed by the coefficient of determination (R2), which was greater than 0.993 for each analyte. Synthetic procedure All reagents and solvents were employed as purchased without further purification unless otherwise specified. The following abbreviations for common organic solvents have been used herein: diethyl ether (Et2O); dichloromethane (DCM); cyclohexane (CE). Reaction monitoring was performed by thin-layer chromatography on silica gel (60F-254, E. Merck) and checked by UV light, or alkaline KMnO4 aqueous solution57,58,59. 
Reaction products were purified, when necessary, by flash chromatography on silica gel (40−63 μm) with the solvent system indicated. NMR spectra were recorded on a Bruker 400 or Bruker 600 spectrometer working respectively at 400.134 MHz and 600.130 MHz for 1H and at 100.62 MHz or 150.902 MHz for 13C. Chemical shifts (δ) are in parts per million (ppm) and they were referenced to the solvent residual peaks (CDCl3 δ = 7.26 ppm for proton and δ = 77.20 ppm for carbon); coupling constants are reported in hertz (Hz); splitting patterns are expressed with the following abbreviations: singlet (s), doublet (d), triplet (t), quartet (q), double doublet (dd), quintet (qnt), multiplet (m), broad signal (b). Monodimensional spectra were acquired with a spectral width of 8278 Hz (for 1H-NMR) and 23.9 kHz (for 13C-NMR), a relaxation delay of 1 s, and 32 and 1024 number of transients for 1H-NMR and 13C-NMR, respectively12. The COSY spectra were recorded as a 2048 × 256 matrix with 2 transients per t1 increment and processed as a 2048 × 1024 matrix; the HSQC spectra were collected as a 2048 × 256 matrix with 4 transients per t1 increment and processed as a 2048 × 1024 matrix, and the one-bond heteronuclear coupling value was set to 145 Hz; the HMBC spectra were collected as a 4096 × 256 matrix with 16 transients per t1 increment and processed as a 4096 × 1024 matrix, and the long-range coupling value was set to 8 Hz12. Circular dichroism (CD) and UV spectra were acquired on a Jasco (Tokyo, Japan) J-1100 spectropolarimeter using a 50 nm/min scanning speed. Quartz cells with a 10 mm path length were employed to record spectra in the 500–220 nm range12. Optical rotation (λ) was measured with a Polarimeter 241 (cell-length 100 mm, volume 1 mL) from Perkin-Elmer (Milan, Italy). The synthetic procedures described below were adjusted from previously published works12,57. Synthesis of (1′R,2′R)-4-heptyl-5′-methyl-2′-(prop-1-en-2-yl)-1′,2′,3′,4′-tetrahydro-[1,1′-biphenyl]-2,6-diol, (-)-trans-CBDP (1 S,4 R)-1-methyl-4-(prop-1-en-2-yl)cycloex-2-enol (146 mg, 0.96 mmol, 0.9 eq.), solubilized in 15 mL of anhydrous DCM, was added over a period of 20 min to a stirred solution of 5-heptylbenzene-1,3-diol (222 mg, 1.07 mmol, 1 eq.) and p-toluenesulfonic acid (20 mg, 0.11 mmol, 0.1 eq.) in anhydrous DCM (15 mL) at room temperature and under a positive pressure of argon. After stirring in the same conditions for 1 h, the reaction was quenched with 10 mL of a saturated aqueous solution of NaHCO3. The mixture was partitioned between Et2O and water. The organic layer was separated and washed with brine, dried with anhydrous Na2SO4 and evaporated. The residue was chromatographed (ratio crude:silica 1/120, eluent: CE:DCM 8/2). All the chromatographic fractions were analyzed by HPLC-UV and UHPLC-HESI-Orbitrap and only the fractions containing exclusively CBDP were concentrated to give 76 mg of a colorless oil (23% yield, purity > 99%). 1H NMR (400 MHz, CDCl3) δ 6.10–6.30 (m, 2 H), 5.97 (bs, 1 H), 5.57 (s, 1 H), 4.66 (s, 1 H), 4.66 (bs, 1 H), 4.56 (s, 1 H), 3.89–3.81 (m, 1 H), 2.52–2.35 (m, 3 H), 2.24 (td, J = 6.1, 12.7 Hz, 1 H), 2.09 (ddt, J = 2.4, 5.1, 17.9 Hz, 1 H), 1.89–1.74 (m, 5 H), 1.65 (s, 3 H), 1.55 (qnt, J = 7.6 Hz, 2 H), 1.28 (td, J = 4.7, 8.2, 9.0 Hz, 8 H), 0.87 (t, J = 6.7 Hz, 3 H). 13C NMR (101 MHz, CDCl3) δ 156.27, 154.09, 149.56, 143.23, 140.22, 124.30, 113.93, 111.01, 109.91, 108.26, 46.33, 37.46, 35.70, 31.99, 31.14, 30.59, 29.43, 29.35, 28.60, 23.86, 22.84, 20.71, 14.29. HRMS m/z [M + H]+ calcd. 
for C23H35O2+: 343.2632. Found: 343.2629; [M-H]− calcd. for C23H33O2−: 341.2475. Found: 341.2482. [α]D20 = −146° (c 1.0, ACN). Synthesis of (6aR,10aR)-3-heptyl-6,6,9-trimethyl-6a,7,10,10a-tetrahydro-6H-benzo[c]chromen-1-ol, (-)-trans-Δ 8 -THCP The set-up of the reaction for the synthesis of (-)-trans-Δ8-THCP was performed as described for (-)-trans-CBDP and the resulting mixture was stirred at room temperature for 48 h. The mixture was diluted with Et2O, and washed with a saturated solution of NaHCO3 (10 mL). The organic layer was collected, washed with brine, dried (anhydrous Na2SO4) and concentrated. After purification over silica gel (ratio crude:silica 1/150, eluent: CE:Et2O 95/5) 315 mg of a colorless oil (46% yield) were obtained. 1H NMR (400 MHz, CDCl3) δ 6.28 (d, J = 1.6 Hz, 1 H), 6.10 (d, J = 1.6 Hz, 1 H), 5.46–5.39 (m, 1 H), 4.78 (s, 1 H), 3.20 (dd, J = 4.5, 16.0 Hz, 1 H), 2.70 (td, J = 4.7, 10.8 Hz, 1 H), 2.44 (td, J = 2.3, 7.4 Hz, 2 H), 2.21–2.10 (m, 1 H), 1.92–1.76 (m, 3 H), 1.70 (s, 3 H), 1.63–1.52 (m, 2 H), 1.38 (s, 3 H), 1.30 (tt, J = 4.3, 9.4, 11.8 Hz, 8 H), 1.11 (s, 3 H), 0.88 (t, J = 7.0 Hz, 3 H). Synthesis of (6aR,10aR)-3-heptyl-9-chloro-6,6,9-trimethyl-6a,7,8,9,10,10a-hexahydro-6H-benzo[c]chromen-1-ol (HCl-THCP) 1 N ZnCl2 in Et2O (440 µL, 0.44 mmol, 0.5 eq.) was added to a stirred solution of Δ8-THCP (300 mg, 0.87 mmol, 1 eq.) in 20 mL of anhydrous DCM, at room temperature and under nitrogen atmosphere. After 30 min, the reaction was cooled at 0 °C and 2 mL of 4 N HCl in dioxane was added. The resulting mixture was stirred at room temperature, overnight and then diluted with Et2O. The organic layer was collected and washed, in sequence, with an aqueous saturated solution of NaHCO3 and brine. After dehydration with anhydrous Na2SO4, the organic phase was concentrated to give 305 mg (93% yield) of a yellowish oil, pure enough to be used in the next step without further purification. 1H NMR (400 MHz, CDCl3) δ 6.24 (d, J = 1.7 Hz, 1 H), 6.07 (d, J = 1.6 Hz, 1 H), 4.94 (s, 1 H), 3.45 (dd, J = 2.9, 14.4 Hz, 1 H), 3.05 (td, J = 2.9, 11.3 Hz, 1 H), 2.42 (td, J = 1.5, 7.4 Hz, 2 H), 2.20–2.12 (m, 1 H), 1.80–1.71 (m, 1 H), 1.66 (s, 4 H), 1.60–1.51 (m, 2 H), 1.49–1.42 (m, 1 H), 1.38 (s, 3 H), 1.34–1.18 (m, 10 H), 1.13 (s, 3 H), 0.87 (t, J = 6.6 Hz, 3 H). ESI-MS m/z [M + H] + calcd. for C23H3635[Cl]O2+: 379.2. Found: 379.4. Calcd. for C23H3637[Cl]O2+: 381.2. Found: 381.3. Synthesis of (6aR,10aR)-3-heptyl-6,6,9-trimethyl-6a,7,8,10a-tetrahydro-6H-benzo[c]chromen-1-ol, (-)-trans-Δ9-THCP. HCl-THCP (305 mg, 0.82 mmol, 1 eq.) was solubilized in 10 mL of anhydrous toluene and cooled at −15 °C. 1.75 N potassium t-amylate in toluene (1.17 mL, 2.05 mmol, 2.5 eq.) was added dropwise with a syringe to the first solution under a positive pressure of argon. The mixture was stirred in the same condition for 15 min and then at 60 °C for 1 h. After cooling at room temperature, the reaction was quenched with a 1% solution of ascorbic acid and diluted with Et2O. The organic layer was washed with brine, dried over anhydrous Na2SO4 and concentrated. The residue was chromatographed (ratio crude/silica 1:300, hexane:i-propyl ether 9/1) to give 232 mg of a greenish oil (83% yield). 50 mg of (-)-trans-Δ9-THCP were further purified by semipreparative HPLC to prepare a pure analytic standard (purity > 99.9%). 
1H NMR (600 MHz, CDCl3) δ 6.30 (t, J = 2.0 Hz, 1 H), 6.27 (d, J = 1.6 Hz, 1 H), 6.14 (d, J = 1.5 Hz, 1 H), 4.75 (s, 1 H), 3.20 (dt, J = 2.5, 10.8 Hz, 1 H), 2.43 (dd, J = 6.4, 8.9 Hz, 2 H), 2.22–2.11 (m, 2 H), 1.97–1.87 (m, 1 H), 1.69–1.65 (m, 4 H), 1.58–1.50 (m, 2 H), 1.43–1.37 (m, 4 H), 1.34–1.21 (m, 8 H), 1.09 (s, 3 H), 0.87 (t, J = 6.6 Hz, 3 H). 13C NMR (151 MHz, CDCl3) δ 154.97, 154.34, 143.02, 134.59, 123.92, 110.30, 109.22, 107.72, 77.38, 46.01, 35.72, 33.78, 31.99, 31.37, 31.16, 29.50, 29.38, 27.77, 25.22, 23.55, 22.87, 19.47, 14.29. HRMS m/z [M + H]+ calcd. for C23H35O2+: 343.2632. Found: 343.2633; [M-H]− calcd. for C23H33O2−: 341.2475. Found: 341.2481. [α]D20 = −166° (c 1.0, ACN).

Binding at CB1 and CB2 Receptors

The binding affinity of (-)-trans-Δ9-THCP against human CB1 and CB2 receptors was assessed by Eurofins Discovery using a radioligand binding assay. Ten concentrations of the phytocannabinoid, from 1 nM to 30 µM, were tested in duplicate. [3H]CP55940 (at 2 nM, Kd = 2.4 nM) and [3H]WIN 55212-2 (at 0.8 nM, Kd = 1.5 nM) were used as the specific radioligands for hCB1 and hCB2, respectively60,61. Equation 1 was employed to calculate the percent inhibition (%in) of control specific binding obtained in the presence of the tested compounds.

$$\%in = 100 - \left(\frac{\text{measured specific binding}}{\text{control specific binding}} \times 100\right)$$

A non-linear regression analysis of the competition curves generated with mean replicate values (Eq. 2) was used to calculate the IC50 values (concentration causing a half-maximal inhibition of control specific binding)62.

$$Y = D + \frac{A - D}{1 + \left(\frac{C}{C_{50}}\right)^{n_H}}$$

where Y is the specific binding, A is the left asymptote of the curve, D is the right asymptote of the curve, C is the compound concentration, C50 is the IC50 value and nH is the slope factor. This analysis was carried out using software developed at Cerep (Hill software) and validated by comparing the data with those generated by the commercial software SigmaPlot 4.0 for Windows (1997 by SPSS Inc.). The inhibition constants (Ki) were determined using the Cheng-Prusoff equation (Eq. 3):

$$K_i = \frac{IC_{50}}{1 + \frac{L}{K_D}}$$

where L is the concentration of the radioligand and KD is the affinity of the radioligand for the receptor. The data obtained for CP 55940 (CB1 IC50 = 1.7 nM, CB1 Ki = 0.93 nM) and WIN 55212-2 (CB2 IC50 = 2.7 nM, CB2 Ki = 1.7 nM) were in accordance with the values reported in the literature60,61.

Docking simulation

The prediction of the binding mode of Δ9-THCP in complex with the human CB1 receptor was performed using Maestro 10.3 of the Schrödinger Suite63. The crystallographic structure of the active conformation of CB1 in complex with AM11542 (PDB ID: 5XRA) was downloaded from the Protein Data Bank and used as the reference for the docking calculation. The protein was prepared using the Protein Preparation Wizard module64. The chemical structure of (-)-trans-Δ9-THCP was sketched with ChemDraw 12.0 and converted from 2D to 3D with the LigPrep utility65. Five conformations per ligand were initially generated, and appropriate ionization states and tautomers were evaluated for each conformation at physiological pH66,67. Afterwards, ligand conformations were minimized with the OPLS_2005 force field. Rigid docking was performed in extra precision mode with Glide version 6.868.

Tetrad test

Male C57BL6/J mice (7 weeks old; n = 5) were treated with Δ9-THCP (10, 5 and 2.5 mg/kg) or vehicle (1:1:18; ethanol:Kolliphor EL:0.9% saline) by i.p. administration.
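As a side note on the radioligand analysis described above, the following minimal Python sketch shows the Cheng-Prusoff step (Eq. 3) numerically, using the hCB1 assay constants quoted in the text ([3H]CP55940 at L = 2 nM, KD = 2.4 nM); it is illustrative only and is not the Cerep analysis software.

# Illustrative sketch of the Cheng-Prusoff conversion (Eq. 3)
def cheng_prusoff_ki(ic50_nm, radioligand_nm, kd_nm):
    # Ki = IC50 / (1 + L / KD); all concentrations in the same units
    return ic50_nm / (1.0 + radioligand_nm / kd_nm)

L_NM, KD_NM = 2.0, 2.4   # hCB1 assay constants quoted in the text

# Consistency check with the reference agonist values reported above:
# CP55940 IC50 = 1.7 nM gives Ki of about 0.93 nM
print(round(cheng_prusoff_ki(1.7, L_NM, KD_NM), 2))

# Conversely, the Ki of 1.2 nM reported for (-)-trans-Δ9-THCP would correspond,
# under the same assay constants, to an IC50 of about 1.2 * (1 + L_NM / KD_NM) ≈ 2.2 nM.
print(round(1.2 * (1 + L_NM / KD_NM), 2))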
Mice were evaluated for hypomotility (open field test), hypothermia (body temperature), antinociceptive (hot plate test), and cataleptic (bar test) effects, using the procedures of the tetrad tests as reported by Metna-Laurent et al.69. The same animals were used in all four behavioral tests. Statistical analysis was performed using the Kruskall-Wallis test and Dunn's post hoc tests. The mouse was immobilized and the probe gently inserted for 1 cm into the rectum until stabilization of temperature. Between each mouse the probe was cleaned with 70% ethanol and dried with paper towel. The open field test was used for the evaluation of motor activity. Behavioral assays were performed 30 min after drug (or vehicle) injection. The apparatus was cleaned before each behavioral session by a 70% ethanol solution. Naϊve mice were randomly assigned to a treatment group. Behaviors were recorded, stored, and analyzed using an automated behavioral tracking system (Smart v3.0, Panlab Harvard Apparatus). Mice were placed in an OFT arena (l × w × h: 44 cm × 44 cm × 30 cm), and ambulatory activity (total distance travelled in centimeter) was recorded for 15 min and analyzed. Bar test The bar test was used for the evaluation of catalepsy. The bar was a 40 cm in length and 0.4 cm in diameter glass rod, which was horizontally elevated by 5 cm above the surface. Both forelimbs of the mouse were positioned on the bar and its hind legs on the floor of the cage, ensuring that the mouse was not lying down on the floor. The chronometer was stopped when the mouse descended from the bar (i.e., when the two forepaws touched the floor) or when 10 min had elapsed (i.e., cut-off time). Catalepsy was measured as the time duration each mouse held the elevated bar by both its forelimbs (latency for moving in seconds). The hot plate test was performed to assess changes in the nociception. On the day of the experiment each mouse was placed on a hot plate (Ugo Basile) that was kept at a constant temperature of 52 °C. Licking of the hind paws or jumping were considered as a nociceptive response (NR) and the latency was measured in seconds 85 minutes after drug or vehicle administration, taking a cut-off time of 30 or 60 s in order to prevent tissue damage. Male C57BL/6 mice (Charles River, Italy) of 18–20 g weight were used for the tetrad experiments. A 12 h light/dark cycle with light on at 6:00 A.M., a constant temperature of 20–22 °C, and a 55–60% humidity were maintained for at least 1 week before beginning the experiments. Mice were housed three per cage with chow and tap water available ad libitum. The experimental procedures employed for the work presented herein were approved by the Animal Ethics Committee of the University of Campania "L. Vanvitelli", Naples. Animal care and welfare were entrusted to adequately trained personnel in compliance with Italian (D.L. 116/92) and European Commission (O.J. of E.C. L358/1, 18/12/86) regulations on the protection of animals used for research purposes. All efforts were made to minimize animal numbers and avoid unnecessary suffering during the experiments. Novack, G. D. Cannabinoids for treatment of glaucoma. Curr. Opin. Ophthalmol. 27, 146–150 (2016). Russo, E. B. Cannabis and epilepsy: an ancient treatment returns to the fore. Epilepsy Behav. 70, 292–297 (2017). Zeremski, T., Kiprovski, B., Sikora, V., Miladinović, J. & Tubić, S. In III International Congress," Food Technology, Quality and Safety", 25–27 October 2016, Novi Sad, Serbia. 
References Novack, G. D. Cannabinoids for treatment of glaucoma. Curr. Opin. Ophthalmol. 27, 146–150 (2016). Russo, E. B. Cannabis and epilepsy: an ancient treatment returns to the fore. Epilepsy Behav. 70, 292–297 (2017). Zeremski, T., Kiprovski, B., Sikora, V., Miladinović, J. & Tubić, S. In III International Congress "Food Technology, Quality and Safety", 25–27 October 2016, Novi Sad, Serbia. Proceedings 10–15 (University of Novi Sad, Institute of Food Technology). Mutje, P., Lopez, A., Vallejos, M., Lopez, J. & Vilaseca, F. Full exploitation of Cannabis sativa as reinforcement/filler of thermoplastic composite materials. Composites Part A: Applied Science and Manufacturing 38, 369–377 (2007). Westerhuis, W. Hemp for textiles: plant size matters. Wageningen University (2016). Center for Behavioral Health Statistics and Quality. 2015 National Survey on Drug Use and Health: Detailed Tables (Rockville, MD, 2016). Hanuš, L. O., Meyer, S. M., Muñoz, E., Taglialatela-Scafati, O. & Appendino, G. Phytocannabinoids: a unified critical inventory. Nat. Prod. Rep. 33, 1357–1392, https://doi.org/10.1039/C6NP00074F (2016). Schultz, O.-E. & Haffner, G. Zur Frage der Biosynthese der Cannabinole. Arch. Pharm. 293, 1–8, https://doi.org/10.1002/ardp.19602930102 (1960). Niesink, R. J. M. & van Laar, M. Does Cannabidiol Protect Against Adverse Psychological Effects of THC? Front. Psychiatry 4, https://doi.org/10.3389/fpsyt.2013.00130 (2013). Kajima, M. & Piraux, M. The biogenesis of cannabinoids in Cannabis sativa. Phytochemistry 21, 67–69, https://doi.org/10.1016/0031-9422(82)80016-2 (1982). Citti, C. et al. Analysis of impurities of cannabidiol from hemp. Isolation, characterization and synthesis of cannabidibutol, the novel cannabidiol butyl analog. J. Pharm. Biomed. Anal. 175, 112752, https://doi.org/10.1016/j.jpba.2019.06.049 (2019). Linciano, P. et al. Isolation of a high affinity cannabinoid for human CB1 receptor from a medicinal cannabis variety: Δ9-tetrahydrocannabutol, the butyl homologue of Δ9-tetrahydrocannabinol. J. Nat. Prod., in press, https://doi.org/10.1021/acs.jnatprod.9b00876 (2019). Robertson, L. W., Lyle, M. A. & Billets, S. Biotransformation of cannabinoids by Syncephalastrum racemosum. Biomed. Mass Spectrom. 2, 266–271, https://doi.org/10.1002/bms.1200020505 (1975). Bow, E. W. & Rimoldi, J. M. The Structure–Function Relationships of Classical Cannabinoids: CB1/CB2 Modulation. Perspect. Medicin. Chem. 8, PMC.S32171, https://doi.org/10.4137/pmc.s32171 (2016). Martin, B. R. et al. Manipulation of the Tetrahydrocannabinol Side Chain Delineates Agonists, Partial Agonists, and Antagonists. J. Pharmacol. Exp. Ther. 290, 1065–1079 (1999). Adams, R., Loewe, S., Jelinek, C. & Wolff, H. Tetrahydrocannabinol Homologs with Marihuana Activity. IX. J. Am. Chem. Soc. 63, 1971–1973, https://doi.org/10.1021/ja01852a052 (1941). Zajicek, J. P., Hobart, J. C., Slade, A., Barnes, D. & Mattison, P. G. Multiple sclerosis and extract of cannabis: results of the MUSEC trial. J. Neurol. Neurosurg. Psychiatry 83, 1125–1132, https://doi.org/10.1136/jnnp-2012-302468 (2012). Pretzsch, C. M. et al. Effects of cannabidiol on brain excitation and inhibition systems; a randomised placebo-controlled single dose trial during magnetic resonance spectroscopy in adults with and without autism spectrum disorder. Neuropsychopharmacology, 1 (2019). Russo, E. B., Guy, G. W. & Robson, P. J. Cannabis, Pain, and Sleep: Lessons from Therapeutic Clinical Trials of Sativex®, a Cannabis-Based Medicine. Chem. Biodivers. 4, 1729–1743, https://doi.org/10.1002/cbdv.200790150 (2007). Whiting, P. F. et al. Cannabinoids for Medical Use: A Systematic Review and Meta-analysis. JAMA 313, 2456–2473, https://doi.org/10.1001/jama.2015.6358 (2015). Garcia, A. N. & Salloum, I. M.
Polysomnographic sleep disturbances in nicotine, caffeine, alcohol, cocaine, opioid, and cannabis use: A focused review. Am. J. Addict. 24, 590–598, https://doi.org/10.1111/ajad.12291 (2015). Black, N. et al. Cannabinoids for the treatment of mental disorders and symptoms of mental disorders: a systematic review and meta-analysis. The Lancet Psychiatry, https://doi.org/10.1016/S2215-0366(19)30401-8 (2019). Finnerup, N. B. et al. Pharmacotherapy for neuropathic pain in adults: a systematic review and meta-analysis. Lancet Neurol. 14, 162–173, https://doi.org/10.1016/s1474-4422(14)70251-0 (2015). Citti, C. et al. Chemical and spectroscopic characterization data of 'cannabidibutol', a novel cannabidiol butyl analog. Data in Brief 26, 104463, https://doi.org/10.1016/j.dib.2019.104463 (2019). Citti, C. et al. A Metabolomic Approach Applied to a Liquid Chromatography Coupled to High-Resolution Tandem Mass Spectrometry Method (HPLC-ESI-HRMS/MS): Towards the Comprehensive Evaluation of the Chemical Composition of Cannabis Medicinal Extracts. Phytochemical Analysis 29, 144–155, https://doi.org/10.1002/pca.2722 (2018). Pellati, F. et al. New Methods for the Comprehensive Analysis of Bioactive Compounds in Cannabis sativa L. (hemp). Molecules, 23, https://doi.org/10.3390/molecules23102639 (2018). Koch, O. G., Marcus, R; Looft, Jan; Voessing, Tobias. Preparation of mixtures of cannabinoid compounds useful for therapeutic treatment. Germany patent (2015). Kupper, R. J. Cannabinoid active pharmaceutical ingredient for improved dosage forms. (2006). Nikas, S., Thakur, G. & Makriyannis, A. Synthesis of side chain specifically deuterated (−)‐Δ9‐tetrahydrocannabinols. Journal of Labelled Compounds and Radiopharmaceuticals 45, 1065–1076, https://doi.org/10.1002/jlcr.626 (2002). Choi, Y. H. et al. NMR assignments of the major cannabinoids and cannabiflavonoids isolated from flowers of Cannabis sativa. Phytochem Anal 15, 345–354, https://doi.org/10.1002/pca.787 (2004). Mechoulam, R., Braun, P. & Gaoni, Y. Syntheses of.DELTA.1-tetrahydrocannabinol and related cannabinoids. J. Am. Chem. Soc. 94, 6159–6165, https://doi.org/10.1021/ja00772a038 (1972). Jung, S. W., Cho, A. E. & Yu, W. Exploring the Ligand Efficacy of Cannabinoid Receptor 1 (CB1) using Molecular Dynamics Simulations. Sci. Rep. 8, 13787–13787, https://doi.org/10.1038/s41598-018-31749-z (2018). Hua, T. et al. Crystal structures of agonist-bound human cannabinoid receptor CB1. Nature 547, 468–471, https://doi.org/10.1038/nature23272 (2017). Shao, Z. et al. High-resolution crystal structure of the human CB1 cannabinoid receptor. Nature 540, 602–606, https://doi.org/10.1038/nature20613 (2016). Nikas, S. P. et al. Novel 1′,1′-Chain Substituted Hexahydrocannabinols: 9β-Hydroxy-3-(1-hexyl-cyclobut-1-yl)-hexahydrocannabinol (AM2389) a Highly Potent Cannabinoid Receptor 1 (CB1) Agonist. J. Med. Chem. 53, 6996–7010, https://doi.org/10.1021/jm100641g (2010). Papahatjis, D. P. et al. Pharmacophoric Requirements for the Cannabinoid Side Chain. Probing the Cannabinoid Receptor Subsite at C1′. J. Med. Chem. 46, 3221–3229, https://doi.org/10.1021/jm020558c (2003). Huffman, J. W. et al. Structure–activity relationships for 1′,1′-dimethylalkyl-Δ8-tetrahydrocannabinols. Bioorg. Med. Chem. 11, 1397–1410, https://doi.org/10.1016/S0968-0896(02)00649-1 (2003). Nikas, S. P. et al. The role of halogen substitution in classical cannabinoids: a CB1 pharmacophore model. AAPS J. 6, e30–e30, https://doi.org/10.1208/aapsj060430 (2004). Papahatjis, D. P., Nikas, S. P., Andreou, T. 
& Makriyannis, A. Novel 1′,1′-chain substituted Δ8-tetrahydrocannabinols. Bioorg. Med. Chem. Lett. 12, 3583–3586, https://doi.org/10.1016/S0960-894X(02)00785-0 (2002). Varvel, S. A. et al. Δ9-tetrahydrocannabinol accounts for the antinociceptive, hypothermic, and cataleptic effects of marijuana in mice. J. Pharmacol. Exp. Ther. 314, 329–337 (2005). Citti, C. et al. Cannabinoid Profiling of Hemp Seed Oil by Liquid Chromatography Coupled to High-Resolution Mass Spectrometry. Frontiers in Plant Science 10, https://doi.org/10.3389/fpls.2019.00120 (2019). Pavlovic, R. et al. Phytochemical and Ecological Analysis of Two Varieties of Hemp (Cannabis sativa L.) Grown in a Mountain Environment of Italian Alps. Frontiers in Plant Science 10, https://doi.org/10.3389/fpls.2019.01265 (2019). Citti, C., Pacchetti, B., Vandelli, M. A., Forni, F. & Cannazza, G. Analysis of cannabinoids in commercial hemp seed oil and decarboxylation kinetics studies of cannabidiolic acid (CBDA). J. Pharm. Biomed. Anal. 149, 532–540, https://doi.org/10.1016/j.jpba.2017.11.044 (2018). Citti, C. et al. Untargeted rat brain metabolomics after oral administration of a single high dose of cannabidiol. J. Pharm. Biomed. Anal. 161, 1–11, https://doi.org/10.1016/j.jpba.2018.08.021 (2018). Palazzoli, F. et al. Development of a simple and sensitive liquid chromatography triple quadrupole mass spectrometry (LC–MS/MS) method for the determination of cannabidiol (CBD), Δ9-tetrahydrocannabinol (THC) and its metabolites in rat whole blood after oral administration of a single high dose of CBD. J. Pharm. Biomed. Anal. 150, 25–32, https://doi.org/10.1016/j.jpba.2017.11.054 (2018). Russo, E. B. Cannabidiol Claims and Misconceptions. Trends Pharmacol. Sci. 38, 198–201, https://doi.org/10.1016/j.tips.2016.12.004 (2017). Carlini, E. The good and the bad effects of (−) trans-delta-9-tetrahydrocannabinol (Δ9-THC) on humans. Toxicon 44, 461–467 (2004). Brierley, D. I., Samuels, J., Duncan, M., Whalley, B. J. & Williams, C. M. A cannabigerol-rich Cannabis sativa extract, devoid of Δ9-tetrahydrocannabinol, elicits hyperphagia in rats. Behav. Pharmacol. 28, 280–284, https://doi.org/10.1097/fbp.0000000000000285 (2017). Hill, T. D. M. et al. Cannabidivarin-rich cannabis extracts are anticonvulsant in mouse and rat via a CB1 receptor-independent mechanism. Br. J. Pharmacol. 170, 679–692, https://doi.org/10.1111/bph.12321 (2013). de Meijer, E. P. M. & Hammond, K. M. The inheritance of chemical phenotype in Cannabis sativa L. (V): regulation of the propyl-/pentyl cannabinoid ratio, completion of a genetic model. Euphytica 210, 291–307, https://doi.org/10.1007/s10681-016-1721-3 (2016). Citti, C., Braghiroli, D., Vandelli, M. A. & Cannazza, G. Pharmaceutical and biomedical analysis of cannabinoids: A critical review. J. Pharm. Biomed. Anal. 147, 565–579, https://doi.org/10.1016/j.jpba.2017.06.003 (2018). Wardle, M. C., Marcus, B. A. & de Wit, H. A Preliminary Investigation of Individual Differences in Subjective Responses to D-Amphetamine, Alcohol, and Delta-9-Tetrahydrocannabinol Using a Within-Subjects Randomized Trial. PLoS ONE 10, e0140501, https://doi.org/10.1371/journal.pone.0140501 (2015). Wachtel, S. R., ElSohly, M. A., Ross, S. A., Ambre, J. & de Wit, H. Comparison of the subjective effects of Delta(9)-tetrahydrocannabinol and marijuana in humans. Psychopharmacology (Berl.) 161, 331–339, https://doi.org/10.1007/s00213-002-1033-2 (2002). Bedi, G., Cooper, Z. D. & Haney, M.
Subjective, cognitive and cardiovascular dose-effect profile of nabilone and dronabinol in marijuana smokers. Addict. Biol. 18, 872–881, https://doi.org/10.1111/j.1369-1600.2011.00427.x (2013). Pertwee, R. The diverse CB1 and CB2 receptor pharmacology of three plant cannabinoids: Δ9‐tetrahydrocannabinol, cannabidiol and Δ9‐tetrahydrocannabivarin. Br. J. Pharmacol. 153, 199–215 (2008). Bundesinstitut für Arzneimittel und Medizinprodukte. Cannabis Flos. New text of the German Pharmacopoeia (2018). Linciano, P. et al. Aryl thiosemicarbazones for the treatment of trypanosomatidic infections. Eur. J. Med. Chem. 146, 423–434, https://doi.org/10.1016/j.ejmech.2018.01.043 (2018). Christodoulou, M. S. et al. Probing an Allosteric Pocket of CDK2 with Small Molecules. ChemMedChem 12, 33–41, https://doi.org/10.1002/cmdc.201600474 (2017). Pulis, A. P. et al. Asymmetric Synthesis of Tertiary Alcohols and Thiols via Nonstabilized Tertiary α-Oxy- and α-Thio-Substituted Organolithium Species. Angewandte Chemie International Edition 56, 10835–10839, https://doi.org/10.1002/anie.201706722 (2017). Rinaldi-Carmona, M. et al. Characterization of two cloned human CB1 cannabinoid receptor isoforms. J. Pharmacol. Exp. Ther. 278, 871–878 (1996). Munro, S., Thomas, K. L. & Abu-Shaar, M. Molecular characterization of a peripheral receptor for cannabinoids. Nature 365, 61–65, https://doi.org/10.1038/365061a0 (1993). Ponzoni, L. et al. The cytisine derivatives, CC4 and CC26, reduce nicotine-induced conditioned place preference in zebrafish by acting on heteromeric neuronal nicotinic acetylcholine receptors. Psychopharmacology (Berl.) 231, 4681–4693, https://doi.org/10.1007/s00213-014-3619-x (2014). Schrodinger Release 2014-3: Maestro, Schrodinger LLC (New York, NY (USA), 2014). Schrödinger Suite 2014-3: Protein Preparation Wizard; Epik, Schrödinger, LLC (New York, NY (USA), 2014). Schrodinger Release 2014-3: LigPrep, Schrodinger LLC (New York, NY (USA), 2014). Citti, C. et al. 7-Chloro-5-(furan-3-yl)-3-methyl-4H-benzo[e][1,2,4]thiadiazine 1,1-Dioxide as Positive Allosteric Modulator of α-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic Acid (AMPA) Receptor. The End of the Unsaturated-Inactive Paradigm? ACS Chem. Neurosci. 7, 149–160, https://doi.org/10.1021/acschemneuro.5b00257 (2016). Battisti, U. M. et al. 5-Arylbenzothiadiazine Type Compounds as Positive Allosteric Modulators of AMPA/Kainate Receptors. ACS Med. Chem. Lett. 3, 25–29, https://doi.org/10.1021/ml200184w (2012). Schrodinger Release 2014-3: Glide (Version6.8), Schrodinger LLC (New York, NY (USA), 2014). Metna‐Laurent, M., Mondésir, M., Grel, A., Vallée, M. & Piazza, P. V. Cannabinoid‐Induced Tetrad in Mice. Curr. Protoc. Neurosci. 80, 9.59. 51–59.59. 10 (2017). This work was supported by UNIHEMP research project "Use of iNdustrIal Hemp biomass for Energy and new biocheMicals Production" (ARS01_00668) funded by Fondo Europeo di Sviluppo Regionale (FESR) (within the PON R&I 2017–2020 – Axis 2 – Action II – OS 1.b). Grant decree UNIHEMP prot. n. 2016 of 27/07/2018; CUP B76C18000520005. We are also thankful to the Military Chemical Pharmaceutical Institute of Florence for providing the FM2 cannabis inflorescence. These authors contributed equally: Cinzia Citti and Pasquale Linciano. 
Author information and affiliations: Mediteknology spin-off company of the National Council of Research (CNR), Via Arnesano, 73100, Lecce, Italy: Cinzia Citti. Institute of Nanotechnology of the National Council of Research (CNR NANOTEC), Via Monteroni, 73100, Lecce, Italy: Cinzia Citti, Aldo Laganà, Giuseppe Gigli & Giuseppe Cannazza. Department of Life Sciences, University of Modena and Reggio Emilia, Via Campi 103, 41125, Modena, Italy: Cinzia Citti, Pasquale Linciano, Fabiana Russo, Flavio Forni, Maria Angela Vandelli & Giuseppe Cannazza. Department of Experimental Medicine, Division of Pharmacology, Università della Campania "L. Vanvitelli", Via Santa Maria di Costantinopoli 16, 80138, Naples, Italy: Livio Luongo, Monica Iannotta & Sabatino Maione. Department of Chemistry, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185, Rome, Italy: Aldo Laganà & Anna Laura Capriotti. Author contributions: G.C. developed and supervised the project, C.C. and P.L. conceived the experiments plan and drafted the manuscript, C.C. and F.R. carried out the UHPLC-HRMS analyses, P.L. performed the stereoselective syntheses and characterization, P.L. and G.G. performed the docking simulations, L.L., M.I. and S.M. performed the in vivo tetrad tests, A.L. and A.L.C. developed the semi-quantification method, F.F. and M.A.V. analyzed the binding assay data. All authors reviewed the manuscript. Correspondence to Giuseppe Cannazza. Citti, C., Linciano, P., Russo, F. et al. A novel phytocannabinoid isolated from Cannabis sativa L. with an in vivo cannabimimetic activity higher than Δ9-tetrahydrocannabinol: Δ9-Tetrahydrocannabiphorol. Sci Rep 9, 20335 (2019). https://doi.org/10.1038/s41598-019-56785-1
Genetic parameters of resistance to pasteurellosis using novel response traits in rabbits Merina Shrestha1, Hervé Garreau1, Elodie Balmisse2, Bertrand Bed'hom3, Ingrid David1, Edouard Guitton4, Emmanuelle Helloin5, Guillaume Lenoir6, Mickaël Maupin7, Raphaël Robert8, Frédéric Lantier5 & Mélanie Gunia ORCID: orcid.org/0000-0001-7527-75471 Pasteurellosis (Pasteurella infection) is one of the most common bacterial infections in rabbits on commercial farms and in laboratory facilities. Curative treatments using antibiotics are only partly efficient, with frequent relapses. Breeding rabbits for improved genetic resistance to pasteurellosis is a sustainable alternative approach. In this study, we infected 964 crossbred rabbits from six sire lines experimentally with Pasteurella multocida. After post-mortem examination and bacteriological analyses, abscess, bacteria, and resistance scores were derived for each rabbit based on the extent of lesions and bacterial dissemination in the body. This is the first study to use such an experimental design and response traits to measure resistance to pasteurellosis in a rabbit population. We investigated the genetic variation of these traits in order to identify potential selection criteria. We also estimated genetic correlations of resistance to pasteurellosis in the experimental population with traits that are under selection in the breeding populations (number of kits born alive and weaning weight). Heritability estimates for the novel response traits, abscess, bacteria, and resistance scores, ranged from 0.08 (± 0.05) to 0.16 (± 0.06). The resistance score showed very strong negative genetic correlation estimates with abscess (− 0.99 ± 0.05) and bacteria scores (− 0.98 ± 0.07). A very high positive genetic correlation of 0.99 ± 0.16 was estimated between abscess and bacteria scores. Estimates of genetic correlations of the resistance score with average daily gain traits for the first and second week after inoculation were 0.98 (± 0.06) and 0.70 (± 0.14), respectively. Estimates of genetic correlations of the disease-related traits with average daily gain pre-inoculation were favorable but with high standard errors. Estimates of genetic and phenotypic correlations of the disease-related traits with commercial selection traits were not significantly different from zero. Disease response traits are heritable and are highly correlated with each other, but do not show any significant genetic correlations with commercial selection traits. Thus, the prevalence of pasteurellosis could be decreased by selecting more resistant rabbits on any one of the disease response traits with a limited impact on the selection traits, which would allow implementation of a breeding program to improve resistance to pasteurellosis in rabbits. Pasteurella multocida, a gram-negative bacterium, affects various birds and mammals worldwide, including humans [1]. It is an opportunistic pathogen that normally resides as part of the normal microbiota in oral, nasopharyngeal, and upper respiratory tracts in mammals, birds, and other species [2]. Infection with P. multocida (pasteurellosis) causes a variety of clinical manifestations in various species, including fowl cholera in poultry, atrophic rhinitis in pigs, and hemorrhagic septicemia in cattle and buffalo [2]. In rabbits, rhinitis ('snuffles'), pneumonia, septicemia, abscesses, and mastitis are some of the clinical manifestations caused by different strains of P. multocida [3]. 
Pasteurellosis is one of the most common bacterial infections both on commercial farms and in laboratory rabbits [3]. It is a highly epizootic infection and an economically major disease in rabbit meat industry. Eady et al. [4] reported a mortality rate of ~ 50% in a grower rabbit population that was diagnosed with bacterial infection (predominantly Staphylococcus aureus and P. multocida). Lopez et al. [5] reported pasteurellosis as the first cause of culling of young rabbit does. Vaccinations and curative treatments using antibiotics against pasteurellosis are only partly efficient, and relapses of pasteurellosis are frequent [6]. Thus, prophylactic measures are considered as economically efficient. In the rabbit meat breeding industry, antibiotics are used as a prophylactic measure, together with proper ventilation and strict environmental hygiene in housing buildings, to prevent and control the spread of infection. However, the use of antibiotics has several disadvantages. One risk is the development of antibiotic-resistant bacteria, which could spread to other species, including humans. In particular, the development of antibiotic-resistant bacteria has been observed in intensive farms [6]. Ferreira et al. [7] reported that 22 of 46 strains of P. multocida isolated from rabbits in Brazil showed resistance to at least one type of antibiotic. Wilson and Ho [2] identified multiple antibiotic-resistant genes in different strains of Pasteurella species. Another adverse effect of the use of antibiotics is dysbiosis, which is an imbalance of the normal microbiota of an organism that can occur when antibiotics are added to the feed [8]. The use of antibiotics can also mask the disease, thus preventing any selective advantage of natural resistance against pasteurellosis in animals [9]. Finally, the use of antibiotics is not well accepted by consumers [10] and various policies have been implemented to reduce exposure of humans and animals to antibiotics. Thus, the development of a sustainable alternative approach is desirable to reduce the use of antibiotics, while maintaining a low prevalence of pasteurellosis. One approach could be to breed rabbits for improved genetic resistance to pasteurellosis, which would have a lower probability of developing the disease. Moreover, the presence of resistant rabbits in a population will decrease disease transmission, thus reducing the risk of infection for susceptible individuals, imparting a certain level of herd immunity. Previous studies have suggested the possibility of selection for resistance to pasteurellosis in rabbits, since low to moderate heritability estimates ranging from 0.03 (± 0.01) to 0.28 (± 0.16) were found [11,12,13]. Most of these studies were based on observable clinical signs of infection from field data. Such field studies have three drawbacks that can lead to inaccurate diagnosis of pasteurellosis: (1) not all infected rabbits show visual signs of infection but remain carriers of the bacteria; (2) some clinical signs such as rhinitis and pneumonia in rabbits are also observed after infection with other bacteria such as Staphylococcus or Streptococcus [3]; and (3) due to an uneven exposure to infection in the field, a rabbit may not show its true response potential. Such inaccurate diagnosis of the health status can reduce estimates of heritability [14]. These drawbacks can be avoided by analyzing response traits in a population that is experimentally exposed to the pathogen. 
The aim of our study was to identify criteria to genetically select rabbits for increased resistance to pasteurellosis. We estimated the genetic parameters for pasteurellosis resistance traits in a population that was experimentally infected with a strain of P. multocida. We also estimated the genetic correlations of resistance traits with the production traits that are being selected for in commercial rabbits. We used two datasets. The first dataset contained information on 11,971 purebred rabbits born in 2016 and 2017 from six maternal lines: two lines from each of the breeding companies Eurolap, Hycole, and Hypharm (Table 1). These lines have been operated as closed populations since their establishment between 1980 and 1985 (except for one line established in 1997). In these lines, maternal line traits, such as prolificacy, fertility, functional longevity, number of teats, direct and maternal effects on weaning weight, and homogeneity of weight at birth are continuously improved. For this study, we considered only two major traits that are under selection across all lines, i.e. number of kits born alive (NBA) and weaning weight (WW). The second dataset included data from an experimental population of crossbred rabbits that were submitted to an experimental infection trial with P. multocida. The experimental rabbit population included 1030 crossbred rabbits. These crossbreds were progenies from six sire lines (two lines from each of the breeding companies Eurolap, Hycole, and Hypharm) bred to one dam line (INRA 1777) (Table 2) that has been selected for traits such as number of kits born alive per litter, and direct and maternal effects on weaning weight. Hence, this experimental population was representative of the commercial breeding dams used by French rabbit breeders. Table 1 Description of the commercial population of 11,971 purebred rabbits Table 2 Description of the experimental population of 1030 crossbred rabbits The experimental population was generated in 2016 at the Pôle Expérimental Cunicole de Toulouse (INRAE PECTOUL) experimental unit by artificial insemination across five reproduction batches, at intervals of 1–3 months. The experimental population (1030 crossbred rabbits) was produced by mating 65 sires with 112 dams. The same sires and dams were used for the whole experiment. Breeding companies and INRAE provided pedigree information for the sires and dams, respectively. Each sire line contributed on average 171.7 progenies (Table 2), while each sire contributed on average 15.6 crossbred progenies (from 2 to 23), and each dam on average 9.2 crossbred progenies (from 1 to 21). The crossbred rabbits were delivered in a span of 3 consecutive days for the first batch, and of 4 consecutive days for the other batches. At weaning, kits without visible disease syndromes were pre-sorted for the experiment. Then, rabbits were chosen to achieve a balanced distribution of gender and of paternal and maternal origins. The first batch included 110 rabbits and the remaining four batches included 230 rabbits each. This population was raised in a new building with a high level of biosecurity and was monitored for the usual rabbit pathogens. The 65 sires that produced the experimental population were also used to produce the purebred progenies in the selection population of the breeding companies, along with other related sires. The selection population (11,971 purebred rabbits) was obtained by mating 116 sires with 1477 dams (Table 1).
Each sire contributed on average 75.6 purebred progenies (from 1 to 290), and each dam on average 6.3 purebred progenies (from 1 to 28). The pedigree contained 20,206 rabbits across seven generations and phenotypic information was available for 12,951 of these rabbits (11,971 purebreds and 980 crossbreds) in the data file. Of the 11,971 purebred rabbits from the selection population, 1705 had records on both NBA and WW, 548 on NBA only, and 9718 on WW only. Inoculation of the crossbred experimental population In total, 50 rabbits (10 rabbits per batch) were used as controls and 980 rabbits were inoculated with P. multocida. The 1030 rabbits were transported in cages (5 rabbits per transport cage) to the Plateforme d'Infectiologie Expérimentale (INRAE PFIE) at 36 days of age, 1 day after weaning. At PFIE, they were all housed in cages identical to each other (with the same 5 rabbits per cage as during transport) in two separate rooms. Control rabbits were housed in the same rooms as the inoculated rabbits. Before inoculation, 16 of the 980 rabbits died or were euthanized due to digestive disorders, mostly epizootic rabbit enteropathy, which is a potentially fatal gastrointestinal disease with a currently unknown etiology but with a strong bacterial hypothesis [15]. At PFIE, the experimental population was left to adjust to the new environment for a week and inoculations were performed at 42 days of age by injecting the 964 rabbits subcutaneously between the shoulder blades with a standardized dose of 8000 bacteria in 0.1 mL saline solution, of the pyrogenic strain CIRMBP-0884 of P. multocida from a stock that is kept frozen at − 80 °C and checked for concentration at each inoculation. This strain was chosen based on a previous study [16] in which 174 Pasteurella strains isolated from French rabbits were characterized phenotypically and with molecular parameters. All the strains were genotyped by MLVA [16], and for 20 of the 174 strains that were selected to represent their diversity, pathogenicity was tested in rabbits by using a standard dose. The median lethal dose was not determined. Based on this information, the CIRMBP-0884 strain of P. multocida (also known as the LVT62 strain) [17], conserved at the Centre International de Ressources Microbiennes—Bactéries Pathogènes (CIRM-BP), was found to be a virulent strain that belongs to one of the major groups of Pasteurella field isolates. Resistance to pasteurellosis in rabbits should take resistance to different strains of P. multocida into account. Although the nasal route is the major natural route of penetration for Pasteurella [3], we used subcutaneous injection because Helloin et al. [16] found that it showed better reproducibility and quantifiable infection compared to the intranasal route. Health status of the rabbits was monitored daily for 14 days post-inoculation. Critically ill rabbits were euthanized for welfare reasons. All the other rabbits were euthanized at 14 days post-inoculation and their bodies were examined for signs of pasteurellosis. A brief timeline of this experiment is shown in Fig. 1. Of the 964 inoculated crossbred rabbits, 844 remained alive until the end of the experiment and were euthanized on day 14 post-inoculation. Among the 120 rabbits that died or were euthanized prior to the end of the experiment, 109 rabbits were confirmed to have died of pasteurellosis, thus the data on these were included in the analysis. In total, 953 crossbred rabbits were used for analysis. 
Overview of the experimental infection trials of crossbred rabbits, with age at body weight measurement, inoculation, and euthanasia. ADG-BW: average daily weight gain calculated from birth to weaning pre-inoculation; ADG-PI1: average daily weight gain calculated during first week post-inoculation; ADG-PI2: average daily weight gain calculated during second week post-inoculation. The red line refers to the stage post-inoculation, from the first day of inoculation to the last day of the experiment when rabbits were euthanized Both commercial selection traits, WW and NBA, were recorded in the six purebred populations by the breeding companies. Rabbits were weaned between 27 and 36 days of age, depending on the line. In the crossbred experimental population, the following disease-related and performance traits were recorded: During post-mortem examination, the same two scientists recorded the occurrence of abscesses during the experiment, by using a scoring grid. Presence or absence of abscesses was recorded and scored from 0 to 4, on different parts of the body: inoculation site, head, neck, chest, back, forelegs, hind legs, ribcages, abdomen, rump, thoracic cavity, peritoneal cavity, pleura, heart, lungs, liver, spleen, kidney, stomach, and digestive tract. Furthermore, a single score from 0 to 4, was assigned to each rabbit depending of the extent of abscess dissemination on different parts of the body. Details on the scores are in Table 3. Table 3 Description of scores for disease-related traits: abscess and bacteria Post-euthanasia, tissue samples from the spleen, lung, liver, and abscesses (from any area) were collected, rapidly frozen and kept at − 80 °C (2 to 3 months) until they were transferred to the Laboratoire de Touraine (Tours, France) and then homogenized individually for culture to identify and quantify P. multocida. Liver tissue was sampled only from rabbits that died or were euthanized prior to the end of the experiment because rabbits that die at an early stage may not show any abscess and sampling liver tissue increases the chance of obtaining bacterial cultures of P. multocida. In the laboratory, the cultures from lung, spleen, and liver samples were scored for the presence of P. multocida by enumeration of bacteria (viable plate count), while the culture obtained from abscess samples was only checked to determine if it belonged to the P. multocida species. The bacterial counts for each tissue were then rescored as 0 (no growth), 1 (numerable colonies), and 2 (innumerable colonies), which were used to generate a final score from 0 to 4 for each rabbit, as described in Table 3. For each rabbit, a score for resistance to pasteurellosis from 0 to 4 was derived by combining the scores for extent of abscesses (0 to 4), extent of bacteria growth from the tissue samples (0 to 4), and the status (dead/alive) of the rabbits at the end of the experiment. The description of the scores is in Table 4. The distribution of rabbits across scores of disease-related traits is in Fig. 2. Table 4 Description of scores for the disease-related trait resistance Percentage of rabbits with different scores for each disease-related response trait Growth traits Body weight was measured at birth, at weaning, 1 day before inoculation, and on days 7 and 14 post-inoculation. 
Average daily weight gain [ADG (g/days)] was calculated by dividing the difference in body weight (g) between time points by the number of days in between, for three time periods: birth to weaning (ADG-BW), first week post-inoculation (ADG-PI1), and second week post-inoculation (ADG-PI2). Disease-related traits (abscess, bacteria, resistance), growth traits (ADG-BW, ADG-PI1, ADG-PI2) and commercial selection traits (NBA, WW) were analyzed using the REML method with ASReml 3.0 [18]. Each trait was analyzed using specific models, which were all sub-models of the following "global" linear mixed animal model: $$\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \mathbf{V}\mathbf{m} + \mathbf{W}\mathbf{l} + \mathbf{O}\mathbf{p} + \mathbf{S}\mathbf{b} + \boldsymbol{\varepsilon},$$ where $\mathbf{y}$ is a vector of phenotype measures (one of the 7 traits); $\boldsymbol{\beta}$ is a vector of fixed effects; $\mathbf{u}$ is a vector of animal genetic random effects with $\mathbf{u} \sim N(0,\mathbf{A}\sigma_{u}^{2})$, where $\mathbf{A}$ is the pedigree-based relationship matrix, with line included as a genetic group; $\mathbf{m}$ is a vector of maternal genetic random effects with $\mathbf{m} \sim N(0,\mathbf{A}\sigma_{m}^{2})$; $\mathbf{l}$ is a vector of litter random effects with $\mathbf{l} \sim N(0,\mathbf{I}_{l}\sigma_{l}^{2})$, where $\mathbf{I}$ is the identity matrix of appropriate size; $\mathbf{p}$ is the vector of permanent environment random effects with $\mathbf{p} \sim N(0,\mathbf{I}_{p}\sigma_{p}^{2})$; $\mathbf{b}$ is a vector of the combined random effects of batch, room, and cage (BRC) with $\mathbf{b} \sim N(0,\mathbf{I}_{b}\sigma_{b}^{2})$; $\mathbf{X}$ is a known design matrix for the fixed effects; $\mathbf{Z}$, $\mathbf{V}$, $\mathbf{W}$, $\mathbf{O}$ and $\mathbf{S}$ are known design matrices for the random effects, i.e. animal genetic, maternal genetic, litter, permanent environment, and BRC, respectively; and $\boldsymbol{\varepsilon}$ is a vector of residual errors with $\boldsymbol{\varepsilon} \sim N(0,\mathbf{I}_{e}\sigma_{e}^{2})$. For the experimental population, the fixed effects tested for each trait were: gender (2 levels), gestation length (4 levels, i.e. gestation lengths of 30, 31, 32, and 33 days), batch (5 levels, i.e. five birth months within a year), parity of the dam (6 levels), and signs of ERE/digestive disorders (3 levels). The scores for ERE/digestive disorders were defined as follows: "0" for rabbits without signs of disorders, "1" for rabbits that showed signs of digestive disorders not confirmed as signs of ERE, and "2" for rabbits with signs of ERE. The random environmental effects in the model were BRC and litter, with 196 levels (5 rabbits per level) and 305 levels, respectively; the latter will be referred to as the non-genetic common environment shared by rabbits of the same litter. On average, each litter included 3.21 rabbits (from 1 to 10). Fixed effects were considered significant and included in the final model if the P-value was less than 0.05. To test the significance of the random effects, the log likelihood values obtained from ASReml were used to perform a likelihood ratio test in the statistical software R [19], and random effects were included if the resulting P-value was less than 0.05. The final models used to estimate heritability and correlations contained only significant fixed and random effects.
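As an illustration of the model-selection step just described, the short sketch below computes a likelihood ratio test for a single random effect from the log-likelihoods of the full and reduced models (as would be reported by ASReml). It is a minimal sketch: the log-likelihood values are hypothetical, and a plain chi-squared reference distribution with one degree of freedom is assumed.

```python
from scipy.stats import chi2

def lrt_random_effect(loglik_full, loglik_reduced, df=1):
    """Likelihood ratio test comparing models with and without one random effect.

    loglik_full / loglik_reduced: REML log-likelihoods (e.g. as reported by ASReml).
    A chi-squared distribution with `df` degrees of freedom is assumed; because a
    variance component is tested on the boundary of its parameter space, halving
    the p-value is a common refinement (not applied here).
    """
    statistic = 2.0 * (loglik_full - loglik_reduced)
    p_value = chi2.sf(statistic, df)
    return statistic, p_value

# Hypothetical log-likelihoods for models with and without the litter effect
stat, p = lrt_random_effect(loglik_full=-1520.4, loglik_reduced=-1524.9)
print(f"LRT statistic = {stat:.2f}, p = {p:.4f}")  # keep the random effect if p < 0.05
```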
For the selection population, the fixed and random effects that are used routinely by the breeding companies were included in the models. The fixed effects for WW were a combined effect of farm-year-month of birth (24 levels), number of kits born alive (12 levels), litter size at weaning (10 levels), and parity of the dam (5 levels), and for NBA, the combined effects of farm-year-season of kitting (28 levels) and the parity-physiological status (lactating or not at insemination, 9 levels). The random environmental effects included in the model were a permanent environment effect (2253 levels) for NBA, which accounts for repeated measurements of NBA on the same doe, and a litter effect (1483 levels) for WW. On average, NBA was recorded on 2.9 litters per doe (range from 1 to 12), and WW was recorded on 4.7 rabbits in each litter (range from 1 to 16). Once the fixed and random effects were selected for each trait and population, heritabilities and genetic correlations between commercial selection traits and growth or disease-related traits were estimated with a linear model using three-trait analyses, including WW and NBA measured on the selection population, and a growth or disease-related trait measured on the experimental population. Due to convergence issues, correlations between growth and disease-related traits measured in the experimental population were estimated using two-trait analyses. The disease-related traits abscess, bacteria, and resistance were also analyzed as binary traits [0/1] using a threshold model, because classification of the disease-related traits into different categories may not be fully correct, which could lead to biased estimates of heritability. For abscess (and bacteria) as binary traits, 0 was assigned to rabbits that did not show any abscesses (or bacteria), and 1 to rabbits that showed any signs of abscesses (or bacteria), from low to severe. For resistance as a binary trait, 1 was assigned to rabbits that survived until the end of the experiment and did not show any abscess or bacteria, and 0 was assigned to all other rabbits. In the threshold model, the 0/1 phenotype of a rabbit is linked to the explanatory variables in model (1) by a probit link function. Fixed effects were considered significant and included in the model if their P-value was less than 0.05. Random effects that were significant (P-value < 0.05) in the linear mixed animal model for the corresponding trait were fitted as random effects in the final threshold models for that trait. The threshold model gives estimates of the heritability on the underlying scale $\left(h_{und}^{2}\right)$. For comparison purposes, these estimates were transformed to the observed scale [20] using $h_{obs}^{2} = h_{und}^{2}\,\frac{z^{2}}{p\left(1 - p\right)}$, where $p$ is the frequency of 1s for the binary trait and $z$ is the ordinate (height) of a standard normal curve at the threshold that corresponds to $p$. The standard error of the heritability estimate on the underlying scale was also transformed to the observed scale [20].
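The following short sketch implements the transformation just described from the underlying (liability) scale to the observed 0/1 scale, using the standard normal density at the threshold corresponding to the observed prevalence; the numerical values in the example are hypothetical.

```python
from scipy.stats import norm

def h2_underlying_to_observed(h2_und, p):
    """Transform a heritability estimate from the underlying (liability) scale
    to the observed 0/1 scale: h2_obs = h2_und * z^2 / (p * (1 - p)),
    where p is the frequency of 1s and z is the ordinate (height) of the
    standard normal curve at the threshold corresponding to p."""
    threshold = norm.ppf(1.0 - p)   # threshold leaving a fraction p of the distribution above it
    z = norm.pdf(threshold)         # height of the standard normal curve at that threshold
    return h2_und * z**2 / (p * (1.0 - p))

# Hypothetical example: heritability of 0.20 on the underlying scale, 7.5% of rabbits scored as 1
print(round(h2_underlying_to_observed(0.20, 0.075), 3))
```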
Of the 953 experimental rabbits used in the analysis, 72 showed no signs of abscess, 79 showed no bacterial growth in tissue samples, and 71 were resistant to pasteurellosis (no abscess, no bacterial growth, and alive until the end of the experiment). The mean of each trait with its standard deviation (SD) is in Table 5. Although it is preferable to include all the effects used to randomize animals, we included only the fixed effects that were significant at the 5% level in the final models (Table 6). We observed no significant effect of gender for any of the traits, and ignoring the effect of gender had no impact on estimates of variances and covariances. A significant effect of batch was observed for all the disease-related and growth traits except ADG-PI2. Gestation length and the dam's parity showed significant effects on ADG-BW. There was no significant effect of the dam's parity on the disease-related traits, probably because of the high health status of these dams, which were not exposed to P. multocida. Table 5 Number of rabbits, means and standard deviations (in parentheses) for disease-related, growth, and commercial selection traits Table 6 Significant fixed effects for disease-related, growth, and commercial selection traits ERE/digestive disorders showed a significant effect for all disease-related and growth traits, except for the binary disease-related traits and ADG-BW (Table 6). In comparison to rabbits without an ERE/digestive disorder (score 0), rabbits that were diagnosed with an ERE/digestive disorder (scores 1 and 2) showed more severe signs of abscess (+ 0.69 ± 0.17 versus + 0.92 ± 0.17), more signs of bacterial growth (+ 1.13 ± 0.17 versus + 1.02 ± 0.16), and less resistance to pasteurellosis (− 1.04 ± 0.15 versus − 1.30 ± 0.15). This shows a significant interaction between pasteurellosis and ERE/digestive disorders, likely because ERE and digestive disorders are aggravating factors for pasteurellosis. For growth traits, compared to rabbits without an ERE/digestive disorder (score 0), rabbits diagnosed with an ERE/digestive disorder (scores 1 and 2) had lower ADG-PI1 (− 8.34 ± 3.42 versus − 9.67 ± 3.39 g/days) and ADG-PI2 (− 7.52 ± 4.37 versus − 9.89 ± 4.50 g/days). Heritabilities and correlations Estimates of common litter and permanent environment effects, and of heritabilities, are in Tables 7 and 8, respectively. For disease-related traits, heritability estimates ranged from 0.07 (± 0.04) to 0.16 (± 0.06) when based on the linear model, and from 0.16 (± 0.11) to 0.20 (± 0.08) when based on the threshold model, on the underlying scale. For growth traits, heritability estimates ranged from 0.11 (± 0.10) to 0.29 (± 0.07). For the commercial selection traits, heritability estimates were 0.33 (± 0.06) for WW and 0.05 (± 0.02) for NBA. Litter showed a significant effect for abscess on the linear scale, and for ADG-BW and WW. The permanent environment effect was significant for NBA. BRC and maternal genetic effects were not significant for any trait. Table 7 Estimates of litter effects and of heritabilities (standard errors in parentheses) for the disease-related traits analyzed with a threshold model Table 8 Estimates of litter effect, permanent environment effect, and of heritabilities (standard errors in parentheses) for the traits analyzed with a linear mixed model Estimates of the correlations between traits are in Table 9. Disease-related traits were analyzed on the linear scale to estimate these correlations. Disease-related traits displayed high estimated genetic and phenotypic correlations with each other. Abscess and bacteria showed a genetic correlation of 0.99 (± 0.16) and a phenotypic correlation of 0.58 (± 0.02). Resistance showed strong negative genetic correlations with abscess (− 0.99 ± 0.05) and bacteria (− 0.98 ± 0.07).
In comparison to the genetic correlations, the phenotypic correlations of resistance with abscess (− 0.80 ± 0.01) and bacteria (− 0.84 ± 0.01) were slightly lower. Estimates of genetic correlations of disease-related traits with ADG-BW had large standard errors and should be interpreted with caution. The corresponding phenotypic correlations ranged from − 0.13 (± 0.03) to 0.12 (± 0.03). Estimates of genetic correlations of disease-related traits with growth traits for the first week (ADG-PI1) and second week (ADG-PI2) after inoculation were strong, with absolute values ranging from 0.70 (± 0.14) to 0.98 (± 0.09). The corresponding phenotypic correlations were moderately high, between 0.50 (± 0.02) and 0.68 (± 0.01) (absolute value). Estimates of genetic correlations among growth traits were inconclusive because of high standard errors, except a moderate genetic correlation of 0.52 ± 0.16 between ADG-PI1 and ADG-PI2. The phenotypic correlations between growth traits were low, except between ADG-PI1 and ADG-PI2 (0.32 ± 0.03). Estimates of genetic and phenotypic correlations of disease-related traits and growth traits with the commercial selection traits were not significantly different from 0. Table 9 Estimates of genetic (above diagonal) and phenotypic (below diagonal) correlations (standard errors in parentheses) between disease-related, growth and commercial selection traitsa We investigated the potential of novel response traits as selection criteria for resistance to pasteurellosis, which to our knowledge, is the first study to investigate such diagnostic response traits. The systematic and detailed response trait measures on infected rabbits allowed us to detect genetic variation of resistance to pasteurellosis. The low to moderate estimates of genetic variance for these novel response traits, along with non-significant genetic correlations with the commercial selection traits, suggest that these response traits could be used as selection criteria for resistance to pasteurellosis. Effects included in the model We investigated the contribution of random components such as maternal genetics, litter, and BRC to the phenotypic variance for analysis of disease-related traits on a linear scale and for growth traits. BRC and maternal genetic effects showed no significant contributions to the phenotypic variance. A non-significant contribution of the maternal genetic effects for a disease trait in rabbits has already been described by Eady et al. [13]. However, the same authors observed significant maternal genetic effects for a weight-related trait. The ratio of litter variance to phenotypic variance was only significant for abscess, ADG-BW, and WW. In the literature, similar estimates for common litter environment effects compared to our result for abscess (0.08 ± 0.04) were reported for a respiratory syndrome (0.057 ± 0.002) [12] and bacterial infections (0.046 ± 0.006 to 0.209 ± 0.017) [4, 13]. For ADG-BW, our estimate for this ratio (0.47 ± 0.05) was comparable to estimates obtained in previous studies (0.31 [21], and 0.40 [22]), in which litter effects were also investigated at early age intervals. However, low estimates of litter effects have also been reported for growth traits (0.11 [23], 0.22 [24], and 0.16 [25]). Such small litter effects have been observed for ADG traits that cover a later period of life in rabbits [24]. 
In addition to common environmental effects, litter effects include maternal environmental effects, which could be related to the milk that the mother passes to her kits [24]; this suggests the importance of the suckling mother in weight gain. The magnitude of the litter effect reflects its importance from birth until weaning, and its inclusion in the model is expected to decrease the probability of estimating inflated heritabilities for growth traits. Heritability estimates for disease-related traits Few studies have reported heritability estimates related to pasteurellosis resistance specifically in rabbits, but genetic parameters for natural bacterial infections have been estimated in rabbits [4, 12, 13]. Pasteurellosis is one of the most common bacterial diseases in rabbits [3, 13] and usually manifests itself as a respiratory disease [3]. Hence, the estimates of heritability obtained here can be compared to estimates from these studies [4, 12, 13], as respiratory syndromes have been taken into account to define disease resistance in these studies. Bacterial disease resistance assessed under field conditions was previously investigated in two commercial meat rabbit populations [13]. Rabbits were scored for infection based on clinical signs such as, but not limited to, respiratory problems, snuffles, and abscesses. The highest heritability estimates were 0.042 (± 0.012) based on the linear model (treating the trait as continuous), and 0.379 (± 0.106) on the underlying scale when using the threshold model. Estimates of genetic variability for infectious diseases in French rabbits included respiratory syndromes, among many other traits [12]. Respiratory syndromes were observed for 4% of the population, with a heritability estimate of 0.041 (± 0.004). Our heritability estimates for the response traits were higher than observed in the above-mentioned studies [4, 12, 13], which could be due to several reasons. First, in our study, the rabbit population was experimentally exposed to P. multocida, whereas in the previous studies field data were used, for which uneven exposure may lead to a low incidence of disease. Such a scenario was also observed in a study on ERE, in which heritability estimates of the disease decreased from 0.21 (± 0.16) for an experimental rabbit population [26], to 0.08 (± 0.02) in field data [27]. Second, the diagnostic approach in our study provided the "true Pasteurella infection" status of each rabbit, thus our estimates better reflect true genetic differences in resistance in the population. The healthy rabbits did not show any internal signs of pasteurellosis, which might not be the case for previous studies, as they took only external visible signs into account. Our heritability estimates of pasteurellosis resistance traits are similar to those reported for lung lesions in Spanish rabbit lines [from 0.07 (± 0.03) to 0.18 (± 0.09)], which could be due to the use of a similar diagnostic approach in both studies. In [11], fresh lung lobes were scored based on the extent of the lesions in the lungs of euthanized rabbits under natural Pasteurella infection. Many infectious diseases, such as rabbit haemorrhagic disease, ERE, and myxomatosis exist in rabbit populations [3, 28]. However, genetic parameters for resistance to these diseases are scarce in the literature. Only Baselga et al. [11] reported estimates of genetic parameters for pasteurellosis resistance in rabbits.
Our heritability estimates follow a similar trend as those for ERE/digestive disorders in rabbits, which ranged from 0.05 (± 0.05) to 0.21 (± 0.16) in two studies [26, 27]. Based on these reports, this disease trait was included in the breeding program of the Hypharm breeder to reduce the prevalence of ERE. Furthermore, Garreau et al. [29] investigated the status of susceptibility to digestive disorders in experimental progenies from a rabbit population selected for resistance to digestive disorders and observed a significant reduction in mortality between resistant and susceptible experimental rabbits. Another study [30] on rabbits from the same line showed a reduction in clinical signs of disease by 0.12 genetic standard deviation per year between 2008 and 2016. This result suggests the potential of reducing the prevalence of pasteurellosis in rabbits by introducing pasteurellosis resistance traits in a breeding program. We also estimated the genetic variance of the response traits in a binary form [0/1]. Incidences for the binary traits were outside the 10 to 90% range (Table 5), which might dissociate the assumption of independence between the mean and variance when estimating variance components on the linear scale [31]. Thus, we applied a threshold probit link function for the analysis of the binary traits. To simplify comparisons, heritability estimates on the underlying scale were transformed to the observed scale. Resulting estimates of heritability on the observed scale ranged from 0.05 (± 0.02) to 0.06 (± 0.02) and were much lower than those obtained from the linear model (Table 8). Greater genetic variation on the linear scale shows the advantage of taking the severity of pasteurellosis into account when recording response traits. Heritability estimates for commercial selection traits Estimates of variance components for NBA and WW were consistent with data in the literature [32,33,34], which ranged from 0.04 to 0.16 for NBA and from 0.26 to 0.29 for WW. Heritability estimates for growth traits Before inoculation Heritability estimates obtained for ADG before inoculation (ADG-BW) were low, which could be explained by the use of data recorded at an early age interval. For rabbits at an early age, the effect of the maternal environment component is larger, which might reduce the effect of the rabbit itself [24]. We also observed a large significant effect of litter (0.47 ± 0.05) for ADG-BW. Such a pattern showing an increase in direct animal genetic effects (thus increasing heritability estimates) for ADG with increasing age of the animal was previously reported: low to moderate heritability estimates for ADG between 5/6 and 10 weeks of age (0.25 [35], 0.21 ± 0.01 [36], 0.23 ± 0.02 [25], 0.204 ± 0.008 [37], and 0.15-0.17 [38]) but higher estimates of heritability for ADG after 15 weeks of age (0.48 [23], and 0.29 [39]). After inoculation The heritability estimate for ADG during the first week post-inoculation (ADG-PI1) was moderate (0.29 ± 0.07) and similar to that for ADG during the second week post-inoculation (ADG-PI2) (0.20 ± 0.06). Most rabbits in our study showed a strong decrease in daily weight gain during the first week post-inoculation [from − 44.7 to 75.3; mean: 2.9 g/d (SD = 18.2)]. The reduced body weight gain in the first week post-inoculation appears to result directly from pasteurellosis infection. However, rabbits coped better during the second week post-inoculation, i.e. the mean of ADG-PI2 increased to 19.8 g/days (SD = 17.89) (from − 42.3 to 74.3). 
Thus, the heritability estimate of 0.29 for ADG-PI1 suggests that it could be advantageous to select rabbits against pasteurellosis resistance at an early phase of infection during which most of the genetic variability in ADG is observed. Heritability estimates of the traits obtained from both univariate analyses and bivariate analyses (results not shown) were similar, which indicates that our models were robust. Because we used crossbred populations, heritabilities might be slightly overestimated. The effect of heterosis was not analyzed because of the lack of phenotypic information for the sires and dams. Correlation estimates Among disease-related traits Disease-related traits showed strong estimates of genetic correlations but comparatively lower phenotypic correlations between each other. All the rabbits with a severe abscess score did not always have a severe bacteria score or vice versa. Since the resistance score is a composite trait based on the abscess and bacteria scores, strong relationships were expected at both the genetic and phenotypic levels. The strong genetic correlation estimates suggest that rabbits that are genetically more susceptible to harbor many bacteria will also show severe abscesses. This indicates that any of the response traits evaluated here could be an effective indicator to improve disease resistance against pasteurellosis. Between disease-related traits and growth traits Estimates of genetic correlations between growth pre-inoculation (ADG-BW) and disease-related traits (abscess and bacteria) were negative and favorable but had large standard errors (Table 9). The estimate of the genetic correlation between ADG-BW and resistance was positive and favorable, which could result from the definition of the resistance trait. Thus, favorable genetic correlations suggest that breeding programs can combine selection for resistance to pasteurellosis and selection for growth. Previous studies in rabbits with natural infection reported low to moderate negative genetic correlations between disease-related traits and growth traits [12, 13, 27]. Estimates of phenotypic correlations between ADG-BW and disease-related traits were weaker than the genetic correlations (Table 9). Post-inoculation, we observed strong negative genetic correlation estimates between disease-related traits (abscess and bacteria) and growth traits (ADG-PI1 and ADG-PI2) and strong positive genetic correlations between resistance and growth traits. These relationships suggest that rabbits that are genetically resistant to pasteurellosis have a better ADG post-inoculation than less resistant rabbits. Thus, ADG-PI1, measured over the first week post-inoculation could be a good indicator trait for pasteurellosis resistance since recording ADG-PI1 after inoculation is easier, cheaper, and faster than recording disease-related traits (abscess, bacteria, and resistance). The standard bivariate analyses that we used did not take any underlying simultaneous or recursive causal effects that may exist between traits into account [40]. A better understanding of such underlying causal effects between traits would support better decision making in breeding programs [41]. Various structural equation models have been suggested that investigate causal effects of genetic correlations [39, 40] but they have rarely been applied to infer genetic causal relationships between traits [41, 42] and should be explored in the future. 
Between disease-related, growth, and commercial selection traits Moderate and favorable genetic correlations between resistance to infectious diseases and WW (− 0.34 ± 0.12) were reported in [34]. In this study [34] and in [11], estimates of genetic correlations of resistance to infectious diseases or respiratory diseases with production traits such as NBA or live weight at market age were not significantly different from zero. Our results suggest a favorable genetic correlation between resistance and WW and an unfavorable genetic correlation between resistance and NBA (Table 9). However, the number of inoculated rabbits was not large enough to allow accurate estimation of genetic correlations (high standard errors) and thus to draw definite conclusions. Implications for breeding programs Our results show that the host genetics contributes to differences in resistance to pasteurellosis in rabbits. The remaining question is how to integrate these useful findings in a breeding scheme. Various options are possible, depending on the financial resources available. The easiest but probably less efficient option would be to keep selecting on weaning weight or growth rate before weaning to get an indirect correlated response on resistance to pasteurellosis. However, the high standard errors on the genetic correlations between resistance and ADG-BW or WW do not provide high confidence on the magnitude of this correlated response. A more efficient (but more costly) option would be to repeat this experimental infection trial with P. multocida on sibs of the selection candidates in each generation. Growth rate in the first week after inoculation (ADG-PI1) may be the most promising trait to select on because it had the highest heritability (0.29 ± 0.07) among the traits measured post-inoculation, and it had a very high estimated genetic correlation with resistance (0.98 ± 0.06). This trait is also easier to measure than the other response traits because no laboratory analyses or post-mortem examinations are required, and the duration of the challenge can be reduced to 1 week. However, performing repeated disease challenges raises ethical issues, and an adjustment of the challenge may be needed to decrease the intensity of the clinical signs, by reducing the inoculation dose or by changing the P. multocida strain. Alternatively, future research could focus on the detection of blood markers or on the development of in vitro immune response tests that could predict response to pasteurellosis without experimental challenge. Another approach that should be explored is the use of genetic markers. DNA samples from the rabbits used in this study and of their parents have been preserved and will be used for genotyping. If a limited number of genes or quantitative trait loci control resistance to pasteurellosis, gene or marker-assisted selection [43] could be proposed. If resistance is a polygenic trait, genomic prediction [43] should be considered. The main drawback would be the cost of genotyping and performing regular experimental challenges on sibs of the selection candidates to ensure sufficiently high prediction accuracy. To date, genomic prediction has not been implemented in rabbits, mainly for reasons summarized in [44, 45], i.e. the cost of genotyping is high compared to the individual animal value, rabbit breeding programs have a pyramidal structure, with selection in pure lines for performance expressed in crossbred animals, rabbits have a short generation interval, and several logistical issues. 
Thus, further economic studies are needed to compare strategies that could be used to incorporate selection for resistance to pasteurellosis in the breeding schemes. Genetic parameters for novel resistance traits to experimental infection P. multocida were evaluated. Results provide evidence for a genetic basis for these response traits in this experimental crossbred rabbit population and for the potential to decrease the prevalence of pasteurellosis by selecting resistant rabbits on any of the response traits evaluated. Strong favorable estimates of genetic correlations between disease response traits and growth traits post-inoculation suggest that growth after inoculation could be an alternative efficient indicator trait for pasteurellosis resistance. However, implementing the recording of these traits under industrial conditions is not easy. Instead, these traits could be used to facilitate the detection of markers of pasteurellosis resistance in rabbit populations. The moderate heritability of the disease response traits and their non-significant genetic correlations with the commercial selection traits suggest that a breeding program could combine selection for resistance to pasteurellosis and selection for growth rate without infection. The datasets analyzed for the current study are not publicly available because they are the property of private companies but are available from the corresponding author on reasonable request. Harper M, Boyce JD, Adler B. Pasteurella multocida pathogenesis: 125 years after pasteur. FEMS Microbiol Lett. 2006;265:1–10. Wilson BA, Ho M. Pasteurella multocida: from zoonosis to cellular microbiology. Clin Microbiol Rev. 2013;26:631–55. Coudert P, Rideaud P, Virag G, Cerrone A. Pasteurellosis in rabbits. In: Maertens L, Coudert P, editors. Recent advances in rabbit science. Melle: ILVO; 2006. p. 147–62. Eady SJ, Garreau H, Gilmour AR. Heritability of resistance to bacterial infection in meat rabbits. Livest Sci. 2007;112:90–8. Lopez S, Menard E, Favier C. Analyse des causes de réforme et de mortalité des femelles reproductrices en élevage cunicole. In: Proceedings of the 17th Journées de la Recherche Cunicole; 21–22 November 2017; Le Mans; 2017. p. 111–4. Kehrenberg C, Schulze-Tanzil G, Martel JL, Chaslus-Dancla E, Schwarz S. Antimicrobial resistance in Pasteurella and Mannheimia: epidemiology and genetic basis. Vet Res. 2001;32:323–39. Ferreira TSP, Felizardo MR, Sena de Gobbi DD, Gomes CR, Nogueira Filsner PHDL, Moreno M, et al. Virulence genes and antimicrobial resistance profiles of Pasteurella multocida strains isolated from rabbits in Brazil. Sci World J. 2012;2012:685028. Chattopadhyay MK. Use of antibiotics as feed additives: a burning question. Front Microbiol. 2014;5:334. World Health Organization. Global action plan on antimicrobial resistance. Geneva: WHO Document Production Services; 2015. http://www.who.int/drugresistance/global_action_plan/en/. Accessed 22 June 2020. Olynk NJ. Assessing changing consumer preferences for livestock production processes. Anim Front. 2012;2:32–8. Baselga M, Deltoro J, Camacho J, Blasco A. Genetic analysis on lung injury in four strains of meat rabbitle. In: Proceedings of the 4th World Rabbit Congress: 10–14 October 1988; Budapest; 1988. p. 120–7. Gunia M, David I, Hurtaud J, Maupin M, Gilbert H, Garreau H. Resistance to infectious diseases is a heritable trait in rabbits. J Anim Sci. 2015;93:5631–8. Eady SJ, Garreau H, Hurtaud J. 
Heritability of resistance to bacterial infection in commercial meat rabbit populations. In: Proceedings of the 8th World Rabbit Congress: 7–10 September 2004; Puebla; 2004. p. 51–6. Bishop SC, Woolliams JA. On the genetic interpretation of disease data. PLoS One. 2010;5:e8940. Huybens N, Houeix J, Szalo M, Licois D, Mainil J, Marlier D. Is epizootic rabbit enteropathy (ERE) a bacterial disease? In: Proceedings of the 9th World Rabbit Congress: 10–13 June 2008; Verona; 2008. p. 971–6. Helloin E, Lantier I, Slugocki C, Chambelllon E, Le Roux H, Berthon P, et al. Vers une amélioration de la résistance du lapin à la pasteurellose. In: Proceedings of the16th Journées de la Recherche Cunicole: 24–25 November 2015. Le Mans; 2015. p. 43–6. Kempf F, Chambellon E, Helloin E, Garreau H, Lantier F. Genome sequences of 17 Pasteurella multocida strains involved in cases of rabbit pasteurellosis. Microbiol Resour Announc. 2019;8:e00681-19. Gilmour AR, Gogel BJ, Cullis BR, Thompson R. ASReml user guide release 3.0. VSN Int Ltd; 2009. https://asreml.kb.vsni.co.uk/wp-content/uploads/sites/3/2018/02/ASReml-3-User-Guide.pdf. Accessed 22 June 2020. R Core Team R. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2019. https://www.r-project.org/. Accessed 22 June 2020. Dempster ER, Lerner IM. Heritability of threshold characters. Genetics. 1950;35:212–36. McNitt J, Lukefahr S. Genetic and environmental parameters for postweaning growth traits of rabbits using an animal model. In: Proceedings of the 6th World Rabbit Congress: 9–12 July 1996; Toulouse; 1996. p. 325–9. Lukefahr SD, Odi HB, Atakora JKA. Mass selection for 70-day body weight in rabbits. J Anim Sci. 1996;74:1481–9. Moura ASAMT, Kaps M, Vogt DW, Lamberson WR. Two-way selection for daily gain and feed conversion in a composite rabbit population. J Anim Sci. 1997;75:2344–9. Nagy I, Ibáñez N, Romvári R, Mekkawy W, Metzger S, Horn P, et al. Genetic parameters of growth and in vivo computerized tomography based carcass traits in Pannon White rabbits. Livest Sci. 2006;104:46–52. Nagy I, Gyovai P, Radnai I, Nagyné Kiszlinger H, Farkas J, Szendrő Z. Genetic parameters, genetic trends and inbreeding depression of growth and carcass traits in Pannon terminal line rabbits. Arch Anim Breed. 2013;56:191–9. Garreau H, Licois D, Rupp R, Rochambeau H De. Variabilité génétique de la résistance à l'entéropathie épizootique du lapin : nouveaux résultats. In: Proceedings of the 11th Journées la Recherche Cunicole: 29–30 November 2005; Paris; 2005. p. 277–80. Garreau H, Eady S, Hurtaud J, Legarra A. Genetic parameters of production traits and resistance to digestive disorders in a commercial rabbit population. In: Proceedings of the 9th World Rabbit Congress: 10–13 June 2008; Verona; 2008. p. 103–8. Abrantes J, van der Loo W, Le Pendu J, Esteves PJ. Rabbit haemorrhagic disease (RHD) and rabbit haemorrhagic disease virus (RHDV): a review. Vet Res. 2012;43:12. Garreau H, Brad S, Hurtaud J, Guitton E, Cauquil L, Licois D, et al. Divergent selection for digestive disorders in two commercial rabbit lines: response of crossbred young rabbits to an experimental in oculation of Escherichia coli O-103. In: Proceedings of the 10th World Rabbit Congress: 3–6 September 2012; Sharm El- Sheikh; 2012. p. 153–7. Garreau H, Maupin M, Hurtaud J, Gunia M. Genetic analysis for production and health traits in a commercial rabbit line. 
In: Proceedings of the 69th Annual Meeting of the European Federation of Animal Science: 27–31 August 2018; Dubrovnik; 2018. p. 465–465. Kadarmideen HN, Thompson R, Coffey MP, Kossaibati MA. Genetic parameters and evaluations from single- and multiple-trait analysis of dairy cow fertility and milk production. Livest Prod Sci. 2003;81:183–95. Robert R, Li M, Garreau H. Comparison of the genetic parameters and evolution of two raised populations separately but with the same origin and renewed from the same nucleus. In: Proceedings of the 11th World Rabbit Congress: 15–18 June 2016; Qingdao; 2016. p. 111–4. Piles M, García ML, Rafel O, Ramon J, Baselga M. Genetics of litter size in three maternal lines of rabbits: repeatability versus multiple-trait models. J Anim Sci. 2006;84:2309–15. Gunia M, David I, Hurtaud J, Maupin M, Gilbert H, Garreau H. Genetic parameters for resistance to non-specific diseases and production traits measured in challenging and selection environments; application to a rabbit case. Front Genet. 2018;9:467. Garreau H, Szendro Z, Larzul C, Rochambeau H. Genetic parameters and genetic trends of growth and litter size traits in the White Pannon breed. In: Proceedings of the 7th World Rabbit Congress: 4–7 July 200; Valencia; 2000. p. 403–8. Nagy I, Farkas J, Gyovai P, Radnai I, Szendrő Z. Stability of estimated breeding values for average daily gain in Pannon White rabbits. Czech J Anim Sci. 2011;2011:365–9. Mínguez C, Sanchez JP, Nagar ELAG, Ragab M, Baselga M. Growth traits of four maternal lines of rabbits founded on different criteria: comparisons at foundation and at last periods after selection. J Anim Breed Genet. 2016;133:303–15. Nagy I, Szendro K, Garreau H. Developing selection indices for Pannon large rabbits selected for average daily gain and thigh muscle volume. In: Proceedings of the 11th World Rabbit Congress: 15–18 June 2016; Qingdao; 2016. p. 89–92. Gomez E, Rafel O, J R. Genetic relationships between growth and litter size traits at first parity in a specialized dam line in rabbits. In: Proceedings of the 6th World Congress on Genetics Applied to Livestock Production: 11–16 January 1998; Armidale; 1998. p. 552–5. Gianola D, Sorensen D. Quantitative genetic models for describing simultaneous and recursive relationships between phenotypes. Genetics. 2004;167:1407–24. Rosa GJ, Valente BD, de los Campos G, Wu X-L, Gianola D, Silva MA. Inferring causal phenotype networks using structural equation models. Genet Sel Evol. 2011;43:6. De Los Campos G, Gianola D, Boettcher P, Moroni P. A structural equation model for describing relationships between somatic cell score and milk yield in dairy goats. J Anim Sci. 2006;84:2934–41. Hayes B, Goddard M. Genome-wide association and genomic selection in animal breeding. Genome. 2010;53:876–83. Fontanesi L. The rabbit in the genomics era: applications and perspectives in rabbit biology and breeding. In: Proceedings of the 11th World Rabbit Congress: 15–18 June 2016; Qingdao; 2016. p. 3–18. Garreau H, Gunia M. La génomique du lapin: avancées, applications et perspectives. INRA Prod Anim. 2018;31:13–22. We acknowledge Alain Fadeau (Laboratoire de Touraine; Parçay-Meslay, F-37210) for his rigorous bacteriological analysis of numerous infected samples, and Emilie Chambellon (CIRM-BP), who prepared and checked the standardized inoculum. 
We also acknowledge all the participants in this project for the quality of their work, especially the teams at the Pôle d'expérimentation cunicole toulousain (PECTOUL), Plateforme d'Infectiologie Expérimentale (PFIE) and the Eurolap, Hycole and Hypharm breeding companies. This work was funded by Institut Carnot Santé Animal 2015, by Eurolap, Hycole, and Hypharm breeding companies, and by the French inter professional rabbit meat committee CLIPP. The research leading to these results also received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement n° PCOFUND-GA-2013-609102, through the PRESTIGE program coordinated by Campus France. GenPhySE, INRAE, ENVT, Université de Toulouse, 31326, Castanet-Tolosan, France Merina Shrestha, Hervé Garreau, Ingrid David & Mélanie Gunia PECTOUL, INRAE, 31326, Castanet-Tolosan, France Elodie Balmisse GABI, INRAE, AgroParisTech, Université Paris-Saclay, 78352, Jouy-en-Josas, France Bertrand Bed'hom PFIE, INRAE, 37380, Nouzilly, France Edouard Guitton ISP, INRAE, Université François Rabelais de Tours, UMR 1282, 37380, Nouzilly, France Emmanuelle Helloin & Frédéric Lantier HYCOLE, Route de Villers-Plouich, 59159, Marcoing, France Guillaume Lenoir HYPHARM SAS, La Corbière, Roussay, 49450, Sèvremoine, France Mickaël Maupin EUROLAP, Le Germillan, B.P. 21, 35140, Gosné, France Raphaël Robert Merina Shrestha Hervé Garreau Ingrid David Emmanuelle Helloin Frédéric Lantier Mélanie Gunia MS performed the statistical analyses and wrote the manuscript. HG and MG designed the experiment, supervised the statistical analyses, and had a major contribution in editing the manuscript. ID supervised the statistical analyses and had a major contribution in editing the manuscript. BB had a major contribution in editing the manuscript. EB contributed in producing the experimental population and editing the manuscript. EG, EH, and FL were involved in the design of the experiment, the inoculation, collection of phenotypes and editing of the manuscript. GL, MM, RR edited the manuscript. All authors read and approved the final manuscript. Correspondence to Mélanie Gunia. All experiments were conducted in accordance with the guidelines of the Directive 2010/63/EU of the European Parliament and of the Council, in the facilities of the EU-1277 Plateforme d'Infectiologie Expérimentale (PFIE, INRAE, 2018. Infectiology of farm, model and wild animals facility, https://doi.org/10.15454/1.5572352821559333e12), Centre Val de Loire, Nouzilly, France. All experimental procedures were approved by the Loire Valley ethical review board (CEEA VdL, committee number 19, N° APAFiS#3866). Shrestha, M., Garreau, H., Balmisse, E. et al. Genetic parameters of resistance to pasteurellosis using novel response traits in rabbits. Genet Sel Evol 52, 34 (2020). https://doi.org/10.1186/s12711-020-00552-8
Global existence and Gevrey regularity to the Navier-Stokes-Nernst-Planck-Poisson system in critical Besov-Morrey spaces

Jinyi Sun 1, Zunwei Fu 2,3, Yue Yin 4 and Minghua Yang 4

1. College of Mathematics and Statistics, Northwest Normal University, Lanzhou, 730070, China
2. ICT School, The University of Suwon, Wau-ri, Bongdam-eup, Hwaseong-si, Gyeonggi-do, 445-743, South Korea
3. School of Mathematical Sciences, Qufu Normal University, Qufu, 273100, China
4. Department of Mathematics, Jiangxi University of Finance and Economics, Nanchang, 330032, China

*Corresponding author: Zunwei Fu

Received March 2020 Revised May 2020 Published August 2020

Fund Project: This paper was partially supported by the National Natural Science Foundation of China (Grant No. 11801236), the Postdoctoral Science Foundation of China (Grant Nos. 2018M632593, 2019M660555), the Natural Science Foundation of Gansu Province for Young Scholars (Grant No. 18JR3RA102), the innovation capacity improvement project for colleges and universities of Gansu Province (Grant No. 2019A-011), the Natural Science Foundation of Jiangxi Province for Young Scholars (Grant No. 20181BAB211001), the Postdoctoral Science Foundation of Jiangxi Province (Grant No. 2017KY23) and the Educational Commission Science Program of Jiangxi Province (Grant No. GJJ190272)

The paper is concerned with the Navier-Stokes-Nernst-Planck-Poisson system arising from electrohydrodynamics in $ \mathbb{R}^d $. By means of the implicit function theorem, we prove the global existence of mild solutions for the Cauchy problem of this system with small initial data in critical Besov-Morrey spaces. In comparison with previous works, our existence result provides a new class of initial data for which the problem is globally solvable. Meanwhile, based on the so-called Gevrey estimates, we verify that the obtained mild solutions are analytic in the spatial variables. As a byproduct, we show the asymptotic stability of solutions as time goes to infinity. Furthermore, decay estimates of higher-order derivatives of solutions are deduced in Morrey spaces.

Keywords: Navier-Stokes-Nernst-Planck-Poisson system, Gevrey regularity, global solution, Besov-Morrey space.

Mathematics Subject Classification: Primary: 35Q30, 35Q35, 76D03, 42B37; Secondary: 35E15.

Citation: Jinyi Sun, Zunwei Fu, Yue Yin, Minghua Yang. Global existence and Gevrey regularity to the Navier-Stokes-Nernst-Planck-Poisson system in critical Besov-Morrey spaces. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020237
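For orientation on the notion of mild solution used in the abstract, the incompressible Navier-Stokes component is typically recast through the heat semigroup via Duhamel's formula; in the full Navier-Stokes-Nernst-Planck-Poisson system the velocity equation additionally carries an electrostatic forcing term that couples it to the ion densities. The display below sketches only the standard Navier-Stokes part and is not a statement of the specific formulation used in this paper:

$$ u(t) = e^{t\Delta}u_{0} - \int_{0}^{t} e^{(t-s)\Delta}\, \mathbb{P}\,\nabla\cdot\bigl(u\otimes u\bigr)(s)\, \mathrm{d}s, $$

where $ \mathbb{P} $ denotes the Leray projection onto divergence-free vector fields; a fixed-point or implicit-function-theorem argument in the chosen critical space then yields the mild solution.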
Decomposition and Spatio-temporal analysis of health care access challenges among reproductive age women in Ethiopia, 2005–2016 Getayeneh Antehunegn Tesema1, Zemenu Tadesse Tessema1 & Koku Sisay Tamirat1 The high maternal mortality, home delivery, unwanted pregnancies, incidence of unsafe abortion, and unmeet family planning needs are maternal health gaps attributed to health care access barriers and responsible for the observed health care disparities. Over the last decades remarkable achievements have made in relation to maternal health problems and the reduction of health care access barriers. Thus, this study aimed to assess the decomposition and spatial-temporal analysis of health care access challenges among reproductive-age women in Ethiopia. Secondary data analysis was conducted based on the three consecutive Ethiopian Demographic and Health Surveys (2005–2016 EDHSs). A total weighted sample of 46,235 reproductive-age women was included in this study. A logit based multivariate decomposition analysis was employed for identifying factors contributing to the overall decrease in health care access challenges over time. For the spatial analysis, ArcGIS version 10.6 and SaTScan™ version 9.6 were used to explore hotspot areas of health care access challenges in Ethiopia over time. Variables with p-value < 5% in the multivariable Logit based multivariate decomposition analysis were considered as significantly contributed predictors for the decrease in health care access challenges over time. The mean age of the women was 27.8(±9.4) years in 2005, 27.7(±9.2) years in 2011, and 27.9 (±9.1) years in 2016. Health care access challenges have been significantly decreased from 96% in 2005 to 70% in 2016 with the Annual Rate of Reduction (ARR) of 2.7%. In the decomposition analysis, about 85.2% of the overall decrease in health care access challenge was due to the difference in coefficient and 14.8% were due to differences in the composition of the women (endowment) across the surveys. Socio-demographic characteristics (age, residence, level of education, female household head, better wealth and media exposure) and service utilization history before the survey (facility delivery and had ANC follow up) contribute to the observed decrease over time. The spatial analysis revealed that health care access challenges were significantly varied across the country over time. The SaTScan analysis identified significant hotspot areas of health care access challenges in the southern, eastern, and western parts of Ethiopia consistently over the surveys. Perceived health care access challenges have shown a remarkable decrease over time but there was variation in barriers to health care access across Ethiopia. Media exposure improved mothers' health care access in Ethiopia. Public health programs targeting rural, uneducated, unemployed, and women whose husband had no education would be helpful to alleviate health care access problems in Ethiopia. Besides, improving mother's media exposure plays a significant role to improve mothers' health care access. Health care access challenges have significantly varied across the country. This suggests that further public health interventions are important for further reduction of health care access barriers through the uplifting socio-demographic and economic status of the population. 
Remarkable progress has been made in the reduction of maternal and child mortality in the last two decades, given that preventable maternal death dropped from 386 per 100,000 live births in 1990 to 216 deaths in 2015 according to the global burden of disease study [1, 2]. However, maternal and child mortality is the unfinished agenda of the millennium development goal (MDG) is also the agenda under sustainable development goal (SDG) for further reduction of preventable maternal deaths below 70 per 100, 000 live births [3, 4]. The Sub-Saharan region is one of the highly affected with a death toll of 546 per 100, 000 live births in 2015 [2, 5]. Ethiopia is the country with a high magnitude of maternal deaths and achieved about 50% maternal mortality reduction with recent figures of 401 deaths per 100,000 live births [6]. Maternal health problems are still the leading public health concerns in developing countries. Health service availability, utilization, and accessibility contributed to maternal health problems. Health care accessibility is defined as the opportunity to have the health care needs fulfilled and measured in terms of utilization which depends on the affordability, physical accessibility, and acceptability of services and not merely adequacy of supply [7,8,9]. Access to comprehensive, quality health care services is important for promoting and maintaining health, preventing and managing the disease, reducing unnecessary disability and premature deaths, and achieving health equity for all women [10,11,12,13]. The high maternal mortality, home delivery, unwanted pregnancies, incidence of unsafe abortion, and unmeet family planning needs are maternal health gaps attributed to health care access barriers and responsible for the observed health care disparities. According to health care access barrier model, financial, structural, and cognitive are the three categories of health care access barrier (HCB) associated with decreased screening, late presentation to care, and lack of treatment, which in turn result in poor health outcomes and health disparities [7,8,9,10,11, 14]. Besides, geographical disparity, high costs of health care services, lack of transportation, and low socio-economic conditions of the population were barriers to health access problems [14,15,16,17,18]. A study conducted in South Africa, Cape Town only 35.2% of women accessed maternal health services. Another study in the same setting showed that availability, affordability, and acceptability of maternal health services were 30.5, 14, and 18.5%, respectively [9]. A comparative study in Nigeria and Ethiopia showed that maternal health services inequalities were observed between urban and rural, given that 48% women in urban and 55% in the rural area of Ethiopia perceived that institutional delivery is not necessary which is higher as compared to 42.7% women in urban and 45.9% of rural areas of Nigeria [19]. According to the 2016 Ethiopia Demographic and Health Survey, about 70% of women had health access problems which showed significant reductions from 96% in 2005 with geographical and socio-demographic variations [6, 20]. The Federal Democratic Republic of Ethiopia Government made interventions like urbanization, health facility expansion, providing maternal services free of charge, increasing health insurance enrollment, implementation of the health extension programs, women education and empowerment are some of the contributing factors for the reduction of health care access challenges among women [21,22,23]. 
However, there is scarce evidence about the spatio-temporal distribution of health care access challenges and the contribution of individual variables to the observed changes over time. Therefore, this study aimed to assess the decomposition and spatio-temporal analysis of health care access challenges among reproductive-age women in Ethiopia based on EDHS 2005–2016. Findings from this study could help health system planners with evidence-based interventions and resource allocation. In addition, important lessons are learned about the effect of changes in the characteristics and structure of the population on the reduction of health care access challenges. Evidence from previous literature showed that socio-demographic attributes (age, marital status, level of education, occupation, wealth index, health insurance coverage, media exposure, and others), geographical characteristics (residence and administrative region), and previous maternal characteristics and service utilization (pregnancy at the time of data collection, ANC follow-up, and facility delivery) were factors perceived to affect health care access among reproductive-age women (Fig. 1).

The conceptual framework adapted from the literature for analyzing factors contributing to health care access challenges

Study design, setting and period

Secondary data analysis was conducted based on the three consecutive EDHS datasets collected in 2005, 2011, and 2016 [24,25,26]. These surveys are nationally representative studies conducted in Ethiopia, which is situated in the Horn of Africa. Ethiopia is the second most populous country in Africa next to Nigeria, and has 9 Regional States (Afar, Amhara, Benishangul-Gumuz, Gambela, Harari, Oromia, Somali, Southern Nations, Nationalities, and People's Region (SNNP) and Tigray) and two Administrative Cities (Addis Ababa and Dire-Dawa). Ethiopia is an agrarian country: 84% of the population lives in rural areas, and 80% of the country's total population lives in the regional states of Amhara, Oromia, and SNNP [27]. About 60% of the total population lives in the pastoral regions (Somali, Afar, Oromia, Southern region, Gambella, and Benishangul-Gumuz), where people are sparsely distributed and the community has benefitted least from health sector development [28]. Ethiopia is a multi-religious country dominated by Orthodox Christian and Muslim religious followers, and has more than 80 ethnic groups that practice their own cultures and languages. Ethiopia currently has one of the fastest growing economies in Africa, and agriculture accounts for 40.5% of national GDP [29]. Ethiopia has registered an average annual GDP growth rate of 11%, but 24% of the population still lives below the national poverty line [30, 31]. Health care funding for the country is highly dependent on donors, followed by households in the form of out-of-pocket expenditure [32]. Ethiopia has a three-tier health system: primary health care units (primary hospitals, health centers, health posts, primary clinics, and medium clinics); secondary health care (general hospitals, specialty clinics, and specialty centers); and tertiary health care (specialized hospitals). The number of hospitals varies from region to region in response to differences in population size [33].

Sample and population

For this study, the data were obtained from eligible women aged 15–49 years who participated in the surveys. A stratified two-stage cluster sampling technique was employed for all three EDHS surveys using the population and housing census as a sampling frame.
In total, 21 sampling strata have been created. In the first stage, a total of 540 Enumeration Areas (EAs) in EDHS 2005, 624 EAs in EDHS 2011, and 645 EAs in EDHS 2016 were selected with probability proportional to the EA size and with independent selection in each sampling stratum. At the second stage, on average 28–32 households were systematically selected. Based on this a total weighted sample of 14,062 reproductive-age women in EDHS 2005, 16,490 in EDHS 2011, and 15,863 in EDHS 2016 were included for the analysis. The detailed sampling procedure was presented in the full EDHS report [24,25,26]. For the spatial analysis, the geographic coordinate (longitude and latitude) data were taken from the selected enumeration areas. The EDHSs data set and the geographic coordinate data were accessed through an online request to the measure DHS program by explaining the objective of the study and we receive an authorization letter. Measurement of variables The dependent variable was a score, health care access challenges were categorized dichotomously as Yes/No. To measure health care access challenges, each reproductive-age women were asked whether each of the following factors is a big problem in seeking medical advice or treatment for themselves when they are sick: 1) getting permission to go to the doctor, 2) getting money for advice or treatment, 3) distance to a health facility and 4) not wanting to go alone [34]. Then we created a composite variable that labeled as "health care access challenges" if the women responded to at least one the item as big problem classified as "had health care access challenges "and when women had responded as not a big problem to all of the questions then she was classified as "had no health access challenge" [35, 36]. Based on prior similar studies [37,38,39], the independent variables included in this study were maternal age (recoded as 15–24, 25–34, and 35–49), residence (recoded as urban, and rural), maternal education (recoded as no, primary education, and secondary and above), husband education (recoded as no, primary education, and secondary and above), marital status (recoded as never married, married/living together, and separated/widowed/divorced), wealth status (recoded as poor, middle and rich), visiting health facility in the last 12 months (recoded as Yes, and No), ANC visit (recoded as Yes and No), place of delivery (recoded as home and health facility), maternal occupation status (recoded as working and not working), contraceptive use and intention (recoded as using modern method, using traditional method, non-users and intends to use latter, and doesn't intends to use), household head (recorded as male and female), preceding birth interval (recoded as < 2 years and ≥ 2 years), media exposure (generated by aggregating the three variables (reading news paper, listening to radio and watching television and recoded as No and Yes), and current pregnancy. Data collection procedure This study was performed based on the three EDHSs data obtained from the official DHS measure system website www.measuredhs.com after permission was given via online request through specifying our analysis objective. We used the set of individual (IR) data and extracted the outcome and the independent variables. The location data (latitude and longitude) was obtained from the measure DHS program. 
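As an illustration of how the composite outcome ("health care access challenges") defined in the Measurement of variables section above could be derived from the women's (IR) recode, a minimal pandas sketch is given below. The column names and codes are assumptions for illustration only; the actual EDHS variable names and response codes should be verified against the recode manual for each survey round.

```python
import pandas as pd

# Hypothetical column names for the four "big problem" items; the real
# EDHS individual recode variables must be checked before use.
ACCESS_ITEMS = [
    "problem_permission",   # getting permission to go to the doctor
    "problem_money",        # getting money for advice or treatment
    "problem_distance",     # distance to a health facility
    "problem_alone",        # not wanting to go alone
]

def flag_access_challenge(women: pd.DataFrame) -> pd.Series:
    """Return 1 if a woman reported at least one item as a 'big problem'
    (coded 1 here), and 0 if she reported none of them as a big problem."""
    big_problem = women[ACCESS_ITEMS].eq(1)
    return big_problem.any(axis=1).astype(int)

# Toy example: the second woman reports no big problem on any item.
toy = pd.DataFrame(
    {"problem_permission": [1, 2], "problem_money": [2, 2],
     "problem_distance": [1, 2], "problem_alone": [2, 2]}
)
print(flag_access_challenge(toy).tolist())  # [1, 0]
```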
The data were weighted using the sampling weight, primary sampling unit, and strata before any statistical analysis to restore the representativeness of the survey and to account for the sampling design when calculating standard errors, so as to obtain reliable statistical estimates. Cross-tabulations and summary statistics were conducted to describe the study population. Descriptive and summary statistics were produced using STATA version 14, ArcGIS version 10.6, SaTScan version 9.6, and R software.

Decomposition analysis

Data from EDHS 2005 and 2016 were appended together for the decomposition analysis. The trend was assessed separately in three phases (phase 1 (2005–2011), phase 2 (2011–2016), and phase 3 (2005–2016)). A multivariate decomposition analysis of the decrease in health care access challenges over time was fitted to identify the factors contributing significantly to the decrease over the last 11 years (2005–2016). The logit-based multivariate decomposition analysis technique for non-linear response models (MVDCMP) was used because the outcome was binary. It is a regression-based analysis of the decrease in health care access challenges between EDHS 2005 and 2016. The model utilizes the output from a logit-based multivariate decomposition model to parcel out the observed decrease in the percentage of women with health care access problems across the surveys into two components. The multivariate decomposition analysis decomposes the overall decrease in health care access challenges over time into the decrease due to the difference in women's composition (endowment) across the surveys and the decrease due to the difference in the effect of the characteristics (coefficient) between the surveys. In the overall decomposition analysis, we can measure the percentage of the overall decrease in health care access challenges over time attributed to the compositional difference in women (difference in characteristics or endowments) and the percentage attributed to the difference in the effect of explanatory variables (difference in coefficients) between the surveys. Hence, the observed decrease in health care access challenges between surveys is additively decomposed into a characteristics (or endowments) component and a coefficients (or effects of characteristics) component. For logistic regression, the logit or log-odds of the health care access problem is taken as:

$$ \mathrm{Logit}(A)-\mathrm{Logit}(B)=F(X_{A}\beta_{A})-F(X_{B}\beta_{B}) $$

$$ =\underbrace{\left[F(X_{A}\beta_{A})-F(X_{B}\beta_{A})\right]}_{E}+\underbrace{\left[F(X_{B}\beta_{A})-F(X_{B}\beta_{B})\right]}_{C}. $$
The equation can be presented as:

$$ \mathrm{Logit}(A)-\mathrm{Logit}(B)=\left[\beta_{0A}-\beta_{0B}\right]+\sum X_{ijB}\left[\beta_{ijA}-\beta_{ijB}\right]+\sum \beta_{ijB}\left[X_{ijA}-X_{ijB}\right], $$

where:

- XijB is the proportion of the jth category of the ith determinant in DHS 2005,
- XijA is the proportion of the jth category of the ith determinant in DHS 2016,
- βijB is the coefficient of the jth category of the ith determinant in DHS 2005,
- βijA is the coefficient of the jth category of the ith determinant in DHS 2016,
- β0B is the intercept in the regression equation fitted to DHS 2005, and
- β0A is the intercept in the regression equation fitted to DHS 2016.

The recently developed multivariate decomposition for the non-linear model was used for the decomposition analysis of health care access challenges using the mvdcmp STATA command [40]. In this study, variables with a p-value < 0.2 in the bivariable multivariate decomposition analysis were considered for the multivariable multivariate decomposition analysis. In the multivariable multivariate analysis, variables with a p-value < 5% in the endowment and coefficient components were considered significant contributing factors for the decrease in health care access challenges over time. The Variance Inflation Factor (VIF) and tolerance were examined to check whether there was significant multicollinearity between the independent factors; the mean VIF in this study was less than 10 and the tolerance greater than 0.1, indicating no significant multicollinearity. ArcGIS version 10.6 and SaTScan version 9.6 software were used to explore the spatio-temporal distribution of health care access challenges. The global spatial autocorrelation (Global Moran's I) was computed to assess whether women's health care access challenges were dispersed, clustered, or randomly distributed in the study area [25]. Global Moran's I is a spatial statistic that measures spatial autocorrelation by taking the entire data set and producing a single output value ranging from − 1 to + 1. A Moran's I value close to − 1 indicates that health care access challenges are dispersed, a value close to + 1 indicates that they are clustered, and a value close to 0 indicates that they are randomly distributed. A statistically significant Moran's I (p < 0.05) showed that women's health care access challenges are non-randomly distributed. Kriging interpolation was employed to explore the burden of health care access challenges in the unsampled areas of the country based on the observed data. Spatial interpolation predicts women's health care access challenges in un-sampled areas of the country based on the values observed in the sampled EAs. There are various deterministic and geostatistical interpolation methods; among them, ordinary Kriging and empirical Bayesian Kriging are considered the best because they incorporate spatial autocorrelation and statistically optimize the weights [26]. In this study, the ordinary Kriging spatial interpolation method was used to predict women's health care access challenges in unobserved areas of Ethiopia since it had the lowest residual.
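To make the two-component logic of the decomposition described above concrete, the sketch below illustrates the endowment and coefficient components with plain logistic regressions in Python. It is a simplified illustration under assumed data structures, not a re-implementation of the mvdcmp command, which additionally normalizes categorical effects, provides detailed per-covariate contributions, and computes standard errors.

```python
import numpy as np
import statsmodels.api as sm

def twofold_decomposition(X_a, y_a, X_b, y_b):
    """Decompose the gap in mean predicted probability between survey A
    (e.g. 2016) and survey B (e.g. 2005) into an endowment component E
    (differences in characteristics, weighted by the A coefficients) and
    a coefficient component C (differences in effects, weighted by the
    B characteristics)."""
    Xa = sm.add_constant(X_a)
    Xb = sm.add_constant(X_b)
    beta_a = sm.Logit(y_a, Xa).fit(disp=0).params
    beta_b = sm.Logit(y_b, Xb).fit(disp=0).params
    F = lambda X, b: 1.0 / (1.0 + np.exp(-(X @ b)))    # inverse logit
    gap = F(Xa, beta_a).mean() - F(Xb, beta_b).mean()  # overall difference
    E = F(Xa, beta_a).mean() - F(Xb, beta_a).mean()    # endowments component
    C = F(Xb, beta_a).mean() - F(Xb, beta_b).mean()    # coefficients component
    return gap, E, C
```

In this sketch the endowment component is evaluated with the survey-A coefficients and the coefficient component with the survey-B characteristics, mirroring the decomposition displayed above; swapping the reference weights gives the alternative form of the two-fold decomposition.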
Bernoulli based spatial scan statistical analysis was employed to detect the primary and secondary significant spatial clusters of health care access challenges using Kuldorff's SaTScan version 9.6 software. The spatial scan statistic uses a circular scanning window that moves across the study area. A woman with health care access challenge was taken as cases and women with no health care access challenges were taken as controls to fit the Bernoulli model. The default maximum spatial cluster size of < 50% of the population was used since it allowed both small and large clusters to be detected and ignored clusters that contained more than the maximum limit. For each potential cluster, a likelihood ratio test statistic and the p-value were used to determine if the number of observed health care access challenge cases within the potential cluster was significantly higher than expected or not. The scanning window with maximum likelihood was the most likely performing cluster, and the p-value was assigned to each cluster using Monte Carlo hypothesis testing by comparing the rank of the maximum likelihood from the real data with the maximum likelihood from the random datasets. The primary and secondary clusters were identified and assigned p-values and ranked based on their likelihood ratio test, based on 999 Monte Carlo replications [27]. Since the study was a secondary data analysis of publically available survey data from the MEASURE DHS program, ethical approval and participant consent were not necessary for this particular study. We requested DHS Program and permission was granted to download and use the data for this study from http://www.dhsprogram.com. There are no names of individuals or household addresses in the data files. The geographic identifiers only go down to the regional level (where regions are typically very large geographical areas encompassing several states/provinces. In surveys that collect GIS coordinates in the field, the coordinates are only for the enumeration area (EA) as a whole, and not for individual households, and the measured coordinates are randomly displaced within a large geographic area so that specific enumeration areas cannot be identified. Characteristics of the study population The mean age of the women was 27.8(±9.4) years in 2005, 27.7(±9.2) years in 2011, and 27.9 (±9.1) years in 2016. About one-third of the reproductive age women in all three surveys were found in the Oromia region. There was a slight increase in urban residence from 17.7% in 2005 to 22.2% in 2016. Regarding maternal education, the proportion of women who had formal education has decreased from 65.9% in 2005 to 47.8% in 2016 whereas mothers who had attained secondary and above education have increased from 11.9% in 2005 to 17.2% in 2016. The percentage of media exposure among reproductive-age women was increased by 46.3% in 2005 to 48.2% in 2016 (Table 1). Table 1 Percentage distribution of characteristics of respondents in 2005, 2011 and 2016 Ethiopian Demographic and Health Surveys Trends of health care access challenges The overall health care access challenge among reproductive-age women has been decreased from 96% (95% CI: 95.2, 96.8) in 2005 to 70% (95% CI: 69.3, 70.7) in 2016 with Annual Rate of Reduction (ARR) of 2.7%. The trend in the health care access challenge has decreased in Addis Ababa, Harari, Amhara, Afar, Tigray, and Gambela regions over time (Fig. 2). 
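As a quick arithmetic check of the annual rate of reduction quoted above, the standard geometric ARR formula can be applied to the two prevalence estimates. With the rounded values of 96% and 70% it yields roughly 2.8% per year, close to the reported 2.7%; the small difference is consistent with rounding of the prevalence estimates.

```python
def annual_rate_of_reduction(p_start: float, p_end: float, years: int) -> float:
    """Geometric annual rate of reduction between two prevalence estimates."""
    return 1.0 - (p_end / p_start) ** (1.0 / years)

# Rounded prevalences reported above: 96% in 2005 and 70% in 2016 (11 years apart).
arr = annual_rate_of_reduction(0.96, 0.70, 11)
print(f"ARR ~ {arr:.1%}")  # ~2.8% with the rounded inputs
```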
About the place of residence, the percentage of health care challenges were decreased at a 24.1 point percentage among urban residents from 2005 to 2016. According to maternal education, there was a decline in health care access challenges among all education categories with the highest decrement in women with secondary and higher education at a 19.8% decrease in health care access problems in the entire study period (Table 2). The trends of women health care access challenges across regions in Ethiopia Table 2 Trends in health care access challeneges among reproductive age women by selected characteristics in 2005, 2011, and 2016 Ethiopia Demographic and Health Surveys The overall multivariate decomposition analysis revealed that about 85.2% of the overall decrease in health care access challenges among reproductive-age women was due to the difference in coefficient (difference in the effect of characteristics) across the surveys whereas the remaining 14.8% of the overall decrease in health care access challenge was due to the difference in composition of the respondent (endowment) across the surveys (Table 3). In the detailed decomposition analsyis, among the change due to composition (endowment); change in composition of rural residence (B = -0.005, 95% CI: − 0.007, − 0.004), women with secondary and higher education (B = 0.005, 95% CI: 0.003, 0.006), women aged 25–34 (B = 0.001, 95% CI: 0.0004, 0.002), history of health facility delivery (B = − 0.017, 95% CI: − 0.026, − 0.008), women whose husband attained primary education (B = 0.002, 95% CI: 0.0005, 0.003), had history of ANC follow up (B = − 0.017, 95% CI: − 0.026, − 0.008), female household head (B = 0.002, 95% CI: 0.0002, 0.004), middle wealth status (B = 0.002, 95% CI: 0.001, 0.004), rich wealth status (B = 0.006, 95% CI: 0.005, 0.008) and having media exposure (B = 0.002, 95% CI: 0.0009, 0.003) were signifcatly contributed for the decrease in health care access challenges over 11 years (from 2005 to 2016). Among the overall decrease in health care access challenges attributed to the difference in coefficients; the difference in effects of rural residence (B = 0.007, 95% CI: 0.001, 0.01), women aged 25–34 (B = 0.02, 95% CI: 0.003, 0.03), women having an occupation (B = − 0.009, 95% CI: − 0.02, − 0.003), and had a history of ANC follow-up (B = 0.01, 95% CI: 0.002, 0.02) were the factors significantly contributed for the decrease in health care access problem (Table 4). Table 3 The overall decomposition analysis result of the decrease in health care access challenges over the last 11 years (2005–2016) Table 4 Detailed decomposition analysis of health care access challenges among reproductive age women in Ethiopia, 2005–2016 Variations in rural-urban inequality in health care access challenges across regions over time in Ethiopia Figure 3 and Fig. 4 show that the risk difference in women health care access challenges across regions over time (2005–2016) in Ethiopia. the risk difference in women health care access challenges were significantly varied across regions in Ethiopia across the three EDHS surveys (EDHS 2005, 2011 and 2016). In EDHS 2005, overall there was significant risk difference in health care access challenges across residence (RD = 0.74, 95% CI: 0.73, 0.75). The highest significant urban-rural health care challenge inequality were observed in SNNP region which was (RD = 0.82, 95%: 0.82, 0.84) followed by Benishangul-Gumuz (RD = 0.77, 95% CI: 0.68, 0.83). 
In EDHS 2011, overall there was a significant urban-rural difference in health care access challenges in Ethiopia (RD = 0.60, 95% CI: 0.59, 0.60). The highest risk difference was observed in SNNPR (RD = 0.68, 95% CI: 0.67, 0.70) and the Oromia region (RD = 0.68, 95% CI: 0.67, 0.69), while the lowest risk difference was observed in the Harari region (RD = 0.04, 95% CI: 0.01, 0.14). In EDHS 2016, there was a significant risk difference in women's health care access challenges between urban and rural areas across regions in Ethiopia (RD = 0.63, 95% CI: 0.63, 0.64). The highest residential inequality in health care access challenges was observed in the Oromia region (RD = 0.65, 95% CI: 0.64, 0.66), while the lowest risk difference was in the Harari region (RD = 0.05, 95% CI: 0.01, 0.17). Forest plot of risk difference between women from urban and rural areas in health care access challenges across regions in Ethiopia from 2005 to 2016 Risk difference between women in rural and urban areas across regions in Ethiopia over time
Spatial distribution of health care access challenges
The spatial distribution of health care access challenges showed significant spatial variation across the country over time (Fig. 5). The highest prevalence of health care access challenges was identified in the Somali, Harari, Benishangul Gumuz, east SNNPR, and Afar regions consistently over time (Figs. 6, 7, and 8). The global spatial autocorrelation of health care access challenges in Ethiopia 2005, 2011 and 2016 The spatial distribution of health care access challenges among reproductive-age women in Ethiopia, 2005 (Source: CSA 2013, using Arc-GIS version 10.6 and SaTScan version 9.6 statistical software) The spatial distribution of health care access challenges among reproductive-age women in Ethiopia, 2011 (Source: CSA 2013, using Arc-GIS version 10.6 and SaTScan version 9.6 statistical software)
In EDHS 2005, the spatial scan statistics identified a total of 286 primary and secondary clusters of health care access challenges. Of these, 173 were in the most likely (primary) cluster; its spatial window was located in the Gambela, SNNPR, south Benishangul and southwest Oromia regions, centered at 7.222698 N, 35.310296 E with a 415.38-km radius, a Relative Risk (RR) of 1.12 and a Log-Likelihood Ratio (LLR) of 131.7, at p-value < 0.01 (Table 5). Women within the spatial window had a 1.12 times higher likelihood of health care access challenges compared with women outside the spatial window, whereas the secondary clusters were located in the central Oromia, Dire Dawa, Harari, northeast Somali, and central Amhara regions (Fig. 9). Table 5 SaTScan analysis result of health care access challenges among reproductive-age women in Ethiopia, 2005 The SaTScan analysis of hotspot areas of health care access challenges among reproductive-age women in Ethiopia, 2005 (Source: CSA 2013, using Arc-GIS version 10.6 and SaTScan version 9.6 statistical software)
In EDHS 2011, the spatial scan statistics identified a total of 286 primary and secondary clusters of health care access challenges. Of these, 146 were in the most likely cluster, which was located in the south Benishangul, Gambela, SNNPR, and southwest Oromia regions, centered at 7.437690 N, 35.059032 E with a 356.58-km radius, a Relative Risk (RR) of 1.16 and a Log-Likelihood Ratio (LLR) of 220.2, at p-value < 0.01 (Table 6). Women within the spatial window had a 1.16 times higher likelihood of health care access challenges compared with women outside the spatial window.
The secondary clusters were located in the Somali, Harari, Afar, and central Amhara regions (Fig. 10). In EDHS 2016, the SaTScan statistics identified a total of 280 primary and secondary clusters; of these, 153 were in the most likely cluster, which was located in the SNNPR, Gambella and Benishangul regions, centered at 8.268721 N, 33.486779 E with a 485.32-km radius, an RR of 1.31 and an LLR of 215.9, at p-value < 0.01 (Table 7). Women within the spatial window had a 1.31 times higher likelihood of health care access challenges compared with women outside the spatial window (Fig. 11). Overall, the SaTScan analysis revealed that the Gambella, SNNPR, and Benishangul regions were persistently at higher risk of health care access challenges across the three surveys.
Kriging interpolation of health care access challenges
Based on EDHS 2005, Kriging interpolation predicted that the highest health care access challenges were located in the southern and eastern parts of the Somali region, east SNNPR, west Benishangul, and Gambella, whereas relatively low predicted health care access challenges were located in Addis Ababa, south Oromia and Dire Dawa (Fig. 12). The Kriging interpolation of health care access challenges among reproductive-age women in Ethiopia, 2005 (Source: CSA 2013, using Arc-GIS version 10.6 and SaTScan version 9.6 statistical software) In 2011, Kriging interpolation revealed that the highest predicted prevalence of health care access challenges was found in the Benishangul, west Gambella, SNNPR, south Oromia and Somali regions. In contrast, low predicted health care access challenges were detected in Tigray, Afar, Amhara, Addis Ababa, Harari, and Dire Dawa (Fig. 13). From the EDHS 2016 data, Kriging interpolation predicted that east Somali, southeast and west Oromia, central SNNPR, and east Benishangul had the highest health care access problems, while the Tigray, Addis Ababa, and Amhara regions had relatively low health care access problems (Fig. 14).
Over the last 15 years, Ethiopia has achieved various socioeconomic and health system improvements, reflected in a reduction of maternal mortality and increases in per-capita income and the life expectancy of citizens. In addition, remarkable improvements have been shown in health facility coverage, with the construction of different health facilities and the deployment of health professionals and resources [6, 41]. Trend analysis in this study showed that over 10 years the magnitude of perceived health care access barriers/challenges among reproductive-age women decreased from 96% in 2005 to 70% in 2016, according to Ethiopian Demographic and Health Survey data [6, 20]. There was a significant risk difference in health care access challenges between urban and rural areas across regions over the three surveys. About 14.8% of the overall reduction in perceived health care access challenges was attributed to changes in the composition of the respondents, and the remaining 85.2% of the overall decrease in perceived health care barriers was due to changes in the effects of the explanatory variables (coefficients). Population structure changes such as an increased literacy level and improvements in socio-demographic and economic characteristics contributed to the reduction of health care access problems among reproductive-age women in Ethiopia.
In addition, government commitment to the realization of the Millennium Development Goal (MDG) through the provision of maternal health care services free of charge may also have contributed to the reduction of health care access barriers. This finding was supported by previous studies [42,43,44]. Socio-demographic characteristics such as husband's and woman's level of education, home delivery, residence, female household headship, and previous ANC follow-up were factors that contributed to the overall changes in perceived health care access barriers among reproductive-age women over the last 10 years. This finding was consistent with previous studies in Ethiopia and Tanzania [36, 43]. Increased health-seeking behavior, together with the implementation of health extension programs, also increased the accessibility and availability of health services at the grassroots level. In particular, female household headship and age 25–34 years were associated with decreased perceived barriers to health care access; this could be because female autonomy can increase health-seeking behavior [45]. Findings from the spatial analysis showed that the distribution of health care access problems was not random. The highest health care access problems were spatially clustered in the Somali, Harari, Benishangul Gumuz, east SNNPR, and Afar regions consistently over time, as depicted in Fig. 4. This finding was consistent with previous studies in Ethiopia [18, 43, 46]. This could be because the above-mentioned areas/regions are less developed and the majority of inhabitants are pastoralists with no permanent residence at which to establish health facilities and provide services. In addition, socio-demographic characteristics of society, such as cultural barriers, might affect the health-seeking behavior of women. Spatial interpolation also detected the highest magnitude of health care access problems in the Benishangul, west Gambella, SNNPR, south Oromia, and Somali regions of Ethiopia. This finding was consistent with previous studies [18]. In these regions most of the population are pastoralists, and there are documented security issues that might affect the population's access to health care. This study had several strengths. Firstly, the study was based on nationally representative large datasets and thus has adequate statistical power. Secondly, the estimates were produced after the data were weighted for the probability sampling and non-response, to make them representative at national and regional levels; therefore, the results can be generalized to all reproductive-age women in the study setting. Thirdly, multivariate decomposition analysis was applied to understand the sources of changes in health care access problems over time. Finally, the use of GIS and SaTScan statistical tests helped to detect similar and statistically significant hotspot areas of health care access problems across the surveys and to design effective public health programs. Regarding limitations, the outcome variables were not collected in EDHS 2000. Another limitation is that SaTScan detects only circular clusters, so irregularly shaped clusters were not detected.
Furthermore, the EDHS survey did not incorporate community-level variables such as community norms, culture and beliefs, or medical factors; rather, it relied on mothers' or caregivers' reports, which carry the possibility of social desirability and recall bias. Although the CSA states that a strong effort was made to minimize such bias, mainly through extensive training of data collectors and the recruitment of experienced data collectors and supervisors, this might lead to underestimation in our findings. Perceived health care access challenges have shown a remarkable decrease over time, but there was variation in barriers to health care access across Ethiopia. Media exposure improved mothers' health care access in Ethiopia. Public health programs targeting rural, uneducated and unemployed women, and women whose husbands had no education, would be helpful to alleviate health care access problems in Ethiopia. Socio-demographic characteristics such as partner's and women's level of education, home delivery, and ANC utilization were factors that contributed to the observed changes over the last decade. Moreover, health care access problems were not randomly distributed, and southern and eastern Ethiopia were regions with high levels of health care access problems. This suggests that further public health interventions are important to achieve further reductions in health care access barriers through uplifting the socio-demographic and economic status of the population.
The data we used for this study are publicly available from the MEASURE DHS program and can be accessed at www.measuredhs.com after explaining the objectives of the study; once the authorization letter is received, the data can be downloaded freely.
DHS: Demographic health survey EAs: Enumeration areas EDHS: Ethiopian demographic and health survey LLR: Log likelihood ratio RR: Relative risk SNNP: Southern nations and nationalities of people SSA: Sub-Saharan Africa
Blaauw D, Penn-Kekana L. Maternal health: reflections on the millennium development goals. S Afr Health Rev. 2010;2010(1):3–28. Alkema L, et al. Global, regional, and national levels and trends in maternal mortality between 1990 and 2015, with scenario-based projections to 2030: a systematic analysis by the UN maternal mortality estimation inter-agency group. Lancet. 2016;387(10017):462–74. Bloom G, Katsuma Y, Rao KD, Makimoto S, Leung GM. 2030 Agenda for Sustainable Development: deliberate next steps toward a new globalism for Universal Health Coverage (UHC). World Health Organization. Monitoring health for the SDGs: sustainable development goals. Geneva: World Health Organization; 2016. Kyei-Nimakoh M, Carolan-Olah M, McCann TV. Access barriers to obstetric care at health facilities in sub-Saharan Africa—a systematic review. Syst Rev. 2017;6(1):110. Central Statistical Agency (CSA) [Ethiopia] and ICF. Ethiopia Demographic and Health Survey 2016. Addis Ababa, Ethiopia, and Rockville, Maryland, USA; 2016. Carrillo JE, et al. Defining and targeting health care access barriers. J Health Care Poor Underserved. 2011;22(2):562–75. Gulliford M, et al. What does 'access to health care' mean? J Health Serv Res Policy. 2002;7(3):186–8. Zimmermann K, et al. Healthcare eligibility and availability and healthcare reform: are we addressing rural women's barriers to accessing care? J Health Care Poor Underserved. 2016;27(4A):204. Erasmus MO. The barriers to access for maternal health care amongst pregnant adolescents in the Mitchells Plain sub-district. 2017. Munthali AC, et al. "This one will delay us": barriers to accessing health care services among persons with disabilities in Malawi. Disabil Rehabil.
2019;41(6):683–90. Rutherford ME, Mulholland K, Hill PC. How access to health care relates to under-five mortality in sub-Saharan Africa: systematic review. Tropical Med Int Health. 2010;15(5):508–19. Washington DL, et al. Access to care for women veterans: delayed healthcare and unmet need. J Gen Intern Med. 2011;26(2):655. Munguambe K, et al. Barriers and facilitators to health care seeking behaviours in pregnancy in rural communities of southern Mozambique. Reprod Health. 2016;13(1):31. DeVoe JE, et al. Insurance+ access≠ health care: typology of barriers to health care access for low-income families. Ann Fam Med. 2007;5(6):511–8. Edward J, Biddle DJ. Using geographic information systems (GIS) to examine barriers to healthcare access for Hispanic and Latino immigrants in the US south. J Racial Ethn Health Disparities. 2017;4(2):297–307. Harris B, et al. Inequities in access to health care in South Africa. J Public Health Policy. 2011;32(1):S102–23. Okwaraji YB, Webb EL, Edmond KM. Barriers in physical access to maternal health services in rural Ethiopia. BMC Health Serv Res. 2015;15(1):493. Yaya S, et al. Why some women fail to give birth at health facilities: a comparative study between Ethiopia and Nigeria. PLoS One. 2018;13(5):e0196896. Central Statistical Agency (CSA) [Ethiopia] and ICF. Ethiopia Demographic and Health Survey 2005. Addis Ababa, Ethiopia; 2005. Medhanyie A, et al. The role of health extension workers in improving utilization of maternal health services in rural areas in Ethiopia: a cross sectional study. BMC Health Serv Res. 2012;12(1):352. Mekonen AM, Gebregziabher MG, Teferra AS. The effect of community based health insurance on catastrophic health expenditure in Northeast Ethiopia: a cross sectional study. PLoS One. 2018;13(10):e0205972. Tey N-P, Lai S-l. Correlates of and barriers to the utilization of health services for delivery in South Asia and sub-Saharan Africa. Sci World J. 2013;2013.. macro, C.s.A.E.a.O. Ethiopian Demographic and Health survey 2005. Addis Ababa, Ethiopia and calverton , maryland, USA: CSA and ORC macro; 2005. International, C.S.A.E.a.I. Ethiopia Demographic and Health Survey 2011. Addis Ababa, Ethiopia and calverton , maryland, USA: CSA and ICF International; 2012. ICF, C.S.A.C.E.a. Ethiopia Demographic and Health Survey 2016. Addis Ababa, Ethiopia, and Rockville , maryland, USA: CSA and ICF; 2016. Central statistical agency (CSA), I. Ethiopian Demographic and Health survey. Addis Ababa, Ethiopia, and Rockville, Maryland, USA: CSA and ICF: Addis Abeba; 2016. Fratkin E. Ethiopia's pastoralist policies: development, displacement and resettlement. Nomadic Peoples. 2014;18(1):94–114. Diao X, Hazell P, Thurlow J. The role of agriculture in African development. World Dev. 2010;38(10):1375–83. Bigsten A, et al. Growth and poverty reduction in Ethiopia: evidence from household panel surveys. World Dev. 2003;31(1):87–106. Ababa, A., Ethiopia. Abstract available from: http://www. xcdsystem. com/icfp2013/program/index. cfm, 2005. Ali EE. Health care financing in Ethiopia: implications on access to essential medicines. Value Health Reg Issu. 2014;4:37–40. Adugna, A., Health Institutions and Services. July 2014: Addis Abeba. https://dhsprogram.com/data/.. Central Statistical Agency, Ethiopia demographic and health survey 2016, in ORC Macro, Calverton, Maryland, USA. 2016. Bintabara D, Nakamura K, Seino K. 
Improving access to healthcare for women in Tanzania by addressing socioeconomic determinants and health insurance: a population-based cross-sectional survey. BMJ Open. 2018;8(9):e023013. Bayati M, Feyzabadi VY, Rashidian A. Geographical disparities in the health of iranian women: health outcomes, behaviors, and health-care access indicators. Int J Prev Med. 2017;8. Moyer CA, et al. Understanding the relationship between access to care and facility-based delivery through analysis of the 2008 Ghana demographic health survey. Int J Gynecol Obstet. 2013;122(3):224–9. Adedini SA, et al. Barriers to accessing health care in Nigeria: implications for child survival. Glob Health Action. 2014;7(1):23499. Powers DA, Yoshioka H, Yun M-S. mvdcmp: Multivariate decomposition for nonlinear response models. Stata J. 2011;11(4):556–76. Admasu K, Balcha T, Ghebreyesus TA. Pro–poor pathway towards universal health coverage: lessons from Ethiopia. J Glob Health. 2016;6(1). Audibert, M. and J. Mathonnat, Facilitating access to healthcare in low-income countries: a contribution to the debate. Field Actions Science Reports. The journal of field actions, 2013(Special Issue 8). Lemma, S., et al., How to improve maternal health service utilisation in Ethiopia. 2018. Organization, W.H. Universal access to reproductive health: accelerated actions to enhance progress on Millennium Development Goal 5 through advancing Target 5B: World Health Organization; 2011.. Woldemicael G, Tenkorang EY. Women's autonomy and maternal health-seeking behavior in Ethiopia. Matern Child Health J. 2010;14(6):988–98. King R, et al. Barriers and facilitators to accessing skilled birth attendants in Afar region, Ethiopia. Midwifery. 2015;31(5):540–6. We would like to thank the measure DHS program for providing the data set. No funding was obtained for this study. Department of Epidemiology and Biostatistics, Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia Getayeneh Antehunegn Tesema, Zemenu Tadesse Tessema & Koku Sisay Tamirat Getayeneh Antehunegn Tesema Zemenu Tadesse Tessema Koku Sisay Tamirat ZTT, KST and GAT initiated the research concept, wrote up of the research proposal, interpreted results and discussions, drafted and finalized the manuscript. All authors read and approved the final manuscript. Correspondence to Getayeneh Antehunegn Tesema. Authors declare that they have no conflict of interest. Tesema, G.A., Tessema, Z.T. & Tamirat, K.S. Decomposition and Spatio-temporal analysis of health care access challenges among reproductive age women in Ethiopia, 2005–2016. BMC Health Serv Res 20, 760 (2020). https://doi.org/10.1186/s12913-020-05639-y Health care access challenge Multivariate decomposition analysis Spatio-temporal analysis
CommonCrawl
Irreversible and reversible compression work
Why is the work done on the gas, when it is compressed from (p2,V2) (pressure, volume) to (p1,V1) against a constant external pressure p1, maximum when it is done irreversibly? In a reversible process there is no loss of energy, as the system is always in equilibrium with the surroundings, so I thought that it would be max for the reversible process. Also, work done on the gas is positive for compression, so there is no sly trick in the question as to which is less negative. thermodynamics energy work reversibility
$\begingroup$ What is your understanding of the equation that can be used to calculate the work done on a gas in both reversible and irreversible compressions? $\endgroup$ – Chet Miller Dec 30 '18 at 13:26
$\begingroup$ For the irreversible process, we can directly compute using final and initial states, i.e. W_irreversible = -P_ext(V1-V2); as it is compression, V2>V1, so the work done on the gas = P_ext(V2-V1) is positive. For the reversible process, we must integrate over small changes. $\endgroup$ – user600016 Dec 30 '18 at 13:46
I'm going to assume that your comment is part of your question. Actually, you have it backwards. The volumetric work done by the surroundings on the gas is always $$W=-\int_{V_1}^{V_2}{P_{ext}dV}$$ irrespective of whether the process is reversible or irreversible, where $P_{ext}$ is the external force per unit area exerted by the surroundings on the gas at the interface between the system and surroundings at which the displacement is occurring. So, for compression, W is positive if $V_1>V_2$ and negative if $V_1<V_2$. For a reversible volume change, $P_{ext}$ can be taken as being equal to the pressure of the gas calculated from the equation of state, but for an irreversible volume change, since the equation of state does not apply to non-equilibrium states of a gas, the equation of state can't be used. In an irreversible process, there are (un-quantified) viscous stresses also present within the gas that contribute to the force per unit area exerted by the gas at its interface with its surroundings. Therefore, for an irreversible process, our only option for being able to calculate the work is to specify the external pressure $P_{ext}$ manually by imposing a known force per unit area (as a function of time) at the interface, or by somehow automatically controlling the external pressure. Chet Miller
$\begingroup$ +1 for sure. Definitely removed my misconception. But coming to the question, is there a way to calculate the reversible work done? Is there any other way we can do the comparison? $\endgroup$ – user600016 Dec 30 '18 at 14:46
$\begingroup$ Is it because additional work has to be done to overcome the viscous stresses of the gas (to compress it by the same amount as in the reversible process) that makes the work done greater in an irreversible compression? $\endgroup$ – user600016 Dec 30 '18 at 14:47
$\begingroup$ To get the reversible work done, you use the equation of state with $$P_{ext}=P(n,V,T)$$ in the integration. For an ideal gas, for example, this would be $$P=\frac{nRT}{V}$$ In an isothermal reversible volume change T would be constant. For an adiabatic reversible volume change of an ideal gas, you would have to integrate using the first law of thermodynamics $$dU=nC_vdT=-\frac{nRT}{V}dV$$ $\endgroup$ – Chet Miller Dec 30 '18 at 14:53
$\begingroup$ With regard to your second comment regarding irreversible work, the answer is "yes." $\endgroup$ – Chet Miller Dec 30 '18 at 14:56
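To complement the comments above with numbers, here is a short worked comparison for the isothermal ideal-gas case; the isothermal assumption is mine (the question does not fix the path of the reversible process), and it is only meant to make the ordering of the two works explicit. With $p_1V_1=p_2V_2=nRT$ and $V_2>V_1$, the work done on the gas in the irreversible compression against the constant external pressure $p_1$ is
$$W_{\text{irrev}}=-\int_{V_2}^{V_1}p_1\,\mathrm{d}V=p_1\,(V_2-V_1)=nRT\left(\frac{V_2}{V_1}-1\right),$$
while for the reversible isothermal compression, with $P_{ext}=nRT/V$ at every instant,
$$W_{\text{rev}}=-\int_{V_2}^{V_1}\frac{nRT}{V}\,\mathrm{d}V=nRT\,\ln\frac{V_2}{V_1}.$$
Since $x-1>\ln x$ for every $x>1$, taking $x=V_2/V_1$ gives $W_{\text{irrev}}>W_{\text{rev}}$: the irreversible compression always requires more work on the gas, the excess being dissipated through the viscous stresses mentioned in the answer.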
CommonCrawl
What is the intuition behind "jumps" causing volatility skew?
Some models use jumps as a way to explain volatility skew. I understand that if jumps exist, then you are "mishedged" as you no longer can continuously hedge. Options have a gamma component and being short an option means you may lose more on the option as you will be longer/shorter deltas than your replicating portfolio during a jump. However, that in itself should not explain skew, correct? All options have some gamma and ATM options have the most gamma. So what makes the wing options have the most IV? I would imagine a 2SD move during market close would lead to a greater loss on a 50 delta option than a 1 delta option. volatility implied-volatility stochastic-volatility jump-diffusion confused
Jumps are an attempt to solve a math mistake in Modern Portfolio Theory. In the 1950s-70s, economists were working on solving the variance-mean tradeoff. Furthermore, they needed to do so with punchcard computing. That radically restricted the set of computable, potential solutions. Both the normal distribution and the log-normal distribution are tractable with punchcard computing. There are two problems, however. The first is one that most economists are unaware of. In 1958, a mathematician by the name of John White proved that there is no solution to an equation of the form $w_{t+1}=Rw_t+\epsilon_{t+1},R>1$ in Frequentist statistics. Of course, you would not invest money if $R\le{1}$. We will come back to this. The second is that Mandelbrot, beginning in 1963, started publishing articles arguing that returns had heavy tails and could not be from a distribution with a variance. In other words, there is no variance-mean tradeoff because the first central moment does not exist. Going back to the sixties and seventies, with their heavy-tailed discussions and the results of the Fama-MacBeth work excluding the CAPM from empirical science, there was sort of a choice to be made. Embrace distributions without a mean and for which there was no undergirding of math for economists to work in, or decide for reality that there is a mean and just add jumps to try and cover the large shifts. That math was easily tractable. That was an unfortunate choice. What makes it unfortunate is that the distribution of returns is $$R_{total}=R_G\times{\Pr(G)}+R_M\times{\Pr(M)}+0\times{\Pr(B)}+R_D\times{\Pr(D)}-R_L,$$ where $G$ denotes a going concern, $M$ denotes mergers, $B$ denotes bankruptcy, $D$ denotes dividends and $L$ denotes the lost return from liquidity costs. The distribution of $R_G$ is $$\left[\frac{\pi}{2}+\arctan\left(\frac{\mu}{\sigma}\right)\right]^{-1}\frac{\sigma}{\sigma^2+(R_G-\mu)^2}.$$ Going back to White, from earlier, his proof was that the sampling distribution of the slope estimator was the Cauchy distribution. Models like Black-Scholes are built on either Ito or Stratonovich calculus. Both assume that all parameters are known. As such, it is a model built on parameters. If you do not know them, then you cannot build models on them. You would have built them on sufficient statistics instead. As sufficient statistics are independent of the parameter, you wouldn't reference the parameter. So models like Black-Scholes are valid if the parameters are known but invalid, as per White and a later generalizing article by Sen, if the parameters are unknown. There cannot exist a Frequentist solution to models like the CAPM or Black-Scholes as it is known to be impossible unless you use non-mean and non-variance based tools.
That opens up the possibility of a Bayesian solution, except the Bayesian solution ends up having no mean or variance because the results do not come out the same. That should serve as a deep warning as well. All Bayesian estimators are admissible estimators. Frequentist estimators are admissible only in two cases. The first is that the Bayesian and the Frequentist solution are the same for every sample. The second is that the Frequentist solution matches the Bayesian solution at the limit. That is why $\bar{x}$ is an admissible solution to estimate $\mu$ for the normal distribution but $\frac{\sum(\sin(x_i))}{n-33}$ is not. Although Frequentist estimators do not have to be admissible and sometimes are not, when they are not, there should be an investigation. It could be that the model was derived incorrectly. The skew in the volatility is an artifact of the algorithm. Consider, instead, for returns rather than option prices as it is simpler to discuss, where returns are $R=\frac{FV}{PV}-1,$ an algorithm such as $$\Pr(\sigma|X,\mu)=\int_{-1}^\infty\frac{\prod_{i=1}^I\left[\frac{\pi}{2}+\arctan\left(\frac{\mu}{\sigma}\right)\right]^{-1}\frac{\sigma}{\sigma^2+(x_i-\mu)^2}\times{1}}{\int_{-1}^\infty\int_0^\infty\prod_{i=1}^I\left[\frac{\pi}{2}+\arctan\left(\frac{\mu}{\sigma}\right)\right]^{-1}\frac{\sigma}{\sigma^2+(x_i-\mu)^2}\times{1}\mathrm{d}\mu\mathrm{d}\sigma}\mathrm{d}\mu,\forall\sigma\in\Re^{++}.$$ There will be a small natural amount of skew because the distribution should converge to the standard deviation ratio distribution, but it will be small. It is related to Snedecor's F distribution. Note that I multiplied by $1$ and really should not have. Good priors exist for this but I didn't want to impose a prior on it as you should use your own. The volatility skew is an artifact of the tool used to measure it and of the fact that there is a non-existence theorem surrounding Ito models. Dave Harris
Jumps do not imply fat tails. See the simulation in R. Note that the excess kurtosis of [normal variable + jump] is negative.
> set.seed(1)
> Normal_Variable <- rnorm(1e8)
> kurtosis(Normal_Variable)
[1] -0.000628316
> Jump <- 2 * ((runif(1e8) < 0.5) * 2 - 1)
> kurtosis(Normal_Variable + Jump)
[1] -1.280009
stans - Reinstate Monica
Actually, I do not think it's true. Jumps, when added to the Black-Scholes (BS) dynamics, do modify the volatility surface. However, the volatility skew may get inverted: the implied BS volatility may be higher when the strike is closer to the current value $S(0)$ of the underlying asset $S$. Consider an idealized example: $$ \log(S(t+dt) / S(t)) ={\rm[normal\ variable\ with\ infinitesimally\ small\ volatility]} \pm 0.1 * {\rm Poisson}(3 * dt). $$ The second term is the jumps. Consider a one touch binary option $C$ with strike $K$ and expiration $dt$. This option pays \$1 if the underlying price $S(t)$ touches the strike. Consider portfolio $P = (1/dt $ units of $C)$. Then, if strike $K$ falls outside $[S(0)e^{-0.1}, S(0)e^{+0.1}]$, the true price of portfolio $P$ converges to 0 as $dt$ converges to 0. On the other hand, if strike $K\in [S(0)e^{-0.1}, S(0)e^{+0.1}]$, the true price of portfolio $P$ stays around $3 * \$1$ as $dt$ converges to 0. The implied volatility in the Black-Scholes model has to compensate for this phenomenon (as $dt$ converges to 0). When $K$ moves from $0$ into $[S(0)e^{-0.1}, S(0)e^{+0.1}]$, the implied volatility jumps from 0 to a positive value.
Likewise, when $K$ is leaving $[S(0)e^{-0.1}, S(0)e^{+0.1}]$ on the way to $+\infty$, the implied volatility drops from a positive value to 0.
The argument given in the OP doesn't convince me. Yes, the dollar gamma of ATM options is the largest. But so is the Vega. Therefore the amount of implied volatility increase necessary to compensate is not clear. For me, the intuition is simply that jumps -> the distribution of log returns has fat tails relative to a normal distribution. And then it is a mechanical fact that the BS IV shape has a smile with OTM option vol > ATM option vol. [edit]. However, it is shown by @stans that the presence of jumps is not sufficient to provide fat tails. In practice, a jump in the market is often accompanied by an increase in implied volatility, which would indeed increase the relative value of OTM options. So it depends what type of jump is specified. dm63
$\begingroup$ Jumps do not necessarily imply fat tails. See my answer. Also please check out the illustration in R displayed in my second answer. Note that the excess kurtosis of [normal variable + jump] is negative. $\endgroup$ – stans - Reinstate Monica Oct 22 '19 at 11:14
$\begingroup$ Is that true if normalized to the std dev ? Ie kurtosis / std dev ratio ? Thx $\endgroup$ – dm63 Oct 22 '19 at 12:21
$\begingroup$ I don't know, have not checked. But that is not the point. Excess kurtosis is an indicator of fat tails, not the ratio that you mentioned. Excess kurtosis already accounts for standard deviation. It is defined in terms of the [standardized random variable] = [original random variable] / Stand.Dev[original random variable]... So my message here: "fat tails" are not synonymous to "jumps". $\endgroup$ – stans - Reinstate Monica Oct 22 '19 at 12:37
$\begingroup$ Ok I see it. If you add a negative kurtosis jump such as a Bernoulli +/-1 coin flip, indeed this does not produce fat tails. $\endgroup$ – dm63 Oct 22 '19 at 17:30
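As a concrete illustration of the mechanism discussed in the answers above (a jump component makes log-returns non-normal, and Black-Scholes implied volatilities backed out from the resulting option prices then vary with strike), the following Python sketch prices calls under Merton's (1976) jump-diffusion, for which a closed-form Poisson-weighted mixture of Black-Scholes prices exists, and inverts them to implied volatilities across strikes. This is not taken from any of the answers; the helper names and the parameter values (a negative mean jump, which typically produces a downside skew) are illustrative assumptions, and scipy is assumed to be available.

import numpy as np
from math import log, sqrt, exp, factorial
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    # Plain Black-Scholes European call price.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def merton_call(S, K, T, r, sigma, lam, muJ, sigJ, n_terms=60):
    # Merton (1976) jump-diffusion call as a Poisson mixture of BS prices.
    # lam = jump intensity, ln(jump size) ~ N(muJ, sigJ^2).
    k = exp(muJ + 0.5 * sigJ**2) - 1.0          # expected relative jump size
    lam_p = lam * (1.0 + k)
    price = 0.0
    for n in range(n_terms):
        sigma_n = sqrt(sigma**2 + n * sigJ**2 / T)
        r_n = r - lam * k + n * (muJ + 0.5 * sigJ**2) / T
        weight = exp(-lam_p * T) * (lam_p * T)**n / factorial(n)
        price += weight * bs_call(S, K, T, r_n, sigma_n)
    return price

def implied_vol(price, S, K, T, r):
    # Invert the BS formula by root finding on the volatility.
    return brentq(lambda v: bs_call(S, K, T, r, v) - price, 1e-4, 5.0)

S, T, r = 100.0, 0.25, 0.0
sigma, lam, muJ, sigJ = 0.15, 1.0, -0.10, 0.15   # negative mean jump size
for K in [70, 85, 100, 115, 130]:
    iv = implied_vol(merton_call(S, K, T, r, sigma, lam, muJ, sigJ), S, K, T, r)
    print(f"K={K:6.1f}  BS implied vol = {iv:.3f}")

With these inputs the implied volatility is highest for low strikes and declines toward high strikes, the familiar equity-style skew; setting muJ to zero and relying on sigJ alone instead produces a more symmetric smile.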
CommonCrawl
Differentiation of measures 2010 Mathematics Subject Classification: Primary: 28A15 Secondary: 49Q15 [MSN][ZBL] Some authors use this name for the outcome of the Radon-Nikodym theorem or for the density of the Radon-Nikodym decomposition (see for instance Section 32 of [Ha]). Other authors use the name for the following theorem, which gives an explicit characterization of the Radon-Nikodym decomposition for locally finite Radon measures on Euclidean space. This theorem is often used in geometric measure theory and is credited to Besicovitch.
Theorem (cp. with Theorem 2.12 of [Ma] and Theorem 2 in Section 1.6 of [EG]) Let $\mu$ and $\nu$ be two locally finite Radon measures on $\mathbb R^n$. Then, \[ f(x) := \lim_{r\downarrow 0} \frac{\nu (B_r (x))}{\mu (B_r (x))} \] exists at $\mu$-a.e. $x$ and defines a $\mu$-measurable map; \begin{equation}\label{e:singular} S:= \left\{ x: \lim_{r\downarrow 0} \frac{\nu (B_r (x))}{\mu (B_r (x))} = \infty\right\} \end{equation} is $\nu$-measurable and a $\mu$-null set; $\nu$ can be decomposed as $\nu_a + \nu_s$, where \[ \nu_a (E) = \int_E f\, d\mu \] and \[ \nu_s (E) = \nu (S\cap E)\, . \] Moreover, for $\mu$-a.e. $x$ we have: \begin{equation}\label{e:Lebesgue} \lim_{r\downarrow 0} \frac{1}{\mu (B_r (x))} \int_{B_r (x)} |f(y)-f(x)|\, d\mu (y) = 0\qquad \mbox{and}\qquad \lim_{r\downarrow 0} \frac{\nu_s (B_r (x))}{\mu (B_r (x))}= 0\, . \end{equation} The first identity in \ref{e:Lebesgue} relates to the concept of Lebesgue point. The theorem can be generalized to signed measures $\nu$ and measures taking values in a finite-dimensional Banach space $V$. In that case: $\|\nu (B_r (x))\|_V$ substitutes $\nu (B_r (x))$ in \ref{e:singular}; $\|f (y)-f(x)\|_V$ substitutes the integrand $|f(y)-f(x)|$ in \ref{e:Lebesgue}; $|\nu| (B_r (x))$ substitutes $\nu (B_r (x))$ in \ref{e:Lebesgue}, where $|\nu|$ denotes the total variation of $\nu$ (see Signed measure for the relevant definition). The theorem does not hold in general metric spaces. It holds provided the metric space satisfies some properties about covering of sets with balls, see Covering theorems (measure theory).
[AFP] L. Ambrosio, N. Fusco, D. Pallara, "Functions of bounded variations and free discontinuity problems". Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 2000. MR1857292 Zbl 0957.49001 [De] C. De Lellis, "Rectifiable sets, densities and tangent measures" Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich, 2008. MR2388959 Zbl 1183.28006 [EG] L.C. Evans, R.F. Gariepy, "Measure theory and fine properties of functions" Studies in Advanced Mathematics. CRC Press, Boca Raton, FL, 1992. MR1158660 Zbl 0804.2800 [Fe] H. Federer, "Geometric measure theory". Volume 153 of Die Grundlehren der mathematischen Wissenschaften. Springer-Verlag New York Inc., New York, 1969. MR0257325 Zbl 0874.49001 [Ha] P.R. Halmos, "Measure theory", Van Nostrand (1950) MR0033869 Zbl 0040.16802 [Ma] P. Mattila, "Geometry of sets and measures in euclidean spaces". Cambridge Studies in Advanced Mathematics, 44. Cambridge University Press, Cambridge, 1995. MR1333890 Zbl 0911.28005 Differentiation of measures. Encyclopedia of Mathematics.
URL: http://encyclopediaofmath.org/index.php?title=Differentiation_of_measures&oldid=28983
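As a simple illustration of the theorem stated above (this example is an addition, not part of the encyclopedia entry): take $\mu=\lambda$, the Lebesgue measure on $\mathbb R^n$, and $\nu = f\,\lambda + \delta_0$ for a nonnegative $f\in L^1_{loc}(\mathbb R^n)$, where $\delta_0$ is the Dirac mass at the origin. For every $x\neq 0$ and every sufficiently small $r$ the ball $B_r(x)$ misses the origin, so
$$\lim_{r\downarrow 0}\frac{\nu(B_r(x))}{\lambda(B_r(x))}=\lim_{r\downarrow 0}\frac{1}{\lambda(B_r(x))}\int_{B_r(x)}f\,d\lambda=f(x)\qquad\text{for }\lambda\text{-a.e. }x,$$
by the Lebesgue differentiation theorem, while at $x=0$ the ratio is at least $1/\lambda(B_r(0))\to\infty$. Hence $f$ is the density of the absolutely continuous part, $S$ coincides with $\{0\}$ up to a $\lambda$-null set, and the decomposition of the theorem reads $\nu_a=f\,\lambda$ and $\nu_s=\nu(S\cap\cdot)=\delta_0$.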
CommonCrawl
Genetic Variation and Genetic Relationship of Seventeen Chinese Indigenous Pig Breeds Using Ten Serum Protein Loci Mo, D.L.;Liu, B.;Wang, Z.G.;Zhao, S.H.;Yu, M.;Fan, B.;Li, M.H.;Yang, S.L.;Zhang, G.X.;Xiong, T.A.;Li, K. 939 https://doi.org/10.5713/ajas.2003.939 PDF KSCI Analyses of ten serum protein loci in seventeen Chinese indigenous pig breeds and three introduced pig breeds were carried out by means of vertical polyacrylamide gel electrophoresis (PAGE). According to the results, eight of the serum protein loci were highly polymorphic, the exceptions being Pi-2 and Cp. The polymorphism information content (PIC) of Hpx was the highest (0.5268), while that of Cp was the lowest (0.0257). The population genetic variation index showed that about 84% of the genetic variation existed within populations, with the remaining 16% distributed between populations. The genetic variation of the Yimeng black pig was the highest and that of the Duroc the lowest. The genetic variation of the Chinese indigenous pig breeds was much greater than that of the exotic groups. Genetic distance results showed that the Chinese indigenous pig breeds were classified into four groups, with the three introduced pig breeds clustered into another group. The results also supported the geographic distribution of Chinese indigenous pig breeds to a certain extent.
Genetic Diversity of Indigenous Cattle Populations in Bhutan: Implications for Conservation Dorji, T.;Hanotte, O.;Arbenz, M.;Rege, J.E.O.;Roder, W. 946 The genetic diversity and relationships of native Siri (Bos indicus) cattle populations of Bhutan were evaluated using 20 microsatellite markers. A total of 120 Siri cattle were sampled and were grouped into four populations according to their geographical locations, which were named Siri West, Siri South, Siri Central and Siri East cattle. For each, 30 individuals were sampled. In addition, 30 samples each of Indian Jaba (B. indicus), Tibetan Goleng (B. taurus), Nepal Hill cattle (B. indicus), Holstein Friesian (B. taurus) and Mithun (B. frontalis) were typed. The mean number of alleles per locus (MNA) and observed heterozygosity (Ho) were high in the Siri populations ($MNA=7.2{\pm}0.3$ to $8.9{\pm}0.5$ and $Ho=0.67{\pm}0.04$ to $0.73{\pm}0.03$). The smallest coefficient of genetic differentiation and genetic distance ($F_{ST}=0.015$ and $D_A=0.073$) were obtained between the Siri West and Siri Central populations. The Siri East population is genetically distinct from the other Siri populations, being close to the Indian Jaba ($F_{ST}=0.024$ and $D_A=0.084$). A high bootstrap value of 96% supported the close relationship of Siri South, Siri Central and Siri West, while the relationship between Siri East and Jaba was also supported by a high bootstrap value (82%). Data from principal component analysis and an individual assignment test were in concordance with the inference from genetic distance and differentiation. In conclusion, we identified two separate Siri cattle populations in Bhutan at the genetic level. One population included Siri cattle sampled from the West, Central and South of the country, and the other comprised Siri cattle sampled from the East of the country. We suggest that the Siri cattle conservation program in Bhutan should focus on the former population as it has received less genetic influence from other cattle breeds.
Effects of Tropical Climate on Reproduction of Cross- and Purebred Friesian Cattle in Northern Thailand Pongpiachan, P.;Rodtian, P.;Ota, K. 952 In the first part of the study, rates of estrus occurrence and success of A.I.
service in the Thai-native and Friesian crossbred, and purebred Friesian cows fed in the National Dairy Training and Applied Research Institute in Chiang Mai, Thailand were traced monthly throughout a year. An electric fan and a water sprinkler cooled the stall for the purebred cows during the hot season (March-September). Both rates in pure Friesians were at their highest in the cold-dry season (October- February), but they decreased steadily during the hot-dry season (March-May) and were at their lowest in the hot-wet season (June-September). Seasonal change of a similar pattern was observed in the incidence of estrus, but not in the success rate of insemination in the crossbred cows. By the use of reproductive data, compiled in the same institute, on the 75 % cross- and purebred Friesian cows, and climatological data in Chiang Mai district, effects of ambient temperature and humidity on the reproductive traits of cows were examined by regression analysis in the second half of the study. Significant relationships in the crossbred, expressed by positive-linear and parabola regressions, were found between reproductive parameters such as days to the first estrus (DTFE), A.I. service (DTFAI), and conception, the number of A.I. services required for conception and some climatic factors. However, regarding this, no consistent or intelligible results were obtained in purebred cows, perhaps because electric fans and water sprinklers were used for this breed in the hot season. Among climatic factors examined, the minimum temperature (MINT) in early lactation affected reproductive activity most conspicuously. As the temperature during one or two months prior to the first estrus and A.I. service rose, DTFE and DTFAI steadily became longer, although, when MINT depleted below $17-18^{\circ}C$, the reproductive interval tended to be prolonged again on some occasions. The maximum temperature also affected DTFE and DTFAI, but only in limited conditions. The effect of humidity was not clear, although the inverse relationship between DTFE and minimum humidity during 2 months before the first estrus in the crossbred seemed to be significant. Failure to detect any definite effect of climate on the reproductive traits of pure Friesians seemed to indicate that forced ventilation by electric fans and water sprinklers were effective enough to protect the reproductive ability of this breed from the adverse effects of a hot climate. Production of Retinol-binding Protein by Caprine Conceptus during the Time Period of Maternal Recognition of Pregnancy Liu, K.H. 962 The purpose of the study were to characterize the proteins secreted by elongating caprine conceptus, to identify a group of low molecular weight proteins as retinol-binding protein (RBP), to identify RBP cell-specific localization in conceptus tissue, and to demonstrate that the conceptuses secreted continuously RBP during the time period maternal recognition of pregnancy. Caprine conceptuses were removed from the uterus between days 16 and 22 of pregnancy, the time period maternal recognition of pregnancy. Isolated conceptuses were cultured in a modified minimum essential medium in the presence of radiolabeled amino acids. Proteins synthesized and secreted into medium were analyzed by fluorography of two-dimensional polyacrylamide gel electrophoresis and fluorography. At least five proteins showed consistently a grouping of spots with characteristic location on two-dimensional gels. 
A major low molecular weight protein consisted of two major isoforms (pI 5.3-6.0) of similar molecular mass (21 kDa) was identified as RBP by using antiserum against RBP. Presence of RBP in conceptus culture medium and uterine flushings between days 16 and 22 of pregnancy were determined by immunoprecipitation and Western blotting using anti-RBP serum. In immunocytochemical study, strong immunostaining for RBP was localized in trophectoderm and endoderm of conceptus. These results clearly demonstrated that the caprine conceptus was active in protein synthesis as early as day 16 of pregnancy. Secretion of RBP by caprine conceptuses (days 16-22) coincident with the rapid transformation of the conceptus from a spherical blastocyst to a filamentous structure. Production of RBP by the elongating conceptuses may be indicative of an important role for conceptus RBP in the transport, availability and metabolism of retinol during maternal recognition of pregnancy. Effects of Tween 80 on In Vitro Fermentation of Silages and Interactive Effects of Tween 80, Monensin and Exogenous Fibrolytic Enzymes on Growth Performance by Feedlot Cattle Wang, Y.;McAllister, T.A.;Baah, J.;Wilde, R.;Beauchemin, K.A.;Rode, L.M.;Shelford, J.A.;Kamande, G.M.;Cheng, K.J. 968 The effects of monensin, Tween 80 and exogenous fibrolytic enzymes on ruminal fermentation and animal performance were studied in vitro and in vivo. In Expt 1, the effects of the surfactant Tween 80 (0.2% wt/wt, DM basis) on ruminal fermentation of alfalfa, corn and orchardgrass silages were investigated using in vitro gas production techniques. Tween 80 did not affect (p>0.05) cumulative gas production at 24 h, but it reduced (p<0.05) the lag in fermentation of all three silages. With corn silage and orchardgrass silage, gas production rates and concentrations of total volatile fatty acids (VFA) were increased (p<0.05) by Tween 80; with alfalfa silage, they were reduced (p<0.05). Tween 80 increased (p<0.05) the proportion of propionate in total VFA, and reduced (p<0.05) acetate to propionate ratios (A:P) with all three silages. In Expt 2, exogenous fibrolytic enzymes (E; at 0, 37.5 or 75 g/tonne DM), monensin (M; at 0 or 25 ppm and Tween 80 (T; at 0 or 2 L/tonne DM) were added alone or in combination to backgrounding and finishing diets fed to 320 crossbred steers in a feeding trial with a $3{\times}2{\times}$2 factorial arrangement of treatments. The backgrounding and finishing diets contained barley grain and barley silage in ratios of 57.8:42.2 and 93.5:6.5 (DM basis), respectively. Added alone, none of the additives affected DM intake (p>0.1) in the backgrounding or in the finishing period, but interactive $M{\times}T$ effects were observed in the finishing period (p=0.02) and overall (p=0.04). In the finishing period, T without M tended to reduce DM intake (p=0.11), but T with M increased (p=0.05) DM intake. Monensin increased average daily gain (ADG) during backgrounding (p=0.07) and finishing (p=0.01), and this ionophore also improved overall feed efficiency (p=0.02). Warm carcass weight was increased (p<0.001) by M, but dressing percentage was reduced (p=0.07). In the backgrounding period, T increased ADG by 7% (p=0.06). Enzymes increased (p=0.07) ADG by 5 and 6% (low and high application rates, respectively) during backgrounding, but did not affect (p>0.10) ADG during finishing, or overall feed efficiency. 
Whereas T enhanced the positive effects of M on ADG during backgrounding (p=0.04) and overall (p=0.05), it had no impact (p>0.1) on the effects of E. Interactions between M and T suggest that the surfactant may have potential for enhancing the positive effects of monensin on beef production, but this requires further research. Assessment of Ruminal and Post Ruminal Amino Acid Digestibility of Chinese and Canadian Rapeseed (Canola) Meals Chen, Xibin;Campbell, Lloyd D. 979 Two rapeseed meal samples (Sample A, hybrid 5900 and sample B, double low rapeseed No.4) obtained from China and one Canola meal sample obtained from a local crushing plant in Canada were used to investigate the amino acid degradability of rapeseed/Canola meal in rumen and amino acid digestibility of ruminal incubation residues by precision-fed rooster bioassay. Results show that in ruminal incubation the degradation rate of non amino acid nitrogen in crude protein is higher than that for amino acid nitrogen in crude protein, the results also suggest that the degradation rate of amino acid nitrogen in Chinese rapeseed meal sample B was lower than that for Canadian Canola, but that in Chinese rapeseed meal sample A is much close to that for Canadian canola meal. For all amino acids the digestibility of the bypass or residual protein as measured by the precision-fed rooster bioassay tended to be lower for Chinese rapeseed meal sample A than for sample B or Canadian canola meal which had similar digestibility values. However following a calculation of total amino acid availability, involving the digestibility of amino acids in the rumen and rooster bioassay the results are less contradictory. Results indicated that in traditional roasting-expelling process, heat treatment, especially dry heat treatmeat could decrease amino acids degradability in rumen of rapeseed/canola meal, but also may decrease total availability of amino acids of rapeseed/canola meal. Effect of Supplementary Feeding of Concentrate on Nutrient Utilization and Production Performance of Ewes Grazing on Community Rangeland during Late Gestation and Early Lactation Chaturvedi, O.H.;Bhatta, Raghavendra;Santra, A.;Mishra, A.S.;Mann, J.S. 983 Malpura and Kheri ewes (76) in their late gestation, weighing $34.40{\pm}0.95kg$ were randomly selected and divided into 4 groups of 19 each (G1, G2, G3 and G4). Ewes in all the groups were grazed on natural rangeland from 07.00 h to 18.00 h. Ewes in G1were maintained on sole grazing while ewes in G2, G3 and G4, in addition to grazing received concentrate mixture at the rate of 1% of their body weight during late gestation, early lactation and entire last quarter of pregnancy to early quarter of lactation, respectively. The herbage yield of the community rangeland was 0.82 metric ton dry matter/hectare. The diet consisted of (%) Guar (Cyamopsis tetragonoloba) bhusa, (59.2), Babool pods and leaves (17.2), Bajra (Pennisetum typhoides) stubbles (8.8), Doob (5.3), Aak (4.2) and others (5.3). The nutrient intake and its digestibility were higher (p<0.01) in G2, G3 and G4 as compared to G1 because of concentrate supplementation. The intakes of DM ($g/kg\;W{^0.75}$), DCP ($g/kg\;W{^0.75}$) and ME ($MJ/kg\;W{^0.75}$) were 56.7, 5.3 and 0.83; 82.7, 12.2 and 1.16; 82.7, 12.1 and 1.17 and 83.1, 12.3 and 1.18 in G1, G2, G3 and G4, respectively. 
The per cent digestibility of DM, OM, CP, NDF, ADF and cellulose was 57.9, 68.8, 68.7, 52.3, 37.5 and 68.4; 67.6, 76.1, 82.3, 60.6, 44.5 and 73.4; 67.6, 76.1, 81.5, 60.6, 44.8 and 74.5 and 67.6, 76.1, 82.3, 60.6, 44.7 and 73.3 in G1, G2, G3 and G4, respectively. The nutrient intake of G2, G3 and G4 ewes was sufficient to meet their requirements. The ewes raised on sole grazing lost weight at lambing in comparison to advanced pregnancy. However, ewes raised on supplementary feeding gained 1.9-2.5 kg at lambing. The birth weight of lambs in G2 (3.92) and G4 (4.07) was higher (p<0.01) than G1 (2.98), where as in G1 and G3 it was similar. The weight of lambs at 15, 45 and 60 days of age were higher in G2, G3 and G4 than in G1. Similarly, the average daily gain (ADG) after 60 days was also higher in G2, G3 and G4 than in G1. The milk-yield of lactating ewes in G2, G3 and G4 increased up to 150-250 g per day in comparison to G1. The birth weight, weight at 15, 30, 45 and 60 days, weight gain and ADG at 30 or 60 days was similar both in male and female lambs. It is concluded from this study that the biomass yield of the community rangeland is low and insufficient to meet the nutrient requirements of ewes during late gestation and early lactation. Therefore, it is recommended concentrate supplementation at the rate of 1% of body weight to ewes during these critical stages to enhance their production performance, general condition as well as birth weight and growth rate of lambs. Functional Characterization of Mammary Gland of Holstein Cows under Humid Tropical Summer Climates Lu, C.H.;Chang, C.J.;Lee, P.N.;Wu, C.P.;Chen, M.T.;Zhao, X. 988 Physiological parameters were measured on six primiparous, non-pregnant Holstein cows prior to peak lactation over a 3-month summer season in southwestern Taiwan. The objectives were to characterize heat stress-induced change in functionality of mammary gland under natural climates of tropical summer and to establish physiological indices applicable to this environment in referring to this change. Environmental and physiological readings, milk and blood samples were taken at 15:00 h biweekly for totally five time points during the study. Climate readings showed that the afternoon humidex value reached the highest (53.5) around mid summer. Rectal temperature of cows taken simultaneously varied between $38.26^{\circ}C$ and $40.02^{\circ}C$ in parallel to humidex. Milk production declined drastically from 29.2 to 22.2 kg/d the first month entering summer but leveled up at end of the summer season suggesting effects exerted by heat stress rather than stages of lactation. Lactose content decreased linearly (p<0.05) with times in summer, from 4.69 to 4.38%. On the other hand, activity of N-acetylglucosaminidase (NAGase) in milk increased linearly to over two folds (p<0.05) during the same intervals. Elevations of fractional constituent of BSA in whey protein and serum cortisol level were also noticed in the course. Measurement of arteriovenous concentration (A-V) difference across the mammary gland demonstrated net uptake of glucose and net release of urea throughout the study period. The amount of urea released from mammary gland increased (p<0.05) progressively from 1.54 to 7.76 mg/dl during summer. It is concluded that gradual regression of mammary gland occurred along the humid tropical summer season. This regression is likely initiated through elevation of body temperature, which is irreversible above certain point. 
The increased release of urea from mammary gland during heat stress suggests its potential role as an early indicator of suboptimal mammary function. The Effect of Harvesting Interval on Herbage Yield and Nutritive Value of Napier Grass and Hybrid Pennisetums Manyawu, G.J.;Chakoma, C.;Sibanda, S.;Mutisi, C.;Chakoma, I.C. 996 A 6 (accession)${\times}$5 (cutting interval) factorial experiment was conducted over two years to investigate the effect of stage of growth on herbage production, nutritive value and water soluble carbohydrate (WSC) content of Napier grass and Napier grass${\times}$Pearl millet hybrids (hybrid Pennisetum). The purpose of the experiment was to determine the optimum stage of growth to harvest the Pennisetums for ensilage. Two Napier accessions (SDPP 8 and SDPP 19) and four hybrid Pennisetum (SDPN 3, SDPN 29, SDPN 38 and Bana grass) were compared at five harvest intervals (viz. 2, 4, 6, 8, and 10 weeks). Basal fertilizers were similar in all treatment plots, although nitrogen (N) top-dressing fertilizer was varied proportionately, depending on the harvesting interval. The application was based on a standard rate of 60 kg N/ha every six weeks. Stage of growth had significant effects on forage yield, WSC content and nutritive value of the Pennisetums. Herbage yields increased in a progressively linear manner, with age. Nutritive value declined as the harvesting interval increased. In particular, crude protein content declined rapidly (p<0.001) from $204g\;kg^{-1}$ DM at 2 weeks to $92g\;kg^{-1}$ DM at 8 weeks of growth. In vitro dry matter digestibility decreased from 728 to $636g\;kg^{-1}$ DM, whilst acid and neutral detergent fibre contents increased from 360 and 704 to 398 and $785g\;kg^{-1}$ DM, respectively. Rapid changes in nutritive value occurred after 6 weeks of growth. The concentration of WSC increased in a quadratic manner, with peaks ($136-182g\;kg^{-1}$ DM) at about 6 weeks. However, the DM content of the forage was low ($150-200g\;DM\;kg^{-1}$) at 6 weeks. Therefore, it was concluded that Pennisetums should be harvested between 6 and 7 weeks, to increase DM content and optimize herbage production without seriously affecting nutritive value and WSC content. Accessions SDPN 29 and SDPP 19 appeared to be most suited for ensilage. It was suggested that WSC content should be incorporated as a criterion in the agronomic evaluation and screening of Pennisetum varieties. Effects of Rumen Protected Oleic Acid in the Diet on Animal Performances, Carcass Quality and Fatty Acid Composition of Hanwoo Steers Lee, H-J.;Lee, S.C.;Oh, Y.G.;Kim, K.H.;Kim, H.B.;Park, Y.H.;Chae, H.S.;Chung, I.B 1003 https://doi.org/10.5713/ajas.2003.1003 PDF KSCI The effects of different rumen protected forms, oleamide, Ca oleate, of dietary oleic acid on the carcass quality and fatty acid composition in intramuscular and subcutaneous fat tissues of Hanwoo steer were examined. Sixty, 25 month old Hanwoo steers divided into three groups were fed no supplement (Control), 2% of oleamide (Oleamide) or Ca-oleate (Ca-Oleate) in their diet for 45 or 90 days. Disappearance rates of oleic acid supplements in digestive tracts (Rumen bypass, abomasal and intestinal disappearance rate) were 48.5, 68.4 for oleamide and Ca oleate, respectively. Both oleic acid supplements affected feed intake, growth rate, cold carcass weight and carcass fatness. Live weight gain, carcass weight, backfat thickness and marbling score were higher in the oleic acid supplemented steers compared with those from the control. 
Oleic acid supplements increased marbling score and ether extract in the Hanwoo steer m. longissimus thoracis. Rumen protected oleic acid increased not only the level of oleic acid but also polyunsaturated fatty acids in intramuscular and subcutaneous fat tissue. Total saturated fatty acid contents in both fat tissues were decreased whereas total unsaturated fatty acid content was increased compared with those from the control. Linoleic acid, linolenic acid and polyunsaturated fatty acid contents were significantly higher in Ca-oleate-fed steers than in any other steers. Lipid metabolites in blood were increased in the rumen protected oleic acid treatments. HDL content in blood was increased in Ca-oleate supplemented steers whereas LDL was decreased compared with the control. The changes of fatty acid compositions in the rumen protected oleic acid supplemented steers suggest that the oleic acid and unsaturated fatty acids were protected from rumen biohydrogenation and can be deposited in the fat tissues. Factors Affecting Oxygen Uptake by Yeast Issatchenkia orientalis as Microbial Feed Additive for Ruminants Lee, J.H.;Lim, Y.B.;Park, K.M.;Lee, S.W.;Baig, S.Y.;Shin, H.T. 1011 The objective of this work was to evaluate a thermotolerant yeast Issatchenkia orientalis DY252 as a microbial feed additive for ruminants. In the present study, the influence of volatile fatty acids (VFA) and temperature on the oxygen uptake rate of I. orientalis DY252 was investigated. It was evident that the oxygen uptake rate decreased gradually as the VFA concentration increased in the range of 30 to 120 mM. Although the oxygen uptake rate was not greatly affected by temperature in the range 37 to $43^{\circ}C$, a maximum value of $0.45mg\;O_2/g$ cell/min was obtained at $39^{\circ}C$. With regard to the oxygen uptake rate by yeast, viability was found to be less important than the metabolic activity of the yeast. Effect of Varying Levels of Aflatoxin, Ochratoxin and Their Combinations on the Performance and Egg Quality Characteristics in Laying Hens Verma, J.;Johri, T.S.;Swain, B.K. 1015 A 50 day feeding trial was conducted with White Leghorn (WL) laying hens, 42 weeks old, to determine if feeding of varying levels of aflatoxin (AF), ochratoxin A (OA) or their combinations has any effect on their performance and egg quality parameters. Feeding of $T_4$, $T_7$, $T_8$, $T_9$ and $T_{10}$ caused significant reduction in feed intake of hens. Hen day egg production was significantly reduced at all levels of the toxins except 0.5 ppm of AF. Maximum reduction in egg production was noticed at 2 and 4 ppm of AF and OA, respectively. Average body weight and egg weight were not affected by toxin feeding. The feed efficiency in terms of net feed efficiency and feed consumed per dozen eggs produced was significantly reduced at higher levels of both toxins and their combinations. Feed consumption for production of 1 kg egg mass remained uninfluenced by aflatoxin feeding, whereas a significant increase in the same value was noticed at the 4 ppm level of OA and the combinations of 1 and 2 ppm of AF with 2 and 4 ppm of OA ($T_9$ and $T_{10}$), respectively. Various levels of OA (1-4 ppm) and all the combinations of the two toxins ($T_8$, $T_9$ and $T_{10}$) significantly altered the shape index of eggs in laying hens. The shell thickness was significantly reduced by the higher level of AF (2 ppm), OA (2 and 4 ppm) and their combination. Albumen index, Haugh unit and yolk index remained unchanged due to incorporation of toxins in the diet. 
It is concluded that AF, OA either singly or in combination at higher levels could depress the performance in terms of egg production and feed efficiency significantly. The egg quality parameters i.e. shape index and shell thickness were also significantly affected. Apparent Ileal Digestibility of Nutrient in Plant Protein Feedstuffs for Finishing Pigs Han, Y.K.;Kim, I.H.;Hong, J.W.;Kwon, O.S.;Lee, S.H.;Kim, J.H.;Min, B.J.;Lee, W.B. 1020 Five barrows (average initial body weight 58.6 kg) were used to determine the apparent ileal digestibilities of amino acids, DM, N and energy in various soybean meal, rapeseed meal and coconut meal in finishing pigs. Dietary treatments included 1) KSBM (Korean soybean meal), 2) CSBM (Chinese soybean meal), 3) ASBM (Argentine soybean meal), 4) RSM (Rapeseed meal), and 5) CNM (Coconut meal). The diets were corn starch-based and formulated so that each protein source provided the same amount of total ME (3,490 kcal/kg), CP (15.70%), lysine (1.00%), Ca (0.80%) and P (0.60%). Protein content of the KSBM was higher than the CSBM and ASBM, with all values similar to those expected, and protein content of the CNM was lower than that of the SBM preparation and RSM. Apparent ileal digestibilities of histidine, lysine, threonine, alanine, asparatic acid, cystine, glutamic acid and serine were greater for the KSBM, CSBM, ASBM and RSM than for the CNM (p<0.05). Also, the apparent ileal digestibilities of methionine, leucine, phenylalanine, valine and tyrosine were greater for the KSBM than for the CSBM, ASBM, RSM and CNM (p<0.05). Overall, the apparent ileal digestibilities of total essential amino acids were greater for the KSBM than for the CSBM, ASBM, RSM and CNM (p<0.05), and the apparent ileal digestibilities of total non essential amino acids were greater for the KSBM, CSBM, ASBM and RSM than for the CNM (p<0.05). No difference (p>0.05) in apparent digestibility of DM at the small intestine was observed among the treatments. However, the apparent digestibility of DM at the total tract was greater for the KSBM than for the CSBM, ASBM, RSM and CNM (p<0.05). Also, apparent digestibilities of N and digestible energy at the small intestine and total tract were greater for the KSBM than for the RSM and CNM (p<0.05). In conclusion, nutrient digestibility values of SBM preparations and RSM were relatively high compared to CNM. Shrimp By-product Feeding and Growth Performance of Growing Pigs Kept on Small Holdings in Central Vietnam Nguyen, Linh Q.;Everts, Henk;Beynen, Anton C. 1025 The effect studied was that of the feeding of shrimp by-product meal, as a source of eicosapentaenoic and docosahexaenoic acid, on growth performance and fatty acid composition of adipose tissue in growing pigs kept on small holdings in Central Vietnam. Shrimp by-product meal was exchanged with ruminant meal so that the diets contained either 0, 10 or 20% shrimp byproduct meal in the dry matter. The diets were fed on 6 different small-holder farms. The farmers fed a base diet according to their personal choice, but were instructed as to the use of shrimp by-product and ruminant meal. The diets were fed to the pigs from 70 to 126 days of age. There were three animals per treatment group per farm. The diets without and with 20% shrimp by-product meal on average contained 0.01 and 0.14 g docosahexaenoic acid/MJ of metabolisable energy (ME). Due to the higher contents of ash and crude fiber, the shrimp by-product meal containing diets had lower energy densities than the control diets. 
Eicosapentaenoic acid was not detectable in adipose tissue; the content of docosahexaenoic acid was generally increased after consumption of shrimp by-product meal. In spite of the concurrent high intakes of ash and crude fiber, the feeding of shrimp by-product meal had a general stimulatory effect on growth performance of the growing pigs. The intake of docosahexaenoic acid or its content in adipose tissue was not related to average daily gain. It is suggested that shrimp by-product meal may contain an unknown growth enhancing factor. Effect of Caecectomy on Body Weight Gain, Intestinal Characteristics and Enteric Gas Production in Goslings Chen, Yieng-How;Wang, Shu-Yin;Hsu, Jenn-Chung 1030 Two experiments of four-week duration were conducted to investigate the effect of caecectomy on the intestinal characteristics, body weight gain and gas production in the caeca of White Roman goslings. In experiment I, forty-eight 2-wk-old female goslings with similar body weight were randomly divided into four treatments: sham (SHAM), left side caecum removed (LSCR), right side caecum removed (RSCR) and both caeca removed (CAECECTOMY). Similarly, experiment II was conducted with twelve 5-wk-old male goslings in two treatments: SHAM and CAECECTOMY. Free-choice water with ad libitum feed was provided during the experiments. At the end of experiment I, goslings were sacrificed and gut length and weight were determined. At 7 and 9 wks of age, birds in experiment II were subjected to respiration calorimetry studies. In both experiments, final body weights were not affected by caecectomy. Results of experiment I indicated that caecectomy did not significantly affect the relative weight (g/100 g BW) of gizzard, small intestine, rectum and colon (p>0.05); however, the relative length of colon and rectum did increase (p<0.05). The remaining caecum did not show compensatory growth in either the LSCR or RSCR treatment. In experiment II, results indicated that the average enteric methane production from the caecectomised goslings was significantly lower than that from the SHAM goslings (p<0.05). In comparison with SHAM goslings, calorific loss from enteric methane in caecectomised birds was lower (p<0.05). There was no effect of age on methane production. The enteric nitrous oxide production in the caeca of goslings was very low, with no significant difference between the two treatments. Effects of Fat Sources on Growth Performance, Nutrient Digestibility, Serum Traits and Intestinal Morphology in Weaning Pigs Jung, H.J.;Kim, Y.Y.;Han, In K. 1035 This experiment was conducted to investigate the effects of fat sources on growth performance, nutrient digestibility, serum traits and intestinal morphology in weaning pigs. A total of 128 weaning pigs (Landrace${\times}$Large White${\times}$Duroc, $21{\pm}2$ days of age, $5.82{\pm}0.13kg$ of average initial body weight) were allotted in a randomized complete block (RCB) design with four treatments: 1) corn oil, 2) soybean oil, 3) tallow and 4) fish oil. Each treatment had 8 replicates with 4 pigs per pen. During the phase I period (d 0 to 14), pigs fed the corn oil or soybean oil diet tended to show higher ADG and FCR than the other treatments, although there was no significant difference. During the phase II period (d 15 to 28), pigs fed the corn oil diet showed better ADG and ADFI than pigs fed soybean oil, tallow or fish oil. For the overall period, growth performance of weaning pigs was improved (p<0.05) when pigs were fed soybean oil or corn oil. 
Apparent digestibility of energy and fat was improved when pigs were fed corn oil diet (p<0.05). Supplementation of corn oil resulted in higher serum triglyceride concentration than the other treatments (p<0.05). However, there was a lower cholesterol concentration when corn oil was provided compared to tallow or fish oil. Pigs fed corn oil tended to have increased villus height compared with soybean oil, tallow or fish oil treatment (p<0.05). This experiment suggested that vegetable oils such as corn oil or soybean oil, were much better fat source for improving growth performance of weaning pigs. Electrophoretic Behaviors of α-Lactalbumin and β-Lactoglobulin Mixtures Caused by Heat Treatment Lee, You-Ra;Hong, Youn-Ho 1041 In order to study the reaction behaviors of bovine $\alpha$-lactalbumin ($\alpha$-La), $\beta$-lactoglobulin ($\beta$-Lg), and their mixtures during heat treatment, samples were analyzed using native-polyacrylamide gel electrophoresis (Native-PAGE), sodium dodecylsulfate (SDS)-PAGE, and two-dimensional (2-D)-PAGE. The electrophoresis demonstrated that the loss of native-$\alpha$-La increased as temperature increased, and that the loss of apo-$\alpha$-La was slightly higher than that of holo-$\alpha$-La. The tests also showed that during heat treatment, a mixture of $\alpha$-La and $\beta$-Lg was less stable than $\alpha$-La alone. As such, it was assumed that $\beta$-Lg induced holo-$\alpha$-La to be less stable than apo-$\alpha$-La during heat treatment. The reaction behavior of $\alpha$-La (holo-, apo-form) during heat treatment showed similar patterns in the 2-D-PAGE electropherogram, but the mixture of $\alpha$-La and $\beta$-Lg created new bands. In particular, the results showed a greater loss of native $\alpha$-La in the holo-$\alpha$-La and $\beta$-Lg mixture than in the apo-$\alpha$-La and $\beta$-Lg mixture. Thus, it can be concluded that the holo-$\alpha$-La and $\beta$-Lg mixture was more intensively affected by heat treatment than other samples, and that free sulphydryl groups took part in the heat-induced denaturation. Development of PCR Assay for Identification of Buffalo Meat Rajapaksha, W.R.A.K.J.S.;Thilakaratne, I.D.S.I.P.;Chandrasiri, A.D.N.;Niroshan, T.D. 1046 A polymerase chain reaction (PCR) assay was developed to differentiate buffalo meat from the meat of Ceylon spotted deer (Axis axis ceylonensis), Ceylon sambhur (Cervus unicolor unicolor), cattle (Bovine), goat (Caprine), pig (Porcine), and sheep (Ovine). A set of primers were designed according to the sequence of the mitochondrial cytochrome b gene of bubalus bubalis and by PCR amplification a band of approximately 242 bp band was observed with buffalo DNA. These primers did not cross-react with DNA of other animal species tested in the study under the specified reaction conditions. A band of 649 bp was observed for all animal species tested when DNA was amplified with the universal primers indicating the presence of mitochondrial DNA in the samples. The technique was sensitive enough to identify rotten (10 days post slaughter), dried and cooked buffalo meat. The absence of a cross reaction with human DNA using the buffalo specific primers eliminates possible false positive reactions. Carcass Traits Determining Quality and Yield Grades of Hanwoo Steers Moon, S.S.;Hwang, I.H.;Jin, S.K.;Lee, J.G.;Joo, S.T.;Park, G.B. 
1049 A group of Hanwoo (Korean cattle) steers (n=14,386) was sampled from a commercial abattoir located in Seoul over one year period (spring, summer, autumn and winter) and their carcass traits were collected. Carcass traits assessed by an official meat grader comprised degree of marbling, meat color, fat color, texture and maturity for quality grade, and back fat thickness, ribeye area and carcass weight for yield grade. A heavier carcass with a higher marbling score, more red meat color and white fat color received better quality grade (p<0.05). Regression analysis showed that the marbling score was the strongest attribute (partial $R^2=0.88$) for quality grade. Lighter carcasses with a thinner back fat and larger ribeye area received higher yield grade score. The back fat thickness was the most negative determinant of yield grade (Partial $R^2=-0.66$). The slaughter season had a little effect on quality and yield grades. As slaughter weight increased, back fat thickness and ribeye area increased linearly, whereas marbling score reached its asymptotic level at approximately 570 kg. As a consequence, quality grade showed a considerable improvement up to 570 kg, but increases in slaughter weight afterward showed a little benefit on quality grade. There was a clear curvilinear relationship between slaughter weight and yield grade in that the yield grade reached its highest point at approximately 490 kg and decreased afterward. These results suggested that 570kg at the age of 24 months might be the economic slaughter weight for quality grade but 490 kg for yield grade. Characterization of Korean Cattle Keratin IV Gene Kim, D.Y.;Yu, S.L.;Sang, B.C.;Yu, D.Y. 1055 Keratins, the constituents of epithelial intermediate filaments, are precisely regulated in a tissue and development specific manner. There are two types of keratin in bovine. The type I is acidic keratin and the type II is neutral/basic keratin. 1.5 kb of 5' flanking sequence of Korean cattle Keratin IV gene, type II keratin (59 kDa), was cloned and sequenced. A symmetrical motif AApuCCAAA are located in a defined region upstream of the TATA box. Proximal SP1, AP1, E-box and CACC elements as the major determinants of transcription are identified. When it was compared to the bovine sequence from -600 bp to ATG upstream, the homology was 97% in nucleotide sequence. Several A and T sequences, located in the promoter region, are deleted in the Korean cattle. An expression vector consisted of Korean cattle Keratin IV gene promoter/SV40 large T antigen was transfected to HaCaT cell (Epithelial keratinocyte). The transformed HaCaT cells showed active proliferation when treated with PDGF (Platelet-derived growth factor) in 0.3% soft agar compared to control cells. These results indicate that Korean cattle Keratin IVgene promoter can be used as a promoter for transfection into epithelial cell. Effects of Conjugated Linoleic Acid and Stearic Acid on Apoptosis of the INS-1 β-cells and Pancreatic Islets Isolated from Zucker Obese (fa/fa) Rats Jang, I.S.;Hwang, D.Y.;Lee, J.E.;Kim, Y.K.;Kang, T.S.;Hwang, J.H.;Lim, C.H.;Chae, K.R.;Jeong, J.H.;Cho, J.S. 1060 To determine whether dietary fatty acids affect pancreatic $\beta$-cell function, the INS-1 $\beta$-cells and the pancreatic islets isolated from Zucker obese (fa/fa) rats were cultured with stearic acid and conjugated linoleic acid (CLA). 
As a result, DNA fragmentation laddering was substantially decreased in the INS-1 $\beta$-cells and the isolated pancreatic islets cultured with 2 mM CLA compared to those cultured with stearic acid. To investigate the mechanism by which CLA alleviates cell apoptosis under DNA fragmentation assay, we examined mRNA expressions of apoptosis-related proteins including Bax and Bcl-2 associated with cell death agonist and antagonist, respectively, in both INS-1 cells and islets cultured with 2 mM fatty acids. Bax mRNA expression was not altered by either stearic acid or CLA, whereas Bcl-2 mRNA expression was enhanced by CLA when compared to the stearic acid cultures. However, there were no changes in cell apoptosis and apoptotic-regulating gene products in either INS-1 cells or isolated islets treated with or without 2 mM CLA. It is concluded that CLA maintains $\beta$-cell viability via increased Bcl-2 expression compared to the stearic acid cultures, which may help to alleviate, at least somewhat, the onset of NIDDM in the physiological status. More detailed study is still needed to elucidate the effect of CLA on the prevention of fatty acid-induced $\beta$-cell apoptosis. Identification of Differentially Expressed Genes in the Longissimus Dorsi Muscle Tissue between Duroc and Erhualian Pigs by mRNA Differential Display Pan, P.W.;Zhao, S.H.;Yu, M.;Liu, B.;Xiong, T.A.;Li, K. 1066 In order to identify differentially expressed mRNAs (which represent possible candidates for significant phenotypic variances of muscle growth, meat quality between introduced European and Chinese indigenous pigs) in the longissimus dorsi muscle tissue between adult Duroc and Erhualian pigs, mRNA differential display was performed. Five 3' anchor primers in combination with 20 different 5' arbitrary primers (100 primer sets) were used and nearly 5,000 cDNA bands were examined, among which 10 differential display cDNAs were obtained, cloned and sequenced. Six of the 10 cDNAs showed similarity to identified genes from GenBank and the other 4 had no matches in GenBank. Differential expression was tested by Northern blot hybridization and could be confirmed for 2 cDNAs. The method used in this study provides a useful molecular tool to investigate genetic variation that occurs at the transcriptional level between different breeds. Reproductive Biotechnologies for Improvement of Buffalo: The Current Status Purohit, G.N.;Duggal, G.P.;Dadarwal, D.;Kumar, Dinesh;Yadav, R.C.;Vyas, S. 1071 Reproductive biotechnologies continue to be developed for genetic improvement of both river and swamp buffalo. Although artificial insemination using frozen semen emerged some decades back, there are still considerable limitations. The major problem appears to be the lack of efficient methods for estrus detection and timely insemination. Controlled breeding experiments in the buffalo had been limited and similar to those applied in cattle. Studies on multiple ovulation and embryo transfer are essentially a replica of those in cattle, however with inherent problems such as lower number of primordial follicles on the buffalo ovary, poor fertility and seasonality of reproduction, lower population of antral follicles at all stages of the estrous cycle, poor endocrine status and a high incidence of deep atresia in ovarian follicles, the response in terms of transferable embryo recovery has remained low with 0.51 to 3.0 per donor and pregnancy rates between 15 to 30%. 
In vitro production of buffalo embryos is a valid alternative to recovery of embryos by superovulation. This aspect received considerable attention during the past decade, however the proportion of embryos that develops to the blastocyst stage is still around 25-30% and hence the in vitro culture procedures need substantial improvement. Embryo cryopreservation procedures for direct transfer post thaw need to be developed for bubaline embryos. Nuclear transfer and embryo cloning is a technique that has received attention in various species during recent years and can be of immense value in buffaloes as they have a low rate of embryo recoveries by both in vitro and in vivo procedures. Gender pre-selection, genome analysis, gene mapping and gene transfer are a few of the techniques that have been studied to a limited extent during recent years and are likely to be included in future studies on buffaloes. Very recently, reproductive biotechnologies have been applied to feral buffaloes as well, but the results obtained so far are modest. When fully exploited they can play an important role in the preservation of endangered species.
Title: Quantum spin Hall effect and topological phase transition in InNxBiySb1−x−y/InSb quantum wells Authors: Song, Zhigang Bose, Sumanta Fan, Weijun Zhang, Dao Hua Zhang, Yan Yang Li, Shu Shen Keywords: Topological insulator Quantum spin Hall effect Source: Song, Z., Bose, S., Fan, W., Zhang, D. H., Zhang, Y. Y., & Li, S. S. (2017). Quantum spin Hall effect and topological phase transition in InNxBiySb1−x−y/InSb quantum wells. New Journal of Physics, 19, 073031-. Series/Report no.: New Journal of Physics Abstract: The quantum spin Hall (QSH) effect, a fundamentally new quantum state of matter, and topological phase transitions are characteristics of a kind of electronic material popularly referred to as topological insulators (TIs). TIs are similar to ordinary insulators in terms of their bulk bandgap, but have gapless conducting edge-states that are topologically protected. These edge-states are facilitated by time-reversal symmetry and are robust against nonmagnetic impurity scattering. Recently, the quest for new materials exhibiting non-trivial topological states of matter has been of great research interest, as TIs find applications in new electronics, spintronics and quantum-computing devices. Here, we propose and demonstrate as a proof-of-concept that the QSH effect and topological phase transitions can be realized in ${\mathrm{InN}}_{x}{\mathrm{Bi}}_{y}{\mathrm{Sb}}_{1-x-y}$/InSb semiconductor quantum wells (QWs). The simultaneous incorporation of nitrogen and bismuth in InSb is instrumental in lowering the bandgap, while inducing opposite kinds of strain to attain a near-lattice-matching conducive to lattice growth. The phase diagram for the bandgap shows that as we increase the QW thickness, at a critical thickness, the electronic bandstructure switches from a normal to an inverted type. We confirm that such transitions are topological phase transitions between a traditional insulator and a TI exhibiting the QSH effect—by demonstrating the topologically protected edge-states using the bandstructure, edge-localized distribution of the wavefunctions and the edge-state spin-momentum locking phenomenon, the presence of non-zero conductance in spite of the Fermi energy lying in the bandgap window, crossover points of Landau levels in the zero-mode indicating topological band inversion in the absence of any magnetic field, and the presence of large Rashba spin-splitting, which is essential for spin-manipulation in TIs. DOI: http://dx.doi.org/10.1088/1367-2630/aa795c Rights: © 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft. Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. Appears in Collections: EEE Journal Articles
What is so special about elliptic curves? There seem to be sources like this, this also, and some introductions that discuss elliptic curves in general and how they're used. But what I'd like to know is why these particular curves are so important in cryptography as opposed to, let's say, any other polynomial degree $\gt$ 2 which you can then mod over some group. It seems like once a modulus is applied then other function types should be acceptable as well. It seems even less intuitive when just looking at the bubble vs curve as here: Since there are other curves (let's say anything from a sin wave to the $x^3 + x$ or even just some unusually shaped contour) that could do the job. It seems like they would provide much more surface area to get a larger space in $\mathbb{Z}_p$ or really just more possible combinations of connecting lines from some arbitrary $P$ and $Q$ to get $R$ as opposed to something as restrictive (on the graph) as beginning from some bubble (which would seem to unnecessarily reduce the possible combinations) and then use a modulus to implement the discrete logarithm problem. Sorry if this seems a little naive of a question, I'm trying to write an implementation right now and just to understand it fully even if that means asking something that is taken for granted. Perhaps just walking through a simple example (most of the ones I've searched are anything but), just a few sentences, would be rather helpful, from "A wants to talk to B" all the way up to "now E can't listen in between A and B". So it seems like this is the version of elliptic curves over a finite field: Yes that looks pretty random. But I'm still not really seeing why they are the only equations that have cryptographic significance. It's difficult to imagine that if you simply took some other higher degree equation and applied modulus (to place within a group), then it seems like it would make sense that you'd get something that's also comparatively random. elliptic-curves discrete-logarithm finite-field stackuser Elliptic curves are not the only curves that have group structure, or uses in cryptography. But they hit the sweet spot between security and efficiency better than pretty much all others. For example, conic sections (quadratic equations) do have a well-defined geometric addition law: given $P$ and $Q$, trace a line through them, and trace a parallel line that goes through the identity element. Here's a handy picture for one of the best known conics, the unit circle $x^2 + y^2 = 1$: If you take the identity element to be $(1, 0)$, then you get the very simple addition formula (modulo your favorite prime) $$ (x_3, y_3) = (x_1x_2 - y_1y_2, x_1y_2 + x_2y_1)$$ This is much faster than regular elliptic curve formulas, so why not use this? Well, the problem with conics is that the discrete logarithm in this group is no stronger than the discrete logarithm over the underlying field! So we would need very large keys, like prime-field discrete logarithms, without any advantage. That's not good. So we move on to elliptic curves, which do not have reductions to the logarithm on the underlying field. But wait, we can generalize elliptic curves to higher degrees. In fact, $$ y^2 = x^{2g+1} + \ldots $$ when $g > 1$ and some restrictions are respected, is called a hyperelliptic curve, and we can work on it too. But for these curves there does not exist a nice geometric rule to add points, like in conics and elliptic curves. 
So we are forced to work in the Jacobian group of these curves, which is not the group of points anymore, but of divisors (which are kind of like polynomials of points, if that makes any sense). This group has size $\approx p^g$, when working modulo a prime $p$. Hyperelliptic curves do have some advantages: since the group size is much larger than the prime, we can work modulo smaller primes for the same cryptographic strength. But ultimately, hyperelliptic curves fall prey to index calculus as well when $g$ starts to grow. In practice, only $g \in \{2,3\}$, i.e., polynomials of degree $5$ or $7$, offer similar security to elliptic curves. To add insult to injury, as Watson said, the addition formulas also get much more complicated as $g$ grows. There are also further generalizations of hyperelliptic curves, like superelliptic curves, $C_{a,b}$ curves, and so on. But similar comments apply: they simply do not bring advantages in either speed or security over elliptic curves. answered Nov 5, 2013 at 7:21 – Samuel Neves $\begingroup$ So elliptic curves are like the sweet spot over a continuum of equations that might be used over a group. That continuum seems like it ranges from "way too expensive for computation" (hyper/super elliptic curves) to "the DLP is not difficult enough" (putting P and Q over a conic section or circle). Although your conic section made me wonder why a sphere in $R^{3}$ wouldn't work, too expensive perhaps. +1 and accepted. $\endgroup$ – stackuser $\begingroup$ A sphere ($x^2 + y^2 + z^2 = 1$) would still not be secure; however, if you intersect two quadric surfaces (i.e. surfaces defined by quadratic polynomials) you actually can get a secure curve, which --- guess what --- is actually an elliptic curve! The Jacobi intersection curves are an example of this. $\endgroup$ – Samuel Neves Elliptic curves have a number of nice features that make them good for cryptography. One could write a whole book on the topic (as some have), so I'll highlight a few points. The points on an elliptic curve over a finite field form a group. The same is not true for the ideas you mentioned. Discrete log on many of these EC groups is hard. In fact, there are no subexponential algorithms to solve the DLP in these groups as there are for other groups we often use in crypto (e.g., $\mathbb{Z}_p$). This means we have smaller key sizes and faster operations. Elliptic curves have been successfully applied to cryptanalytic problems such as factoring. We have been able to do some other cool things with elliptic curves such as pairings that we haven't gotten in any other setting. – mikeazo $\begingroup$ 1. Wouldn't any curve be able to form a group if some modulus is applied? Many ec's don't even have the bubble unless $a \lt 0$ and $b \lt 1$ so it seems like any other wavy line in that case. $\endgroup$ $\begingroup$ The ECs that you see that look nice are over the reals. The elliptic curves over finite fields do not look that way at all. In fact they look pretty random which is another reason they are good for cryptography. AFAIK other functions over finite fields do not form a group. $\endgroup$ – mikeazo $\begingroup$ OK so it makes more sense now, after seeing the before and after transformation of it going over the finite field (like caterpillar to butterfly). I added an EDIT as to what I'm still not really getting about why the ec's are so special in that way. 
It just seems like applying modulus (placing within a group) most equations with 2 higher degree variables would have a comparable effect to create something random enough for cryptographic purposes. $\endgroup$ $\begingroup$ @stackuser While you can define some kind of "addition" for most types of curves using a similar geometric construction like the one for elliptic curves, it is not automatically given that this operation is associative and has neutral and inverse elements, i.e. forms a group. $\endgroup$ $\begingroup$ Neither points itself, nor "other functions" form a group; points with an operation do, and that operation can be creative subject to associativity, neutral and inverse as Paulo Ebermann says. $\endgroup$ – Vadym Fedyukovych You are not wrong: given any variety $V$, we can form the Jacobian $J(V)$ as an abelian variety, in particular an abelian group over which we could use the Diffie-Hellman problem. However, there are several details that get in the way of doing this. First, it is necessary to compute the order of the Jacobian. We only know how to do this for elliptic curves. Secondly for higher genus there are various reductions to the case of lower genus. Lastly the greater the genus, and hence degree, the more complex the formulas that have to be used to do these calculations. The Handbook of Elliptic and Hyperelliptic Curve Cryptography is an excellent reference on these issues. – Watson Ladd The addition of points on elliptic curves has a different definition that is much more natural, can be defined for any curve, and makes it more obvious why it is interesting for elliptic curves specifically. For any variety (curves, surfaces, etc etc) you can define something called the divisor class group, whose elements are just free sums of a finite number of points, modulo sums of points given by zeroes and poles of rational functions (with their multiplicities). For any flat projective space of any dimension this is the trivial group since there a rational function can have any finite number of poles and zeros wherever, while for a general surface this group will be larger the more "holes" the original surface has, and the theorem that quantifies that by relating the number of rational functions to topology is called the Riemann-Roch theorem. By sheer miracle, it turns out that for elliptic curves (i.e. curves of genus one, i.e. that have one "hole" and are topologically like a torus), the Riemann-Roch theorem implies there is a bijection between points on the curve and the divisor class group that is uniquely defined if you pick a specific point to be the identity, so that each curve point can be mapped to an element of the divisor class group. The specific cubic equations that you have seen are important for specific calculations but they don't tell you why the thing works. There's a number of ways in which you can embed an elliptic curve in projective space, and the cubic form is convenient for calculations as the multiplication happens to have a nice form in terms of intersecting lines. However, the actual group law actually exists because of the Riemann-Roch theorem, and the important part is that you have an algebraic curve that is topologically like a torus. 
Over the complex numbers specifically, it's also possible to intuitively explain that torus topology -> definable addition law because you can parametrize those curves with elliptic functions which are doubly periodic, much like how you can parametrize a circle with a periodic function like e^ix, which maps addition of points to addition on the complex numbers modulo the periods. answered Feb 2, 2022 at 14:22 – saolof Related questions: What is the difference between Elliptic curves and Hyper-elliptic curves in terms of security? Why do they use elliptic curve instead of circle or other simpler curves? Basic explanation of Elliptic Curve Cryptography? Why can we not use the group $Z_{p}^{*}$ for cryptography? How Were secp*k1 elliptic curve generators chosen? ECC - ElGamal with Montgomery or Edwards type curves (curve25519, ed25519) - possible? Elliptic curve ElGamal with homomorphic mapping What is the correct elliptic curve representation? questions about modular reduction algorithm over $F_{2^m}$ Can you use ECDSA on pairing-friendly curves? What's the difficulty of using elliptic curves to design homomorphic encryption protocols? Corner cases of addition on short Weierstrass elliptic curves
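For readers who, like the asker, want to experiment with an implementation, the two group laws discussed in this thread are easy to prototype. The sketches below are editorial additions rather than material from any of the answers; the primes, curve parameters and points are arbitrary toy values chosen only for illustration and are far too small to be secure. The first sketch implements the circle addition law quoted in the accepted answer (and, as that answer notes, the discrete logarithm in this group is no stronger than the one in the underlying field).

```python
# Toy sketch of the "circle group" x^2 + y^2 = 1 over Z_p, using the addition
# formula quoted in the accepted answer. The prime and the parameter t are
# arbitrary illustrative choices; this is not secure at any size shown here.

p = 10007  # an arbitrary toy prime

def circle_add(P, Q):
    """(x1,y1) + (x2,y2) = (x1*x2 - y1*y2, x1*y2 + x2*y1) mod p; identity is (1,0)."""
    x1, y1 = P
    x2, y2 = Q
    return ((x1 * x2 - y1 * y2) % p, (x1 * y2 + x2 * y1) % p)

def circle_mul(k, P):
    """Scalar multiplication by double-and-add; the DLP is recovering k from k*P."""
    R = (1, 0)
    while k:
        if k & 1:
            R = circle_add(R, P)
        P = circle_add(P, P)
        k >>= 1
    return R

def circle_point(t):
    """Rational parametrization ((1-t^2)/(1+t^2), 2t/(1+t^2)) always lies on the circle."""
    inv = pow(1 + t * t, -1, p)
    return ((1 - t * t) * inv % p, 2 * t * inv % p)

P = circle_point(7)
assert (P[0] ** 2 + P[1] ** 2) % p == 1
assert circle_add(P, (1, 0)) == P                      # (1,0) acts as the identity
assert circle_add(P, (P[0], (p - P[1]) % p)) == (1, 0) # (x,-y) is the inverse of (x,y)
print(circle_mul(1234, P))
```

The second sketch is the textbook chord-and-tangent group law on a short Weierstrass curve $y^2 = x^3 + ax + b$ over $\mathbb{F}_p$, the structure the answers describe for elliptic curves.

```python
# Toy sketch of the elliptic-curve group law y^2 = x^3 + a*x + b over F_p.
# Curve and point values are illustrative only; do not use for real cryptography.

p = 97          # small prime so results are easy to check by hand
a, b = 2, 3     # must satisfy 4*a^3 + 27*b^2 != 0 mod p (non-singular curve)
O = None        # the point at infinity acts as the identity element

def ec_add(P, Q):
    if P is O:
        return Q
    if Q is O:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                           # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def ec_mul(k, P):
    """Double-and-add scalar multiplication; the ECDLP is recovering k from k*P."""
    R = O
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

# Find some point on the toy curve by brute force (fine at this size).
P = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)
print(P, ec_mul(5, P))
```

Real libraries use standardized curves, projective coordinates and constant-time arithmetic; the naive modular-inverse formulas above are only meant to make the group structure discussed in the answers tangible.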
Impact of lenvatinib on renal function: long-term analysis of differentiated thyroid cancer patients Chie Masaki1, Kiminori Sugino1, Sakiko Kobayashi2, Yoshie Hosoi1, Reiko Ono1, Haruhiko Yamazaki1, Junko Akaishi1, Kiyomi Y. Hames1, Chisato Tomoda1, Akifumi Suzuki1, Kenichi Matsuzu1, Keiko Ohkuwa1, Wataru Kitagawa1, Mitsuji Nagahama1 & Koichi Ito1 Because lenvatinib is well known to induce proteinuria by blocking the vascular endothelial growth factor (VEGF) pathway, renal function is a concern with long-term administration of lenvatinib. The long-term effects of lenvatinib on renal function in patients with advanced differentiated thyroid carcinoma (DTC) were analyzed. This study involved 40 DTC patients who continued lenvatinib therapy for ≥6 months. Estimated glomerular filtration rate (eGFR) was calculated as an indicator of renal function. The temporal course of eGFR, effects of baseline eGFR on eGFR changes, and factors affecting renal impairment were investigated. The overall cohort showed sustainable decreases in eGFR, with decreased values of 11.4, 18.3, and 21.0 mL/min/1.73 m2 at 24, 36, and 48 months after starting treatment, respectively. No differences in eGFR decrease every 6 months were seen for three groups classified by baseline eGFR ≥90 mL/min/1.73 m2 (n = 6), < 90 but ≥60 mL/min/1.73 m2 (n = 26), or < 60 but ≥45 mL/min/1.73 m2 (n = 8). Grade 3 proteinuria was associated with declines in eGFR (p = 0.0283). Long observation period was also associated with decreases in eGFR (p = 0.0115), indicating that eGFR may decrease in a time-dependent manner. Lenvatinib can induce declines in eGFR, particularly with treatment duration > 2 years, regardless of baseline eGFR. Proteinuria is a risk factor for declines in eGFR. Patients who start lenvatinib with better renal function show a renal reserve capacity, prolonging clinical outcomes. Decision-making protocols must balance the benefits of lenvatinib continuation with acceptable risks of harm. Lenvatinib is an agent that shows strong tumor suppression, targeting multiple receptors including vascular endothelial growth factor receptor (VEGFR)-1 to − 3 [1, 2]. The characteristics of inducing proteinuria and hypertension as shared class effects are also well known [2,3,4,5], particularly due to VEGFR-2 suppression. The effects of lenvatinib on renal function have actually received relatively little attention due to the rarity of acute renal injury, but are becoming a new concern for patients on long-term treatment. While several reports have examined the effects of lenvatinib on renal function [6,7,8], those studies were either case reports or short-term investigations. The magnitude of urinary protein excretion is recognized as a factor associated with increased risk of progressive renal damage [9] and subsequent end-stage renal disease (ESRD) [10,11,12]. Iseki et al. reported the ultimate incidences of ESRD among screened individuals with a 17-year follow-up period as 0.2, 1.4, 7.1, and 15.4% among proteinuria-negative, 1+, 2+, and 3+ cases, respectively [9]. This suggests the importance of further investigation of renal outcomes among patients with proteinuria. However, the median observation period of approximately 3 years for lenvatinib in the SELECT trial [13] was far shorter than the period suggested to involve concerns regarding proteinuria in healthy individuals. Our clinical experience [14] showed almost the same prognosis as the SELECT trial. 
Some patients continue treatment, balancing the degree of disease progression and adverse events (AEs) and the difficulty of proteinuria management [8]. Renal function thus represents a potential new concern when the treatment period is extended in those patients. More than 5 years have passed since lenvatinib was approved for use in patients with advanced differentiated thyroid carcinoma (DTC). The temporal course of renal function and the impact of proteinuria on renal function with long-term lenvatinib exposure have yet to be clarified, with little evidence available on whether lenvatinib induces renal failure. Furthermore, the indications for lenvatinib are now expanding to several cancer types [15,16,17]. Lenvatinib is metabolized hepatically and excreted renally, so the recommended starting dose differs among types of malignancy. DTC is a cancer type with a low frequency of liver and renal metastases, both of which can affect pharmacokinetics. This study analyzed the long-term effect of lenvatinib on renal function in patients with advanced DTC treated with lenvatinib. This study involved DTC patients with the evidence of radioactive iodine-refractory disease who received lenvatinib therapy and who had results available for renal function tests performed at Ito Hospital, Tokyo, Japan, from May 2015 to December 2019. To reveal the long-term renal effects of lenvatinib, patients treated for ≥6 months were investigated. Of the total of 59 DTC patients treated with lenvatinib, 40 (68%) satisfied these criteria and were investigated in this study. Management and efficacy of lenvatinib Lenvatinib was prescribed at a starting dose of 24 mg once daily. Dose interruption or dose reduction in response to adverse events (AEs) was required for treatment continuation. Accordingly, the intensity of treatment was represented as dose intensity (DI), as the average lenvatinib dose in milligrams per day within the treatment period. Morphologic and prognostic treatment efficacy was evaluated. AEs were assessed based on the National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE) version 4.0 at every outpatient follow-up, at least every 2 weeks for the first 2 months, then every month thereafter, if the condition of the patient was clinically stable. When treatment-related grade 3 or intolerable grade 2 AEs were encountered, lenvatinib was interrupted until the event in question resolved to grade ≤ 2 or baseline, then sequential dose reductions were implemented if necessary [18]. In addition, proteinuria was assessed based on CTCAE version 4.0, defining: 1+ proteinuria, urinary protein ≥ the upper limit of normal – < 1.0 g/24 h as Grade 1, 2+ and 3+ proteinuria; urinary protein ≥1.0 – < 3.5 g/24 h as Grade 2; and 4+ proteinuria, urinary protein ≥3.5 g/24 h as Grade 3. Instead of a 24-h urine sample, urine protein-to-creatinine ratio (UPCR, g/gCre) was graded based on a previous report confirming its feasibility. After urinalysis performed using a qualitative dipstick test, samples that tested positive (1+ on the dipstick for proteinuria) were sent for UPCR testing the same day [18]. Dose adjustment was decided based on the results of UPCR, as follows: lenvatinib was interrupted for UPCR ≥3.5 g/gCre (i.e., grade 3 proteinuria); and was restarted when proteinuria improved to UPCR < 3.5 g/gCre (i.e., ≤grade 2), as reported previously [18]. 
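As a purely illustrative aside (not a clinical decision tool, and not part of the original protocol), the UPCR-based grading and dose-interruption rule described above can be written out as a short sketch. The grade boundaries follow the CTCAE v4.0 thresholds quoted in this section; the numeric upper limit of normal and the function names are our own assumptions, since no numeric ULN is stated in the text.

```python
# Illustrative sketch only of the UPCR-based proteinuria management rule described
# above (CTCAE v4.0 thresholds; UPCR in g/gCre used in place of a 24-h collection).
# The default ULN value of 0.15 g/gCre is an assumed placeholder, not from the text.

def proteinuria_grade(upcr_g_per_gcre, uln=0.15):
    """Map a urine protein-to-creatinine ratio to a CTCAE v4.0-style grade."""
    if upcr_g_per_gcre >= 3.5:
        return 3
    if upcr_g_per_gcre >= 1.0:
        return 2
    if upcr_g_per_gcre >= uln:
        return 1
    return 0

def lenvatinib_action(upcr_g_per_gcre):
    """Interrupt at grade 3 (UPCR >= 3.5 g/gCre); otherwise continue or restart."""
    return "interrupt" if proteinuria_grade(upcr_g_per_gcre) >= 3 else "continue or restart"

for upcr in (0.1, 0.6, 1.8, 4.2):
    print(f"UPCR {upcr} g/gCre -> grade {proteinuria_grade(upcr)} -> {lenvatinib_action(upcr)}")
```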
Blood pressure (BP) during treatment was controlled mainly using Ca blockers and angiotensin II receptor blockers (ARBs), following the regulatory goal of systolic BP < 120 mmHg, diastolic BP < 80 mmHg in patients without hypertension as a comorbidity and CTCAE grade 1 in patients with hypertension on medication. Evaluation of renal function Estimated glomerular filtration rate (eGFR, mL/min/1.73 m2) was calculated as an indicator of renal function. The calculation formulae for eGFR are as follows: $$ \mathrm{Male}:\left[\mathrm{eGFR}\right]\ \left(\mathrm{mL}/\min /1.73\;{\mathrm{m}}^2\right)=194.00\times \left[\mathrm{creatinine}\right]\ {\left(\mathrm{mg}/\mathrm{dl}\right)}^{-1.094}\times \left[\mathrm{age}\right]\ {\left(\mathrm{years}\right)}^{-0.287} $$ $$ \mathrm{Female}:\left[\mathrm{eGFR}\right]\ \left(\mathrm{mL}/\min /1.73\;{\mathrm{m}}^2\right)=194.00\times \left[\mathrm{creatinine}\right]\ {\left(\mathrm{mg}/\mathrm{dl}\right)}^{-1.094}\times \left[\mathrm{age}\right]\ {\left(\mathrm{years}\right)}^{-0.287}\times 0.739 $$ Values for eGFR were calculated every visit, and data at baseline and 1 month, 3 months, and every 6 months until the 5th year were adopted for evaluation. The adopted data include only until the decision to discontinue treatment was made. Absolute values and change from baseline of eGFR were used for analyses. Temporal changes in eGFR were investigated for all patients. The definition of renal impairment in this study was set based on these results. Correlations between baseline eGFR and clinical outcomes were investigated. Furthermore, with reference to Kidney Disease: Improving Global Outcomes (KDIGO) chronic kidney disease (CKD) classifications [19], baseline eGFR was divided into three groups: Group H, high eGFR, defined as ≥90 mL/min/1.73 m2; Group M, middle eGFR group, defined as ≥60 but < 90 mL/min/1.73 m2; and Group L, low eGFR group, defined as ≥45 but < 60 mL/min/1.73 m2. The temporal course of changes in eGFR was analyzed for these three groups. Furthermore, patients were categorized into two groups as Group D (decreased group) and Group ND (not-decreased group) based on the results of time-dependent eGFR changes in all 40 patients. Background characteristics and treatment efficacy were compared between these two groups. Data up to July 1, 2020 were assessed and retrospectively reviewed. Statistical analyses were performed using JMP software v12.0 (SAS Institute, Cary, NC). Differences between groups were analyzed using the Wilcoxon test. All p-values were two-sided, and values of p < 0.05 were considered significant. Survival curves were plotted using the Kaplan–Meier method. All study participants provided informed consent, and the study protocol was approved by the institutional ethics review committee at Ito Hospital and met the guidelines of our responsible agency. All methods were carried out in accordance with relevant guidelines and regulations. The background characteristics and renal parameters of patients are shown in Table 1. Median age was 67 years, and 15 patients were male. Five patients showed performance status (PS) 2, all due to bone metastasis. Median baseline eGFR was 72.2 mL/min/1.73 m2, and all patients showed eGFR ≥30 mL/min/1.73 m2. Six patients had a past renal history of note, including hypertensive nephropathy, drug-induced nephropathy, pyelonephritis, nephrolithiasis resulting in hydronephrotic kidney, and post-nephrectomy status due to malignancy. 
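As an aside for readers who want to reproduce the calculation, the eGFR equations and the baseline grouping described in the Methods above can be expressed as a minimal sketch. The constants are exactly those of the formulae written out above; the function names and the example patient values are our own and purely hypothetical.

```python
# Minimal sketch of the eGFR calculation and baseline grouping used in this study.
# Function names and the example patient are illustrative assumptions.

def egfr(creatinine_mg_dl, age_years, female):
    """eGFR (mL/min/1.73 m2) = 194 * Cr^-1.094 * age^-0.287, multiplied by 0.739 if female."""
    value = 194.0 * creatinine_mg_dl ** -1.094 * age_years ** -0.287
    return value * 0.739 if female else value

def baseline_group(egfr_value):
    """Group H: >=90; Group M: >=60 but <90; Group L: >=45 but <60 mL/min/1.73 m2."""
    if egfr_value >= 90:
        return "H"
    if egfr_value >= 60:
        return "M"
    if egfr_value >= 45:
        return "L"
    return "<45 (below Group L)"

e = egfr(creatinine_mg_dl=0.8, age_years=67, female=True)  # hypothetical patient
print(round(e, 1), baseline_group(e))
```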
Renal and liver metastases were detected as of the latest computed tomography (CT) evaluation in 3 and 7 patients, respectively. Table 1 Background characteristics of patients Efficacy of lenvatinib This study excluded patients with short-term treatment (< 6 months), and only patients who could receive continuous, long-term treatment were analyzed. Median DI was 9.6 mg/day. Best response to treatment was partial response (PR) in 29 patients (73%), stable disease in 10 (25%), and progressive disease in 1 (2%), according to RECIST version 1.1 guidelines [20]. Median values for overall survival (OS), time to treatment failure (TTF), and progression-free survival of these patients were 45.4 months (95% confidence interval [CI], 32.4 months–not reached [NR]), 44.1 months (95%CI, 22.5 months–NR), and 19.9 months (95%CI, 14.5–35.3 months), respectively. Proteinuria was the second most common AE after hypertension. Of the total of 39 patients (97.5%) who showed proteinuria, grade 1, 2, and 3 proteinuria was the highest grade in 9, 9, and 21 patients, respectively. Median interval to onset was 12.4 months (95%CI, 0.7–28.5 months). Lenvatinib administration was continued in 19 patients (47.5%) as of the cut-off time. Reasons for lenvatinib discontinuation were deteriorating PS due to disease progression in 14 patients, uncontrollable proteinuria with disease progression in 5 patients, and decreased eGFR in 2 patients (36.6 and 19.9 mL/min/1.73 m2, respectively). No patients required initiation of hemodialysis. Temporal course of eGFR Courses of changes in absolute eGFR value and changes in value for all 40 patients are shown in Fig. 1. A mild decrease in eGFR was seen over time. Compared to baseline, eGFR at each time point showed significant decreases except at 18 (n = 32), 54 (n = 6), and 60 months (n = 3). Average decreases in eGFR were 11.4, 18.3, and 21.0 mL/min/1.73 m2 at 24, 36, and 48 months, respectively. The decreased value reached > 20 mL/min/1.73 m2 by 48 months. Median final eGFR was 64.8 ± 22.5 mL/min/1.73 m2. Based on the results of time-dependent eGFR changes in all 40 patients, renal impairment in this study was defined as a decline in eGFR of > 15 mL/min/1.73 m2 continuing ≥6 months, with the final eGFR showing a decrease of > 20 mL/min/1.73 m2. Thirteen patients (32.5%) met this definition, and were categorized as Group D. Time course for eGFR changes for the patient population. Mean ± standard deviation baseline eGFR for all 40 patients was 73.9 ± 18.1 mL/min/1.73 m2. A mild decrease in eGFR is seen over time. Mean eGFR at 12, 24, 36, 48, and 60 months was 70.6 ± 20.9, 66.6 ± 20.5, 61.4 ± 19.3, 59.5 ± 15.7, and 81.3 ± 28.5 mL/min/1.73 m2, respectively. *: Time point showing a significant difference compared to baseline eGFR Baseline eGFR effect on treatment Background characteristics and treatment efficacy were investigated in the three groups according to baseline eGFR, with 6 (15%) patients in Group H, 26 (65%) in Group M, and 8 (20%) in Group L (Table 2). Older age (p = 0.0206), male sex (p = 0.0055), and current hypertension (p = 0.0207) tended to be associated with low baseline eGFR. Observation period was significantly longer in Group H (p = 0.0431). A best response of PR was significantly more frequent in Group H than in other groups (p = 0.0463). Temporal changes in eGFR for these three groups were calculated with both the absolute value and the change value (Fig. 2). 
The eGFR decreased steadily in all groups, whereas no significant difference in degree of decrease was seen between groups at the same time point. Table 2 Clinical characteristics according to baseline eGFR Time course for eGFR changes according to baseline eGFR. Baseline eGFR is divided into the three groups of ≥90 mL/min/1.73 m2 (Group H: high eGFR, n = 6), ≥60 but < 90 mL/min/1.73 m2 (Group M: middle eGFR, n = 26), and ≥ 45 but < 60 mL/min/1.73 m2 (Group L: low eGFR, n = 8). Changes in eGFR over time show no significant differences among these three groups. Based on the correlation between baseline eGFR and clinical outcome as divided into three groups, values between baseline and latest eGFR were compared (Fig. 3). A significant decrease in eGFR was seen for Group H (p = 0.0228), but no significant decreases were evident for Group M (p = 0.0546) or Group L (p = 0.8345). The latest eGFR values in Groups H, M, and L were 86.1 ± 15.9, 64.0 ± 22.4, and 51.3 ± 13.6 mL/min/1.73 m2, respectively. Comparison of baseline eGFR and latest eGFR by baseline eGFR. A) Group H (high eGFR, n = 6): eGFR changes from 106.0 mL/min/1.73 m2 to 86.1 mL/min/1.73 m2, for a mean difference of −20.0 mL/min/1.73 m2 (p = 0.0228). B) Group M (middle eGFR, n = 26): eGFR changes from 73.8 mL/min/1.73 m2 to 64.0 mL/min/1.73 m2, for a mean difference of − 9.7 mL/min/1.73 m2 (p = 0.0546). C) Group L (low eGFR, n = 8): eGFR changes from 50.4 mL/min/1.73 m2 to 51.3 mL/min/1.73 m2, for a mean difference of 0.9 mL/min/1.73 m2 (p = 0.8354) Lenvatinib was discontinued in Groups H, M, and L due to uncontrollable proteinuria with disease progression in 1, 4, and 0 patients, due to decreased eGFR in 0, 1, and 1 patients, and due to PS deteriorating due to disease progression in 2, 9, and 3 patients, respectively. Decrease in eGFR and risk factors A total of 13 patients (32.5%) who met our renal impairment criteria were labeled as Group D, and the remaining 27 patients (67.5%) as Group ND. Median baseline eGFR was 78.5 mL/min/1.73 m2 in Group D and 63.8 mL/min/1.73 m2 in Group ND (p = 0.0165). A decrease of > 15 mL/min/1.73 m2 in eGFR started at 8.9 months (0.8–37.3 months) in Group D patients. Temporal changes in eGFR in these two groups were calculated with both the change value (Fig. 4) and the absolute value (Table 3). The eGFR of Group D was obviously decreased, since this was defined as the eGFR-decrease group, with decreased values of 18.3, 28.5, and 29.0 mL/min/1.73 m2 in months 24, 36, and 48, respectively. Meanwhile, eGFR in Group ND was only slightly decreased, reaching a decrease of > 5 mL/min/1.73 m2 after 24 months. The number of patients with baseline eGFR ≥60 mL/min/1.73 m2 was significantly higher in Group D than in Group ND, and was also associated with decreased eGFR (p = 0.0072). The long observation period was also associated with a decrease in eGFR, which was considered to indicate that eGFR may decrease in a time-dependent manner. Grade 3 proteinuria was identified as a risk factor for renal impairment (p = 0.0283). Of the total of 21 patients with grade 3 proteinuria, 10 patients (47.6%) were allocated to Group D. Of the total of 27 Group ND patients, Grade 3 proteinuria was seen in 3 (11.1%). Clinical factors associated with renal impairment are shown in Table 4. No difference between the two groups was seen in DI calculated as the cumulative dose up to the same time point for each year (Table 5). 
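To make the grouping used in this section concrete, the working definition of renal impairment given above (a decline in eGFR of > 15 mL/min/1.73 m2 continuing ≥6 months, with a final decrease of > 20 mL/min/1.73 m2) can be sketched as a simple check over a patient's eGFR series. This is our own reading of the definition, offered only as an illustration; the example series and function name are hypothetical.

```python
# Our own illustrative reading of the Group D ("renal impairment") definition above.
# Input: list of (month_from_baseline, eGFR) pairs; the first measurement is baseline.

def is_group_d(series):
    series = sorted(series)
    baseline = series[0][1]
    final = series[-1][1]
    if baseline - final <= 20:
        return False
    # Look for a run of measurements, spanning at least 6 months,
    # in which every value sits more than 15 below baseline.
    run_start = None
    for month, value in series:
        if baseline - value > 15:
            if run_start is None:
                run_start = month
            if month - run_start >= 6:
                return True
        else:
            run_start = None
    return False

example = [(0, 78), (6, 70), (12, 60), (18, 58), (24, 55), (30, 54)]  # hypothetical patient
print("Group D" if is_group_d(example) else "Group ND")
```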
Lenvatinib was discontinued due to uncontrollable proteinuria with disease progression in 1 and 4 patients, due to decreased eGFR in 1 and 1 patients, and due to PS deterioration resulting from disease progression in 3 and 11 patients in Groups D and ND, respectively. The degree of eGFR decrease in 1 patient discontinued due to eGFR decrease in Group ND was not compatible with our definition. Comparison of eGFR changes between baseline and latest eGFR according to baseline eGFR. Patients are divided into two groups according to degree of eGFR decrease satisfying our definition of renal impairment. Group D (n = 13) shows a sustained decrease in eGFR, particularly at 24 months. Group ND (n = 27) shows no sustained decrease in eGFR, but tends to show a slight decrease that does not meet our definition. Table 3 Changes in eGFR value according to renal function decrease Table 4 Clinical factors for renal function decrease Table 5 Dose intensity according to decrease in renal function This investigation was conducted to clarify the long-term effects of lenvatinib on renal function. VEGF is an essential factor for glomerular structure [21], and this is supported by the fact that VEGFR-suppressing agents such as lenvatinib can induce proteinuria [1,2,3,4, 6]. Lenvatinib is indicated at present as a monotherapy in patients with radioiodine-refractory DTC [13] and unresectable hepatocellular carcinoma [15]. Further indications are expected [16, 17]. The recommended initiation dose of lenvatinib differs according to the type of malignancy. DTC is a cancer type with a low frequency of liver or renal metastases, which can affect drug metabolism and excretion in rare cases. The results of this investigation could provide insights into the treatment of other malignancies. Overall, renal function decreased over time to a relatively small degree within 2 years, then declined continuously thereafter. Renal impairment in this study was uniquely defined as a decline in eGFR of > 15 mL/min/1.73 m2 for ≥6 months, with a total decrease of > 20 mL/min/1.73 m2 as of the latest eGFR. Approximately one-third of patients met the definition of renal impairment, confirming that lenvatinib can affect renal function. The international definition of chronic kidney disease (CKD) is a glomerular filtration rate (GFR) < 60 mL/min/1.73 m2, or markers of kidney damage, or both, for ≥3 months, regardless of the underlying cause. Unlike that general definition, a slight eGFR decrease during cancer therapy regardless of baseline eGFR can be detected by our definition. Adopting this definition as a valid indicator for recognizing that eGFR is starting to decline can trigger closer attention to renal function. With this definition, the comparatively acute renal impairment due to end-stage cancer that results in deterioration of whole organs can be differentiated from renal impairment induced by lenvatinib. Conversely, short-term declines in eGFR due to lenvatinib cannot be detected using this definition, but such declines are rare. Distinguishing between these two factors is also difficult in patients with end-stage cancer. The observation period was significantly longer in Group H among the three groups divided by baseline eGFR. The change between baseline and latest eGFR was significantly different in Group H (Fig. 3). Furthermore, no differences in degree of decrease were seen among the three groups at the same time point (Fig. 2, Table 2). That is, the degree of decline in eGFR was unaffected by baseline eGFR.
This suggests that no special attention needs to be paid to renal function when baseline renal function is acceptably low (e.g., eGFR ≥45 but <60 mL/min/1.73 m2). It also suggests that patients with high renal function have abundant renal reserve, allowing treatment to be continued for longer. This is supported by the fact that the rate of RECIST-PR and the frequency of proteinuria were highest among patients in Group H, and that neither PS at baseline nor renal reasons for lenvatinib discontinuation differed significantly between groups. From these assessments, although not definitive, baseline eGFR appears to be less a prognostic predictor than a predictor of tolerance for adverse events (AEs). When patients were divided into two groups according to the presence or absence of renal impairment, a marked decrease in eGFR was clearly seen after 2 years in Group D. A mild decline was seen even in Group ND, although the degree did not meet the definition (Fig. 4, Table 4). Lenvatinib can thus induce renal impairment in some patients, with the potential for deterioration increasing over time. No involvement of DI over the same observation period was seen (Table 5). Proteinuria was shown to increase the risk of renal impairment. This may reflect the same phenomenon seen in healthy subjects with proteinuria, i.e., an increased risk of ESRD over a span of >10 years, but occurring over a shorter time span. Still, of the 21 patients with grade 3 proteinuria, only 10 (47.6%) showed a decrease in eGFR, whereas even among the 19 patients without grade 3 proteinuria, 3 (15.8%) showed a decrease in eGFR. Proteinuria may be just one phenotype of the renal damage caused by VEGFR inhibitors, and even patients without proteinuria should be monitored for changes in renal function. Patients with baseline grade 1 proteinuria appear able to receive treatment safely for a long time regardless of the subsequent appearance of proteinuria. Since this pathology is not high-grade proteinuria equivalent to grade 3, these patients were examined together with cases showing no proteinuria in this study. Proteinuria can be managed continuously with regulation of the lenvatinib DI while weighing the balance with disease control. Renal impairment, by contrast, cannot be immediately improved simply by adjusting the lenvatinib dose. Once definitive renal impairment occurs, treatment must be suspended irrespective of successful disease control. Although no patients required initiation of dialysis in this study, eGFR could logically decrease enough to require dialysis over a long treatment period. The timing of a change to the next treatment line is thus the next important clinical question [8], but that issue cannot be addressed using the present results. Only limited lines of treatment are available for DTC, unlike for some other malignancies. Where multiple treatment options are available, treatment with one agent does not need to be prolonged when eGFR is decreasing. Sorafenib, another agent approved for DTC, rarely induces proteinuria [8] and was confirmed as safe by Tatsugami et al., albeit in 1-year investigations [22, 23]. Dialysis can directly affect quality of life. Ideally, the decision regarding whether dialysis should be initiated when renal function finally fails should be made in advance, in accordance with recommendations from the field of onconephrology [24, 25].
Prolongation of OS with anti-cancer treatment is obviously given very high priority [14], along with consideration of a renal prognosis commensurate with the oncological prognosis in patients receiving lenvatinib. The balance between acceptable risk of harm and potential benefit from lenvatinib treatment is an important aspect of therapy [14]. In our study, recovery of renal function after lenvatinib discontinuation could not be assessed because of insufficient data from patients after lenvatinib cessation. Two key limitations to this study should be considered. First, this analysis was limited to Japanese patients. This population reportedly shows a high frequency of lenvatinib-induced proteinuria compared with overall study populations, including other ethnicities [13, 15, 26]. A smaller number of nephrons may be related to this phenomenon, although the details have yet to be clarified [27]. Ethnicity-specific renal effects of lenvatinib also remain unclear. The real-world DI for Japanese populations may tend to be lower than DIs reported from other countries, and the relationship of the high frequency of proteinuria to this point is also unclear. The second limitation was the lack of consideration given to the muscle mass of each patient. As eGFR is an index based on serum creatinine level, values are affected by muscle mass. Since some patients receiving treatment may have had sarcopenia [28], eGFR may have been overestimated in cachectic patients.

To the best of our knowledge, this is the first study to describe the long-term effects of lenvatinib on renal function in patients with advanced DTC treated with lenvatinib in actual clinical practice. Our study revealed that lenvatinib can induce renal impairment, especially in treatment periods >2 years, regardless of baseline eGFR. Lenvatinib can be used safely, at least in terms of renal effects, for periods within 2 years. Patients who start therapy with better renal function have a larger reserve capacity, allowing longer clinical application. Grade 3 proteinuria is a risk factor for renal impairment. Decreased eGFR does not necessarily warrant immediate treatment discontinuation, and ideally treatment continuation should be decided according to the balance between acceptable risk of harm and potential benefit from lenvatinib.

All data generated or analyzed during this study are included in this published article.

References
1. Yamamoto Y, Matsui J, Matsushima T, Obaishi H, Miyazaki K, Nakamura K, et al. Lenvatinib, an angiogenesis inhibitor targeting VEGFR/FGFR, shows broad antitumor activity in human tumor xenograft models associated with microvessel density and pericyte coverage. Vasc Cell. 2014;6(1):18. https://doi.org/10.1186/2045-824X-6-18
2. Stjepanovic N, Capdevila J. Multikinase inhibitors in the treatment of thyroid cancer: specific role of lenvatinib. Biologics. 2014;8:129–39. https://doi.org/10.2147/BTT.S39381
3. Izzedine H, Massard C, Spano JP, Goldwasser F, Khayat D, Soria JC. VEGF signalling inhibition-induced proteinuria: mechanisms, significance and management. Eur J Cancer. 2010;46(2):439–48. https://doi.org/10.1016/j.ejca.2009.11.001
4. Wu S, Keresztes RS. Antiangiogenic agents for the treatment of nonsmall cell lung cancer: characterizing the molecular basis for serious adverse events. Cancer Investig. 2011;29(7):460–71. https://doi.org/10.3109/07357907.2011.597815
5. den Deurwaarder ES, Desar IM, Steenbergen EJ, et al. Kidney injury during VEGF inhibitor therapy. Neth J Med. 2012;70(6):267–71.
6. Cosmai L, Gallieni M, Liguigli W, Porta C. Renal toxicity of anticancer agents targeting vascular endothelial growth factor (VEGF) and its receptors (VEGFRs). J Nephrol. 2017;30(2):171–80. https://doi.org/10.1007/s40620-016-0311-8
7. Cavalieri S, Cosmai L, Genderini A, Nebuloni M, Tosoni A, Favales F, et al. Lenvatinib-induced renal failure: two first-time case reports and review of literature. Expert Opin Drug Metab Toxicol. 2018;14(4):379–85. https://doi.org/10.1080/17425255.2018.1461839
8. Goto H, Kiyota N, Otsuki N, Imamura Y, Chayahara N, Suto H, et al. Successful treatment switch from lenvatinib to sorafenib in a patient with radioactive iodine-refractory differentiated thyroid cancer intolerant to lenvatinib due to severe proteinuria. Auris Nasus Larynx. 2018;45(6):1249–52. https://doi.org/10.1016/j.anl.2018.05.003
9. Iseki K, Ikemiya Y, Iseki C, Takishita S. Proteinuria and the risk of developing end-stage renal disease. Kidney Int. 2003;63(4):1468–74. https://doi.org/10.1046/j.1523-1755.2003.00868.x
10. Lea J, Greene T, Hebert L, Lipkowitz M, Massry S, Middleton J, et al. The relationship between magnitude of proteinuria reduction and risk of end-stage renal disease: results of the African American study of kidney disease and hypertension. Arch Intern Med. 2005;165(8):947–53. https://doi.org/10.1001/archinte.165.8.947
11. Abbate M, Zoja C, Remuzzi G. How does proteinuria cause progressive renal damage? J Am Soc Nephrol. 2006;17(11):2974–84. https://doi.org/10.1681/ASN.2006040377
12. Usui T, Kanda E, Iseki C, Iseki K, Kashihara N, Nangaku M. Observation period for changes in proteinuria and risk prediction of end-stage renal disease in general population. Nephrology (Carlton). 2018;23(9):821–9. https://doi.org/10.1111/nep.13093
13. Schlumberger M, Tahara M, Wirth LJ, Robinson B, Brose MS, Elisei R, et al. Lenvatinib versus placebo in radioiodine-refractory thyroid cancer. N Engl J Med. 2015;372(7):621–30. https://doi.org/10.1056/NEJMoa1406470
14. Masaki C, Sugino K, Saito N, Akaishi J, Hames KY, Tomoda C, et al. Efficacy and limitations of lenvatinib therapy for radioiodine-refractory differentiated thyroid cancer: real-world experiences. Thyroid. 2020;30(2):214–21. https://doi.org/10.1089/thy.2019.0221
15. Kudo M, Finn RS, Qin S, Han KH, Ikeda K, Piscaglia F, et al. Lenvatinib versus sorafenib in first-line treatment of patients with unresectable hepatocellular carcinoma: a randomised phase 3 non-inferiority trial. Lancet. 2018;391(10126):1163–73. https://doi.org/10.1016/S0140-6736(18)30207-1
16. Taylor MH, Lee CH, Makker V, Rasco D, Dutcus CE, Wu J, et al. Phase IB/II trial of lenvatinib plus pembrolizumab in patients with advanced renal cell carcinoma, endometrial cancer, and other selected advanced solid tumors. J Clin Oncol. 2020;38(11):1154–63. https://doi.org/10.1200/JCO.19.01598
17. Sato J, Satouchi M, Itoh S, Okuma Y, Niho S, Mizugaki H, et al. Lenvatinib in patients with advanced or metastatic thymic carcinoma (REMORA): a multicentre, phase 2 trial. Lancet Oncol. 2020;21(6):843–50. https://doi.org/10.1016/S1470-2045(20)30162-5
18. Masaki C, Sugino K, Kobayashi S, Akaishi J, Hames KY, Tomoda C, et al. Urinalysis by combination of the dipstick test and urine protein-creatinine ratio (UPCR) assessment can prevent unnecessary lenvatinib interruption in patients with thyroid cancer. Int J Clin Oncol. 2020;25(7):1278–84. https://doi.org/10.1007/s10147-020-01678-x
19. Levey AS, Eckardt KU, Tsukamoto Y, Levin A, Coresh J, Rossert J, et al. Definition and classification of chronic kidney disease: a position statement from Kidney Disease: Improving Global Outcomes (KDIGO). Kidney Int. 2005;67(6):2089–100. https://doi.org/10.1111/j.1523-1755.2005.00365.x
20. Eisenhauer EA, Therasse P, Bogaerts J, Schwartz LH, Sargent D, Ford R, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer. 2009;45(2):228–47. https://doi.org/10.1016/j.ejca.2008.10.026
21. Kitamoto Y, Tokunaga H, Miyamoto K, et al. VEGF is an essential molecule for glomerular structuring. Nephrol Dial Transplant. 2002;17(Suppl 9):25–7. https://doi.org/10.1093/ndt/17.suppl_9.25
22. Tatsugami K, Oya M, Kabu K, Akaza H. Efficacy and safety of sorafenib for advanced renal cell carcinoma: real-world data of patients with renal impairment. Oncotarget. 2018;9(27):19406–14. https://doi.org/10.18632/oncotarget.24779
23. Tatsugami K, Oya M, Kabu K, Akaza H. Evaluation of efficacy and safety of sorafenib in kidney cancer patients aged 75 years and older: a propensity score-matched analysis. Br J Cancer. 2018;119(2):241–7. https://doi.org/10.1038/s41416-018-0129-3
24. Cosmai L, Porta C, Perazella MA, Launay-Vacher V, Rosner MH, Jhaveri KD, et al. Opening an onconephrology clinic: recommendations and basic requirements. Nephrol Dial Transplant. 2018;33(9):1503–10. https://doi.org/10.1093/ndt/gfy188
25. Capasso A, Benigni A, Capitanio U, Danesh FR, di Marzo V, Gesualdo L, et al. Summary of the international conference on Onco-nephrology: an emerging field in medicine. Kidney Int. 2019;96(3):555–67. https://doi.org/10.1016/j.kint.2019.04.043
26. Kiyota N, Schlumberger M, Muro K, Ando Y, Takahashi S, Kawai Y, et al. Subgroup analysis of Japanese patients in a phase 3 study of lenvatinib in radioiodine-refractory differentiated thyroid cancer. Cancer Sci. 2015;106(12):1714–21. https://doi.org/10.1111/cas.12826
27. Kanzaki G, Puelles VG, Cullen-McEwen LA, et al. New insights on glomerular hyperfiltration: a Japanese autopsy study. JCI Insight. 2017;2(19).
28. Yamazaki H, Sugino K, Matsuzu K, Masaki C, Akaishi J, Hames K, et al. Sarcopenia is a prognostic factor for TKIs in metastatic thyroid carcinomas. Endocrine. 2020;68(1):132–7. https://doi.org/10.1007/s12020-019-02162-x

The authors would like to thank NK for the imaging evaluation of patients.

Department of Surgery, Ito Hospital, Tokyo, 150-8308, Japan: Chie Masaki, Kiminori Sugino, Yoshie Hosoi, Reiko Ono, Haruhiko Yamazaki, Junko Akaishi, Kiyomi Y. Hames, Chisato Tomoda, Akifumi Suzuki, Kenichi Matsuzu, Keiko Ohkuwa, Wataru Kitagawa, Mitsuji Nagahama & Koichi Ito
Department of Internal Medicine, Keio University School of Medicine, Tokyo, 160-8582, Japan: Sakiko Kobayashi

Contributed substantially to study conception and design, data acquisition, data analysis and interpretation: KS. Data extraction and quality assessment of the evidence: YH, RO, HY, JA, KY, CT, AS, KM, KO, WK and MN. Involved in drafting the manuscript or revising it critically: KS and SK. Gave final approval of the version to be published: KS. Agreed to be held accountable: KS and KI. C. Masaki wrote the main manuscript text and all authors read and approved the final manuscript.

Correspondence to Chie Masaki.

Written informed consent was obtained from the patients for publication of this study and the accompanying examination data.
The authors have no competing financial interests to declare.

Masaki, C., Sugino, K., Kobayashi, S. et al. Impact of lenvatinib on renal function: long-term analysis of differentiated thyroid cancer patients. BMC Cancer 21, 894 (2021). https://doi.org/10.1186/s12885-021-08622-w

Keywords: Advanced thyroid carcinoma; Renal function; Targeted therapies
Department of Pure Mathematics, University of Waterloo: seminar and event listings, 2015

Thursday, August 20, 2015 — 2:00 PM EDT: Computability Learning Seminar. Michael Deveau, Department of Pure Mathematics, University of Waterloo, "Embedding Lattices in the Computably Enumerable Degrees (Part 3)".

Wednesday, August 19, 2015 — 1:30 PM EDT: Geometry Working Seminar. Anthony McCormick, Pure Math Department, University of Waterloo, "Iterated Function Systems with Overlap".

Friday, August 14, 2015 — 10:30 AM EDT: Master's Research Paper Lecture. Jim Haley, Pure Mathematics, University of Waterloo, "Strongly Reductive Operators and Operator Algebras".

Computability Learning Seminar. Jonny Stephenson, Pure Mathematics, University of Waterloo, "Embedding Lattices in the Computably Enumerable Degrees (continued)". This talk is a continuation of one given August 6th.

Thursday, August 13, 2015 — 10:30 AM EDT: Master's Thesis Seminar. Ehsaan Hossain, Pure Mathematics, University of Waterloo, "The Algebraic Kirchberg–Phillips Conjecture".

Wednesday, August 12, 2015 — 10:30 AM EDT: Zack Cramer, Pure Mathematics, University of Waterloo, "Approximation of Normal Operators by Nilpotents in Purely Infinite $C^*$-algebras".

Friday, August 7, 2015 — 10:30 AM EDT: Master's Essay Lecture. Sam Harris, Pure Mathematics, University of Waterloo, "Kadison Similarity Problem and the Similarity Degree".

Thursday, August 6, 2015 — 2:00 PM EDT: Computability Learning Seminar, "Embedding Lattices into the Computably Enumerable Degrees". The question of which finite lattices can be embedded into the c.e. degrees first arose with the construction of a minimal pair by Yates, and independently by Lachlan, showing the 4-element Boolean algebra can be embedded. This result was rapidly generalised to show any finite distributive lattice can also be embedded. For non-distributive lattices, the situation is more complicated.

Tuesday, August 4, 2015 — 1:30 PM EDT: Universal Algebra Seminar. Stanley Burris, Pure Mathematics, University of Waterloo, "An Introduction to Boole's Algebra of Logic for Classes". Boole's mysterious algebra of logic, based on the algebra of numbers and idempotent variables, has only been properly understood and justified in the last 40 years, more than a century after Boole published his most famous work, Laws of Thought. In this talk an elementary and natural development of Boole's system, from his partial algebra models up to his four main theorems, will be presented.

Thursday, July 30, 2015 — 4:00 PM EDT: Graduate Student Colloquium. Richard Mack, Pure Mathematics, University of Waterloo, "Dual spaces and von Neumann Algebras". A canonical construction in Linear Algebra is that of the dual space. In this talk, we consider two questions: When is a Banach space a dual space? When is there only one possible predual? Examples will be presented illustrating that these are not trivial questions, and a major theorem (Sakai's) will be presented giving a broad class of examples for the second question.

Wednesday, July 29, 2015 — 4:30 PM EDT: Joint Pure Mathematics and Combinatorics & Optimization Colloquium. Patrick Dornian, Combinatorics and Optimization, University of Waterloo, "Disproving the Hirsch Conjecture".

Raymond Cheng, Pure Mathematics, University of Waterloo, "Gutting Bundles on Tori".

Tuesday, July 28, 2015 — 2:30 PM EDT: Chantal David, Concordia University, "Averages of Euler products and statistics of elliptic curves". Joint work with D. Koukoulopoulos and E. Smith.

Tuesday, July 28, 2015 — 10:30 AM EDT: PhD Thesis Defence. Shuntaro Yamagishi, Pure Mathematics, University of Waterloo, "Some additive results in $\mathbb{F}_q[t]$". We collected several results in $\mathbb{Z}$ of additive number theory and translated them to results in $\mathbb{F}_q[t]$. The results we collected are related to slim exceptional sets and the asymptotic formula in Waring's problem, diophantine approximation of polynomials over $\mathbb{Z}$ satisfying divisibility conditions, and the problem of Sidon regarding existence of certain thin sequences.

Monday, July 27, 2015 — 1:30 PM EDT: Algebra Learning Seminar, "The Invariant Basis Number Property".

Mohammad Mahmoud, Pure Mathematics, University of Waterloo, "A Class of Structures without a Turing Ordinal". We continue to show that the class Kw has no Turing Ordinal. We construct a set D which is not enumeration reducible to R_\A for any structure \A in Kw. This will imply directly that if the Turing ordinal exists then it must be strictly greater than 0. On the other hand, Joe Miller showed that, for our class, if the Turing Ordinal exists it must be 0. Both statements tell us that the Turing Ordinal can't exist.

Geometry & Topology Seminar. Oscar Garcia-Prada, ICMAT, Madrid, "Involutions of Higgs bundle moduli spaces".

Student Number Theory Seminar. Asif Zaman, Department of Mathematics, University of Toronto, "Bounding the least prime in an arithmetic progression".

Alejandra Vicente Colmenares, Pure Mathematics, University of Waterloo, "Semistable rank 2 co-Higgs bundles over Hirzebruch surfaces".

Ty Ghaswala, Pure Mathematics, University of Waterloo, "If you love a group, set it free." The existence of algebraic topology (even if you claim to know nothing about it) should be enough to convince you that doing topology without group theory is difficult, frustrating, alcoholism inducing, and above all, a disservice to topology. In this talk I hope to convince you, at least for a moment, that thinking about groups while ignoring topology is a disservice to group theory.

Mohammad Mahmoud, Department of Pure Mathematics, University of Waterloo, "The Turing Ordinal". We present the notion of Turing Ordinal of a class of structures. The Turing Ordinal was introduced by Jockusch and Soare as a computability-theoretic method for comparing complexities of classes of structures. We explain an example by Montalban of a class of structures that doesn't have a Turing Ordinal.

Mohamed El Alami, Pure Math Department, University of Waterloo, "Rank 2 vector bundles on Inoue Surfaces".

Brendan Nolan, University of Kent, "The Dixmier–Moeglin Equivalence".

Thursday, July 9, 2015 — 4:00 PM EDT: Student Colloquium. Alejandra Vincente Colmenares, Department of Pure Mathematics, University of Waterloo, "Stable vector bundles over Riemann surfaces".

Wednesday, July 8, 2015 — 2:30 PM EDT: Samin Riasat, Pure Math Department, University of Waterloo, "Division by subtraction, and ordered groups".
Mechanics of Advanced Materials and Modern Processes

Processing of Al2O3–SiCw–TiC ceramic composite by powder mixed electric discharge grinding

M. K. Satyarthi and Pulak M. Pandey

Mechanics of Advanced Materials and Modern Processes 2016, 2:5

Abstract

The machining of conductive alumina ceramic has been performed successfully by electric discharge grinding (EDG). The aim of the present work was therefore to increase the material removal rate (MRR) during EDG of conductive alumina ceramic by adding ceramic powder to the dielectric. To achieve this objective, an experimental investigation was carried out and the influence of the input process parameters (powder concentration, duty ratio, pulse on time, table speed and wheel speed) on surface roughness (SR), MRR and surface integrity was studied. Fine-grade silicon carbide powder of #1000 mesh size was mixed into the dielectric medium at varying concentrations to understand the influence of powder concentration and its interactions with other process parameters during powder mixed electric discharge grinding (PMEDG). The central composite rotatable design (CCRD) was used to plan the experiments. The obtained statistical models of MRR and SR were optimized to obtain the highest MRR and lowest SR. It was observed that the MRR achieved by PMEDG was 3 to 10 times higher than that of EDG. All the process factors and interactions showed significant contributions to SR. The SR obtained by PMEDG was 2 to 4 times higher than that of EDG. It has been established that the PMEDG process is a better option for processing Al2O3–SiCw–TiC ceramic material as a preliminary operation before EDG to achieve high MRR. In the present work the surface and subsurface damages were also assessed and characterized by scanning electron microscopy (SEM).

Keywords: Electric discharge grinding; Electric discharge machining; Powder mixed electric discharge grinding; Conductive alumina ceramic

Background

An economical machining process is in demand in present-day manufacturing to fulfill the expectations of industry. Such a process should be capable of obtaining a high material removal rate (MRR), low surface roughness (SR) and good surface integrity (a defect-free surface). However, the physical and mechanical properties of conductive alumina ceramic materials, which are retained at elevated temperatures and in corrosive environments, make machining by conventional processes difficult. With the increasing use of alumina ceramics in modern manufacturing industries (Azarafza et al. 2013; Darolia 2013; Mendez-Vilas 2012; Mohanty et al. 2013; Senthil Kumar et al. 2004; Sornakumar et al. 1995; Sugano et al. 2013; Yeniyol et al. 2013), a few attempts at processing by electric discharge machining (EDM) (Patel et al. 2009a, 2009b, 2009c) and conventional diamond grinding (Patnaik Durgumahanti et al. 2010; Singh et al. 2011; Verma et al. 2010) have been reported as successful in the recent past. These results (Patel et al. 2009a, 2009b, 2009c; Patnaik Durgumahanti et al. 2010; Singh et al. 2011; Verma et al. 2010) were interesting and motivated exploration of this grey area of processing extremely hard and brittle materials. Therefore, the work focused on electric discharge grinding (EDG), which deploys the advantages of its parent processes, EDM and conventional diamond grinding, and is based on a thermo-mechanical concept of machining. The process has been addressed as thermo-mechanical due to the utilization of thermal energy for work softening (Koshy et al.
1996) by sparking and mechanical energy for abrasion of work material (Koshy et al. 1996) by grinding. Further, the area of study has been expanded to find out the effect of powder mixed dielectric usages in EDG (Satyarthi and Pandey 2012). The process physics of powder mixed electric discharge machining (PMEDM) has been conceived to understand the role of powder mixed dielectric in EDG. In the following section a brief literature review has been presented to describe the effects of powder mixing in the dielectric during EDM. Thereafter, few attempts made in PMEDG of Al2O3–SiCw–TiC ceramic has been discussed. PMEDM of conductive ceramic materials Numerous research attempts (Chow et al. 2008; Chow et al. 2000; Han et al. 2007; Kansal et al. 2005a, 2005b, 2006, 2007a, 2007b; Peças and Henriques 2003; Wong et al. 1998; Wu et al. 2005; Yeo et al. 2007) have been reported in the field of powder mixed electric discharge machining (PMEDM). In electric discharge machining (EDM), to achieve better surface finish negative polarity (tool + ve, work –ve) was found as one of the prominent factors (Wu et al. 2005). The negative polarity gave better results in PMEDM (Wong et al. 1998) than positive polarity (tool –ve, work + ve). Additional requirements for achieving good surface finish in PMEDM as reported in the literature were low pulse on time (Wong et al. 1998), low discharge current (Chow et al. 2000; Han et al. 2007), uniform dispersion of discharges (Chow et al. 2008; Chow et al. 2000; Wong et al. 1998), reduction in breakdown voltage (Han et al. 2007) and low discharge energy (Han et al. 2007; Wu et al. 2005; Yeo et al. 2007). The incorporation of powder particles in the dielectric medium promotes bridging effect in the insulating dielectric (Chow et al. 2008; Chow et al. 2000; Kansal et al. 2007b). The bridging helped in the dispersion of single pulse discharge energy (Chow et al. 2008; Chow et al. 2000), into multiple sparks. The presence of conductive phase powder particles in the dielectric medium increased the spark gap (Chow et al. 2008; H. M. Chow et al. 2000; Kansal et al. 2007b) and helped in achieving stable machining (Han et al. 2007; Kansal et al. 2007b; Wong et al. 1998; Wu et al. 2005). The powder mixed in the dielectric supported reduction in insulating strength, being conductive in nature (Kansal et al. 2007b) which increased viscosity of dielectric fluid (Yeo et al. 2007). In plasma channel the heat flux decreased due to presence of powder in dielectric fluid and increased the rate of heat dissipation from tool-work interface (Yeo et al. 2007). The researchers used different type of electrically conductive phase powders in their studies such as Al (0.1 g/L) (Chow et al. 2000; Wu et al. 2005), SiC (Chow et al. 2008; Chow et al. 2000; Satyarthi and Pandey 2012), silicon (Kansal et al. 2005a, 2005b, 2006, 2007a; Peças and Henriques 2003), graphite (Han et al. 2007; Kansal et al. 2005a, 2005b; Satyarthi and Pandey 2012), copper and tungsten (Bhattacharya et al. 2011). Wu et al. (2005) also used surfactant for separation of Al (0.25 g/L) for high concentration of powder in dielectric. Various dielectric mediums used were kerosene (Bhattacharya et al. 2011; Chow et al. 2000; Han et al. 2007; Kansal et al. 2005a, 2005b, 2006, 2007a, 2007b; Peças and Henriques 2003; Wong et al. 1998; Wu et al. 2005; Yeo et al. 2007), spark erosion oil (Bhattacharya et al. 2011) and water (Chow et al. 2008). 
The use of water as dielectric was mainly focused due to emphasis of manufacturers for initiation of green manufacturing technology. During experimentation with water as dielectric fluid it was observed that the electrical conductivity of the fluid was increased (Chow et al. 2008). The powder mixed machining helps in achieving reduced operating time of the same component than without powder (Peças and Henriques 2003), due to dis-integration of spark it results in high MRR. The result of PMEDM showed increase in MRR (Chow et al. 2008; H. M. Chow et al. 2000; Kansal et al. 2005a, 2005b, 2007b; Zhao et al. 2002) and surface roughness (Chow et al. 2000; Kansal et al. 2007b; Zhao et al. 2002), whereas improved surface (Bhattacharya et al. 2011; Chow et al. 2008; Chow et al. 2000; Furutania et al. 2001; Han et al. 2007; Kansal et al. 2005a, 2005b, 2007a; Kumar and Batra 2012; Kumar et al. 2009; Peças and Henriques 2003; Wong et al. 1998; Wu et al. 2005) and mirror like surface (Peças and Henriques 2003) was also obtained. It is quite noticeable here to quote that the improved surface has been noticed with the conductive powders like Al (Wu et al. 2005), Si (Peças and Henriques 2003), and Cu (Bhattacharya et al. 2011). Even few researchers (Bhattacharya et al. 2011) mentioned that "to overcome problems of poor finish at high current settings in EDM, the dielectric should be mixed with powder", which showed that the addition of these powder particles induced surface modification rather than high quality machining to achieve reduced surface finish. This may be due to inclusion of the powders in the recast layer (surface modification), serving as filler material in the pits and voids of the recast, which resulted in reduced surface roughness (Bhattacharya et al. 2011; Wu et al. 2005). The improved MRR may be due to formation of minor craters which facilitated easy debris extrusion resulting in reduction of surface roughness (Chow et al. 2008). It was also observed that the surface becomes corrosion and abrasion resistant due to surface modification (Furutania et al. 2001; Kansal et al. 2007b; Kumar and Batra 2009, 2012; Kumar et al. 2009). The surface modification was due to inclusion of powder, debris and hydrocarbon present in dielectric medium (Chow et al. 2008; Kumar and Batra 2012; Kumar et al. 2009). The SEM analysis revealed presence of surface defects like shallow overlapping of recast, re-solidified circular shapes, deep craters, pock marks, debris and globules (Chow et al. 2008; Han et al. 2007; Peças and Henriques 2008; Wong et al. 1998; Yeo et al. 2007). PMEDG of conductive ceramic materials The powder mixed electric discharge grinding (PMEDG) process has not been explored so far. The work done by authors (Satyarthi and Pandey 2012) compared effects of different powders such as graphite, silicon oxide and silicon carbide in PMEDG at constant powder concentration and varying other input parameters. It was found that the MRR achieved by PMEDG was 2 to 13 times higher than EDG processing while using silicon carbide (SiC) and graphite powders, whereas 2 to 7 times higher with silicon oxide (SiO). The surface roughness obtained by EDG process was lower than PMEDG. The formation of surface and subsurface damages was not evident. The results for silicon carbide and graphite powders were found interesting, which led to explore effects of powder concentration and its interactions with other process parameters while PMEDG processing of Al2O3–SiCw–TiC ceramic. 
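Both the review above and the results discussion later in the paper argue in terms of discharge energy. As a point of reference, the energy released by a single discharge is commonly estimated in the EDM literature from the discharge voltage, discharge current and pulse on time; the relation below is that standard textbook estimate, quoted here for orientation rather than taken from the present work:

$$ E_{d} \approx V_{d}\, I_{d}\, T_{on} $$

Under this estimate, raising the pulse on time or the duty ratio increases the energy delivered per pulse train, while suspended conductive powder splits a discharge into several branches, so the same energy is spread over several smaller craters.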
Experimental procedure and analysis of experimental data

The powder mixed electric discharge grinding (PMEDG) experiments were conducted on the setup designed and developed in the laboratory, as shown in Fig. 1. The EDM head assembly of an "Electronica Leader ZNC" die-sinking EDM machine was attached to the developed setup, which was fitted with a servo motor to obtain the desired rotational motion of the grinding wheel, as shown in Fig. 2. The setup was also provided with servo motors mounted on the EDM bed to obtain the desired linear motion. These servo motors were connected to a dedicated system through a role-based collaboration (RBC) breakout box and an ACR processor-based 4-axis motion controller. Aries View software was used for the ACR processor-based 4-axis motion controller. A separate dielectric tank with provision for controlled dielectric flow was used to avoid mixing of the conductive phase powder with the fresh dielectric stored in the dielectric tank of the EDM facility. The dielectric medium used for all of the experiments was kerosene.

Fig. 1 Schematic diagram of the setup
Fig. 2 Experimental setup attached to the EDM machine

Details of workpiece

The electrically conductive Al2O3–SiCw–TiC ceramic composite supplied by Industrial Ceramic Technology was selected as the workpiece. The processing steps used by the supplier of the ceramic composite are shown in Fig. 3. The physical and mechanical properties of the Al2O3–SiCw–TiC ceramic composite are summarized in Table 1. An optical micrograph of the Al2O3–SiCw–TiC composite is shown in Fig. 4. The workpiece size was suitably selected as a 20 × 20 mm2 square of uniform 5 mm thickness. The workpiece was ground with a 200 mesh diamond grinding wheel to achieve a uniform average surface roughness in the range of 0.28 to 0.30 μm before experimentation.

Fig. 3 Processing steps of the Al2O3–SiCw–TiC composite (courtesy: Industrial Ceramic Technology, Inc., USA)
Table 1 Physical and mechanical properties of Al2O3–SiCw–TiC (Patel et al. 2009a, 2009b, 2009c): hardness (Hv), fracture toughness KIC (MPa m^0.5), thermal conductivity k (W/mK at 400 K), electrical resistivity (Ω cm), density ρ (g/cm3)
Fig. 4 Optical micrograph of the Al2O3–SiCw–TiC composite

Selection of process parameters

It is evident from the literature presented in section 1 that the PMEDG process is governed by numerous process parameters. Pilot experiments were carried out using a one-variable-at-a-time approach to determine the ranges of the process parameters on the present setup. The selected process factors and their ranges are given in Table 2. The choice of the conductive phase powder was based on preliminary experimentation for better MRR and surface integrity. The preliminary experimentation was performed with SiC, SiO2 and graphite powders, as described in section 1.2. It was observed that the SiC powder gave the maximum MRR with acceptable surface integrity. Therefore, SiC powder was used in this work to study the PMEDG process characteristics in detail. Patel et al. (2009a, 2009b) reported gap voltage to be an insignificant factor for the MRR of Al2O3–SiCw–TiC, as it is used to maintain the inter-electrode gap by servo control. The authors' study of the influence of process parameters on EDG of Al2O3–SiCw–TiC showed an insignificant contribution of discharge current in the selected range; therefore, the discharge current and gap voltage were kept constant in this study. Table 3 shows the grinding wheel specification, the wheel dressing parameters and the parameters that were kept constant during the PMEDG process.
The dressing of the wheel was carried out at the beginning of experimentation. A machining time of 20 min was suitably decided for all the experiments.

Table 2 PMEDG process factors and ranges; process factors: powder concentration (g/L), duty ratio (%), pulse on time (μs), table speed (m/min), wheel speed (m/min)

Table 3 Grinding wheel specifications, dressing and other parameters used in the experiments; entries include the grinding wheel specification (wheel bonding, abrasive grit size #800 mesh, wheel thickness), the parameters kept constant during PMEDG processing (gap voltage, discharge current), and the wheel dressing parameters (pulse peak current, duty ratio, pulse on time 200 μs, wheel speed 3.93 m/min)

Experimental design

The half-factorial central composite rotatable design (CCRD) was considered in the present work since it requires fewer experiments than the full-factorial CCRD to describe the influence of the input process parameters on the response. Powder concentration, duty ratio, pulse on time, table speed and wheel speed were selected as process factors, as given in Table 2. The measurements of surface roughness of the machined surface were carried out on a "Talysurf 6, Rank Taylor Hobson, England". A traverse length of 2 mm with a cut-off evaluation length of 0.8 mm was selected. The weight measurement was carried out on a "METTLER TOLEDO AB265-S/FACT" weighing machine with a least count of 0.01 mg. The weight loss of material was taken as the average of 5 readings to minimize errors. The experimentally obtained MRR and SR values are given in Table 4. The extent of surface damage was characterized by a scanning electron microscope (SEM), EVO 50.

Table 4 Measured responses corresponding to each experimental run; columns: Pc (g/L), Dc (%), Ton (μs), Vt (m/min), Vw (m/min), MRR (mg/min), Ra (μm)

The analysis of variance (ANOVA) was conducted to check the adequacy of the model and to understand the significance of the process factors and interactions. The ANOVA table for MRR after dropping insignificant terms and interactions is presented in Table 5. The value of R2 is 94.75%, which shows that the regression model provides a strong correlation among process factors and interactions at α = 0.01. The model is adequate and the lack of fit is insignificant. The regression equation for MRR is given by Eq. (1).
Table 5 Analysis of variance for MRR after dropping insignificant factors and interactions (columns: DF, Seq SS, Adj SS, Adj MS; rows include the regression terms, residual error, lack-of-fit and pure error; DF = degrees of freedom, SS = sum of squares, MS = mean square). R-Sq = 94.75%; R-Sq(pred) = 87.74%; R-Sq(adj) = 92.24%. F > F0.01,10,21 = 3.37, so the model is adequate; F_lack-of-fit < F0.01,16,21 = 3.03, so the lack of fit is insignificant.

$$ \mathrm{MRR} = 0.0954 - 0.002\,P_c - 0.00617\,D_c - 0.000126\,T_{on} - 48.1\,V_t - 0.0095\,V_w + 0.000124\,P_c D_c + 0.000003\,P_c T_{on} + 1.03\,P_c V_t + 4.82\,D_c V_t + 0.00056\,D_c V_w + 0.000038\,T_{on} V_w \qquad (1) $$

$$ R_a = 1.84 + 0.065\,P_c + 0.49\,D_c + 0.00109\,T_{on} - 0.28\,V_w - 0.0000979\,P_c^2 - 0.0661\,D_c^2 - 0.00000322\,T_{on}^2 - 0.0000283\,V_t^2 + 0.15\,V_w^2 - 0.00937\,P_c D_c - 0.000083\,P_c T_{on} - 3.04\,P_c V_t + 0.00643\,P_c V_w + 0.000104\,D_c T_{on} - 51.79\,D_c V_t - 0.05\,D_c V_w + 3.73\,T_{on} V_t - 0.000449\,T_{on} V_w - 125.16\,V_t V_w \qquad (2) $$

The ANOVA table for surface roughness (SR) after dropping insignificant terms and interactions is presented in Table 6. The table shows that the value of R2 is 99.95%, representing a strong correlation between process factors and interactions at a significance level of α = 0.01. The model is adequate and the lack of fit is insignificant. The regression equation for SR (Ra) is given by Eq. (2). Further, due to experimental error and noise present in the system, the values of the estimated parameters and of the responses MRR and Ra are subject to uncertainty. Therefore, the confidence interval was calculated to estimate the precision of MRR and Ra and is given by Eq. (3).

Table 6 Analysis of variance for SR after dropping insignificant factors and interactions. Model is adequate (F0.01,15,16 = 3.41).

$$ \Delta Y = t_{\left(\alpha/2,\ \mathrm{DF}\right)}\sqrt{V_e} \qquad (3) $$

Results and discussion

This section includes a detailed discussion of the outcome of the data analysis with respect to the material removal rate (MRR), surface roughness (SR) and surface integrity during powder mixed electric discharge grinding (PMEDG) of the Al2O3–SiCw–TiC ceramic composite. The effects and percentage contributions of the significant process factors and their interactions are also presented and discussed. The response surfaces are presented and the trends explained in order to give a feel for the process physics of material removal and surface generation in the PMEDG process.
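Equations (1) and (2) are ordinary second-order response-surface polynomials, so they can be evaluated and optimized with any general-purpose constrained optimizer; the paper later reports doing this with MATLAB's fmincon (Table 7). The sketch below re-implements Eq. (1) and shows an analogous bounded maximization with SciPy, together with the confidence half-width of Eq. (3). The factor bounds and the numerical settings are placeholders, since the actual ranges come from Table 2 (whose values are not reproduced here), and the output is only meaningful if the factor units match those used when the model was fitted.

import numpy as np
from scipy import optimize, stats

def mrr(x):
    """Eq. (1): predicted MRR (mg/min) for
    x = [Pc (g/L), Dc, Ton (us), Vt (m/min), Vw (m/min)]."""
    Pc, Dc, Ton, Vt, Vw = x
    return (0.0954 - 0.002 * Pc - 0.00617 * Dc - 0.000126 * Ton
            - 48.1 * Vt - 0.0095 * Vw
            + 0.000124 * Pc * Dc + 0.000003 * Pc * Ton + 1.03 * Pc * Vt
            + 4.82 * Dc * Vt + 0.00056 * Dc * Vw + 0.000038 * Ton * Vw)

# Placeholder bounds: a few end points (e.g. 8-40 g/L, 3.93 m/min) appear in the
# text and figure captions, the rest are hypothetical stand-ins for Table 2.
bounds = [(8, 40),       # powder concentration, g/L
          (48, 72),      # duty ratio
          (100, 500),    # pulse on time, us
          (0.02, 0.10),  # table speed, m/min
          (0.785, 3.93)] # wheel speed, m/min

x0 = np.array([b[0] for b in bounds])
res = optimize.minimize(lambda x: -mrr(x), x0, bounds=bounds, method="L-BFGS-B")
print("candidate optimum:", res.x, "predicted MRR:", -res.fun)

# Eq. (3): confidence half-width Delta_Y = t_(alpha/2, DF) * sqrt(Ve),
# with the error DF and the error mean square Ve read from the ANOVA table
# (DF = 21 for the MRR model; alpha and Ve below are illustrative only).
alpha, dof, Ve = 0.05, 21, 1e-4
delta_y = stats.t.ppf(1 - alpha / 2, dof) * np.sqrt(Ve)
print("confidence half-width:", delta_y)

Equation (2) can be wrapped in the same way, either minimized directly or used as a constraint, to reproduce the kind of minimum-SR search whose validated result is reported in Table 7.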
Material removal rate The main effect plot and the percentage contribution of various process factors and interactions with respect to MRR have been shown in Figs. 5 and 6 respectively. It can be seen from Fig. 5 that all input process factors selected for study affects the PMEDG process significantly. The powder serves as a bridge for the ions imposed due to ionization of dielectric. The conductive phase powder also reduces the insulating strength of the dielectric fluid, and creates several parallel paths of ion transfer. The high temperature produced due to EDM action thermally softens the work material in the grinding zone in addition to partial melting and vaporization. The bridging effect caused by inclusion of conductive phase powders in the dielectric medium promotes dis-integration of spark into several increments (Chow et al. 2008; Chow et al. 2000). The inclusion of powder in recast layer (surface modification) makes it weak which may be easily removed by grinding action of the grits (Chow et al. 2008; Kumar and Batra 2012; Kumar et al. 2009). Therefore, the increase in powder concentration increases the MRR. The increase in Duty ratio results in reduced pulse off time, which shows that the sparking takes place after small interval of time, which also promoted increase in discharge energy, resulting in increased MRR. Discharge energy is a function of discharge current, discharge voltage and pulse on time. Therefore increase in pulse on time increased the discharge energy which resulted in increased MRR upto 400 μs. Further increase in discharge energy promotes wheel loading due to which the material removed by grits is seized, resulting in decreased MRR. The increase in table speed raises the feed rate hence the amount of softened work material availability per unit time is also increased which is swept by abrasives, which promotes increase in MRR with the increased table speed. The increase in wheel speed raises the number of active grits per unit time resulting in increased MRR (Satyarthi and Pandey 2013b). The interaction terms having contribution of less than 5% (Fig. 6) are considered to be insignificant, but these cannot be excluded from the statistical model as exclusion of these terms results in inadequacy of the model and significant lack of fit. The response surfaces showing the effect of significant interaction terms (>5%) affecting MRR have been presented in Figs. 7 and 8. Figures 7a and 8a-b shows the interaction of Duty ratio and table speed. The small increase in Duty ratio increases the MRR as the spark interval is decreased. The increase in table speed increases the MRR, due to the increased feed rate that is availability of more material for abrasion per unit time. The increase in table also promotes ductile mode grinding which is dominated by the number of active grits (Xie and Lu 2011). Figs. 7b and 8c-d shows the interaction of pulse on time and wheel speed. From Fig. 8c it is clear that at low wheel speeds upto 1.57 m/min the MRR is reduced with the increase in the pulse on time, but for wheel speed beyond 1.57 m/min the MRR increases with the increase in pulse on time. Which shows that the EDG action is prominent and in agreement to the findings of Satyarthi and Pandey (2013b). The MRR first reduces with the increase in wheel speed upto certain limit and thereafter further increase in wheel speed increases the MRR (Satyarthi and Pandey 2013b). From Fig. 
8d it could be seen that the increase in wheel speed reduces the MRR for pulse on times of less than 300 μs, whereas for pulse on times greater than 300 μs the MRR is increased. The reduction in MRR with increasing wheel speed at low pulse on time may be due to the low discharge energy, as at low discharge energies the grinding action is prominent (Satyarthi and Pandey 2013b). In contrast, the increase in wheel speed at increased pulse on time supports the EDG action and results in increased MRR (Satyarthi and Pandey 2013b).

Fig. 5 Main effect plot for MRR
Fig. 6 Percentage contributions of process factors and interactions on MRR
Fig. 7 Response surface plots of process factor interactions on MRR. a Interaction of Duty ratio and table speed. b Interaction of pulse on time and wheel speed
Fig. 8 Line plots conceived with respect to the response surface plots. a-d Material removal rate (mg/min)

Surface roughness

The main effect plot and the percentage contributions of the various process factors and interactions with respect to surface roughness (SR) are shown in Figs. 9 and 10, respectively. It can be seen from Fig. 9 that all of the input process factors affect the PMEDG process significantly. The increase in Duty ratio, pulse on time and table speed increases the SR. The increase in Duty ratio and pulse on time up to 72% and 400 μs, respectively, increases the SR; thereafter, further increase results in decreased SR. The increased SR may be attributed to the formation of bigger craters due to increased discharge energy and to grain dislodgement due to the increased feed rate. In contrast, the increase in powder concentration and wheel speed reduces the SR. The increase in powder concentration supports disintegration of the spark into several branches, which forms small overlapping craters and results in reduced SR. The increase in wheel speed raises the number of active grits per unit time, which sweeps more material from the work surface, resulting in reduced SR. It is evident that the increase in wheel speed up to a certain limit reduces the SR, but further increase in wheel speed does not contribute significantly, which may be due to wheel loading. The interaction terms having contributions of less than 5% (Fig. 10) are considered insignificant, but they cannot be excluded from the model, as exclusion of these terms results in inadequacy of the model and a significant lack of fit.

Fig. 9 Main effect plot for surface roughness
Fig. 10 Percentage contributions of process factors and interactions on surface roughness

The response surfaces showing the effect of the significant interaction terms (>5%) affecting SR are presented in Figs. 11 and 12. Figures 11a and 12a-b show the interaction of powder concentration and Duty ratio. The increased Duty ratio reduces the spark interval and promotes overlapping of the small craters formed, resulting in reduced SR. The increase in powder concentration increases the SR, which may be due to disintegration of the spark and hence formation of a larger number of craters per pulse; however, this phenomenon is dominated by the main effect of powder concentration, and the overall effect is therefore a reduction in SR. Figures 11b and 12c-d show the interaction of wheel speed and Duty ratio. The effect of increased Duty ratio has been discussed for Fig. 12a.
From Fig. 12d it may be noticed that the increase in wheel speed results in reduced SR, which may be due to the increased number of abrasives available per unit time supporting even distribution/sweeping of the work material as well as of the recast prior to solidification. Figures 11c and 12e-f show the interaction of pulse on time and table speed. The increase in pulse on time results in increased SR, which may be due to the formation of bigger craters promoted by increased discharge energy and to dislodgement of the constituent elements of the ceramic material by thermal loading. From Fig. 12f it may be seen that the increase in table speed results in reduced SR for low pulse on times (<200 μs), whereas for increased pulse on times the SR increases with increasing table speed. The reduced SR at low pulse on time may be attributed to the low discharge energy, which makes the grinding action prominent (Satyarthi and Pandey 2013b), whereas at increased discharge energy the EDG/EDM action becomes prominent (Satyarthi and Pandey 2013b) and results in increased SR.

Fig. 11 Response surface plots of process factor interactions on surface roughness. a Interaction of Duty ratio and powder concentration. b Interaction of wheel speed and Duty ratio. c Interaction of pulse on time and table speed
Fig. 12 Line plots conceived with respect to the response surface plots. a-f Surface roughness (μm)

Optimization

In the present work an attempt has been made to estimate the processing conditions for the highest possible MRR and the lowest possible SR. To achieve this, optimization of Eqs. (1) and (2) was done with a standard MATLAB 2011a function, fmincon (Bacchewar et al. 2007), which can handle optimization problems of a nonlinear nature. The obtained results were validated by conducting experiments and are presented in Table 7.

Table 7 Optimum process parameters for PMEDG (columns include powder concentration (g/L), pulse on time (μs) and speeds (m/min)); for MRRmax the validated responses were 49.26 mg/min and 0.1679 μm

Surface integrity

The outcome of the data analysis and its discussion in the previous sections revealed that the PMEDG process is governed prominently by powder concentration, discharge energy and grinding action. The discharge energy is a function of the discharge voltage, discharge current and pulse on time/Duty ratio. The increase in discharge current and pulse on time increases the discharge energy. The discharge energy transferred from tool to workpiece is disintegrated into several increments by the conductive phase powders present in the dielectric medium, and the simultaneous grinding action supports the removal/sweeping of the recast prior to its solidification. Figure 13a-b shows the scanning electron micrographs obtained when the workpiece was machined at low discharge energies and the EDG action remained prominent (softened material was removed by melting and abrasion). The soft recast material being swept along the work surface resulted in a good surface finish, as shown in Fig. 13b. The high MRR with considerably low surface roughness may be attributed to the grinding action of the abrasives on the softened material and/or recast layer. Figure 13c-d shows the effect of increased powder concentration. The increased powder concentration increased the disintegration of discharges, which led to the formation of a larger number of craters per discharge. The craters so formed were partially filled by molten material/recast. The grinding action in this case was unable to remove the complete molten material due to the increase in the inter-electrode gap (Chow et al. 2008; Chow et al. 2000; Kansal et al. 2007b), which hindered the grinding action; hence a thick re-solidified layer was deposited on the surface, giving a very rough surface.
The surface characterization of these samples indicated the presence of small pit marks, grinding marks and a deposited recast layer.

Fig. 13 SEM micrographs showing the effect of PMEDG at various input process parameters. a-b Pw-08, DC-0.56, Ton-300, Vt-0.06, Vw-2.355. c-d Pw-40, DC-0.56, Ton-300, Vt-0.06, Vw-2.355. e SEM micrograph showing the presence of SiC powder particles after PMEDG of alumina ceramic

Further, the MRR obtained by the PMEDG process was found to be 3 to 10 times higher than that of EDG (Satyarthi and Pandey 2016). The SR obtained by PMEDG was 2 to 4 times higher than that of EDG but lower than that of the EDM process (Satyarthi and Pandey 2012, 2013a).

Conclusions

In the present work, PMEDG processing of Al2O3–SiCw–TiC has been successfully performed on the developed EDG setup. The results indicated that the selected input parameters and their interactions significantly influenced the MRR. The addition of powders to the dielectric significantly improved the MRR, and the highest MRR obtained by PMEDG was 49.69 mg/min. The defects induced by the EDM and conventional diamond grinding processes, such as heat-affected zones and surface and subsurface cracks, were not observed on the PMEDG-processed surfaces. The surface produced by PMEDG was free from defects such as surface/subsurface cracks, heat-affected zones and micro-pores, although a recast layer and large craters were found on the surface under certain processing conditions. It has been established that the PMEDG process is a better option for processing the extremely hard, brittle and fragile Al2O3–SiCw–TiC ceramic material as a preliminary operation before applying the EDG process to achieve increased MRR. The present work is an attempt to fill the research gap in the field of powder mixed EDG, which, to the authors' knowledge, has not been attempted so far. The key observations are as follows.
- The machining of Al2O3–SiCw–TiC ceramic has been successfully performed on the developed PMEDG setup.
- The MRR obtained by the PMEDG process was found to be 3 to 10 times higher than that of EDG, and the highest MRR obtained was 49.26 mg/min.
- The surface roughness achieved by PMEDG was 2 to 4 times higher than that of EDG.
- The PMEDG process may be used before the EDG process to obtain high MRR.
- The defects induced by EDM and conventional diamond grinding processes were not observed on the PMEDG-processed surfaces.

Both authors read and approved the final manuscript. The authors would like to express their sincere thanks to Mr. John J. Schuldies, President, Industrial Ceramic Technology Inc., Ann Arbor, Michigan, USA, for supplying the work material. The authors would also like to acknowledge the financial support of the Department of Science and Technology (DST), Delhi, India, to carry out this work. The authors declare that they have no competing interests.
USICT, Guru Gobind Singh Indraprastha University Delhi, New Delhi, 110078, India
Indian Institute of Technology Delhi, New Delhi, 110016, India

References
Azarafza R, Arab A, Mehdipoor A, Davar A (2013) Impact behavior of ceramic-metal armour by Al2O3-nano SiC nano composite. Int J Adv Des Manuf Technol 5(5):83–87
Bacchewar PB, Singhal SK, Pandey PM (2007) Statistical modelling and optimization of surface roughness in the selective laser sintering process. Proc Inst Mech Eng H 221(1):35–52
Bhattacharya A, Batish A, Singh G (2011) Optimization of powder mixed electric discharge machining using dummy treated experimental design with analytic hierarchy process. Proc Inst Mech Eng H 226(January 2012):103–116. doi:10.1177/0954405411402876
Chow H-M, Yang L-D, Lin C-T, Chen Y-F (2008) The use of SiC powder in water as dielectric for micro-slit EDM machining. J Mater Process Technol 195(1–3):160–170
Chow HM, Yan BH, Huang FY, Hung JC (2000) Study of added powder in kerosene for the micro-slit machining of titanium alloy using electro-discharge machining. J Mater Process Technol 101(1):95–103
Darolia R (2013) Thermal barrier coatings technology: critical review, progress update, remaining challenges and prospects. International Materials Reviews. doi:10.1179/1743280413Y.0000000019
Furutania K, Saneto A, Takezawa H, Mohri N, Miyake H (2001) Accretion of titanium carbide by electrical discharge machining with powder suspended in working fluid. Precis Eng 25(2):138–144
Han M-S, Min B-K, Lee SJ (2007) Improvement of surface integrity of electro-chemical discharge machining process using powder-mixed electrolyte. J Mater Process Technol 191(1–3):224–227
Kansal HK, Sehijpal S, Kumar P (2005a) Application of Taguchi method for optimisation of powder mixed electrical discharge machining. Int J Manuf Technol Manage 7(2/3/4):329–341. doi:10.1504/IJMTM.2005.006836
Kansal HK, Singh S, Kumar P (2005b) Parametric optimization of powder mixed electrical discharge machining by response surface methodology. J Mater Process Technol 169(3):427–436
Kansal HK, Singh S, Kumar P (2006) Performance parameters optimization (multi-characteristics) of powder mixed electric discharge machining (PMEDM) through Taguchi's method and utility concept. Indian J Eng Mater Sci 13:209–216
Kansal HK, Singh S, Kumar P (2007a) Effect of silicon powder mixed EDM on machining rate of AISI D2 die steel. J Manuf Process 9(1):13–22
Kansal HK, Singh S, Kumar P (2007b) Technology and research developments in powder mixed electric discharge machining (PMEDM). J Mater Process Technol 184(1–3):32–41
Koshy P, Jain VK, Lal GK (1996) Mechanism of material removal in electrical discharge diamond grinding. Int J Mach Tool Manuf 36(10):1173–1185
Kumar S, Batra U (2012) Surface modification of die steel materials by EDM method using tungsten powder-mixed dielectric. J Manuf Process 14(1):35–40
Kumar S, Singh R, Singh TP, Sethi BL (2009) Surface modification by electrical discharge machining: a review. J Mater Process Technol 209(8):3675–3687
Mendez-Vilas A (2012) Fuelling the Future: Advances in Science and Technologies for Energy Generation, Transmission and Storage. Universal-Publishers
Mohanty S, Rameshbabu AP, Dhara S (2013) Net shape forming of green alumina via CNC machining using diamond embedded tool. Ceramics International (accepted manuscript). doi:10.1016/j.ceramint.2013.04.099
Patel KM, Pandey PM, Rao PV (2009a) Determination of an optimum parametric combination using a surface roughness prediction model for EDM of Al2O3/SiCw/TiC ceramic composite. J Manuf Process 24(6):675–682
Patel KM, Pandey PM, Venkateswara Rao P (2009a) Determination of an optimum parametric combination using a surface roughness prediction model for EDM of Al2O3–SiCw–TiC ceramic composite (Vol. 24). Colchester, ROYAUME-UNI: Taylor & Francis
Patel KM, Pandey PM, Venkateswara Rao P (2009b) Surface integrity and material removal mechanisms associated with the EDM of Al2O3 ceramic composite. Int J Refractory Met Hard Mat 27(5):892–899
Patnaik Durgumahanti US, Singh V, Venkateswara Rao P (2010) A new model for grinding force prediction and analysis. Int J Mach Tool Manuf 50(3):231–240
Peças P, Henriques E (2003) Influence of silicon powder-mixed dielectric on conventional electrical discharge machining. Int J Mach Tool Manuf 43(14):1465–1471
Peças P, Henriques E (2008) Electrical discharge machining using simple and powder-mixed dielectric: the effect of the electrode area in the surface roughness and topography. J Mater Process Technol 200(1–3):250–258
Satyarthi MK, Pandey PM (2012, December 14–16) Processing of conductive ceramic composite by EDG and powder mixed EDG: a comparative study. Paper presented at the 4th International and 25th All India Manufacturing Technology, Design and Research Conference (AIMTDR-2012), Jadavpur University, Kolkata, India
Satyarthi MK, Pandey PM (2013a) Comparison of EDG, diamond grinding, and EDM processing of conductive alumina ceramic composite. Mater Manuf Process 28(4):369–374. doi:10.1080/10426914.2012.736663
Satyarthi MK, Pandey PM (2013b) Modeling of material removal rate in electric discharge grinding process. Int J Mach Tools Manuf 74:65–73. doi:10.1016/j.ijmachtools.2013.07.008
Satyarthi MK, Pandey PM (2016) Experimental investigations into electric discharge grinding of Al2O3–SiCw–TiC ceramic composite. Int J Eng Res Technol 5(07). doi:10.17577/IJERTV5IS070127
Senthil Kumar A, Raja Durai A, Sornakumar T (2004) Development of alumina–ceria ceramic composite cutting tool. Int J Refract Met Hard Mater 22(1):17–20
Singh V, Ghosh S, Rao PV (2011) Comparative study of specific plowing energy for mild steel and composite ceramics using single grit scratch tests. Mater Manuf Process 26(2):272–281. doi:10.1080/10426914.2010.526979
Sornakumar T, Gopalakrishnan MV, Krishnamurthy R, Gokularathnam CV (1995) Development of alumina and Ce-TTZ ceramic-ceramic composite (ZTA) cutting tool. Int J Refract Met Hard Mater 13(6):375–378
Sugano N, Takao M, Sakai T, Nishii T, Nakahara I, Miki H (2013) 20-year survival of cemented versus cementless total hip arthroplasty for 60-year old or younger patients with hip dysplasia. Bone Joint J Orthop Proc Suppl 95-B(SUPP 15):343–343
Verma VK, Singh V, Ghosh S (2010) Comparative grindability study of composite ceramic and conventional ceramic. Int J Abrasive Technol 3(3):259–273. doi:10.1504/IJAT.2010.034055
Wong YS, Lim LC, Rahuman I, Tee WM (1998) Near-mirror-finish phenomenon in EDM using powder-mixed dielectric. J Mater Process Technol 79(1–3):30–40
Wu KL, Yan BH, Huang FY, Chen SC (2005) Improvement of surface finish on SKD steel using electro-discharge machining with aluminum and surfactant added dielectric. Int J Mach Tool Manuf 45(10):1195–1201
Xie J, Lu YX (2011) Study on axial-feed mirror finish grinding of hard and brittle materials in relation to micron-scale grain protrusion parameters. Int J Mach Tool Manuf 51(1):84–93
Yeniyol S, Bölükbaşı N, Çakır AF, Bilir A, Yeniyol M, Ozdemir T (2013) Relative contributions of surface roughness and crystalline structure to the biocompatibility of titanium nitride and titanium oxide coatings deposited by PVD and TPS coatings. ISRN Biomaterials 2013:9. doi:10.5402/2013/783873
Yeo SH, Tan PC, Kurnia W (2007) Effects of powder additives suspended in dielectric on crater characteristics for micro electrical discharge machining. J Micromechanics Microengineering 17(2007):N91–N98. doi:10.1088/0960-1317/17/11/N01
Zhao WS, Meng QG, Wang ZL (2002) The application of research on powder mixed EDM in rough machining. J Mater Process Technol 129(1–3):30–33
CommonCrawl
QoS-based ranking and selection of SaaS applications using heterogeneous similarity metrics Azubuike Ezenwoke ORCID: orcid.org/0000-0002-2094-33271,2, Olawande Daramola3 & Matthew Adigun4 The plethora of cloud application services (Apps) in the cloud business apps e-marketplace often leads to service choice overload. Meanwhile, existing SaaS e-marketplaces employ keyword-based inputs that do not consider both the quantitative and qualitative quality of service (QoS) attributes that characterise cloud-based services. Also, existing QoS-based cloud service ranking approaches rank cloud application services based on the assumption that the services are characterised by quantitative QoS attributes alone, and have employed quantitative-based similarity metrics for ranking. However, the dimensions of cloud service QoS requirements are heterogeneous in nature, comprising both quantitative and qualitative QoS attributes; hence a cloud service ranking approach that embraces core heterogeneous QoS dimensions is essential in order to engender more objective cloud selection. In this paper, we propose the use of heterogeneous similarity metrics (HSM) that combine quantitative and qualitative dimensions for QoS-based ranking of cloud-based services. By using a synthetically generated cloud services dataset, we evaluated the ranking performance of five HSM using the Kendall tau rank correlation coefficient and precision as accuracy metrics, benchmarked against one HSM. The results show significant rank order correlation of the Heterogeneous Euclidean-Eskin Metric, Heterogeneous Euclidean-Overlap Metric, and Heterogeneous Value Difference Metric with human similarity judgment, compared to the other metrics used in the study. Our results confirm the applicability of HSM for QoS ranking of cloud services in a cloud service e-marketplace with respect to users' heterogeneous QoS requirements. Cloud computing is a model of service provisioning in which dynamically scalable and virtualized resources, which include infrastructure, platform, and software, are delivered and accessed as services over the internet [1, 2]. The popularity of the cloud attracts a variety of providers that offer a wide range of cloud-based services to users in an e-marketplace environment, culminating in an exponential increase in the number of available functionally equivalent cloud services [3, 4]. Currently, there exist a number of cloud-based digital distribution services such as Saasmax.com and Appexchange.com (viz. cloud e-marketplaces), which host SaaS cloud services (business cloud apps) that are designed to provide specific user-oriented services when selected. The proliferation of cloud application services in the cloud e-marketplace without a systematic framework to guide the selection of the most relevant ones usually leaves users with the problem of which service to select, a phenomenon that can be described as service choice overload [5,6,7,8]. Currently, these existing cloud service e-marketplaces elicit keyword-based search queries that do not allow users to indicate their preferences in terms of quality of service (QoS) requirements, and present search results as an unordered list of icons that must be explored individually by a user before making a decision [9]. This mode of presentation does not enable the user to discriminate among services in terms of their suitability with respect to the user's request, which complicates decision making [10].
Decision making can be simplified and service choice overload can be reduced by considering user's QoS requirements and ranking of services based on their QoS attributes so that users can gain quicker insight on the best services that are more likely to satisfy their requirements. QoS are measurable non-functional attributes that describe and distinguish services and forms the basis for service selection [11, 12]. However, QoS attributes are usually heterogeneous in nature, covering both quantitative and qualitative (or categorical) attributes. The Service Measurement Index (SMI) [13] defines seven main categories to be considered when comparing QoS of cloud services, which are a combination of quantitative and qualitative measures. These are Accountability, Agility, Assurance, Financial, Performance, Security and Privacy, and Usability. Each category has multiple attributes, which are either quantitative or qualitative in nature. For example, quantitative attributes such as service response time, accuracy, availability, and cost can be measured quantitatively by using relevant software and hardware monitoring tools, whereas qualitative attributes such as usability, flexibility, suitability, operability, elasticity etc. which cannot be quantified are mostly deduced based on user experiences. These qualitative attributes are measured using an ordinal scale consisting of a set of predefined qualifier tags such as good, high, medium, fair, excellent rating etc. [13,14,15]. Most of the existing cloud service selection approaches hitherto reported in the literature have overlooked critical dimensions of QoS requirements that are qualitative such as security and privacy, usability, accountability, and assurance in formulating a basis for cloud service ranking and selection. A number of cloud service selection approaches are based on a content-based recommendation scheme that explores the similarity between the QoS attributes of the user's requirements and the features description of specific cloud services in order to rank them [16,17,18,19]. Most of these approaches have only considered quantitative attributes for their ranking of services, which is based on the assumption that all QoS attributes are quantitative in nature, and therefore used quantitative similarity metrics such as exponential weighted difference metric or weighted difference metric [17]. This form of assumption is deficient to adequately model the heterogeneous nature of QoS requirements, as a precursor to creating a credible basis for comparing and ranking cloud services. Also, there are instances such as [20, 21], where steps were taken to quantify specific qualitative attributes such as security or usability in order to apply homogeneous distance metrics on them for the purpose of decision making. The drawback of this approach is that since cloud QoS attributes are usually heterogeneous in nature, heterogeneous metrics are more likely to produce better generalization over time on heterogeneous data [22, 23]. This scenario imposes a limitation on approaches where quantification of qualitative attributes has been undertaken for the purpose of cloud service ranking and selection. 
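To make the heterogeneity of such a QoS model concrete, a single service's profile mixes numeric measurements with ordinal qualifier tags. The sketch below shows one possible in-memory representation; it is written in Python purely for illustration, and the attribute names and values are examples rather than an SMI-mandated schema.

```python
from dataclasses import dataclass

@dataclass
class ServiceQoS:
    # Quantitative attributes: measured by monitoring tools.
    response_time_ms: float
    availability_pct: float
    cost_per_month: float
    # Qualitative attributes: ordinal tags inferred from user experience, e.g. "Low"/"Medium"/"High".
    usability: str
    security: str
    flexibility: str

example_service = ServiceQoS(302.75, 99.99, 126.0, "Medium", "Low", "Low")
# Mapping the tags onto numbers (e.g. Low=1, Medium=2, High=3) so that a purely numeric distance
# can be applied is the workaround criticised above; a heterogeneous metric instead applies a
# categorical distance function to these attributes directly.
```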
In order to achieve an effective QoS-based ranking of cloud services in cloud service e-marketplaces, there is a need for a service selection approach that considers both the quantitative and qualitative QoS dimensions that characterise cloud services and is able to rank cloud services accurately with respect to user requirements using heterogeneous similarity metrics. In this paper, we propose the use of heterogeneous similarity metrics that combine quantitative and qualitative dimensions to rank cloud services in a cloud e-marketplace context based on QoS attributes. An experimental study of five heterogeneous similarity metrics was conducted to ascertain their suitability for cloud service ranking and selection using a simulated dataset of cloud services, in contrast to previous work in the domain of cloud service selection. The remaining part of this paper is organised as follows: Section "Background and Related Work" provides background to the context of this work and a discussion of related work. In Section "Heterogeneous Similarity Metrics for Cloud Service Ranking and Selection" we describe the five heterogeneous similarity metrics used in this study, while the empirical results of the comparison of the ranking performance of the metrics are presented in Section "Experimental Evaluation and Results". A discussion of the findings of this study is contained in Section "Discussion". The paper is concluded in Section "Conclusion" with a brief note and an overview of further work. Background and related work The relevant concepts that underpin this study and an overview of related work are presented in this section. Cloud service e-marketplace The e-marketplace of cloud services provides an electronic emporium where service providers offer users a wide range of services to select from [24,25,26]. Similar to Amazon or Alibaba, which deal in commodity products, the goal of a cloud service e-marketplace such as SaaSMax or AppExchange is to provide a facility for finding and consuming cloud services, by allowing users to search for suitable business apps that offer user-oriented services matching their QoS requirements. However, unlike commodity products, cloud services possess QoS attributes that distinguish functionally equivalent services from each other. The profitability of the cloud service e-marketplace is realised by users' ability to easily and quickly find and select suitable services that meet their QoS requirements. However, most cloud service e-marketplaces in existence do not consider QoS information from the users but rely on keyword matching, and the results are not ranked in a manner that makes the differences among the services obvious with respect to users' requirements. This leads to service choice overload, because a large number of services are presented as an unordered list of icons that require the user to further investigate the differences between the services by checking them one after the other. Discriminating services based on their QoS information is an important step towards reducing service choice overload, as the cloud service QoS model encompasses Key Performance Indicators for decision making [27]. Besides, the QoS model comprises the important comparable characteristics of each service and is suitable for matching user QoS requirements to services' QoS attributes [28].
One of the most comprehensive International Standard Organization (ISO) certified QoS model for cloud services is the Service Measurement Index (SMI) [13]. Service measurement index The Service Measurement Index (SMI) is developed by the Cloud Services Measurement Initiative Consortium (CSMIC). The SMI is a framework of critical characteristics, associated attributes, and metrics that can be used to compare and evaluate cloud-based services from different service providers [27, 29]. SMI was designed as the standard method to measure any type of cloud service (i.e. XaaS) based on the user requirements. The SMI is a hierarchical framework, with seven top-level categories, which are Accountability, Agility, Assurance, Financial, Performance, Security and Privacy, and Usability and each category is further broken into four or more attributes that underscore the categories. Based on the SMI QoS model, it is obvious that some metrics are quantitative in nature while others are qualitative. Quantitative QoS metrics are those which can be measured and quantified (e.g. response time, throughput); whereas, qualitative QoS metrics is subjective in nature and are only inferred by user's feedback (e.g. security, usability etc.). Cloud services can be assessed and ranked based on both QoS metric dimensions, i.e., quantitative and qualitative, by comparing the similarity of user's QoS requirements and service QoS properties, thus following a content-based approach. QoS similarity-driven cloud service ranking The similarity is a measure of proximity between two or more objects or variables [30] and it has been applied in domains that require distance computation. Similarity can be measured on two types of data: quantitative data (also called numerical data) and qualitative (also called categorical/nominal data) [31]. Many metrics have been proposed for computing similarity on either quantitative data or qualitative data. However, few metrics have been proposed to handle datasets containing a mixture of both quantitative and qualitative data. Such metrics usually combines quantitative and qualitative distance functions. For quantitative data, a generic method for computing distance is Minkowsky [32], with widely used specific instances such as the Manhattan (of order 1) and Euclidean (of order 2). The computation of similarity for quantitative data is more direct, compared to qualitative data, because quantitative data can be completely ordered, while comparing two qualitative values is somewhat complex [31]. For example, the overlap metric [33], assigns a similarity value of 1 when two qualitative values are the same and 0 otherwise. In the context of selecting cloud services from the list of available services, the ranking of services based on the heterogeneous QoS model necessitates the application of similarity metrics that can handle mixed QoS data. The notion of similarity considered in this paper is between vectors with the same set of QoS properties, which might differ in their QoS values i.e. users' QoS requirements and service QoS descriptions. The success of a cloud service e-marketplace is hinged on adequate support for satisfactory selection based on the QoS requirements of the user. So far in the literature, the approaches used for cloud service ranking and selection can be broadly classified as content-based filtering, collaborative filtering, and multi-criteria decision-making methods. 
Instances of collaborative filtering-based approaches include CloudRank, which is a personalised ranking prediction framework that utilises a greedy-based algorithm. It was proposed in [18] to predict QoS ranking by leveraging on similar cloud service user's past service usage experiences of a set of cloud services. The ranking is achieved by finding the similarity between the user-provided QoS requirements and those of other users in the past. Similar users are identified based on these similarity values and services are ranked accordingly. In contrast to our work, CloudRank [18] did not consider the computation of vector similarity between cloud services and user-defined QoS requirements. CloudAdvisor, a Recommendation-as-a-Service platform was proposed in [34] for recommending optimal cloud offerings based on a given user preference requirements. Users supply preference values to each property (energy-level, budget, performance etc.) of the cloud offerings, and the platform recommends available optimal cloud offerings that match user's requirements. Service recommendations in [34] are determined by solving a constraint optimization model and users can compare several offerings automatically derived by benchmarking-based approximations. However, the QoS dimensions considered in [34] are mainly quantitative and do not reflect the holistic heterogeneous QoS model of cloud services. Selection of cloud services in the face of many QoS attributes is a type of Multi-criteria Decision Making (MCDM) [14]. Considering the multiple QoS criteria involved in selecting cloud services, [14] propose a ranking mechanism based on Analytical Hierarchical Process (AHP) to assign weights to non-functional attributes to quantitatively realise cloud services ranking. Apart from the complexity in computing the pairwise comparisons of the attributes of the cloud service alternatives, this approach is most suitable when the number of cloud services is few, which is not the case in a cloud service e-marketplace that comprises numerous services. Besides, in the approach proposed in [14], users cannot determine the desired values of the QoS service properties, and services are ranked based on quantitative QoS attributes alone. Content-based filtering approaches include [17] in which a ranked list of services that best match user requirements is returned based on the nearness of user's QoS requirement to the QoS properties of cloud services in the marketplace. Also, Rahman et al. [17] proposed an approach to select cloud service based on multiple criteria that select services that best match the user's QoS requirements from a list of services by comparison. The authors introduced two methods, Weighted Difference, and Exponential weighted Difference, for computing similarity values. It is however assumed in [17] that all cloud service QoS attributes are quantitative, thereby ignoring the qualitative QoS attributes of services. In [35] a QoS-driven approach called MSSOptimiser, which supports the service selection for multi-tenant cloud-based software applications (Software as a Service - SaaS) was proposed. In the work, certain qualitative and non-numerical QoS parameters such as reputation were mapped to numerical values based on a pre-defined semantics-based hierarchical structure of all possible values of a non-numerical QoS parameter in order to quantify the qualitative parameters. Also, in [20] Multi-attribute Decision-Making framework for cloud adoption - MADMAC was proposed. 
The framework allows the comparison of multiple attributes with diverse units of measurements in order to select the best alternative. The work requires the definition of Attributes, Alternatives and Attribute Weights, to construct a Decision Matrix and arrive at a relative ranking to identify the optimal alternative. An adapted Likert-type scale from 1 to 10 was used by the MADMAC to convert all qualitative attributes to their quantitative equivalent, where 1 indicates very unfavourable, 5 indicates neutral, 6 indicates favourable, and 10 indicates a near perfect solution. However, in all of these cases, a standard cloud services measurement and comparison model such as SMI was not considered, which means that the QoS attributes used only covered a limited range of heterogeneous dimensions (qualitative and quantitative), which may not provide a sufficiently robust basis for decision making on cloud services. In contrast to previous approaches, our approach considers the heterogeneity of cloud QoS Model that combines quantitative and qualitative QoS data, which to the best of our knowledge, represents a first attempt to use heterogeneous similarity metrics for QoS ranking and selection of services in the context of a cloud service e-marketplace. Heterogeneous similarity metrics for cloud service ranking and selection By giving due consideration to the heterogeneous nature of the cloud services QoS model, this paper proposes the use of heterogeneous similarity metrics (HSM) for cloud service ranking and selection. In this Section, we present an overview of HSM, the rationale for selection of HSM that have been selected in this study, and a description of the five selected HSM for cloud service ranking and selection. Overview of heterogeneous similarity metrics To measure the similarity between quantitative data, metrics such as Murkowski metrics [32], its derivatives (Manhattan and Euclidean), Chebyshev and Canberra metrics have been proposed. Also, metrics such as Overlap [33], Eskin [36], Lin [37] and Goodall [38], have also been proposed for qualitative similarity computations. However, these quantitative or qualitative metrics alone are insufficient for handling heterogeneity, except when combined into a unified metric that applies different similarity metrics to different types of QoS attributes [22]. The resultant combination can be referred to as a heterogeneous similarity metric (HSM) [22]. Authors in [22] proposed Heterogeneous Euclidean-Overlap Metric (HEOM) and Heterogeneous Value Difference Metric (HVDM) as metrics for computing similarity operations on heterogeneous datasets. The HEOM metric employs range-normalized Euclidean metric (Eq. 4) for quantitative QoS attributes, while Overlap metric is employed for qualitative QoS attributes; while the HVDM uses the standard-deviation-normalized Euclidean distance (Eq. 7) and value difference metric, for quantitative and qualitative QoS attributes respectively. The HEOM and HVDM have been applied for feature selection and instance-based learning in real-world classification tasks [22]. Rationale for selected qualitative metrics A number of qualitative similarity metrics have been proposed in the literature and we selected at least one qualitative metric from each of the categories defined in [31] to create additional heterogeneous similarity metrics for QoS-based cloud service ranking and selection. 
The categories are as follows: Metrics that fill diagonal entries only: qualitative metrics that fall into this category include the Overlap [33] and Goodall [38] metrics. In the overlap metric, the similarity between two multivariate data points is directly proportional to the number of attributes or dimensions in which they match. However, the overlap metric does not distinguish between the different values taken by an attribute, as it treats all similarities and dissimilarities in the same manner. On the other hand, the Goodall metric takes into account the frequency distribution of different attribute values in a given dataset and computes the similarity between two qualitative attribute values by assigning higher similarity to a match when the attribute value is frequent. Metrics that fill off-diagonal entries only: an example of a metric in this category is the Eskin metric [36]. The Eskin metric gives more weight to mismatches that occur on attributes that take many values; the maximum value is attained when all the attributes have unique values. Metrics that fill both diagonal and off-diagonal entries: the Lin metric [37] is a typical example of such metrics. The Lin qualitative metric is applied in contexts that involve ordinal, string, word and semantic similarities. The metric assigns higher weights to matches on frequent values, and lower weights to mismatches on infrequent values. Five heterogeneous similarity metrics for cloud service ranking and selection Apart from HEOM and HVDM, we introduce an additional three HSM by combining existing similarity metrics used for either quantitative or qualitative data alone. The new HSM are as follows: Heterogeneous Euclidean-Eskin Metric (HEEM), Heterogeneous Euclidean-Lin Metric (HELM), and Heterogeneous Euclidean-Goodall Metric (HEGM). HEEM combines the range-normalized Euclidean distance for quantitative attributes with the Eskin metric [36] for qualitative QoS attributes. The range-normalized Euclidean distance is likewise employed for the quantitative QoS values in both HELM and HEGM, while HELM applies the Lin metric and HEGM the Goodall metric to the qualitative QoS values. In all, the five HSM considered in this paper are as follows: HEOM (Eq. 1), HVDM (Eq. 5), HEEM (Eq. 9), HELM (Eq. 12) and HEGM (Eq. 15). The components used for the quantitative and qualitative data aspects are summarised in Table 1, and the underlying mathematical equations that describe each of the HSM are presented subsequently, based on the assumption that X and Y are vectors representing the values of the user QoS requirements and the QoS vector of a cloud service si belonging to service list S, such that X = (x1, x2, … xm) and Y = (y1, y2, … ym); xm and ym correspond to the value of the mth QoS attribute of the user's requirement and of the cloud service si, respectively. Table 1 Summary of Heterogeneous Similarity Metrics Subsequently, we describe each of the proposed heterogeneous metrics in detail.
Heterogeneous Euclidean-Overlap Metric (HEOM)
$$ HEOM(x,y)=\sqrt{\sum_{i=1}^{m} h_i(x_i,y_i)^2} \qquad (1) $$
$$ h_i(x,y)=\begin{cases} 1, & \text{if } x \text{ or } y \text{ is unknown}\\ \operatorname{overlap}(x,y), & \text{if attribute } i \text{ is qualitative}\\ \operatorname{rn\_diff}_i(x,y), & \text{if attribute } i \text{ is quantitative} \end{cases} \qquad (2) $$
and overlap(x, y) and rn_diff_i(x, y) are defined as
$$ \operatorname{overlap}(x,y)=\begin{cases} 0, & \text{if } x=y\\ 1, & \text{otherwise} \end{cases} \qquad (3) $$
$$ \operatorname{rn\_diff}_i(x,y)=\frac{|x-y|}{\operatorname{Max}_i-\operatorname{Min}_i} \qquad (4) $$
Heterogeneous Value Difference Metric (HVDM)
$$ HVDM(x,y)=\sqrt{\sum_{i=1}^{m} d_i(x_i,y_i)^2} \qquad (5) $$
$$ d_i(x,y)=\begin{cases} 1, & \text{if } x \text{ or } y \text{ is unknown}\\ vdm_i(x,y), & \text{if attribute } i \text{ is qualitative}\\ \operatorname{diff}_i(x,y), & \text{if attribute } i \text{ is quantitative} \end{cases} \qquad (6) $$
$$ vdm_i(x,y)=\sqrt{\sum_{c=1}^{C}\left|\frac{N_{q_i,x,c}}{N_{q_i,x}}-\frac{N_{q_i,y,c}}{N_{q_i,y}}\right|^2}=\sqrt{\sum_{c=1}^{C}\left|P_{q_i,x,c}-P_{q_i,y,c}\right|^2} \qquad (7) $$
$$ \operatorname{diff}_i(x,y)=\frac{|x-y|}{4\sigma_{q_i}} \qquad (8) $$
where \( N_{q_i,x} \) is the number of instances (cloud app services) available in the marketplace that have value x for QoS attribute qi; \( N_{q_i,x,c} \) is the number of instances available in the marketplace that have value x for QoS attribute qi and output class c; C is the number of output classes in the problem domain (in this case, C = 3, corresponding to High, Medium and Low); and \( P_{q_i,x,c} \) is the conditional probability of output class c given that QoS attribute qi has the value x, i.e. P(c | qi = x), computed as \( N_{q_i,x,c}/N_{q_i,x} \). However, if \( N_{q_i,x}=0 \), then P(c | qi = x) is also regarded as 0.
Heterogeneous Euclidean-Eskin Metric (HEEM)
$$ HEEM(x,y)=\sqrt{\sum_{i=1}^{m} e_i(x_i,y_i)^2} \qquad (9) $$
$$ e_i(x,y)=\begin{cases} 1, & \text{if } x \text{ or } y \text{ is unknown}\\ \operatorname{eskin}_i(x,y), & \text{if attribute } i \text{ is qualitative}\\ \operatorname{rn\_diff}_i(x,y), & \text{if attribute } i \text{ is quantitative} \end{cases} \qquad (10) $$
$$ \operatorname{eskin}_i(x,y)=\begin{cases} 0, & \text{if } x=y\\ \dfrac{n_i^2}{n_i^2+2}, & \text{otherwise} \end{cases} \qquad (11) $$
Heterogeneous Euclidean-Lin Metric (HELM)
$$ HELM(x,y)=\sqrt{\sum_{i=1}^{m} l_i(x_i,y_i)^2} \qquad (12) $$
$$ l_i(x,y)=\begin{cases} 1, & \text{if } x \text{ or } y \text{ is unknown}\\ \operatorname{lin}_i(x,y), & \text{if attribute } i \text{ is qualitative}\\ \operatorname{rn\_diff}_i(x,y), & \text{if attribute } i \text{ is quantitative} \end{cases} \qquad (13) $$
$$ \operatorname{lin}_i(x,y)=\begin{cases} 2\log \hat{p}_{q_i}(x), & \text{if } x=y\\ 2\log\bigl(\hat{p}_{q_i}(x)+\hat{p}_{q_i}(y)\bigr), & \text{otherwise} \end{cases} \qquad (14) $$
Heterogeneous Euclidean-Goodall Metric (HEGM)
$$ HEGM(x,y)=\sqrt{\sum_{i=1}^{m} g_i(x_i,y_i)^2} \qquad (15) $$
$$ g_i(x,y)=\begin{cases} 1, & \text{if } x \text{ or } y \text{ is unknown}\\ \operatorname{goodall}_i(x,y), & \text{if attribute } i \text{ is qualitative}\\ \operatorname{rn\_diff}_i(x,y), & \text{if attribute } i \text{ is quantitative} \end{cases} \qquad (16) $$
$$ \operatorname{goodall}_i(x,y)=\begin{cases} \hat{p}^2_{q_i}(x), & \text{if } x=y\\ 0, & \text{otherwise} \end{cases} \qquad (17) $$
where ni is the number of values that QoS attribute qi can assume (e.g. for the security QoS attribute, denoted qsecurity, nsecurity = 3, corresponding to the values High, Medium and Low), and \( \hat{p}_{q_i}(x) \) and \( \hat{p}^2_{q_i}(x) \) are the sample probabilities of QoS attribute qi taking the value x in the dataset (in this case, the available services on the e-marketplace), computed as \( \hat{p}_{q_i}(x)=\frac{N_{q_i,x}}{N} \) and \( \hat{p}^2_{q_i}(x)=\frac{N_{q_i,x}\left(N_{q_i,x}-1\right)}{N\left(N-1\right)} \), where N denotes the total number of services.
Experimental evaluation and results In this section, we present an experimental assessment of the ranking accuracy of the five selected HSM on a synthetically generated dataset for cloud services. A synthetically generated QoS dataset was used because a real QoS dataset for cloud services that perfectly fits the context of our experiment could not be found; Alkalbani et al. [39] alluded to the paucity of viable datasets for cloud services. The Blue Pages dataset in [39] is the closest dataset on cloud services that we could obtain, but it does not contain QoS data for cloud services. Rather, it provides data on different service offerings such as service name, the date the service was founded, service category, free trial (yes/no), mobile app (yes/no), starting price, service description, service type, and provider link as extracted from two cloud services review sites – getapp.com and cloudreviews.com – and therefore does not fit the purpose of this study. However, we found some previous studies on cloud services that relied on synthetically generated or simulated datasets to perform experiments on cloud services [40,41,42,43], which motivated our decision to use a synthetically generated dataset. In order to synthesise the dataset, 6 attributes were selected from 6 categories of the SMI (see Table 2). The SMI was used as the basis for data synthesis because it provides a standardised method for measuring and comparing cloud-based business services [14]. The 6 selected attributes, comprising 3 quantitative and 3 qualitative attributes, were those considered to be relevant to the context of SaaS: service response time, availability, cost, security, usability, and flexibility. Table 2 Definition and Description of the Six QoS Attributes The goal of the experiment is to investigate the ranking accuracy of the HSM compared to a gold standard obtained by human similarity judgment. Dataset preparation The data values for the selected SMI attributes were synthesised based on examples from previous evaluation studies [44,45,46,47] and related papers on cloud service selection such as [14, 28, 41, 47] that revealed acceptable data formats for quantitative attributes such as response time, cost, and availability. We generated random qualifier values for the other, qualitative attributes, which are usability, security, and flexibility. Consequently, we used a total of six QoS attributes with a typical data format as shown in Table 3. For simplicity, we limited the qualifier values for usability, security, and flexibility to high, medium and low.
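To illustrate how the heterogeneous metrics defined above operate on records in this mixed format, the following sketch computes HEOM and HEEM distances between a user requirement vector and two candidate services and ranks the candidates by ascending distance. It is a minimal Python sketch written for this exposition only (our own implementation, described later, was in Java); the attribute ranges, the candidate services and the helper names are assumptions made for the example.

```python
import math

NUMERIC = ["response_time", "cost", "availability"]          # quantitative QoS attributes
CATEGORICAL = ["usability", "security", "flexibility"]        # qualitative QoS attributes
N_VALUES = {"usability": 3, "security": 3, "flexibility": 3}  # n_i: values an attribute can take

def rn_diff(x, y, lo, hi):
    """Range-normalised difference used for quantitative attributes (Eq. 4)."""
    return abs(x - y) / (hi - lo) if hi > lo else 0.0

def overlap(x, y):
    """Overlap distance for qualitative attributes (Eq. 3): 0 on a match, 1 on a mismatch."""
    return 0.0 if x == y else 1.0

def eskin(x, y, n):
    """Eskin distance (Eq. 11): mismatches on attributes with many possible values weigh more."""
    return 0.0 if x == y else (n * n) / (n * n + 2)

def heterogeneous_distance(user, service, ranges, cat_fn):
    """Euclidean combination of per-attribute distances, as in Eqs. 1 and 9."""
    total = 0.0
    for a in NUMERIC:
        lo, hi = ranges[a]
        total += rn_diff(user[a], service[a], lo, hi) ** 2
    for a in CATEGORICAL:
        total += cat_fn(user[a], service[a], a) ** 2
    return math.sqrt(total)

def heom(user, service, ranges):
    return heterogeneous_distance(user, service, ranges, lambda x, y, a: overlap(x, y))

def heem(user, service, ranges):
    return heterogeneous_distance(user, service, ranges, lambda x, y, a: eskin(x, y, N_VALUES[a]))

# Toy data (values are illustrative, not the study dataset).
user = {"response_time": 302.75, "cost": 126.0, "availability": 99.99,
        "usability": "Medium", "security": "Low", "flexibility": "Low"}
services = {
    "s1": {"response_time": 310.0, "cost": 130.0, "availability": 99.95,
           "usability": "Medium", "security": "Low", "flexibility": "High"},
    "s2": {"response_time": 120.0, "cost": 480.0, "availability": 99.50,
           "usability": "High", "security": "High", "flexibility": "Low"},
}
# Attribute ranges (Min_i, Max_i) would normally be taken over all services in the marketplace.
ranges = {"response_time": (100.0, 600.0), "cost": (50.0, 500.0), "availability": (99.0, 100.0)}

for name, metric in [("HEOM", heom), ("HEEM", heem)]:
    ranked = sorted(services, key=lambda sid: metric(user, services[sid], ranges))
    print(name, [(sid, round(metric(user, services[sid], ranges), 3)) for sid in ranked])
```

A full HVDM implementation would additionally require the class-conditional frequencies N_{q_i,x,c} over the whole marketplace, which are omitted from this sketch.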
We simulated multiple instances of the adopted format for the six attributes in order to obtain a dataset comprising a total of 63 services, sorted by response time in ascending order. It must be said that, in order to deploy our approach in a real-case scenario, the QoS attributes of a service will have to be specified by the service provider and made accessible to the user as part of the service documentation that a user needs to consider in order to take a decision on which service to select. One of the available means to do this is to leverage the relevant SMI measurement templates provided by the Cloud Service Measurement Index Consortium (CSMIC) [48]. Table 3 Perfect Match of services and user requirements Furthermore, the initial set of SMI templates by CSMIC has been extended by Scott Feuless in [49] to evolve metrics and SMI scored frameworks that enable specific SMI attributes to be scored by an organisation. The purpose of the SMI scored framework [50] is to enable a customer to evaluate a cloud service in order to make the right choice. By using the SMI scored framework or a similar model, the cumulative scores for specific SMI categories, and the scores for individual SMI attributes of a cloud service, can be obtained. However, determining the cumulative scores for each SMI attribute is a manual process that is qualitatively driven by experts within an organization. Thus, having the SMI scored frameworks (or similar scoring models) for several cloud services creates the basis for the application of the HSM that this paper proposes. The HSM can be applied for automated ranking and selection of cloud services in real time in order to determine the best cloud service offerings in the midst of several alternatives. This offers a major advantage over the use of manually generated SMI scored frameworks [50] for ranking and selection of cloud services. Evaluation metrics Kendall tau coefficient Kendall's tau coefficient, denoted as τ, is used to measure the ordinal association between two variables; it equals 1 when the top-k list produced by an HSM and the gold standard are in perfect agreement, and −1 when they are in complete disagreement. The Kendall tau coefficient is computed as follows: $$ \tau =\frac{C-D}{k(k-1)/2} $$ where C is the number of concordant pairs, D the number of discordant pairs, and k the number of top-k items produced by the methods. Precision metric Precision, a measure used in information retrieval domains, was adapted here to evaluate the relevance of the output obtained from each metric with respect to the content of the gold standard. Precision is the fraction of cloud services obtained from the HSM that is contained in the gold standard. The gold standard output was used as the benchmark to determine the precision of each metric, as we determined how many of the top-k services returned by the metrics are contained in the gold standard. We computed the precision of each metric as we varied the number k. We define precision as: $$ Precision=\frac{\left|\mathbf{TKS}\cap \mathbf{GS}\right|}{\left|\mathbf{TKS}\right|} $$ where TKS is the set of top-k cloud services returned by an HSM and GS is the set of services in the gold standard. Experiment design and protocol We recruited 12 undergraduate students in Computing and Engineering fields (male = 9, female = 3), on the basis that 12 participants offer an acceptably tight confidence interval [51].
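Both accuracy measures defined above can be computed directly from a pair of top-k rankings. The short Python sketch below illustrates the computation on invented data; the service identifiers and orderings are purely illustrative and are not drawn from the study.

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall tau between two rankings (lists of the same item ids, most- to least-similar first)."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        agreement = (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y])
        if agreement > 0:
            concordant += 1
        elif agreement < 0:
            discordant += 1
    k = len(rank_a)
    return (concordant - discordant) / (k * (k - 1) / 2)

def precision_at_k(metric_rank, gold_set, k):
    """Fraction of the top-k services returned by a metric that also appear in the gold standard."""
    return len(set(metric_rank[:k]) & gold_set) / k

# Toy rankings over six hypothetical services (invented for illustration).
metric_rank = ["s3", "s1", "s5", "s2", "s6", "s4"]
gold_rank   = ["s3", "s5", "s1", "s2", "s4", "s6"]

print(kendall_tau(metric_rank, gold_rank))                 # values near 1 indicate strong agreement
print(precision_at_k(metric_rank, set(gold_rank[:5]), 5))  # precision with k = 5
```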
We used one of the services from the dataset as the user requirements and asked participants to rank the remaining 63 services according to similarity to the user requirements. The user requirements vector R selected is as follows {302.75, 126, 99.99, Medium, Low, Low} respectively corresponding to values for Response Time, Cost, Availability, Usability, Security Management, and Flexibility. To simplify the similarity judgement exercise, we converted the QoS values of the services in the dataset into line graphs, such that the user requirements is plotted against each of the remaining 63 services; and the qualitative values High, Medium and Low were mapped to numerical values of 50, 30 and 10 respectively for illustration purposes. For example, Fig. 1 shows the line graphs of the user requirement with another service, based on the QoS information contained in Table 4. Line Graph showing Cloud Service QoS Vs. User QoS Requirements. The line graph graphically depicts the similarity of the QoS properties of the cloud services and the QoS requirements of the users. Panel (a) shows that there is a perfect match between the User's QoS requirement and the QoS properties of the cloud service; while Panel (b) shows a variance between the QoS properties of the cloud service and the QoS requirement of the user Table 4 Difference in Service and User Requirements The participants were taken through a 15 min tutorial to explain the purpose of the experiment and basic training on the similarity evaluation exercise. After the training, the participants were shown the 63 line graphs and were asked to agree or disagree (on a 1 to 7 Likert scale) with the proposition: 'The two Lines graphs are similar.' The questionnaire contained 63 items corresponding to the 63 services been ranked. The responses from the 12 participants were analysed and we determined the Mean of the response to each item, which indicates unanimously which service is most similar to the user requirements. We aggregated the responses from all participants by finding the median responses across the 63 items presented in the questionnaire. The median scores were sorted in descending order to indicate the degree of similarity of the 63 services to the user requirements. Higher median scores indicate higher similarity and vice versa. The HSM was implemented in Java and used to rank the 63 services used in this experiment with respect to the user requirements. The simulation was conducted on an HP Pavilion with Intel Core (TM) i3-3217 U CPU at 1.80GHz 1.80 GHz processor and 4.00GB RAM on 64-bit Operating System, an × 64-based processor running Windows 8.1. The ranking produced by the HSM was compared with those produced by human subjects using the Kendall tau coefficient, while the accuracy of the ranking produced was measured using the gold standard as a benchmark based the precision metric. Rank correlation coefficient We applied the Kendall Tau Rank Correlation Coefficient metric to measures the rankings obtained from the HSM. Table 5 shows the rank order correlation among all five HSM, as well as the ranking obtained from human similarity judgment. The results show that 10 of 15 correlations were statistically significant at 2-tailed (p < 0.01). The strongest correlations occur for HEEM-HEOM (τ = 0.929, p < 0.01), HVDM-HEOM (τ = 0.515, p < 0.01), HVDM-HEEM (τ = 0.573, p < 0.01), and HEGM-HELM (τ = 0.436, p < 0.01). The weaker correlations occur among the following: HELM with HEOM, HVDM, and HEEM; HEGM with HEOM, HVDM, and HEEM. 
However, there are positive correlations among the ranking results from the human similarity judgements with HEOM, HVDM and HEEM; and a negative correlation with HELM and HEGM. The ranking produced by HEEM (τ = 0.449, p < 0.01) correlates highly with the human similarity judgements, closely followed by HEOM (τ = 0.229, p < 0.01). HVDM and HELM have a weak rank correlation with human similarity judgements, whereas HEGM had a significant negative correlation with human similarity judgements. Table 5 Kendall Tau Rank Correlation Coefficients High precision connotes that the heterogeneous similarity metrics ranked and returned more relevant services as contained in the gold standard. We used the ranking produced by HEOM as the gold standard and served as the benchmark to measure the precision of the rankings produced by the other HSM used in the evaluation. The value of top-k ranged from 5, 10, 15, 20 and 25. Based on the analysis shown in Fig. 2, we observed that HEEM consistently gave the highest precision accuracy across the ranges of k, followed slightly by HVDM, meanwhile HELM had the least. Precision Score of the heterogeneous similarity metrics (HEEM, HEGM, HVDM, HELM). Precision of the heterogeneous similarity metrics (HSM) measures how many relevant cloud services were ranked and returned by HSM as contained in the gold standard. The gold standard contained the ranking of services produced by HEOM, and it served as the benchmark to measure the precision of the rankings produced by other HSM including HEEM, HEGM, HVDM and HELM. The value of top-k ranged from 5, 10, 15, 20 and 25. HEEM had the highest precision score on all values of k compared to other HSM Based on the results of the rank order correlation and ranking accuracy measured by precision metrics precision, HEEM performed relatively well in comparison to HVDM viz a viz the ranking produced by HEOM. Although the HEOM and the HVDM are known heterogeneous similarity metrics and have been employed for similarity computations [22, 52], this paper was the first to apply these metrics, together with the three proposed in this paper, to rank cloud services by considering heterogeneous nature of cloud services QoS model. The application of HSM in ranking cloud services provides a more credible basis for cloud service ranking and selection. In this paper, we have been able to consider the heterogeneous dimensions of the QoS model that defines cloud services that have been hitherto overlooked by previous cloud ranking and selection approaches. Based on the results of the experimental evaluations, we showed that not only is the HEEM a promising metric for ranking heterogeneous dataset, it can also be applied to accurately rank cloud services in cloud service e-marketplace contexts with respect to user requirements. Generally, the results of the experimental evaluation show the suitability of HSM for ranking cloud services in a cloud service e-marketplace context. More specifically, HEOM, HEEM, and HVDM show considerable ranking accuracy compared to HEGM and HELM. Therefore, a cloud service selection approach that uses HSM to rank cloud services is more suitable compared to approaches that consider only quantitative QoS attributes. The emergence of cloud service e-marketplaces such as AppExchange, SaaSMax, and Google Play Store as a one-stop shop for demand and supply of SaaS applications further contributes to the popularity of cloud computing, as a preferred means of provisioning and purchasing cloud-based services. 
Despite the fact that existing cloud e-marketplaces do not consider user's QoS requirements, the search results are presented as an unordered list of icons making it difficult for users to discriminate among services shown. Moreover, existing cloud service ranking approaches assume that cloud services are only characterised by quantitative QoS attributes. The main objective of this paper is to extend existing approaches by ranking cloud services in accordance with user requirements while considering the heterogeneous nature of QoS attributes. We demonstrated the plausibility of applying heterogeneous similarity metrics in ranking cloud services and evaluated the performance of five (two known metrics and three new metrics) heterogeneous similarity metrics using rankings produced by the human judgement as a benchmark. The experimental results show that the QoS rankings obtained from HEOM, HEEM and HVDM correlates closely with human similarity assessments compared to other heterogeneous similarity metrics used in this study. Thus, confirming the suitability of heterogeneous similarity metrics for QoS-based ranking of cloud services with respect to the user's QoS requirements in the context of a cloud service e-marketplace. Although we have used only one user's QoS requirements as an example to describe the scenario of a QoS-based ranking of cloud services, similar studies can be performed using a variety of user QoS requirements and QoS datasets to further validate the results obtained in this paper. In the nearest future, the proposed heterogeneous similarity metrics will be integrated into a holistic framework for cloud service selection, and more experimental evaluations would be performed to ascertain the user experience of metrics proposed to rank and select cloud services in cloud service e-marketplace. https://www.saasmax.com/marketplace#!/ https://appexchange.salesforce.com www.alibaba.com HEEM: HEGM: Heterogeneous Euclidean Goodall Metric HEOM: Heterogeneous Euclidean-Overlap Metric HSM: Heterogeneous Similarity Metric HVDM: QoS: SMI: Rimal BP, Jukan A, Katsaros D, Goeleven Y (2011) Architectural requirements for cloud computing systems: an Enterprise cloud approach. J Grid Comput 9:3–26. https://doi.org/10.1007/s10723-010-9171-y Ezenwoke A, Omoregbe N, Ayo CK, Sanjay M (2013) NIGEDU CLOUD: model of a national e-education cloud for developing countries. IERI Procedia 4:74–80. https://doi.org/10.1016/j.ieri.2013.11.012 Buyya R, Yeo CS, Venugopal S (2008) Market-oriented cloud computing. IEEE, pp 5–13 Fortiş TF, Munteanu VI, Negru V (2012) Towards a service friendly cloud ecosystem. In: Proceedings - 2012 11th international symposium on parallel and distributed computing, ISPDC 2012, pp 172–179 Townsend C, Kahn BE (2014) The "visual preference heuristic": the influence of visual versus verbal depiction on assortment processing, perceived variety, and choice overload. J Consum Res 40:993–1015. https://doi.org/10.1086/673521 Chernev A, Böckenholt U, Goodman J (2012) Choice overload: a conceptual review and meta-analysis. J Consum Psychol 25:333–358. https://doi.org/10.1016/j.jcps.2014.08.002 Toffler A (1970) The future shock. Amereon Ltd. ISBN: 0553277375, New York Alrifai M, Skoutas D, Risse T (2010) Selecting skyline services for QoS-based web service composition. In: Proceedings of the 19th international conference on world wide web - WWW'10. ACM, p 11 Ezenwoke A, Daramola O, Adigun M (2017) Towards a visualization framework for service selection in cloud E-marketplaces. 
In: Proceedings - 2017 IEEE 24th international conference on web services, ICWS 2017 Ezenwoke AA (2018) Design of a QoS-based framework for service ranking and selection in cloud e-marketplaces. Asian J Sci Res 11:1–11 Chen X, Zheng Z, Liu X et al (2013) Personalized QoS-aware web service recommendation and visualization. IEEE Trans Serv Comput 6:35–47. https://doi.org/10.1109/TSC.2011.35 Abdelmaboud A, Jawawi DNA, Ghani I et al (2015) Quality of service approaches in cloud computing: a systematic mapping study. J Syst Softw 101:159–179. https://doi.org/10.1016/j.jss.2014.12.015 CSMIC (2014) Service measurement index framework version 2.1 introducing the service measurement index (SMI). http://csmic.org/downloads/SMI_Overview_TwoPointOne.pdf. Accessed 3 Feb 2018 Garg SK, Versteeg S, Buyya R (2011) SMICloud: a framework for comparing and ranking cloud services. In: Proceedings - 2011 4th IEEE international conference on utility and cloud computing, UCC 2011. IEEE, pp 210–218 Soltani S, Asadi M, Gašević D et al (2012) Automated planning for feature model configuration based on functional and non-functional requirements. Proc 16th Int Softw Prod Line Conf:56–65. https://doi.org/10.1145/2362536.2362548 Gui Z, Yang C, Xia J et al (2014) A service brokering and recommendation mechanism for better selecting cloud services. PLoS One 9. https://doi.org/10.1371/journal.pone.0105297 Mirmotalebi R, Ding C, Chi CH (2012) Modeling user's non-functional preferences for personalized service ranking. In: Liu C, Ludwig H, Toumani F, Yu Q (eds) Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics). Springer, Berlin Heidelberg, pp 359–373 ur Rehman Z, Hussain FK, Hussain OK (2011) Towards Multi-criteria Cloud Service Selection. In: 2011 Fifth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing. IEEE, pp 44–48 Zheng Z, Wu X, Zhang Y et al (2013) QoS ranking prediction for cloud services. IEEE Trans Parallel Distrib Syst 24:1213–1222. https://doi.org/10.1109/TPDS.2012.285 Qu L, Wang Y, Orgun MA et al (2014) Context-aware cloud service selection based on comparison and aggregation of user subjective assessment and objective performance assessment. In: Proceedings - 2014 IEEE international conference on web services, ICWS 2014, pp 81–88 Saripalli P, Pingali G (2011) MADMAC: multiple attribute decision methodology for adoption of clouds. In: Proceedings - 2011 IEEE 4th international conference on CLOUD computing, CLOUD 2011. IEEE, pp 316–323 He Q, Han J, Yang Y et al (2012) QoS-driven service selection for multi-tenant SaaS. In: Proceedings - 2012 IEEE 5th international conference on CLOUD computing, CLOUD 2012. IEEE, pp 566–573 Wilson DR, Martinez TR (1997) Improved heterogeneous distance functions. J Artif Intell Res 6:1–34. https://doi.org/10.1613/jair.346 Zhai X, Peng Y, Xiao J (2013) Heterogeneous metric learning with joint graph regularization for cross-media retrieval. Twenty-Seventh AAAI Conf Artif Intell Heterog:1198–1204 Menychtas A, Vogel J, Giessmann A et al (2014) 4CaaSt marketplace: an advanced business environment for trading cloud services. Futur Gener Comput Syst 41:104–120. https://doi.org/10.1016/j.future.2014.02.020 Khadka R, Saeidi A, Jansen S, et al (2011) An evaluation of service frameworks for the management of service ecosystems. In: Pacific Asia conference on information systems (PACIS) 2011 proceedings. 
P paper 93 Vigne R, Mach W, Schikuta E (2013) Towards a smart web service marketplace. In: Proceedings - 2013 IEEE international conference on business informatics, IEEE CBI 2013. IEEE, pp 208–215 Garg SK, Versteeg S, Buyya R (2013) A framework for ranking of cloud computing services. Futur Gener Comput Syst 29:1012–1023. https://doi.org/10.1016/j.future.2012.06.006 Tajvidi M, Ranjan R, Kolodziej J, Wang L (2014) Fuzzy cloud service selection framework. In: 2014 IEEE 3rd international conference on cloud networking, CloudNet 2014. IEEE, pp 443–448 Ayeldeen H, Shaker O, Hegazy O (2015) Distance similarity as a CBR technique for early detection of breast Cancer: an Egyptian case study similarity measure. In: Information Systems Design and Intelligent Applications, pp 449–456 Boriah S, Chandola V, Kumar V (2008) Similarity measures for categorical data: a comparative evaluation. In: Proceedings of the 2008 SIAM international conference on data mining, pp 243–254 Batchelor BG (1977) Pattern recognition: ideas in practice. Springer US Stanfill C, Waltz D (1986) Toward memory-based reasoning. Commun ACM 29:1213–1228. https://doi.org/10.1145/7902.7906 Jung G, Mukherjee T, Kunde S et al (2013) CloudAdvisor: A recommendation-as-a-service platform for cloud configuration and pricing. In: Proceedings - 2013 IEEE 9th world congress on SERVICES, SERVICES 2013. IEEE, pp 456–463 He Q, Han J, Yang Y et al (2012) QoS-driven service selection for multi-tenant SaaS. In: Proceedings - 2012 IEEE 5th international conference on CLOUD computing, CLOUD 2012, pp 566–573 Eskin E, Arnold A, Prerau M et al (2002) A geometric framework for unsupervised anomaly detection. Springer, Boston, pp 77–101 Lin D (1998) An information-theoretic definition of similarity. In: Proceedings of the 15th international conference on machine learning Goodall DW (1966) A new similarity index based on probability. Biometrics 22:882. https://doi.org/10.2307/2528080 Alkalbani AM, Ghamry AM, Hussain FK, Hussain OK (2015) Blue pages: software as a service data set. In: 2015 10th international conference on broadband and wireless computing, Communication and Applications (BWCCA). IEEE, pp 269–274 Sundareswaran S, Squicciarini A, Lin D (2012) A brokerage-based approach for cloud service selection. In: CLOUD computing (CLOUD), 2012 IEEE 5th international conference on. IEEE, pp 558–565 Le S, Dong H, Hussain FK et al (2014) Multicriteria decision making with fuzziness and criteria interdependence in cloud service selection. In: IEEE International Conference on Fuzzy Systems, pp 1929–1936 Karim R, Ding C, Miri A (2013) An End-to-End QoS Mapping Approach for Cloud Service Selection. In: 2013 IEEE ninth world congress on services. IEEE, pp 341–348 Sun L, Ma J, Zhang Y et al (2016) Cloud-FuSeR: fuzzy ontology and MCDM based cloud service selection. Futur Gener Comput Syst 57:42–55. https://doi.org/10.1016/j.future.2015.11.025 Li A, Yang X, Kandula S, Zhang M (2010) CloudCmp: comparing public cloud providers. Proc 10th Annu Conf internet Meas - IMC'10 1. https://doi.org/10.1145/1879141.1879143 Schad J, Dittrich J, Quiané-Ruiz J-A (2010) Runtime measurements in the cloud. Proc VLDB Endow 3:460–471. https://doi.org/10.14778/1920841.1920902 Iosup A, Yigitbasi N, Epema D (2011) On the performance variability of production cloud services. In: 2011 11th IEEE/ACM international symposium on cluster, Cloud and Grid Computing. IEEE, pp 104–113 Rehman ZU, Hussain OK, Hussain FK (2014) Parallel cloud service selection and ranking based on QoS history. 
Inverse problem for a coupled parabolic system with discontinuous conductivities: one-dimensional case
Michel Cristofol (Aix-Marseille Université, LATP, Technopôle Château-Gombert, 39 rue F. Joliot Curie, 13453 Marseille Cedex 13, France), Patricia Gaitan (Aix-Marseille Université, CPT, Campus de Luminy, Case 907, 13288 Marseille Cedex 9, France), Kati Niinimäki (Department of Applied Physics, University of Eastern Finland, Kuopio campus, P.O. Box 1627, FIN-70211 Kuopio, Finland) and Olivier Poisson (Aix-Marseille Université, LATP, Marseille, France)
Inverse Problems & Imaging, February 2013, 7(1): 159–182. doi: 10.3934/ipi.2013.7.159. Received March 2012; revised November 2012; published February 2013.
We study the inverse problem of the simultaneous identification of two discontinuous diffusion coefficients for a one-dimensional coupled parabolic system with the observation of only one component. The stability result for the diffusion coefficients is obtained by a Carleman-type estimate. Results from numerical experiments in the one-dimensional case are reported, suggesting that the method makes it possible to recover discontinuous diffusion coefficients.
Keywords: inverse problems, parabolic system, Carleman estimate, quadratic programming, interior-point method. Mathematics Subject Classification: Primary: 35R30, 35K57, 90C20, 90C5.
McCormick, "Nonlinear Programming: Sequential Unconstrained Minimization Techniques,", John Wiley and Sons, (1968). Google Scholar M. Hinze and A. Schiela, Discretization of interior point methods for state constrained elliptic optimal control problems: Optimal error estimates and parameter adjustment,, Computational Optimization and Applications, 48 (2010), 581. doi: 10.1007/s10589-009-9278-x. Google Scholar V. Kolehmainen, M. Lassas, K. Niinimäki and S. Siltanen, Sparsity-promoting Bayesian inversion,, Inverse Problems, 28 (2012). doi: 10.1088/0266-5611/28/2/025005. Google Scholar O. A. Ladyženskaja, V. A. Solonnikov and N. N. Ural'ceva, "Linear and Quasi-Linear Equations of Parabolic Type,", Translations of Mathematical Monographs, 23 (1968). Google Scholar J. Le Rousseau and L. Robbiano, Local and global Carleman estimates for parabolic operators with coefficients with jumps at interfaces,, Inventiones Mathematicae, 183 (2011), 245. doi: 10.1007/s00222-010-0278-3. Google Scholar J.-L. Lions and E. Magenes, "Problèmes aux Limites Non Homogènes et Applications,", Vol. 1, (1968). Google Scholar S. Mehrotra, On the implementation of a primal-dual interior point method,, SIAM Journal on Optimization, 2 (1992), 575. doi: 10.1137/0802028. Google Scholar I. Neitzel, U. Prüfert and T. Slawig, Strategies for time-dependent PDE control using an integrated modeling and simulation environment. Part one: problems without inequality constraints,, Technical Report 408, (2007). Google Scholar I. Neitzel, U. Prüfert and T. Slawig, Strategies for time-dependent PDE control with inequality constraints using an integrated modeling and simulation environment,, Numerical Algorithms, 50 (2008), 241. doi: 10.1007/s11075-008-9225-4. Google Scholar J. Nocedal and S. J. Wright, "Numerical Optimization,", Second edition, (2006). Google Scholar O. Poisson, Uniqueness and Hölder stability of discontinuous diffusion coefficients in three related inverse problems for the heat equation,, Inverse Problems, 24 (2008). doi: 10.1088/0266-5611/24/2/025012. Google Scholar U. Prüfert and F. Tröltzsch, An interior point method for a parabolic optimal control problem with regularized pointwise state constraints,, ZAMM Z. Angew. Math. Mech., 87 (2007), 564. doi: 10.1002/zamm.200610337. Google Scholar L. Roques and M. Cristofol, The inverse problem of determining several coefficients in a non linear Lotka-Volterra system,, Inverse Problems, 28 (2012). doi: 10.1088/0266-5611/28/7/075007. Google Scholar K. Sakthivel, N. Branibalan, J.-H. Kim and K. Balachandran, Erratum to: Stability of diffusion coefficients in an inverse problem for the lotka-volterra competition system,, Acta Applicandae Mathematicae, 111 (2010), 149. doi: 10.1007/s10440-010-9570-x. Google Scholar A. Schiela, Barrier methods for optimal control problems with state constraints,, SIAM Journal on Optimization, 20 (2009), 1002. doi: 10.1137/070692789. Google Scholar A. Schiela and A. Günther, An interior point algorithm with inexact step computation in function space for state constrained optimal control,, Numerische Mathematik, 119 (2011), 373. doi: 10.1007/s00211-011-0381-4. Google Scholar A. Schiela and M. Weiser, Superlinear convergence of the control reduced interior point method for PDE constrained optimization,, Computational Optimization and Applications, 39 (2008), 369. doi: 10.1007/s10589-007-9057-5. Google Scholar M. Ulbrich and S. Ulbrich, Primal-dual interior point methods for PDE-constrained optimization,, Mathematical Programming, 117 (2009), 435. 
doi: 10.1007/s10107-007-0168-7. Google Scholar R. J. Vanderbei and D. F. Shanno, An Interior-point algorith for nonconvex nonlinear programming,, Computational Optimization and Applications, 13 (1999), 231. doi: 10.1023/A:1008677427361. Google Scholar M. Weiser, T. Gänzler and A. Schiela, A control reduced primal interior point method for a class of control constrained optimal control problems,, Computational Optimization and Applications, 41 (2008), 127. doi: 10.1007/s10589-007-9088-y. Google Scholar S. J. Wright, "Primal-Dual Interior-Point Methods,", SIAM, (1997). doi: 10.1137/1.9781611971453. Google Scholar W. Wollner, A posteriori error estimates for a finite element discretization of interior point methods for an elliptic optimization problem with state constraints,, Computational Optimization and Applications, 47 (2010), 133. doi: 10.1007/s10589-008-9209-2. Google Scholar Yanqin Bai, Pengfei Ma, Jing Zhang. A polynomial-time interior-point method for circular cone programming based on kernel functions. Journal of Industrial & Management Optimization, 2016, 12 (2) : 739-756. doi: 10.3934/jimo.2016.12.739 Soodabeh Asadi, Hossein Mansouri. A Mehrotra type predictor-corrector interior-point algorithm for linear programming. Numerical Algebra, Control & Optimization, 2019, 9 (2) : 147-156. doi: 10.3934/naco.2019011 Yanqin Bai, Xuerui Gao, Guoqiang Wang. Primal-dual interior-point algorithms for convex quadratic circular cone optimization. Numerical Algebra, Control & Optimization, 2015, 5 (2) : 211-231. doi: 10.3934/naco.2015.5.211 Yanqin Bai, Lipu Zhang. A full-Newton step interior-point algorithm for symmetric cone convex quadratic optimization. Journal of Industrial & Management Optimization, 2011, 7 (4) : 891-906. doi: 10.3934/jimo.2011.7.891 Boshi Tian, Xiaoqi Yang, Kaiwen Meng. An interior-point $l_{\frac{1}{2}}$-penalty method for inequality constrained nonlinear optimization. Journal of Industrial & Management Optimization, 2016, 12 (3) : 949-973. doi: 10.3934/jimo.2016.12.949 Yu-Hong Dai, Xin-Wei Liu, Jie Sun. A primal-dual interior-point method capable of rapidly detecting infeasibility for nonlinear programs. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-27. doi: 10.3934/jimo.2018190 Xiantao Xiao, Liwei Zhang, Jianzhong Zhang. On convergence of augmented Lagrangian method for inverse semi-definite quadratic programming problems. Journal of Industrial & Management Optimization, 2009, 5 (2) : 319-339. doi: 10.3934/jimo.2009.5.319 Yue Lu, Ying-En Ge, Li-Wei Zhang. An alternating direction method for solving a class of inverse semi-definite quadratic programming problems. Journal of Industrial & Management Optimization, 2016, 12 (1) : 317-336. doi: 10.3934/jimo.2016.12.317 Behrouz Kheirfam, Morteza Moslemi. On the extension of an arc-search interior-point algorithm for semidefinite optimization. Numerical Algebra, Control & Optimization, 2018, 8 (2) : 261-275. doi: 10.3934/naco.2018015 Mohsen Tadi. A computational method for an inverse problem in a parabolic system. Discrete & Continuous Dynamical Systems - B, 2009, 12 (1) : 205-218. doi: 10.3934/dcdsb.2009.12.205 Yanqin Bai, Chuanhao Guo. Doubly nonnegative relaxation method for solving multiple objective quadratic programming problems. Journal of Industrial & Management Optimization, 2014, 10 (2) : 543-556. doi: 10.3934/jimo.2014.10.543 Behrouz Kheirfam. A full Nesterov-Todd step infeasible interior-point algorithm for symmetric optimization based on a specific kernel function. 
Numerical Algebra, Control & Optimization, 2013, 3 (4) : 601-614. doi: 10.3934/naco.2013.3.601 Siqi Li, Weiyi Qian. Analysis of complexity of primal-dual interior-point algorithms based on a new kernel function for linear optimization. Numerical Algebra, Control & Optimization, 2015, 5 (1) : 37-46. doi: 10.3934/naco.2015.5.37 Yinghong Xu, Lipu Zhang, Jing Zhang. A full-modified-Newton step infeasible interior-point algorithm for linear optimization. Journal of Industrial & Management Optimization, 2016, 12 (1) : 103-116. doi: 10.3934/jimo.2016.12.103 Liming Sun, Li-Zhi Liao. An interior point continuous path-following trajectory for linear programming. Journal of Industrial & Management Optimization, 2019, 15 (4) : 1517-1534. doi: 10.3934/jimo.2018107 Fang Zeng, Pablo Suarez, Jiguang Sun. A decomposition method for an interior inverse scattering problem. Inverse Problems & Imaging, 2013, 7 (1) : 291-303. doi: 10.3934/ipi.2013.7.291 Guoqiang Wang, Zhongchen Wu, Zhongtuan Zheng, Xinzhong Cai. Complexity analysis of primal-dual interior-point methods for semidefinite optimization based on a parametric kernel function with a trigonometric barrier term. Numerical Algebra, Control & Optimization, 2015, 5 (2) : 101-113. doi: 10.3934/naco.2015.5.101 Behrouz Kheirfam, Guoqiang Wang. An infeasible full NT-step interior point method for circular optimization. Numerical Algebra, Control & Optimization, 2017, 7 (2) : 171-184. doi: 10.3934/naco.2017011 Ye Tian, Cheng Lu. Nonconvex quadratic reformulations and solvable conditions for mixed integer quadratic programming problems. Journal of Industrial & Management Optimization, 2011, 7 (4) : 1027-1039. doi: 10.3934/jimo.2011.7.1027 Yanqun Liu. An exterior point linear programming method based on inclusive normal cones. Journal of Industrial & Management Optimization, 2010, 6 (4) : 825-846. doi: 10.3934/jimo.2010.6.825 Michel Cristofol Patricia Gaitan Kati Niinimäki Olivier Poisson
Accuracy of genomic selection for growth and wood quality traits in two control-pollinated progeny trials using exome capture as the genotyping platform in Norway spruce Zhi-Qiang Chen1, John Baison1, Jin Pan1, Bo Karlsson2, Bengt Andersson3, Johan Westin3, María Rosario García-Gil1 & Harry X. Wu ORCID: orcid.org/0000-0002-7072-47041,4 Genomic selection (GS) can increase genetic gain by reducing the length of breeding cycle in forest trees. Here we genotyped 1370 control-pollinated progeny trees from 128 full-sib families in Norway spruce (Picea abies (L.) Karst.), using exome capture as genotyping platform. We used 116,765 high-quality SNPs to develop genomic prediction models for tree height and wood quality traits. We assessed the impact of different genomic prediction methods, genotype-by-environment interaction (G × E), genetic composition, size of the training and validation set, relatedness, and number of SNPs on accuracy and predictive ability (PA) of GS. Using G matrix slightly altered heritability estimates relative to pedigree-based method. GS accuracies were about 11–14% lower than those based on pedigree-based selection. The efficiency of GS per year varied from 1.71 to 1.78, compared to that of the pedigree-based model if breeding cycle length was halved using GS. Height GS accuracy decreased to more than 30% while using one site as training for GS prediction and using this model to predict the second site, indicating that G × E for tree height should be accommodated in model fitting. Using a half-sib family structure instead of full-sib structure led to a significant reduction in GS accuracy and PA. The full-sib family structure needed only 750 markers to reach similar accuracy and PA, as compared to 100,000 markers required for the half-sib family, indicating that maintaining the high relatedness in the model improves accuracy and PA. Using 4000–8000 markers in full-sib family structure was sufficient to obtain GS model accuracy and PA for tree height and wood quality traits, almost equivalent to that obtained with all markers. The study indicates that GS would be efficient in reducing generation time of breeding cycle in conifer tree breeding program that requires long-term progeny testing. The sufficient number of trees within-family (16 for growth and 12 for wood quality traits) and number of SNPs (8000) are required for GS with full-sib family relationship. GS methods had little impact on GS efficiency for growth and wood quality traits. GS model should incorporate G × E effect when a strong G × E is detected. Norway spruce (Picea abies (L.) Karst.) is one of the most important conifer species for commercial wood production and ecological integrity in Europe [1]. A conventional breeding program for Norway spruce based on pedigree-based phenotypic selection usually takes between 20 and 30 years in Scandinavian countries [2]. To shorten the breeding cycle, genomic selection (GS) has recently been proposed as an alternative in many tree species such as eucalypts (Eucalyptus) [3,4,5], maritime pine (Pinus pinaster Aiton) [6, 7], loblolly pine (Pinus taeda L.) [8, 9], white spruce and its hybrids (Picea glauca [Moench] Voss) [10,11,12], and black spruce (Picea Mariana [Mill] B.S.P.) [13]. Hayes et al. 
[14] considered four major factors affecting the accuracy of GS: 1) heritability of the target trait; 2) the extent of linkage disequilibrium (LD) between the marker and the quantitative trait locus (QTL); 3) the size of training population and the degree of relationship between training set (TS) and validation set (VS); and 4) the genetic architecture of the target trait. The genetic architecture and heritability of the target/breeding traits are intrinsic of the nature of the traits and environment in the test trials. Thus it is difficult to change in a practical breeding process. However, some factors such as LD between the marker and the QTL and the size of training population are relatively easy to be managed by increasing the number of markers and relationship between training and prediction (or validation) populations, reducing effective population size (Ne), and increasing training population size [15]. Model selection in GS is quite important for the prediction of genomic estimated breeding values (GEBVs) [14]. In GS, model adequacy is more related to the genetic architecture of the target trait. Based on genome-wide association studies (GWAS), most growth and wood quality traits for conifer species have polygenic inheritance with a gamma or exponential distribution of allelic effects [16]. To account for these skewed distributions of a few genes with large effects and most of genes with small effects, Bayes A, B, and Cπ and Bayesian Least Absolute Shrinkage and Selection Operator (BLASSO) were developed to fit the models more accurately, in contrast to Genomic best linear unbiased prediction (GBLUP) model that assumes a normal distribution of allelic effect. In most studies for growth and wood quality traits, the results were similar regardless of models used [4, 9]. Resende et al. [9] reported that fusiform rust in loblolly pine may be controlled by a few genes with large effects and Bayesian-based models had a higher predictive ability (PA), which is defined as the correlation between adjusted phenotype values and GEBVs. Thus, it is worthwhile to test different models on traits that may have different genetic architectures for specific tree species. So far, several genotyping technologies have been employed in GS, such as diversity array technology (DArT) array [5, 17], SNP chip/array [4,5,6, 9, 10, 13], genotyping by sequencing (GBS) [10, 11], and exome capture [18]. Those technologies were developed to genotype a subset of a whole genome, especially for conifer species with a large genome size. From the published papers, the number of used markers in tree species varies from 2500 to 69,511 single nucleotide polymorphisms (SNPs), most with a few thousands of SNPs. With the large genome size in most commercial conifer species (~ 20Gb) [19], for example, 20 Gb in Norway spruce [20], such small number of markers may not be able to capture most of QTL effects with short-range marker-QTLs LD in undomesticated populations or large breeding populations. Thus, such studies mostly capture those QTL effects with long-range marker-QTLs LD and relationships in highly related populations, such as full-sib families in tree breeding programs with a small Ne. Evaluations of accuracy in GS have been performed with phenotypic and dense marker data from a single site and multiple sites in several tree species [8, 11, 17]. Substantial G × E effects for growth traits have been found in conifer species [21,22,23]. However, wood quality traits usually have a low or non-significant G × E [24,25,26]. 
Thus, a GS model for growth traits using data from a single site and used to predict genomic breeding values in another site, may produce a low accuracy. The aims of this study were to 1) evaluate the accuracy of GS on tree height and wood quality traits; 2) assess the GS accuracy for single site, cross-site, and joint-sites (e.g. G × E effect on GS selection); 3) examine effect of different statistical models and ratios between TS and VS for GS; 4) explore the roles of relatedness (full-sib, half-sib and unrelated) on accuracy of GS; 5) test the accuracy using subsets of random markers and markers with the largest positive effects; 6) estimate number of trees within-family and number of families required for effective GS for tree height and wood quality traits. Sampling of plant material In this study, 1370 individuals were selected from two 28-year-old control-pollinated progeny trials with the same 128 families from a partial diallel mating design that consisted of 55 parents originating from Northern Sweden. 5–20 trees per family per site were selected in Vindeln (64.30°N, 19.67°E, altitude: 325 m) and Hädanberg (63.58°N, 18.19°E, altitude, 240 m). Buds and the first year fresh needles from 46 parents were sampled in a grafted archive at Skogforsk, Sävar (63.89°N, 20.54°E) and in a grafted seed orchard at Hjssjö (63.93°N, 20.15°E). Progenies were raised in the nursery at Sävar, and the trials were established in 1988 by Skogforsk in Vindeln and Hädanberg. A completely randomized design without designed pre-block was used in the Vindeln trial (site 1), which was divided into 44 post-blocks. Single-tree plot with a spacing of 1.5 m × 2 m was used in each rectangular block with 60 trees (6 × 10). The same design was also used in the Hädanberg trial (site 2) with 44 post-blocks, but for the purpose of demonstration, there was an extra design with 47 extra plots, each plot with 16 trees (4 × 4). Based on the spatial analysis, in the final model, 47 plots were combined into two big post-blocks. Tree height was measured in 2003 at the age of 17 years. Solid-wood quality traits including Pilodyn penetration (Pilodyn) and acoustic velocity (velocity) were measured in October 2016. Surrogate wood density trait was measured using Pilodyn 6 J Forest (PROCEQ, Zurich, Switzerland) with a 2.0 mm diameter pin, without removing the bark. Velocity is highly related to microfibril angle (MFA) in Norway spruce [27] and was determined using Hitman ST300 (Fiber-gen, Christchurch, New Zealand). By combining the Pilodyn penetration and acoustic velocity, indirect modulus of elasticity (MOE) was estimated using the equation developed by Chen et al. [27]. Total genomic DNA was extracted from 1, 370 control-pollinated progeny and their 46 unrelated parents using the Qiagen Plant DNA extraction protocol with DNA quantification performed using the Qubit® ds DNA Broad Range Assay Kit, Oregon, USA. Probe design and evaluation were described in Vidalis et al. [28]. Sequence capture was performed using the 40,018 probes previously designed and evaluated for the materials [28] and samples were sequenced to an average depth of 15x at an Illumina HiSeq 2500 platform. Raw reads were mapped against the P. abies reference genome v1.0 using BWA-mem [29, 30]. SAMTools [31] and Picard [32] were used for sorting and removal of PCR duplicates and the resulting BAM files were subsequently reduced to containing the probe only bearing scaffolds (24,919) before variant calling. 
Variant calling was performed using the Genome Analysis Toolkit (GATK) HaplotypeCaller [32] in Genome Variant Call Format (gVCF) output format. Samples were then merged into batches of ~ 200 before all samples were jointly called. As per the recommendations from GATK's best practices, Variant Quality Score Recalibration (VQSR) method was performed in order to avoid the use of hard filtering for exome/sequence capture data. The VQSR method utilizes machine-learning algorithms to learn from a clean dataset to distinguish what a good versus bad annotation profile of variants for a particular species should be like. For the VQSR analysis two datasets were created, a training subset and the final input file. The training dataset was derived from a Norway spruce genetic mapping population with loci showing expected segregation patterns [33]. The training dataset was designated as true SNPs and assigned a prior value of 12.0. The final input file was derived from the raw sequence data using GATK best practices with the following parameters: extended probe coordinates by + 100 excluding INDELS, excluding LowQual sites, and keeping only bi-allelic sites. The following annotation parameters QualByDepth, MappingQuality and BaseQRankSum, with tranches 100, 99.9, 99.0 and 90.0 were then applied to the two files for the determination of the good versus bad variant annotation profiles. The recalibrated Variant Call Format was filtered based on the following steps: (1) removing indels; (2) keeping only biallelic loci; (3) treating genotype with a genotype quality (GQ) < 6 as missing; (4) filtering read depth (DP) < 2; (5) removing individual call rate < 50%; (6) removing variant call rate ("missingness") < 90%; (7) minor allele frequency (MAF) < 0.01. After steps 1–4, we calculated discordance between 148 pairs technique replicates, the average discordance was less 1%. Thus, conditions of GQ < 6 and DP < 2 as missing are sufficient to do downstream analysis. After all filtering, 116,765 SNPs were kept for downstream analysis and 77,116 SNPs were independent based on LD (r2 < 0.2) calculated in PLINK [34]. The resultant SNPs were annotated using the default parameters for snpEff 4. The Ensembl general feature format (GTF, gene sets) information for the P. abies genome was utilized to build an annotation database. This analysis revealed that 90% of the variants were located within gene coding regions, with only 10% variants in intronic regions. LD K-nearest neighbour genotype imputation approach [35] was used to impute missing genotypes in TASSEL 5 [36]. After several rounds of imputation, a few of missing genotypes were imputed using random imputation of the codeGeno function in the synbreed package in R [37]. In total, 5.9% of missing genotypes were imputed. Estimating breeding values Breeding values for the genotypes in the trials were predicted using the following model: $$ y= X\beta + Wb(s)+{Z}_1a+e $$ Where y is a vector of phenotypic observations of a single trait; β is a vector of fixed effects, including a grand mean and site effects, b(s) is a vector of post-block within site effects, a is a vector of site by additive effects of individuals. X, W, and Z1 are incidence matrices for β, b(s), and a, respectively. For join-site cross-validation, the average of breeding values was assumed as estimated (true) breeding values (EBVs) when unstructured variance and covariance were used for additive effects. Otherwise, EBVs in the single site were assumed as true or reference breeding values. 
The random additive effects (a) in equation (1) were assumed to follow \( a\sim N\left(0,A\left[\begin{array}{cc}{\sigma}_{a1}^2& {\sigma}_{a12}\\ {}{\sigma}_{a12}& {\sigma}_{a2}^2\end{array}\right]\right) \), where A is the additive genetic relationship matrix, \( {\sigma}_{a1}^2 \) and \( {\sigma}_{a2}^2 \) are the additive genetic variances for site 1 and site 2, respectively, \( {\sigma}_{a12} \) is additive genetic covariance between site 1 and site 2. The residual e was assumed to follow \( e\sim N\left(0,\left[\begin{array}{cc}{I}_{n1}{\sigma}_{e1}^2& 0\\ {}0& {I}_{n2}{\sigma}_{e2}^2\end{array}\right]\right) \), where \( {\sigma}_{e1}^2 \) and \( {\sigma}_{e2}^2 \) are the residual variances for site 1 and site 2, In1 and In2 are identity matrices, n1 and n2 are the number of individuals in each site, 0 is the zero matrix. To obtain accurate heritability estimates for joint-site models, we used the equation as following: $$ y= X\beta +\mathrm{W}b(s)+{Z}_1a+{Z}_2 sa+e $$ where sa is a vector of site-by-additive interaction effects, a, sa, and e were assumed to be homogenous between the two sites. Statistical analyses for genomic predictions GBLUP, Bayesian ridge regression (BRR), BLASSO, and reproducing kernel Hilbert space (RKHS) were used to estimate GEBVs. We implemented GBLUP calculations using ASReml R [38]. And we implemented the BRR, BLASSO, and RKHS methods using BGLR function from the BGLR package in R [39]. The details of these statistical methods will be defined later. The GEBVs were estimated using the following mixed linear model: $$ {y}^{\prime }= X\beta + Za+e $$ Where y' is a vector of adjusted phenotypic observations by post-block effects and standardized site effect (transforming it to have zero mean and unit variance for each site), β is a vector of fixed effect, including a grand mean), a and e are vectors of random additive and random error effects, respectively, and X and Z are the incidence matrices. The four genomic-based best linear unbiased prediction methods were compared with the traditional pedigree-based best linear unbiased prediction (ABLUP) ABLUP The ABLUP is the traditional method that utilizes a pedigree relationship matrix (A) to predict the EBVs. For ABLUP the vector of random additive effect (a) in equation (3) is assumed to follow a normal distribution \( a\sim N\left(0,A{\sigma}_a^2\right) \), where \( {\sigma}_a^2 \) is the additive genetic variance. The residual vector e is assumed as \( e\sim N\left(0,I{\sigma}_e^2\right) \), where I is the identity matrix. The mixed model equation (3) was solved to obtain EBVs as: $$ \left[\begin{array}{cc}{X}^{\prime }X& {X}^{\prime }Z\\ {}{Z}^{\prime }X& {Z}^{\prime }Z+{A}^{-1}\alpha \end{array}\right]\left[\begin{array}{c}b\\ {}u\end{array}\right]=\left[\begin{array}{c}{X}^{\prime }y\\ {}{Z}^{\prime }y\end{array}\right] $$ The scalar α is defined as \( \alpha ={\sigma}_e^2/{\sigma}_a^2 \), where \( {\sigma}_e^2 \) is the residual variance, \( {\sigma}_a^2 \) is the additive genetic variance. GBLUP The GBLUP model is the same as ABLUP, with the only difference being that the genomic relationship matrix (G) replaces the A matrix. The G matrix is calculated as \( G=\frac{\left(M-P\right){\left(M-P\right)}^T}{2{\sum}_{i=1}^q{p}_i\left(1-{p}_i\right)} \), where M is the matrix of samples with SNPs encoded as 0, 1, 2 (i.e. the number of minor alleles), P is the matrix of allele frequencies with the ith column given by 2(pi − 0.5), where pi is the observed allele frequency of all genotyped samples. 
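As an illustration of the G matrix just defined, the following is a minimal NumPy sketch of the VanRaden-type computation from a 0/1/2-coded genotype matrix. It is not the pipeline used in the study (G and its inverse were obtained with R packages), and the function name and toy data are purely illustrative assumptions.

import numpy as np

def genomic_relationship(M):
    # M: (n_trees x n_snps) genotype matrix coded 0/1/2 (minor-allele counts).
    # G = Z Z' / (2 * sum_i p_i (1 - p_i)), with Z the column-centred genotypes.
    p = M.mean(axis=0) / 2.0                 # observed minor-allele frequencies
    Z = M - 2.0 * p                          # centre each SNP column
    denom = 2.0 * np.sum(p * (1.0 - p))      # scaling that puts G on a scale analogous to A
    return Z @ Z.T / denom

# toy example: 6 trees, 10 SNPs
rng = np.random.default_rng(1)
M = rng.integers(0, 3, size=(6, 10)).astype(float)
G = genomic_relationship(M)
print(np.round(G, 2))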
In GBLUP, the random additive effect (a) in equation (3) is assumed to follow \( a\sim N\Big(0,G{\sigma}_g^2 \)), where \( {\sigma}_g^2 \) is the genomic-based genetic variance and GEBVs (\( \widehat{a} \)) are predicted from equation (4), but with A− 1 replaced by G− 1 and \( {\sigma}_a^2 \) replaced by \( {\sigma}_g^2 \). The inverse of G matrix was estimated using write.realtionshipMatrix function in the synbreed package in R [40]. Bayesian ridge regression (BRR) BRR is a Bayesian version of ridge regression with the shrinkage merit that was originally intended to deal with the problem of high correlation among predictors in linear regression models [41]. The random additive vector a is assigned a multivariate normal prior distribution with a common variance to all marker effects, that is \( a\sim N\Big(0,{I}_p{\sigma}_m^2 \)), where p is the number of markers. Parameter \( {\sigma}_m^2 \) denotes the unknown genetic variance contributed by each individual marker and is assigned as \( {\sigma}_a^2\sim {\chi}^{-2}\left({df}_a,{S}_m\right) \), where dfa is degrees of freedom, Sm is the scale parameter. Finally, the residual variance is assigned as \( {\sigma}_e^2\sim {\chi}^{-2}\left({df}_e,{S}_e\right) \), where dfe is degrees of freedom for residual variance, Se is the scale parameter for residual variance. Bayesian LASSO (BLASSO) BLASSO is a Bayesian version of LASSO regression with two properties of LASSO: 1) shrinkage and 2) variable selection. BLASSO assumes that the random additive effects in equation (3) is given a multivariate normal distribution with marker specific prior variance, which is assigned as \( a\sim N\left(0,T{\sigma}_m^2\right) \), where \( T=\mathit{\operatorname{diag}}\left({\tau}_1^2,\dots, {\tau}_p^2\right) \), Parameter \( {\tau}_j^2 \) is assigned as \( {\tau}_j^2\sim Exp\left({\lambda}^2\right) \) for j = 1,…, p, where λ2 is assigned as λ2~Gamma(r, δ). The residual variance is assigned as \( {\sigma}_e^2\sim {\chi}^{-2}\ \left({df}_e,{S}_e\right) \), where dfe is degrees of freedom, Se is the scale parameter. RKHS RKHS assumes that the random additive marker effects in equation (3) are distributed as a~N(0, \( \overline{K}{\sigma}_a^2 \)), where \( \overline{K} \) is computed by means of a Gaussian Kernel that is given by Kij = exp(−hdij), where h is a semi-parameter that controls how fast the prior covariance function declines as genetic distance increase and dij is the genetic distance between two samples computed as \( {d}_{ij}={\sum}_{k=1}^p{\left({x}_{ik}-{x}_{jk}\right)}^2 \), where xik and xjk are the kth SNPs for the ith and jth samples, respectively [42]. RKHS method uses a Gibbs sampler for the Bayesian framework and assigned the prior distribution of \( {\sigma}_a^2 \) and \( {\sigma}_e^2 \) as \( {\sigma}_a^2\sim {\chi}^{-2}\ \left({df}_a,{S}_a\right) \) and \( {\sigma}_e^2\sim {\chi}^{-2}\ \left({df}_e,{S}_e\right) \), respectively. Here we chose a Single-Kernel model as suggested by Perez and de Los Campos [39], where h value was defined as h = 0.25. Model convergence and prior sensitivity analysis The algorithm is extended by Gibbs sampling for estimation of variance components. The Gibbs sampler was run for 150,000 iterations with a burn-in of 50,000 iterations. A thinning interval was set to 1000. The convergence of the posterior distribution was verified using trace plots. Flat priors were given to all the models. 
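Below is a compact sketch of how GEBVs can be obtained by solving the mixed model equations of equation (4) with G−1 in place of A−1, as described for GBLUP above. It assumes the additive and residual variances are already known (in the study they were estimated with ASReml-R and BGLR), and the function and variable names are hypothetical rather than the authors' code.

import numpy as np

def gblup_mme(y, X, Z, G, sigma2_a, sigma2_e):
    # Solve Henderson's mixed model equations (eq. 4) with G replacing A:
    #   [X'X   X'Z              ] [b]   [X'y]
    #   [Z'X   Z'Z + alpha*G^-1 ] [u] = [Z'y],   alpha = sigma2_e / sigma2_a
    alpha = sigma2_e / sigma2_a
    Ginv = np.linalg.inv(G + 1e-6 * np.eye(G.shape[0]))   # small ridge keeps G invertible
    lhs = np.block([[X.T @ X, X.T @ Z],
                    [Z.T @ X, Z.T @ Z + alpha * Ginv]])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    n_fixed = X.shape[1]
    return sol[:n_fixed], sol[n_fixed:]    # fixed-effect estimates, GEBVs for all trees

# usage sketch (hypothetical variance components):
# n = G.shape[0]; X = np.ones((n, 1)); Z = np.eye(n)
# b_hat, gebv = gblup_mme(y_adj, X, Z, G, sigma2_a=0.3, sigma2_e=0.7)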
Heritability and type-B genetic correlation estimates Pedigree-based individual narrow-sense heritabilities (\( {h}_a^2 \)) and marker-based individual narrow-sense heritabilities (\( {h}_g^2\Big) \) were calculated as $$ {h}_a^2=\frac{\sigma_a^2}{\sigma_{pa}^2},{h}_g^2=\frac{\sigma_g^2}{\sigma_{pg}^2} $$ respectively, where \( {\sigma}_a^2 \) is the pedigree-based additive variance estimated from ABLUP, while \( {\sigma}_g^2 \) is the marker-based additive variance estimated from GBLUP. \( {\sigma}_{pa}^2 \) and \( {\sigma}_{pg}^2 \) are phenotypic variances for pedigree-based and marker-based models, respectively. Type-B genetic correlation was calculated as \( {r}_{12}={\sigma}_{a12}/\sqrt{\sigma_{a1}^2{\sigma}_{a2}^2} \), where σa12 is covariance between additive effects of the same traits in different sites and \( {\sigma}_{a1}^2\ \mathrm{and}\ {\sigma}_{a2}^2 \) are estimated additive variances for the same traits in different sites, respectively [43]. One-tailed likelihood ratio test (LRT) was used to check the significance of the Type-B genetic correlation against one. Model validation and estimation of GS accuracy Tenfold cross-validation (90% of training and 10% validation) was performed for all the models, except in testing the various sizes of training data sets and the number of trees per family. GEBVs in VS for Bayesian and RKHS methods were estimated as $$ \widehat{g_i}={\sum}_{j=1}^n{Z}_{ij}^{\prime }{\widehat{a}}_j $$ Where \( {Z}_{ij}^{\prime } \) is the indicator covariate (− 1, 0, or 1) for the ith tree at the jth locus and \( {\widehat{a}}_j \) is the estimated effect at the jth locus. In this study, prediction accuracy (accuracy) was defined as the Pearson correlation between cross-validated GEBVs and EBVs as "true" or reference breeding values estimated from ABLUP using all the phenotypic data (y). Predictive ability (PA) was defined as the Pearson correlation between GEBVs and adjusted phenotypic values y' in equation (3). Testing different statistical models and the size of training set on GS accuracy To test the effect of different statistical models, we used the five models (ABLUP, GBLUP, BRR, BLASSO, and RKHS) and five TS/VS sizes (ratio of 1:1, 3:1, 5:1, 7:1, and 9:1 of training/validation population size, respectively) to evaluate the accuracies of different models. For each of 25 models, 10 replicate runs were carried out for each scenario of four traits. Testing the number of families on GS accuracy To test the effect of the number of families on GS, we randomly selected family numbers from 10 to 120 to test the efficiency of GS. The purpose is to examine the efficiency of using a small subset of families in clonal selection. Testing the number of trees per family on GS accuracy To test the effect of the number of trees per family on GS, we randomly selected 1 to 20 trees per family as TS, remaining trees as VS. Testing site effect on GS accuracy In order to consider genotype and environment interaction (G × E) effect on GS, different GS scenarios were tested: 1) within-site GS, both training and validation sets are from one single site, where EBVs from single site model including G × E term was assumed as true breeding values of the site; 2) cross-site GS, using one site data as TS to predict GEBVs in another site; 3) joint-site GS, average EBVs were assumed as true breeding values when unstructured additive variance-covariance were used in equation (1) . 
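The tenfold cross-validation scheme and the accuracy/PA definitions described above can be summarised in a short sketch that reuses the hypothetical gblup_mme helper from the previous example: phenotypes of the validation fold are withheld, GEBVs for all trees are predicted through G, and accuracy and predictive ability are computed as the stated correlations. The fold assignment, variance components, and names here are illustrative assumptions, not the study's implementation.

import numpy as np

def cv_accuracy_pa(y_adj, ebv, G, sigma2_a, sigma2_e, k=10, seed=42):
    # y_adj: adjusted phenotypes (eq. 3); ebv: reference breeding values from full-data ABLUP.
    n = len(y_adj)
    X = np.ones((n, 1))                     # grand mean only
    Z = np.eye(n)                           # one record per tree
    folds = np.array_split(np.random.default_rng(seed).permutation(n), k)
    acc, pa = [], []
    for val in folds:
        train = np.setdiff1d(np.arange(n), val)
        # fit on the training fold only; G still links training and validation trees
        _, gebv = gblup_mme(y_adj[train], X[train], Z[train], G, sigma2_a, sigma2_e)
        acc.append(np.corrcoef(gebv[val], ebv[val])[0, 1])     # prediction accuracy
        pa.append(np.corrcoef(gebv[val], y_adj[val])[0, 1])    # predictive ability
    return float(np.mean(acc)), float(np.mean(pa))

Averaging the two correlations over the ten folds gives cross-validated accuracy and PA of the kind reported in the Results.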
Testing the effect of relatedness on GS accuracy In order to test whether the different relatedness (family structures) affect the accuracy and PA, three different scenarios were used in this study. There were: 1) TS and VS selected based on random sampling, but more likely from the same full-sib families; 2) for comparison purpose, TS and VS selected based on half-sib family structure in which the TS and VS shared female parents, but from different families; 3) for comparison purpose, TS and VS selected based on unrelated family structure in which the TS and VS shared different female and male parents. Testing subset of markers (SNPs) on GS accuracy To test the impact of the number of SNPs on the accuracy of genomic prediction models, 15 subsets of SNPs (10, 25, 50, 100, 250, 500, 750, 1 K, 2 K, 4 K, 6 K, 8 K, 10 K, 50 K, and 100 K) and two different types of sampling strategies: 1) randomly selected SNP subsets and 2) SNP subsets selected with the largest positive effects were implemented. The single marker effects were estimated using single-marker regression for association testing in training data set. To examine the impact of relatedness on the number of markers, we used full-sib and half-sib family structures between training and validation populations. Selection response with genomic selection Selection response could be calculated as the ratio between selection accuracy and breeding cycle length in years. The relative efficiency of GS to traditional BLUP-based selection (TBS) is $$ \mathrm{RE}=\frac{\mathrm{r}\left({\mathrm{GEBVs}}_{\mathrm{GS}},\mathrm{EBVs}\right)}{\mathrm{r}\left({\mathrm{EBVs}}_{\mathrm{TBS}},\mathrm{EBVs}\right)} $$ Thus, the relative efficiency of GS to TBS per year is $$ \mathrm{RE}/\mathrm{year}=\frac{\mathrm{r}\left({\mathrm{GEBVs}}_{\mathrm{GS}},\mathrm{EBVs}\right)}{\mathrm{r}\left({\mathrm{EBVs}}_{\mathrm{T}\mathrm{BS}},\mathrm{EBVs}\right)}\times \frac{{\mathrm{T}}_{\mathrm{T}\mathrm{S}}}{{\mathrm{T}}_{\mathrm{GS}}} $$ Where EBVs are the estimated breeding values using the full data from equation (1), and TTBS and TGS are the breeding cycle lengths under TBS and GS, respectively [15]. We assumed that the TGS is reduced to 12.5 years from the current approximately 25 years of a breeding cycle by omitting or reducing progeny testing time about 10–15 years. Heritability and type-B genetic correlation Heritabilities of the tree height based on GBLUP from two single sites and the joint sites were higher than those from ABLUP models (Table 1). This is in contrast with the wood quality traits (Pilodyn, velocity, and MOE) where all heritabilities from the GBLUP models were lower than those obtained from the ABLUP models. For example, heritability of Pilodyn (0.34) from the GBLUP model was smaller than that from the ABLUP model (0.41). Table 1 Estimates of variance components and narrow-sense heritabilities from the conventional pedigree-based relationship matrix model (ABLUP) and genomic-based relationship matrix model (GBLUP) in two single sites and a joint-site analysis The type-B genetic correlations for tree height estimated from ABLUP and GBLUP models (as 0.48 and 0.41, respectively), were significantly different from 1, however, the type-B genetic correlations of wood quality traits (0.80–0.94) were higher and statistically were not significantly different from 1. 
Accuracy of different statistical methods and the size of training set Estimates of accuracy were obtained using different statistical methods and for different ratios of TS/VS for each of four traits (Fig. 1). It was observed that ABLUP had higher accuracy than that using the four genomic selection methods (GBLUP, BRR, BLASSO, and RKHS) for tree height, Pilodyn, velocity, and MOE. Tree height had higher accuracy than the three wood quality traits using ABLUP and GS (Fig. 1 and Table 2). Among the four GS methods, GBLUP, BLASSO, and RKHS had approximately similar accuracies, but a slightly higher accuracy was observed when BRR was implemented for tree height. Nevertheless, these four GS methods had little differences on accuracy for the three wood quality traits. Accuracy of different methods and increasing ratios of training set (TS) and validation set (VS) Table 2 Accuracy, predictive ability (PA), relative efficiency (RE), and relative efficiency per year (RE per year) based on all the markers and five genomic selection scenarios for height, Pilodyn, velocity, and MOE GS accuracy increased as the ratio of training to validation population size increased. However, for GS methods, the maximum accuracy was reached at the ratio of 5:1 between training to validation populations for tree height while maximum accuracy seemed to change minimally after the 3:1 ratio for wood quality traits. Impact of the number of families on GS accuracy We estimated the effect of the number of families on GS. The accuracies of all four traits increased when the number of families increased from 10 to 120 families in both the ABLUP and GBLUP model building (Fig. 2). PA had a similar trend, except for tree height. PA of tree height increased from 10 to 30 families and then had similar values up to 120 families in the model building. Accuracy and predictive ability (PA) of genomic selection with different number of families based on two statistical methods: 1) ABLUP and GBLUP with 9:1 for training set and validation set Impact of the number of trees per family on GS accuracy In this study, all 128 full-sib families planted in both progeny trials were selected. Five trees per family in site 1 (Vindeln) and 20 trees per family were selected if there were enough trees for some families in site 2 (Hädanberg). Thus, in joint-site cross-validation model, maximum 20 trees per family were tested. Accuracies and PA from ABLUP for all the traits were higher than those from GBLUP when we randomly selected a subset trees per family as TS (Fig. 3). It was also observed that accuracy and PA had similar increased trends as the number of trees within-family increased, but it flattened (stabilized) as the tree numbers reached between 6 and 14, depending on method (e.g. ABLUP and GBLUP) and traits. The accuracies of tree height, Pilodyn, velocity, and MOE increased initially from 0.48, 0.39, 0.52, and 0.41 to 0.84, 0.64, 0.78, and 0.72, respectively, and then stabilized after tree number reached 14, 6, 12, and 12 per family for ABLUP, respectively. The GS accuracies of tree height, Pilodyn, velocity, and MOE also stabilized after the tree number reached 18, 6, 10 and 10 per families for GBLUP, respectively. This may indicate that more trees within a family (16–19) are required for a reliable training set in GS for growth trait than for wood quality traits (6–12). 
Accuracy and predictive ability (PA) of genomic selection with different subsets of trees per family based on two statistical methods: 1) ABLUP with randomly selecting subset from one to 20 trees per family as training set (TS); 2) GBLUP with randomly selecting subset from one to 20 trees per family as TS Accuracy and PA of ABLUP and GBLUP were listed in Table 2 for three selection scenarios: Within-site training and selection, cross-site training and selection (e.g. training based on one site while selection for the second site) and joint-site training and selection using full-sib family structure. For all four traits, accuracy of within-site training and selection were always higher than the second scenario of cross-site training and selection. However, the accuracy differences between within-site and cross-site were larger for tree height than for three wood quality traits. For example, the average accuracy of two within-site selections is 0.76 relative to 0.44 for average across-site for tree height from GBLUP. However, for MOE, the average accuracy of two within-site selection is 0.64 relative to 0.59 for average cross-site selection. PA had a similar pattern. The joint-site model accuracies from both ABLUP and GBLUP were the higher than those of within-site and cross-site training and selection. Especially, for instance, tree height accuracy in the joint-site (0.81) was higher than average of within-site (0.76) and average of cross-site (0.44). Similarly, PAs from the joint-site model for all four traits were higher than cross-site training and selection model, but were not always higher than within-site training and selection model. For instance, PAs from ABLUP (0.26) and GBLUP (0.23) models in site 2 were higher than those from the joint-site (0.20 in both models) for tree height. Relative efficiency (RE) of GBLUP to ABLUP was lower than 1 for all the selection scenarios and traits, ranging from 0.80 to 0.95 with an average of 0.88. However, RE per year (assuming halving a breeding cycle time) reached from 1.60 to 1.89 with an average of 1.76 and were not related to any traits and selection scenarios. Compared with the full-sib family structure, GS models built with a half-sib family structure led to a considerable decrease in accuracy and PA (Table 3). For instance, in the half-sib family structure, GS accuracy and PA from GBLUP model decreased from 0.81 and 0.20 to 0.55 and 0.11, respectively for tree height and from 0.69 and 0.36 to 0.50 and 0.26, respectively for MOE. However, both RE and RE per year changed little between half-sib and full-sib structure in GS selection. For example, RE and RE per year increased slightly from 0.89 and 1.78 to 0.98 and 1.97, respectively for velocity from full-sib to half-sib population while RE and REs per year decreased slightly from 0.89 and 1.78 to 0.83 and 1.67, respectively from full-sib to half-sib population for tree height. Table 3 Accuracy, predictive ability (PA), relative efficiency (RE), and relative efficiency per year (RE per year) of genomic selection model based on half-sib families and unrelated families using all markers in a joint-site analysis Compared with half-sib family structure, GS models built with an unrelated family between TS and VS had a considerable decrease in accuracy and PA from GBLUP (Table 3). 
For example, in the unrelated family structure, GS accuracy and PA from GBLUP model decreased from 0.55 and 0.11 to 0.24 and 0.06, respectively for tree height and from 0.50 and 0.26 to 0.19 and 0.10, respectively for MOE. Especially for velocity, GS accuracy and PA from GBLUP model with a marked decrease from 0.62 and 0.32 to 0.09 and 0.02, respectively, was observed. However, it is worth to note that ABLUP models built with unrelated family had zero accuracies between training and validation populations. Impact of the number of SNPs on GS accuracy Accuracy and PA using subsets of markers with the largest positive effects were slightly higher than those using subsets of random markers until the subset of random markers reached 100 K SNPs (Fig. 4). Accuracy and PA using a subset of random markers increased with the increase in the number of markers, until all the markers were included, except that tree height accuracy stabilized with 4 K SNPs. However, accuracy and PA using the subset of markers with the largest positive effects showed different trends. It increased initially, then stabilized and finally decreased until it got to the same level as the random markers selection at the highest number of markers of 100 K SNPs. The use of subsets of markers with the largest positive effects had higher accuracy and PA than random markers until most of the markers were used. For example, PA using a subset of markers with the largest positive effects increased initially from 0.10 to 0.23 with 1 K SNPs and then decreased to 0.19 with 10 K SNPs for tree height. Trait had a similar influence on the number of markers to reach the plateau of the accuracy and PA using a subset of markers with the highest positive effects. The accuracy and PA reached a plateau when the number of markers with the highest positive effects increased to 4 K–6 K for all the traits. Accuracy and predictive ability (PA) of genomic selection with subset SNPs based on 2 scenarios: 1) randomly selecting the SNPs subset (10, 25, 50, 100, 250, 500, 750, 1000, 2000, 4000, 6000, 8000, 10,000, and 100,000 SNPs); 2) selecting the SNPs subset with the largest positive effects Impact of the number of SNPs and relatedness on GS accuracy It required fewer number of markers in the full-sib family structure than the half-sib family structure to reach the same accuracy and PA (Fig. 5). For example, the accuracies observed with 250 markers in the full-sib family structure were similar as using all 100 K markers in the half-sib family structure for tree height, Pilodyn, velocity, and MOE. The same PA with random 750 markers used in the full-sib family structure for all four traits required at least 100 K markers in half-sib family structure. Accuracy and predictive ability (PA) of genomic selection with subset SNPs based on 2 scenarios: 1) randomly selecting subset of SNPs (10, 25, 50, 100, 250, 500, 750, 1000, 2000, 4000, 6000, 8000, 10,000, and 100,000 SNPs) with full-sib family structure; 2) selecting the subset of SNPs with half-sib family structure Heritability estimates Heritability is an essential genetic parameter in selective breeding and its values are dependent on the relative contributions of genetic and environmental variations, and vary among traits [43]. In our study, heritabilities for wood quality traits were higher than those for tree height, which is expected and agrees with previous reports for Norway spruce [1, 27]. Tan et al. 
[4] reported that heritability estimates obtained from GBLUP were higher than those from ABLUP for growth and wood quality traits in Eucalyptus. In contrast, Lenz et al. [13] and Gamal EL-Dien et al. [11] reported that heritability estimates obtained from GBLUP were lower than those from ABLUP for similar growth and wood quality traits in black spruce and interior spruce (Picea glauca [Moench] Voss × Picea engelmannii Parry ex Engelm.). In this study, the heritability estimates for tree height obtained from GBLUP were slightly higher than those from ABLUP, but there is no significant difference from ABLUP considering the estimated standard errors. The heritability estimates for wood quality traits obtained from GBLUP were slightly lower than those from ABLUP. The above two opposite situations indicate that pedigree-based ABLUP model may inflate heritability estimates for tree height and deflate heritability estimates for wood quality traits if estimates from GBLUP reflect true genetic relationships among families and account for Mendelian segregation within families. The impact of heritability on the accuracy seems to be low in this study, in line with the report in Douglas-fir [18] and interior spruce [11]. In our joint-site analyses, the heritabilities of tree height, Pilodyn, velocity, and MOE were low to moderate (0.15, 0.34, 0.37, and 0.36, respectively), but the accuracies were high (0.81, 0.66, 0.74, and 0.83, respectively). Several factors may explain this: (1) the large sample size of 1370 and the relatively small effective population size (Ne = 55) likely negate the effect of low trait heritability on prediction accuracy [18]. Märtens et al. [44] demonstrated that increasing relatedness between training and validation populations leads to high prediction accuracy on yeast; (2) the accuracy is the correlation between EBVs assuming as true breeding values from ABLUP and GEBVs from GBLUP in VS in that heritability may affect PAs of ABLUP and GBLUP, but may have little influence on the correlation; (3) the accuracy estimate only represents the additive genetic effect. We found that PA is more similar to the narrow-sense heritability because PA involved both phenotypic and genetic values. For example, heritability of MOE from GBLUP model (0.36) is the same as from ABLUP model (0.36). In the present study, tree height accuracy (0.81 from GBLUP in Table 2) in full-sib family structure with 1233 individuals in TS was similar as in the deterministic simulation with 50 QTLs and heritability of 0.2 [15]. Wood quality traits were similar to that in the simulation with 100 QTLs and a heritability of 0.4. Different GS methods show similar results As expected, the accuracies of four different genomic statistical methods did not clearly outperform each other. This contrasts with previous evaluation of the RKHS method that was reported to outperform other GS methods for low heritability growth traits [4]. It was usually observed that different genomic statistical methods produce similar results for growth and wood quality traits in other forest trees species [10, 18, 45]. With an exception, Resende et al. [9] compared Ridge Regression-BLUP (RR-BLUP), Bayes A, Cπ, and BLASSO for 17 traits in loblolly pine, and found that Bayes A and Bayes Cπ have higher PA than RR-BLUP and BLASSO for fusiform rust disease-resistance trait. They attribute this to a few genes with large effects that control disease resistance. 
When the number of markers increases, Bayesian methods take longer to converge. Therefore, we also support the proposal that GBLUP is an effective method, providing the best compromise between computational time and prediction efficiency when there are no major gene effects [4, 46].
The effect of training dataset and the number of trees per family on GS accuracy
In this study, the TS/VS ratio varied from 1:1 to 9:1. We found that accuracy improved from the TS/VS ratio of 1:1 to 3:1, but improved only slightly beyond the 3:1 ratio. This differs from other studies [4, 13] showing that increasing the TS/VS ratio beyond 3:1 still increases accuracy. However, our result concurs with other reports when the TS/VS ratio is related to the number of trees per family [13]. In our case, each family under the 1:1 TS/VS ratio has 5 trees, and there is an average of 10.7 trees in each of the 128 families. The TS/VS ratios of 1:1, 3:1, 5:1, 7:1, and 9:1 equate to average numbers of TS trees per family of 5.3, 8.0, 8.9, 9.4, and 9.6, respectively. This may indicate that, beyond an average of 8 trees per family in the TS, there is little further gain in GS efficiency for the full-sib family structure. Based on a resampling technique, Perron et al. [47] reported that the number of trees per family has an important effect on the magnitude and precision of genetic parameter estimates. For tree height in that study, at least four trees per family at each site should be included in a half-sib family in order to estimate heritability more accurately, and 4–8 trees per family per site still improved the accuracy of heritability, whereas a further increase in the number of trees per family contributed little. Wood quality traits usually have higher heritabilities than growth traits and also show less G × E; they may therefore need fewer trees than growth traits to obtain similarly accurate genetic parameter estimates. Such calculations could guide a more accurate estimate of the number of trees per family required for phenotyping and genotyping.
The effect of the number of families on GS accuracy
The number of families used in the cross-validation test was found to be important in this study with the full-sib family structure. We found that PA and accuracy increased greatly for all traits from 10 to 120 families, except the PA for tree height, which stabilized after 30 families in the cross-validation test. Based on a resampling technique, Perron et al. [47] reported that the number of families has a less important effect on the magnitude and precision of genetic parameter estimates in a half-sib family structure. In this study, however, we found that the number of families is also important for estimates of GS accuracy and PA. This may be due to the small effective population size (55 unrelated parents) compared with the study of Perron et al. [47]. One application of GS is in clonal forestry, to select the best clones after selection and mating of a few best parents (5–10). One question is how to build the training equation for such clonal selection: should we use progenies of the selected parents only, or of all parents from the trial? From this study, it seems more efficient to involve progenies from a larger number of parents (families). Model building and selection using 10 families seems to have lower accuracy than using a larger number of families.
It remains to be tested whether this lower accuracy is due to the small family sizes (10–20 trees per family) used in this study, and whether increasing the family size (for example, to 40 trees per family, as used in clonal progeny testing of Norway spruce in the Swedish tree breeding program [2]) would increase GS accuracy within a small group of elite families.
Genotype-by-environment interaction
G × E is usually important for growth traits when seedlings are planted in different environments. We found that tree height in these two northern trials showed strong G × E, indicated by type-B genetic correlations of 0.48 and 0.41 from ABLUP and GBLUP, respectively (Table 1). Such strong G × E resulted in low accuracy and PA when one site was used as the TS to predict breeding values at the other site (the VS). Moderate to strong G × E for growth traits has been reported in several studies in southern and central Sweden [21, 48, 49], but had not been documented in northern Sweden. Chen et al. [21] reported that, within test series in southern and central Sweden, average type-B genetic correlations varied from 0.60 to 0.89 across six test series; those correlations were higher than the ones observed here. Such strong G × E should be accounted for in model fitting in order to improve PA and accuracy. Several more advanced models have been built and tested in crops [50, 51], and in one study in a tree species [52]. For instance, Oakey et al. [50] included marker and marker-by-environment interaction effects in an RR-BLUP model to extend genomic selection to multiple environments. As expected, we found that wood quality traits showed no significant G × E based on their type-B genetic correlations, and negligible change in accuracy and PA. A similar result has been reported in two southern Norway spruce open-pollinated progeny trials [27]. All of this indicates that, for wood quality traits, a genomic model trained at one site can be used to predict GEBVs at another site in the same test series, and that joint-site models can slightly improve accuracy (Table 2).
Effect of different family structures
We found that our genomic accuracy for tree height and Pilodyn using the half-sib family structure was lower than that reported by Lenz et al. [13] in black spruce, even though more SNPs were used in this study (116 K in 20,695 contigs vs 0.49 K from a SNP chip). The difference is likely due to the lower heritabilities (e.g., 0.15 vs 0.42 for tree height in GBLUP) and the larger Ne in this study (55 vs 27 unrelated parents). Accuracy and PA decreased from the full-sib to the half-sib family structure and from the half-sib to the unrelated family structure, indicating that GS models are more efficient in strongly structured populations where relatedness and LD are higher; full-sib families also required fewer markers than half-sib families to obtain similar accuracy (Fig. 5). Similar results were obtained in other studies [12, 13, 45, 53]. However, the relative efficiency between ABLUP and GBLUP (the accuracy ratio) is more or less similar in both full-sib and half-sib populations, which indicates that GS could be used in both half-sib and full-sib populations. The lower estimates of accuracy and PA (Table 3) obtained from GBLUP for the unrelated family structure may be due to low LD between markers and QTLs in the unrelated population, suggesting that GS may not be usable for unrelated individuals with the current exome capture data.
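As a reminder of how the type-B genetic correlations quoted in the G × E discussion above are interpreted, the sketch below uses the common variance-component approximation (additive variance divided by additive plus additive-by-environment variance, which assumes similar additive variance at both sites). The component values are invented for illustration and are not estimates from these trials.

```python
# Illustration only: type-B genetic correlation from variance components,
# using the common approximation r_B = V_A / (V_A + V_AxE).
# The numbers below are made up; they are not estimates from this study.

def type_b_correlation(v_additive, v_add_by_env):
    """Approximate type-B genetic correlation between two environments."""
    return v_additive / (v_additive + v_add_by_env)

# A hypothetical growth trait with strong G x E (low type-B correlation) ...
print(round(type_b_correlation(v_additive=0.20, v_add_by_env=0.22), 2))  # 0.48
# ... versus a hypothetical wood quality trait with weak G x E.
print(round(type_b_correlation(v_additive=0.40, v_add_by_env=0.04), 2))  # 0.91
```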
The effect of the number of SNPs and LD on genomic prediction
To our knowledge, this study has used the largest number of SNPs (116,765 SNPs from 20,695 contigs) for GS in a tree species to date [3, 7, 18]. When GS was applied in the half-sib family structure, accuracy and PA were reduced by about 20%. Moreover, the squared PA in the half-sib and full-sib family structures accounted for less than 50% of the heritability for all traits, which may indicate that even with such a large number of SNPs we have still not captured and explained most of the QTL effects. This may be attributed to the rapid LD decay in spruce shown in Fig. 6 (approximately 84 base pairs, based on 517 unrelated individuals in Baison et al. [54]) and to the fact that only exome regions were used. In humans, the exome constitutes a mere 1% of the whole genome (3 Gb) [55]. For Norway spruce, however, the genome size is ca. 20 Gb and LD is lower than in humans [19]. Norway spruce has a mapped genome size of 3326.3 centimorgans (cM) [33], which is larger than the ~2100 cM of white spruce (Picea glauca (Moench) Voss) [56]. There were about 5.6 SNPs per contig/gene on average, based on the 20,695 contigs/genes from which our SNPs come; this translates to an average genome coverage of roughly 6.2 genotyped contigs per cM. If only the 77,116 independent SNPs are used, this amounts to about 3.7 SNPs per contig. (Fig. 6: within-contig LD decay, estimated from 517 individuals in Baison et al. [54].) Accuracy and PA obtained from the subset of markers with the largest positive effects were slightly higher than those from a randomly selected subset of markers, implying that markers with the largest effects, lying in genomic regions with short-range LD, can track relatedness more effectively than random markers. It also implies that such a subset of markers captures some of the effects carried by part of the short-range LD [13, 45]. Thus, selecting markers in this way could be useful to reduce the number of SNPs genotyped, and hence the cost of GS, in highly structured populations when the genomic locations of markers are known.
Efficiency of genomic selection
In the present study, we observed that the per-year efficiency of GS is greater than that of traditional progeny-test selection if GS is used to halve the generation time. In traditional pedigree-based progeny selection, the generation time for Norway spruce in northern Sweden is at least 25 years, based on clonally replicated progeny trials of selected breeding trees. The clonal progeny-test procedure includes seed sowing and growing to a sufficient size (2 yrs), vegetative propagation of seed plants by rooted cuttings (2 yrs), testing in field trials (15 yrs), and assessment of trials (1 yr). The final stage, completing a crossing scheme to create the next generation, takes about 5 yrs. If we could omit the first three progeny-testing stages (a total of 19 yrs) and complete flowering induction and mating within 15 years of GS selection, the breeding cycle for Norway spruce could be halved. In this study we considered the efficiency of GS only in terms of the timing of the breeding cycle. If the cost of field testing is also considered, the benefit of GS could be even greater whenever the cost of establishing and maintaining 3–4 progeny trials in each breeding population exceeds the cost of genotyping. As expected, the main advantage of genomic selection is the potential to shorten the breeding cycle.
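To make the per-year efficiency argument concrete, here is a minimal sketch of the standard comparison of genetic gain per unit time (accuracy ratio multiplied by the ratio of breeding-cycle lengths, assuming equal selection intensity and genetic variance). The accuracy figures are placeholders chosen only to illustrate the calculation; they are not the estimates reported in this study.

```python
# Illustration only: per-year relative efficiency of GS versus conventional
# pedigree-based selection, using the standard comparison
#   RE_per_year = (accuracy_GS / accuracy_conventional) * (T_conventional / T_GS).
# Accuracy values below are placeholders, not estimates from this study.

def relative_efficiency_per_year(acc_gs, acc_conv, years_conv, years_gs):
    """Ratio of genetic gain per year for GS relative to conventional selection."""
    return (acc_gs / acc_conv) * (years_conv / years_gs)

# Example: GS accuracy ~12% lower than pedigree-based selection, but the
# 25-year breeding cycle halved to 12.5 years.
re = relative_efficiency_per_year(acc_gs=0.62, acc_conv=0.70,
                                  years_conv=25.0, years_gs=12.5)
print(f"~{(re - 1) * 100:.0f}% more gain per year in this example")  # ~77%
```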
Conclusions
We observed that:
Using the G matrix slightly altered heritability estimates relative to the A matrix, with a slight increase for tree height and a decrease for wood quality traits.
ABLUP was about 11–14% more efficient than GBLUP for tree height and the wood quality traits, while the four GS methods (GBLUP, BRR, BLASSO, RKHS) had similar accuracy.
The efficiency of GS increased by 49 to 97% across the four growth and wood quality traits if GS could halve the generation time of a breeding cycle.
GS accuracy improved from TS/VS ratios of 1:1 to 3:1, but improved only marginally beyond a TS/VS ratio of 3:1.
The number of families and the number of trees per family also affected GS efficiency; wood quality traits need fewer trees within a family than tree height for a similar GS efficiency.
GS accuracy decreased from the full-sib to the half-sib family structure and from the half-sib to the unrelated family structure, and the number of markers needs to be increased greatly for a half-sib family structure to reach an efficiency similar to that of the full-sib structure.
GS accuracy increased as the number of markers increased, and nearly reached a plateau once the number of markers reached 4 K–8 K for all traits.
Abbreviations
ABLUP: Pedigree-based best linear unbiased prediction
BLASSO: Bayesian LASSO regression
BRR: Bayesian ridge regression
DP: Read depth
EBV: Estimated breeding value
G × E: Genotype-by-environment interaction
GATK: Genome Analysis Toolkit
GBLUP: Genomic best linear unbiased prediction
GEBV: Genomic breeding values
GQ: Genotype quality
GS: Genomic selection
LD: Linkage disequilibrium
MAF: Minor allele frequency
MFA: Microfibril angle
MOE: Modulus of elasticity
PA: Predictive ability
Pilodyn: Pilodyn penetration
QTL: Quantitative trait locus
RE: Relative efficiency
RKHS: Reproducing Kernel Hilbert Space
TS: Training set
Velocity: Acoustic velocity
VQSR: Variant quality score recalibration
VS: Validation set
References
Hannrup B, Cahalan C, Chantre G, Grabner M, Karlsson B, Le Bayon I, et al. Genetic parameters of growth and wood quality traits in Picea abies. Scand J For Res. 2004;19(1):14–29.
Karlsson B, Rosvall O. Progeny testing and breeding strategies. Proceedings of the Nordic group for tree breeding. Edinburgh; 1993.
Tan B, Grattapaglia D, Wu HX, Ingvarsson PK. Genomic relationships reveal significant dominance effects for growth in hybrid Eucalyptus. Plant Sci. 2018;267:84–93.
Tan B, Grattapaglia D, Martins GS, Ferreira KZ, Sundberg B, Ingvarsson PK. Evaluating the accuracy of genomic prediction of growth and wood traits in two Eucalyptus species and their F1 hybrids. BMC Plant Biol. 2017;17(1):110.
Resende MDV, Resende MFR, Sansaloni CP, Petroli CD, Missiaggia AA, Aguiar AM, et al. Genomic selection for growth and wood quality in Eucalyptus: capturing the missing heritability and accelerating breeding for complex traits in forest trees. New Phytol. 2012;194(1):116–28.
Isik F, Bartholomé J, Farjat A, Chancerel E, Raffin A, Sanchez L, et al. Genomic selection in maritime pine. Plant Sci. 2016;242:108–19.
Bartholomé J, Van Heerwaarden J, Isik F, Boury C, Vidal M, Plomion C, et al. Performance of genomic prediction within and across generations in maritime pine. BMC Genomics. 2016;17(1):604.
Zapata-Valenzuela J, Whetten RW, Neale D, McKeand S, Isik F. Genomic estimated breeding values using genomic relationship matrices in a cloned population of loblolly pine. G3 Genes Genom Genet. 2013;3(5):909–16.
Resende MFR, Muñoz P, Resende MDV, Garrick DJ, Fernando RL, Davis JM, et al. Accuracy of genomic selection methods in a standard data set of loblolly pine (Pinus taeda L.). Genetics. 2012;190(4):1503–10.
Ratcliffe B, El-Dien OG, Klápště J, Porth I, Chen C, Jaquish B, et al. A comparison of genomic selection models across time in interior spruce (Picea engelmannii × glauca) using unordered SNP imputation methods. Heredity. 2015;115(6):547–55.
Gamal El-Dien O, Ratcliffe B, Klápště J, Chen C, Porth I, El-Kassaby YA. Prediction accuracies for growth and wood attributes of interior spruce in space using genotyping-by-sequencing. BMC Genomics. 2015;16(1):1–16.
Beaulieu J, Doerksen TK, Clement S, MacKay J, Bousquet J. Accuracy of genomic selection models in a large population of open-pollinated families in white spruce. Heredity. 2014;113(4):343–52.
Lenz PRN, Beaulieu J, Mansfield SD, Clément S, Desponts M, Bousquet J. Factors affecting the accuracy of genomic selection for growth and wood quality traits in an advanced-breeding population of black spruce (Picea mariana). BMC Genomics. 2017;18(1):335.
Hayes BJ, Bowman PJ, Chamberlain AJ, Goddard ME. Invited review: genomic selection in dairy cattle: progress and challenges. J Dairy Sci. 2009;92(2):433–43.
Grattapaglia D, Resende MDV. Genomic selection in forest tree breeding. Tree Genet Genomes. 2011;7(2):241–55.
Hall D, Hallingbäck HR, Wu HX. Estimation of number and size of QTL effects in forest tree traits. Tree Genet Genomes. 2016;12(6):110.
Resende MFR, Muñoz P, Acosta JJ, Peter GF, Davis JM, Grattapaglia D, et al. Accelerating the domestication of trees using genomic selection: accuracy of prediction models across ages and environments. New Phytol. 2012;193(3):617–24.
Thistlethwaite FR, Ratcliffe B, Klápště J, Porth I, Chen C, Stoehr MU, et al. Genomic prediction accuracies in space and time for height and wood density of Douglas-fir using exome capture as the genotyping platform. BMC Genomics. 2017;18(1):930.
Mackay J, Dean JFD, Plomion C, Peterson DG, Canovas FM, Pavy N, et al. Towards decoding the conifer giga-genome. Plant Mol Biol. 2012;80(6):555–69.
Nystedt B, Street NR, Wetterbom A, Zuccolo A, Lin Y-C, Scofield DG, et al. The Norway spruce genome sequence and conifer genome evolution. Nature. 2013;497:579–84.
Chen Z-Q, Karlsson B, Wu HX. Patterns of additive genotype-by-environment interaction in tree height of Norway spruce in southern and central Sweden. Tree Genet Genomes. 2017;13(1):25.
Cullis BR, Jefferson P, Thompson R, Smith AB. Factor analytic and reduced animal models for the investigation of additive genotype-by-environment interaction in outcrossing plant species with application to a Pinus radiata breeding programme. Theor Appl Genet. 2014;127(10):2193–210.
Wu HX, Matheson AC. Genotype by environment interactions in an Australia-wide radiata pine diallel mating experiment: implications for regionalized breeding. For Sci. 2005;51(1):29–40.
Baltunis BS, Gapare WJ, Wu HX. Genetic parameters and genotype by environment interaction in radiata pine for growth and wood quality traits in Australia. Silvae Genet. 2010;59:113–24.
Gapare WJ, Ivković M, Baltunis BS, Matheson CA, Wu HX. Genetic stability of wood density and diameter in Pinus radiata D. Don plantation estate across Australia. Tree Genet Genomes. 2010;6(1):113–25.
Chen Z-Q, Karlsson B, Mörling T, Olsson L, Mellerowicz EJ, Wu HX, et al. Genetic analysis of fiber dimensions and their correlation with stem diameter and solid-wood properties in Norway spruce. Tree Genet Genomes. 2016;12(6):123.
Chen Z-Q, Karlsson B, Lundqvist S-O, García Gil MR, Olsson L, Wu HX. Estimating solid wood properties using Pilodyn and acoustic velocity on standing trees of Norway spruce. Ann For Sci. 2015;72(4):499–508.
Vidalis A, Scofield DG, Neves LG, Bernhardsson C, García-Gil MR, Ingvarsson P. Design and evaluation of a large sequence-capture probe set and associated SNPs for diploid and haploid samples of Norway spruce (Picea abies). bioRxiv. 2018.
Langmead B, Salzberg SL. Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012;9:357.
Li H, Durbin R. Fast and accurate short read alignment with Burrows–Wheeler transform. Bioinformatics. 2009;25(14):1754–60.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25(16):2078–9.
McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, et al. The genome analysis toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20(9):1297–303.
Bernhardsson C, Vidalis A, Wang X, Scofield DG, Shiffthaler B, Baison J, et al. An ultra-dense haploid genetic map for evaluating the highly fragmented genome assembly of Norway spruce (Picea abies). bioRxiv. 2018.
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MAR, Bender D, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75.
Money D, Gardner K, Migicovsky Z, Schwaninger H, Zhong G-Y, Myles S. LinkImpute: fast and accurate genotype imputation for nonmodel organisms. G3 Genes Genom Genet. 2015;5(11):2383–90.
Bradbury PJ, Zhang Z, Kroon DE, Casstevens TM, Ramdoss Y, Buckler ES. TASSEL: software for association mapping of complex traits in diverse samples. Bioinformatics. 2007;23(19):2633–5.
R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna; 2014.
Butler DG, Cullis BR, Gilmour AR, Gogel BJ. ASReml-R reference manual. The State of Queensland, Department of Primary Industries and Fisheries, Brisbane; 2009.
Pérez P, de Los Campos G. Genome-wide regression & prediction with the BGLR statistical package. Genetics. 2014;198(2):483–95.
Wimmer V, Albrecht T, Auinger H-J, Schoen C-C. Synbreed: a framework for the analysis of genomic prediction data using R. Bioinformatics. 2012;28(15):2086–7.
Hoerl AE, Kennard RW. Ridge regression: biased estimation for nonorthogonal problems. Technometrics. 1970;12(1):55–67.
De Los Campos G, Gianola D, Rosa GJ, Weigel KA, Crossa J. Semi-parametric genomic-enabled prediction of genetic values using reproducing kernel Hilbert spaces methods. Genet Res. 2010;92(4):295–308.
Falconer D, Mackay T. Introduction to quantitative genetics. 4th ed. New York: Longman; 1996.
Märtens K, Hallin J, Warringer J, Liti G, Parts L. Predicting quantitative traits from genome and phenome with near perfect accuracy. Nat Commun. 2016;7:11512.
Beaulieu J, Doerksen TK, MacKay J, Rainville A, Bousquet J. Genomic selection accuracies within and between environments and small breeding groups in white spruce. BMC Genomics. 2014;15(1):1048.
Lorenz AJ, Chao S, Asoro FG, Heffner EL, Hayashi T, Iwata H, et al. Genomic selection in plant breeding: knowledge and prospects. In: Advances in agronomy, vol. 110; 2011. p. 77–123.
Perron M, DeBlois J, Desponts M. Use of resampling to assess optimal subgroup composition for estimating genetic parameters from progeny trials. Tree Genet Genomes. 2013;9(1):129–43.
Kroon J, Ericsson T, Jansson G, Andersson B. Patterns of genetic parameters for height in field genetic tests of Picea abies and Pinus sylvestris in Sweden. Tree Genet Genomes. 2011;7(6):1099–111.
Berlin M, Jansson G, Högberg K-A. Genotype by environment interaction in the southern Swedish breeding population of Picea abies using new climatic indices. Scand J For Res. 2014;30(2):112–21.
Oakey H, Cullis B, Thompson R, Comadran J, Halpin C, Waugh R. Genomic selection in multi-environment crop trials. G3 Genes Genom Genet. 2016;6(5):1313–26.
Pérez-Rodríguez P, Crossa J, Rutkoski J, Poland J, Singh R, Legarra A, et al. Single-step genomic and pedigree genotype × environment interaction models for predicting wheat lines in international environments. Plant Genome. 2017;10(2).
Ventorim Ferrão LF, Gava Ferrão R, Ferrão MAG, Francisco A, Garcia AAF. A mixed model to multiple harvest-location trials applied to genomic prediction in Coffea canephora. Tree Genet Genomes. 2017;13(5):95.
Zapata-Valenzuela J, Isik F, Maltecca C, Wegrzyn J, Neale D, McKeand S, et al. SNP markers trace familial linkages in a cloned population of Pinus taeda—prospects for genomic selection. Tree Genet Genomes. 2012;8(6):1307–18.
Baison J, Vidalis A, Zhou L, Chen Z-Q, Li Z, Sillanpää MJ, et al. Association mapping identified novel candidate loci affecting wood formation in Norway spruce. bioRxiv. 2018.
Ng SB, Turner EH, Robertson PD, Flygare SD, Bigham AW, Lee C, et al. Targeted capture and massively parallel sequencing of 12 human exomes. Nature. 2009;461:272.
Pavy N, Namroud MC, Gagnon F, Isabel N, Bousquet J. The heterogeneous levels of linkage disequilibrium in white spruce genes and comparative analysis with other conifers. Heredity. 2012;108(3):273–84.
Acknowledgements
The computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at UPPMAX and HPC2N. We thank Dr. Junjie Zhang, Tianyi Liu, Ms. Xinyu Chen and Linghua Zhou for help with the DNA extraction and field assistance, and Anders Fries for field work. Financial support was received from Formas (grant number 230–2014-427) and the Swedish Foundation for Strategic Research (SSF, grant number RBP14–0040). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The datasets supporting the conclusions of this article are available upon request.
Author information
Umeå Plant Science Centre, Department of Forest Genetics and Plant Physiology, Swedish University of Agricultural Sciences, SE-90183, Umeå, Sweden: Zhi-Qiang Chen, John Baison, Jin Pan, María Rosario García-Gil & Harry X. Wu
Skogforsk, Ekebo 2250, SE-268 90, Svalöv, Sweden: Bo Karlsson
Skogforsk, Box 3, SE-918 21, Sävar, Sweden: Bengt Andersson & Johan Westin
CSIRO NRCA, Black Mountain Laboratory, Canberra, ACT, 2601, Australia: Harry X. Wu
Author contributions
ZQC designed the sampling strategy, coordinated field sampling, analyzed the data, and drafted the manuscript. BA, BK, and JW participated in the selection of the breeding populations, provided access to field experiments and tree height data, and edited the manuscript. JB, JP, and MRGG participated in the collection of phenotypic data, extraction of the DNA, SNP calling, and editing of the manuscript. HXW conceived and designed the study, and assisted in writing the manuscript. All authors read and approved the final manuscript. Correspondence to Harry X. Wu.
The plant materials analyzed in this study come from common garden experiments (plantations and clonal archives) that were established and are maintained by the Forestry Research Institute of Sweden (Skogforsk) for breeding selections and research purposes. Three tree breeders in Sweden were coauthors of this paper and agreed to the materials being accessed for the study.
Chen, ZQ., Baison, J., Pan, J. et al. Accuracy of genomic selection for growth and wood quality traits in two control-pollinated progeny trials using exome capture as the genotyping platform in Norway spruce. BMC Genomics 19, 946 (2018). https://doi.org/10.1186/s12864-018-5256-y
Keywords: exome capture; Bayesian LASSO
The Poll Bludger: analysis and discussion of elections and opinion polls in Australia
Affairs of state
One finely crafted electoral news item for every state (and territory) that is or might ever conceivably have been part of our great nation. A bone for every dog in the federation kennel:
New South Wales
Gladys Berejiklian has backed a move for the Liberal Party to desist from endorsing or financially supporting candidates in local government elections, reportedly to distance the state government from adverse findings arising from Independent Commission Against Corruption investigations into a number of councils. Many in the party are displeased with the idea, including a source cited by Linda Silmalis of the Daily Telegraph, who predicted "world war three" because many MPs relied on councillors to organise their numbers at preselections.
Victoria
The second biggest story in the politics of Victoria over the past fortnight has been the exposé of the activities of Liberal Party operator Marcus Bastiaan by the Nine newspaper-and-television news complex, a neat counterpoint to its similar revelations involving Labor powerbroker Adem Somyurek in June. The revelations have been embarrassing or worse for federal MPs Michael Sukkar and Kevin Andrews, with the former appearing to have directed the latter's electorate office staff to spend work time on party factional activities. Together with then state party president Michael Kroger, Bastiaan was instrumental in establishing a conservative ascendancy with help from his recruitment of members from Mormon churches and the Indian community. Having installed his ally Nick Demiris as campaign director, Bastiaan had his fingerprints on the party's stridently conservative campaign at the 2018 state election, which yielded the loss of 11 lower house Coalition seats. Religious conservatives led by Karina Okotel, now a federal party vice-president, then split from the Bastiaan network, complaining their numbers had been used to buttress more secular conservatives. The Age's report noted that "in the days leading up to the publication of this investigation, News Corporation mastheads have run stories attacking factional opponents of Mr Bastiaan and Mr Sukkar". Presumably related to this was a report on Okotel's own party activities in The Australian last weekend, which was long on emotive adjectives but short on tangible allegations of wrongdoing, beyond her having formed an alliance with factional moderates after the split.
Queensland
There are now less than two months to go until the October 31 election, which is already awash with Clive Palmer's trademark yellow advertising targeting Labor. Thanks to the state's commendable law requiring that donations be publicly disclosed within seven days (or 24 hours in the last week of an election campaign), as compared with over a year after the election at federal level (where only donations upwards of $14,000 need to be disclosed at all, compared with $1000 in Queensland), we are aware that Palmer's companies have donated more than $80,000 to his United Australia Party. Liberal National Party sources cited by The Guardian say a preference deal has already been struck with Palmer's outfit, although others in the party are said to be "furious" and "concerned" at the prospect of being tarred with Palmer's brush.
Western Australia
I have nothing to relate here, which is worth noting in and of itself, because the near total absence of voting intention polling from the state since Mark McGowan's government came to power in 2017 is without modern historical precedent.
This reflects the demise of the aggregated state polling that Newspoll used to provide on a quarterly basis in the smaller states (bi-monthly in the larger ones), and an apparent lack of interest in voting intention polling on the part of the local monopoly newspaper, which offers only attitudinal polling from local market research outfit Painted Dog Research. The one and only media poll of the term was this one from YouGov Galaxy in the Sunday Times in mid-2018, showing Labor with a lead of 54-46, slightly below the 55.5-44.5 blowout it recorded in 2017. With Newspoll having recorded Mark McGowan's approval rating at 88% in late June, it can be stated with confidence that the gap would be quite a bit wider than that if a poll were conducted now. The West Australian reported in late July that Utting Research, which has conducted much of Labor's internal polling over the years, had Labor leading 66-34, which would not sound too far-fetched to anyone in tune with the public mood at present. The next election is to be held on March 13.
South Australia
I have been delinquent in not covering the publication of the state's draft redistribution a fortnight ago, but Ben Raue at The Tally Room has it covered here and here, complete with easily navigable maps. These are the first boundaries drawn since the commissioners were liberated from the "fairness provision" which directed them to shoot for boundaries that would deliver a majority to the party with the largest two-party vote. This proved easier said than done, with three of Labor's four election wins between 2002 and 2014 being achieved without it. The commissioners used the wriggle room allowed them in the legislation to essentially not even try in 2014, before bending over backwards to tilt the playing field to the Liberals in 2018, who duly won a modest majority from 51.9%. By the Boundaries Commission's own reckoning, there would have been no difference to the outcome of the 2018 election if it had been held under the proposed new boundaries. Nonetheless, the Liberals have weakened in three seats where they are left with new margins of inside 1%: Elder, where their margin is slashed from 4.5% to 0.1%; Newland, down from 2.1% to 0.4%; and Adelaide, down from 1.1% to 0.7%. Their only notable compensation is an increase in their margin in King from 0.8% to 1.5%, and a cut in Labor's margin in Badcoe from 5.6% to 2.0%.
Tasmania
Local pollster EMRS has published its quarterly state voting intention poll, which reflects Newspoll in finding voters to be over the moon with Premier Peter Gutwein, who came to the job just in time for COVID-19 to hit the fan when Will Hodgman retired in January. Over three polls, the Liberal vote has progressed from 43% to 52% to 54%; Labor has gone from 34% to 28% to 24%; and the Greens have gone from 12% to 10% and back again. Gutwein now leads Labor's Rebecca White by 70% to 23% as preferred premier, out from 63-26 last time (and 41-39 to White on Gutwein's debut in March). The poll was conducted by phone from August 18 to 24.
Northern Territory
With the last dregs of counting being conducted from now through Friday, fully four of the 25 seats in the Northern Territory remain in doubt following the election the Saturday before last, with current margins ranging from seven to 18 votes. However, the actual election result is well and truly done and dusted, with Labor having 13 seats in the bag. You can follow the action on my dedicated post, which includes live updating of results.
Australian Capital Territory
Not that I have anything particular to say about it at this point, but the Australian Capital Territory is the next cab off the election rank with polling day on October 17, a fortnight before Queensland.
New Zealand
Do Kiwi nationalists complain of being treated like the seventh state in Australia? Well, they can now, as I have a new Roy Morgan poll to relate ahead of their election which will, like that of the ACT, be held on October 17, with the originally anticipated date of September 19 being pushed back due to its recent COVID-19 flare-up. If this poll is any guide, this may have knocked a coat of paint off Labour without in any way endangering Jacinda Ardern's government. Labour is now at 48%, down from 53.5% last month, with National up two to 28.5%. The Greens are up from 8% to 11.5%, and do notably better out of this poll series than rivals Colmar Brunton and Reid Research, which show them struggling to keep their head above the 5% threshold that guarantees them seats in parliament under the country's mixed-member proportional representation system. New Zealand First remain well below it at 2.5%, albeit that this is up a point on last month, while the free-market liberal ACT New Zealand party is clear of it on 6%, down half a point. The poll was conducted by phone from a sample of 897 "during August".
Author: William Bowe. William Bowe is a Perth-based election analyst and occasional teacher of political science. His blog, The Poll Bludger, has existed in one form or another since 2004, and is one of the most heavily trafficked websites on Australian politics. Posted on Wednesday, September 2, 2020.
1,590 comments on "Affairs of state" (comments page 28 of 32)
KayJay says: Friday, September 4, 2020 at 1:40 pm lizzie @ #1339 Friday, September 4th, 2020 – 1:32 pm Oh god, he's got a Plan for starting talks on a National Design of Hotspots. That's OK then, his plans never come to fruition. Make him stop. I can't believe that anyone takes this prick seriously. Simon Katich says: I can be paid for exuding inane BS? Where do I sign up? Do it for the love, not the money? PB'ers are like the Sisters and do it for themselves. guytaur says: The international headlines are going to be interesting. "Australia starts civil war" is not out of the question given Morrison's statements. Spray says: Fulvio Sammut @ #1347 Friday, September 4th, 2020 – 1:36 pm Because these passengers were allowed to disembark and travel freely to all parts of Australia, spreading the seeds of this disease wherever those who were infected went. And to who knows where else overseas. As I said, the inquiry found that the spread from the disembarking passengers was limited to 62 infections, resulting in one serious illness. No deaths. Potential overseas spread is a different matter, but we were specifically talking about the consequences in Australia. I get that it's an inconvenient truth, but I'm more interested in the truthfulness than the convenience. Has Morrison open the Australian international borders ?
He has stated Australia should not be closed Tricot says: Just heard Morrison have to backtrack on WA………………………………Hot air yesterday in parliament, now "given up on WA" according to jocks on 6PR………….What a weak phoney he is………………………………… I just need somewhere to vent about politics and politicians, where I might get a sympathetic response. Why does this make me a "hack"? Because Quoll says so Lizzie, in one of the most obnoxious posts you're likely to find here. KayJay, I want to know what you really think. Marketing 101 – treat your customers like they are dumber than a hamster. https://www.youtube.com/watch?v=D0uUF3mK7Xk sprocket_ says: Morrison back peddling after calling the Premiers teenagers who won't get in Daddy's car, and go off and do their own thing C@tmomma says: poroti @ #1333 Friday, September 4th, 2020 – 1:26 pm Scrott's marketing gland working already. Rolls out a new brand name. This year, the year of the COVID pandemic and the COVID recession,………. https://www.theguardian.com/australia-news/live/2020/sep/04/coronavirus-australia-latest-updates-borders-national-cabinet-scott-morrison-gladys-berejiklian-health-business-nsw-queensland-police-victoria-hotel-quarantine-live-news And who was in charge? 🙂 doyley says: As I have posted previously the best thing Morrison has done to help labor is exclude Albanese from National Cabinet. In his haste to side line the opposition leader and labor and make it all about himself Morrison has full ownership of the breakdown in national consensus we are now seeing. poroti says: 😆 😆 Shame so many would lap it up but come on down Daddy Scrott having to deal with those bolshie pimply State teenagers. It doesn't matter whether you're running a business or a community organisation or you're a parent, you try and get all the kids in the car! And you try and do – everybody at the same place at the same time. Particularly if they're teenagers, that gets a lot harder and they'll do their own thing every now and again. Looks like final NT results are in ALP 14 CLP 8 Ind 2 Blain to ALP Barkly and Namatjira to CLP Araluen the one and only TA win Antony Green's blog… 1:50pm – I missed updates because the NTEC results feed is still running. After the includion of the final votes, the last four seats were decided as Robyn Lambley has won Araluen by 43 votes, Labor win Blain by 13, CLP win Barkly by 7 and CLP win Namatjira by 22. All those results will have to be confirmed by the formal distribution of preferences later today. New Assembly is ALP 14, CLP 8, TA 1 and IND 2 sprocket_ Sounded like he was making sure anyone who missed the connection had their attention drawn to it. I know you were going to make the obvious comparisons, I would encourage you not to, and to resist that temptation. The Silver Bodgie says: Roy Morgan Unemployment in August up to 1.98 million (13.8%) – largest increases in 'hard border' states of Queensland, WA and Tasmania https://www.roymorgan.com/findings/8512-australian-unemployment-estimates-august-2020-202009030534 Poroti, that's one of Trump's tactics Say something provocative, and then backtrack – it was a joke etc. He knows the provocative piece gets sound grabbed and may go viral. Better still if it is humiliating for the target, and reinforces prejudices. 
Alpha Zero says: BoJo likes to dress up like the Australian who he is hanging around with at all times: https://static.ffx.io/images/$zoom_0.491,$multiply_2.15,$ratio_1.776119,$width_357,$x_77,$y_54/t_crop_custom/w_768/q_86,f_auto/02f69ddf8260ea7e1012626b9ce7879649ff860d/ south says: sproket. If it works then why doesn't labor do it. Also everything think hard about this. Who's the Australian equivalent of AOC????? Who in labor? We don't have one, and that's what labor needs. Someone who just wants to fight all the time. That's not Albo. And i can't think of anyone on the front bench with any passion. Yep the statement and the faux backtrack all written beforehand. Mission accomplished as the message is heard by those toward whom it is aimed. alfred venison says: the conversation (uk) has a very good essay on why johnson hired abbott. -a.v. https://theconversation.com/tony-abbott-why-boris-johnson-would-want-australias-controversial-ex-pm-as-a-trade-envoy-145494 Fulvio Sammut says: What is a teenager to do when the parent is an ignorant, incompetent Yobbo with no capacity to drive safely, no scruples and no moral compass? I believe the Q which made Morrison turn tail and run was one on sportsrorts. Who's the Australian equivalent of AOC It is not AOCs passion that sets her apart. Her forensic questioning, dissection and exposition of systemic BS is what makes her a force. She is calm and careful in what she does. If I were to try to compare her to someone here, I would say Penny Wong is somewhat similar. I reckon the point about AOC is that she is not beholden to anyone or any group or powerful section other than the grass roots people she immediately (and has since subsequently) represents. And that the Democrat party has provision for someone like this to challenge for a candidacy and win an election – that AOC sits alongside DFL reps from Minnesota and conservative representatives from the south in one political party. In a federated nation of extremely diverse states, the only way to win presidential and congressional power is to be like this. For two days, my ABC Newstream has been frequently dropping out, requiring reload. Anyone else suffering like this? Danama Papers says: McGowan lays the boots into ScoMoFo: Today at National Cabinet we discussed border controls at length. It was a productive discussion but I made it clear that Western Australia will not be agreeing to a hot spot model or a hot spot definition which replaces our successful border controls. Western Australia has always avoided setting an arbitrary deadline on borders. A date will be set when our health advice recommends it, but that might be some time away. We went through this before and then Victoria happened. Opening and closing borders just causes more confusion and it isn't a good outcome for the state's economy. The Prime Minister and other states respect and understand our decision given the unique factors for Western Australia and the very positive direction our economy is heading. Unlike the rest of the country, WA is not currently in a recession. So we won't be prematurely reopening our borders. Via The Grauniad Blogg. @CroweDM There is no national cabinet agreement on how to define a "hotspot" but there is a federal proposal: In cities, that definition is an average of 10 locally acquired cases over three days — i.e. more than 30 cases in 3 consecutive days. In rural areas, the definition is an average of 3 locally acquired cases over three days — i.e. more than 9 cases over three consecutive days. 
Fargo61 says: I have just sent the following to the Premier of Queensland… "Dear twice elected Premier Annastacia Palaszczuk, Thank you for having the sense and fortitude to follow Dr Young's advice and for standing up for Queenslanders – providing strong but flexible border security – and for ignoring the typical crap from the Murdoch press and right wing fruit loops." Meanwhile the spineless LNP leader finally got into line on border closures yesterday, still with quibbles – Clive Palmer and Pauline Hanson still yabbering away for open borders. That's were labor needs to get to. If it could manage to have a core platform of an agenda but allow dissent in public on issues and have the fights in public it would take away some of that closed off union shop feel that that ALP has. Amazingly, the LNP allow fuck heads like craig kelly to be in their party. But it's to their benefit. Craig mirrors they views of the crazies in the outter orbit who vote LNP so they don't splinter to the far right. The ALP is too much in the center. …Abbott stands symbolically for a set of values and a political orientation which the Johnson government wishes to endorse and align itself with. In terms of values, Abbott represents a US style of conservatism based on a belief in "family values", patriotism and the flag. But within that broad appellation we can also identify a distinctively neoconservative stance in terms of the assertion of "western" values and the superiority of the European inheritance, including but not limited to the value of colonialism and imperialism, and what international relations scholars term "offensive realism". This is the view that, in a world of competing ideologies, military conflicts are inevitable. In short, Abbott's world view is not at all dissimilar to that of Steve Bannon, the controversial architect of the first phase of Trump's administration. Like Bannon, Abbott is an unapologetic culture warrior. He believes that western societies have lost their way and lost confidence in themselves. He thinks the west needs to refind its mojo and reassert the superiority of its values and way of life, particularly in relation to the Islamic world and China. Boris Johnson is pictured at a meeting of his cabinet. Johnson is looking to establish 'Global Britain' after Brexit. PA All this implies a kind of permanent war against the forces of the left – such as antifa, the left-liberal establishment of universities and the media and the apologists for identity politics, multiculturalism and cosmopolitanism. It also means committing to permanent conflict externally, on the hostile terrain that is global politics. It is a hawkish, unfashionable view of the world with metropolitan elites, but one virulently supported in Australia by its leading newspaper, the Australian, and by the Rupert Murdoch-owned Sky News. The question remains then, what possible use are all these associations to Johnson? He has strived to confect an image of harmless amiability with a "big tent" politics. He has sought to be a lot of different things to a lot of different groups in order to secure the hallowed middle ground of British electoral politics. The answer is surely that "culture war" of a kind articulated quite crudely by Abbott and Trump but also in Europe by the likes of France's Marine Le Pen, the Netherlands' Geert Wilders, Italy's Matteo Salvini and Hungary's Viktor Orban has shown itself to be popular with voters who don't normally vote for the right. 
The theme is a great way to draw in working class and precariously employed people who are looking for stronger "authority" figures to deal with what they perceive to be increasingly lawless societies surrendering themselves to immigrants and the multicultural left. It also serves to insulate a regime from the vagaries of public policy outcomes, of which COVID-19 is the most recent and obvious example. The pandemic is a classic no-win scenario for most governments. Play too lax and one gets blamed for too many deaths. Play it too hard and one suffers the economic consequences of lockdown. A culture war, on the other hand, presents a win-win for conservative regimes across the world looking to maintain power. Ridiculous criteria. Just asking for trouble. citizen says: A polite way of saying Morrison had a tantrum because his playmates wouldn't let him be boss. Scott Morrison has scolded stubborn state premiers after failing to secure consensus on easing border restrictions or allowing the free movement of https://www.canberratimes.com.au/story/6909772/unhappy-pm-scolds-stubborn-state-premiers/?cs=9676&utm_source=website&utm_medium=home&utm_campaign=latestnews McGowan But I'd just remain you all, Western Australia isn't the only place with a border. The Commonwealth Government has a border. The Commonwealth Government has a border to the rest of the world. There's no pressure on them to say, "When that's coming down?" The arguments are exactly the same. There's community spread in the east, therefore, we're not bringing down the border. The Commonwealth has a border with the rest of the world, there is community spread around the world, they're not bringing down the border. And McGowan lays even more of a boot into ScoMoFo: Well, the health advice would take into account the elimination of community spread in the east. That's what would have to occur. And so, if the health advice provides that – says that that has occurred, well then, that allows us to move towards removing the borders. That last paragraph is the knock out punch to SfM from McGowan, the current and future premier of the great state of Western Australia. Damn ya gotta be quick around here. Curse you poroti and your lightning fast fingers. A comparison between NSW and Queensland quarantine. https://www.9news.com.au/national/queensland-hotel-quarantine-coronavirus-covid19-pippa-bradshaw-tragedies/e40ea522-3b00-4f70-a2b9-bd7d63824351 With the borders to be closed for some time yet, it may be time for the Premier and Dr Young to consider the unnecessary harm both psychologically and financially, their system is doing to people, who are going through the worst time of their lives. Anthony Albanese @AlboMP · 14m Today Scott Morrison finally acknowledged that the so-called National Cabinet is not national and it's not a Cabinet. He just wants to turn everything into a marketing opportunity without taking responsibility for anything. mundo says: Danama Papers @ #1382 Friday, September 4th, 2020 – 2:42 pm Now that's more like it. Imagine that sort of take that Scotty pressure from federal Labor. Bloody brilliant Mark. south @ #1377 Friday, September 4th, 2020 – 2:35 pm The ALP doesn't know where it is. Socrates says: After listening to this video for the first time, about letting "nature take its course"" i.e. letting old people die so the economy can be started again faster, I was going to nickname Tony "Soylent Green" Abbott. 
https://www.theguardian.com/australia-news/video/2020/sep/01/hard-questions-needed-on-cost-of-keeping-all-covid-patients-alive-says-tony-abbott-video But after catching up with Morrison's rants I see the current Prime Sociopath probably deserves the same nickname. One thing about Abbott – he doesn't come across as a slob like Johnson does. Maybe BoJo has a bigger role in mind for The Lying Friar? frednk says: I think labor knows exactly where it is. Dickheads to the left trying to undermine, dickheads to the right and dickheads that can't even refer to themselves in the first person. You've done yer dough. You got that right. Quoll says: lizziesays: Spraysays: Ha, pretty low bar there Spray Of the 3 475 000 or so posts on PB that takes the cake eh Are half those 3 million+ posts by one of the Laborite cabal, who know who they are, screaming and making any old crap up about 'teh Greens' about 100 times a day as usual? Some people really take themselvs far too seriously around here. Lizzie's one of the few who seems like a genuine human and Labor supporter and not so infested with blind partisanship and pumping out inane posts 50 times a day about 'teh Greens', like some others do. Don't we all think we know who they are? Some of the Laborite cabal are astoundingly abusive and derogatory even to those here who would express support for Labor but still have some critical analysis or point to share, in what seems mostly genuine interest in seeing the ALP do better. Frankly the PB laborite cabal that dominates seems to drive any genuine supporters who don't follow their factional/party line away. I have been stunned at how purposefully abusive and derogatory some are to their fellow ALP members and supporters here who don't just follow the right faction party line. As I have said previously, some of the true-believers here seem the best anti-ALP advertising going around, fairly regularly. So I often prefer to let them go on and on and not engage. You know the issue of arguing with fools and all that. It's not an ALP blog and Laborites can only be expected to recieve the same attention and consideration as they afford others, same for all parties and people…. there's years and years and literally thousands of threads and comments that anyone can freely waste their life going through and clearly see the inane constantly repeated jibes and idiocies mostly from the same few people, going on for years and years. Though I would say that is not Lizzie. More and more of the same for years and years will surely reap an entirely different outcome, isn't that how it works? did anyone notice ? in case you missed it, mundo resorted to the first person last night when he announced that, for the first time in many years (since 2011?), he won't be making his usual annual donation to the labor party, because they're pusillanimous. -a.v. Quoll I do come here for entertainment as well, although some days (and nights) the wit and wisdom of PB is a bit lacking. Repetition and circular arguments are very boring, but, on the plus side, I've learned a lot from many contributors and have more insight about parliamentary workings and political shenanigans. And other surprising subjects. Assistant Commissioner calling a spade a spade. https://www.theage.com.au/national/victoria/anti-lockdown-protesters-divided-over-planned-melbourne-rally-20200903-p55s99.html Isn't working for the interests of a foreign country treason? And why is Bronwyn Bishop so supportive of Tony Abbott these days? I thought she hated his guts? 
NONREPRESENTABLE RELATION ALGEBRAS FROM GROUPS - ADDENDUM
HAJNAL ANDRÉKA, ISTVÁN NÉMETI, STEVEN GIVANT
Journal: The Review of Symbolic Logic / Volume 12 / Issue 4 / December 2019
Published online by Cambridge University Press: 04 October 2019, p. 892

NONREPRESENTABLE RELATION ALGEBRAS FROM GROUPS
Journal: The Review of Symbolic Logic, First View
Published online by Cambridge University Press: 13 June 2019, pp. 1-21
A series of nonrepresentable relation algebras is constructed from groups. We use them to prove that there are continuum many subvarieties between the variety of representable relation algebras and the variety of coset relation algebras. We present our main construction in terms of polygroupoids.

THE VARIETY OF COSET RELATION ALGEBRAS
STEVEN GIVANT, HAJNAL ANDRÉKA
Journal: The Journal of Symbolic Logic / Volume 83 / Issue 4 / December 2018
Givant [6] generalized the notion of an atomic pair-dense relation algebra from Maddux [13] by defining the notion of a measurable relation algebra, that is to say, a relation algebra in which the identity element is a sum of atoms that can be measured in the sense that the "size" of each such atom can be defined in an intuitive and reasonable way (within the framework of the first-order theory of relation algebras). In Andréka--Givant [2], a large class of examples of such algebras is constructed from systems of groups, coordinated systems of isomorphisms between quotients of the groups, and systems of cosets that are used to "shift" the operation of relative multiplication. In Givant--Andréka [8], it is shown that the class of these full coset relation algebras is adequate to the task of describing all measurable relation algebras in the sense that every atomic and complete measurable relation algebra is isomorphic to a full coset relation algebra. Call an algebra $\mathfrak{A}$ a coset relation algebra if $\mathfrak{A}$ is embeddable into some full coset relation algebra. In the present article, it is shown that the class of coset relation algebras is equationally axiomatizable (that is to say, it is a variety), but that no finite set of sentences suffices to axiomatize the class (that is to say, the class is not finitely axiomatizable).

9 - Relativistic Computation
By Hajnal Andréka, Judit X. Madarász, István Németi, Péter Németi, Gergely Székely
Edited by Michael E. Cuffaro, University of Western Ontario, Samuel C. Fletcher
Book: Physical Perspectives on Computation, Computational Perspectives on Physics
Print publication: 17 May 2018, pp 195-216

ON TARSKI'S AXIOMATIC FOUNDATIONS OF THE CALCULUS OF RELATIONS
HAJNAL ANDRÉKA, STEVEN GIVANT, PETER JIPSEN, ISTVÁN NÉMETI
Journal: The Journal of Symbolic Logic / Volume 82 / Issue 3 / September 2017
It is shown that Tarski's set of ten axioms for the calculus of relations is independent in the sense that no axiom can be derived from the remaining axioms. It is also shown that by modifying one of Tarski's axioms slightly, and in fact by replacing the right-hand distributive law for relative multiplication with its left-hand version, we arrive at an equivalent set of axioms which is redundant in the sense that one of the axioms, namely the second involution law, is derivable from the other axioms. The set of remaining axioms is independent.
Finally, it is shown that if both the left-hand and right-hand distributive laws for relative multiplication are included in the set of axioms, then two of Tarski's other axioms become redundant, namely the second involution law and the distributive law for converse. The set of remaining axioms is independent and equivalent to Tarski's axiom system. Omitting types for finite variable fragments and complete representations of algebras Hajnal Andréka, István Németi, Tarek Sayed Ahmed Journal: The Journal of Symbolic Logic / Volume 73 / Issue 1 / March 2008 Published online by Cambridge University Press: 12 March 2014, pp. 65-89 We give a novel application of algebraic logic to first order logic. A new, flexible construction is presented for representable but not completely representable atomic relation and cylindric algebras of dimension n (for finite n > 2) with the additional property that they are one-generated and the set of all n by n atomic matrices forms a cylindric basis. We use this construction to show that the classical Henkin-Orey omitting types theorem fails for the finite variable fragments of first order logic as long as the number of variables available is > 2 and we have a binary relation symbol in our language. We also prove a stronger result to the effect that there is no finite upper bound for the extra variables needed in the witness formulas. This result further emphasizes the ongoing interplay between algebraic logic and first order logic. Groups and Algebras of Binary Relations Journal: Bulletin of Symbolic Logic / Volume 8 / Issue 1 / March 2002 In 1941, Tarski published an abstract, finitely axiomatized version of the theory of binary relations, called the theory of relation algebras. He asked whether every model of his abstract theory could be represented as a concrete algebra of binary relations. He and Jónsson obtained some initial, positive results for special classes of abstract relation algebras. But Lyndon showed, in 1950, that in general the answer to Tarski's question is negative. Monk proved later that the answer remains negative even if one adjoins finitely many new axioms to Tarski's system. In this paper we describe a far-reaching generalization of the positive results of Jónsson and Tarski, as well as of some later, related results of Maddux. We construct a class of concrete models of Tarski's axioms—called coset relation algebras—that are very close in spirit to algebras of binary relations, but are built using systems of groups and cosets instead of elements of a base set. The models include all algebras of binary relations, and many non-representable relation algebras as well. We prove that every atomic relation algebra satisfying a certain measurability condition—a condition generalizing the conditions imposed by Jónsson and Tarski—is essentially isomorphic to a coset relation algebra. The theorem raises the possibility of providing a positive solution to Tarski's problem by using coset relation algebras instead of the standard algebras of binary relations. Relativised quantification: Some canonical varieties of sequence-set algebras Hajnal Andréka, Robert Goldblatt, István Németi This paper explores algebraic aspects of two modifications of the usual account of first-order quantifiers. Standard first-order quantificational logic is modelled algebraically by cylindric algebras. 
Prime examples of these are algebras whose members are sets of sequences: given a first-order model $\mathfrak{U}$ for a language that is based on the set $\{\upsilon_\kappa : \kappa < \alpha\}$ of variables, each formula $\varphi$ is represented by the set of all those $\alpha$-length sequences $x = \langle x_\kappa : \kappa < \alpha\rangle$ that satisfy $\varphi$ in $\mathfrak{U}$. Such a sequence provides a value-assignment to the variables ($\upsilon_\kappa$ is assigned value $x_\kappa$), but it may also be viewed geometrically as a point in the $\alpha$-dimensional Cartesian space ${}^{\alpha}U$ of all $\alpha$-length sequences whose terms come from the underlying set $U$ of $\mathfrak{U}$. Then existential quantification is represented by the operation of cylindrification. To explain this, define a binary relation $T_\kappa$ on sequences by putting $x\,T_\kappa\,y$ if and only if $x$ and $y$ differ at most at their $\kappa$th coordinate, i.e., $x_\lambda = y_\lambda$ for all $\lambda \neq \kappa$. Then for any set $X \subseteq {}^{\alpha}U$, the set $C_\kappa X = \{y \in {}^{\alpha}U : x\,T_\kappa\,y \text{ for some } x \in X\}$ is the "cylinder" generated by translation of $X$ parallel to the $\kappa$th coordinate axis in ${}^{\alpha}U$. Given the standard semantics for the existential quantifier $\exists\upsilon_\kappa$, under which $x$ satisfies $\exists\upsilon_\kappa\,\varphi$ just when some $y$ with $x\,T_\kappa\,y$ satisfies $\varphi$, it is evident that the set representing $\exists\upsilon_\kappa\,\varphi$ is the cylinder generated by the set representing $\varphi$. Perfect extensions and derived algebras Hajnal Andréka, Steven Givant, István Németi Jónsson and Tarski [1951] introduced the notion of a Boolean algebra with (additive) operators (for short, a Bo). They showed that every Bo $\mathfrak{A}$ can be extended to a complete and atomic Bo satisfying certain additional conditions, and that any two complete, atomic extensions of $\mathfrak{A}$ satisfying these conditions are isomorphic over $\mathfrak{A}$. Henkin [1970] extended these results to Boolean algebras with generalized (i.e., weakly additive) operators. The particular complete, atomic extension of $\mathfrak{A}$ studied by Jónsson and Tarski is called the perfect extension of $\mathfrak{A}$, and is denoted by $\mathfrak{A}^+$. It is very useful in algebraic investigations of classes of algebras that are associated with logics. Interesting examples of Bos abound in algebraic logic, and include relation algebras, cylindric algebras, and polyadic and quasi-polyadic algebras (with or without equality). Moreover, there are several important constructions that, when applied to certain Bos, lead to other, derived Bos. Obvious examples include the formation of subalgebras, homomorphic images, relativizations, and direct products. Other examples include the Boolean algebra of ideal elements of a Bo, the neat $\beta$-reduct of an $\alpha$-dimensional cylindric algebra ($\beta < \alpha$), and the relation algebraic reduct of a cylindric algebra (of dimension at least 3). It is natural to ask about the relationship between the perfect extension of a Bo $\mathfrak{A}$ and the perfect extension of one of its derived algebras $\mathfrak{A}'$: Is the perfect extension of the derived algebra just the derived algebra of the perfect extension? In symbols, is $(\mathfrak{A}')^+ = (\mathfrak{A}^+)'$? For example, is the perfect extension of a subalgebra, homomorphic image, relativization, or direct product, just the corresponding subalgebra, homomorphic image, relativization, or direct product of the perfect extension (up to isomorphisms)? Is the perfect extension of the Boolean algebra of ideal elements, or the neat reduct of a cylindric algebra, or the relation algebraic reduct of a cylindric algebra just the Boolean algebra of ideal elements, or the neat $\beta$-reduct, or the relation algebraic reduct, of the perfect extension? We shall prove a general result in this direction; namely, if the derived algebra is constructed as the range of a relatively multiplicative operator, then the answer to our question is "yes". We shall also give examples to show that in "infinitary" constructions, our question can have a spectacularly negative answer. 
Expressibility of properties of relations Hajnal Andréka, Ivo Düntsch, István Németi We investigate in an algebraic setting the question of which logical languages can express the properties integral, permutational, and rigid for algebras of relations. The lattice of varieties of representable relation algebras Journal: The Journal of Symbolic Logic / Volume 59 / Issue 2 / June 1994 We shall show that certain natural and interesting intervals in the lattice of varieties of representable relation algebras embed the lattice of all subsets of the natural numbers, and therefore must have a very complicated lattice-theoretic structure.
CommonCrawl
Mortality, hospital days and expenditures attributable to ambient air pollution from particulate matter in Israel Gary M. Ginsberg1, Ehud Kaliner1 & Itamar Grotto1 The Commentary to this article has been published in Israel Journal of Health Policy Research 2016 5:63 Worldwide, ambient air pollution accounts for around 3.7 million deaths annually. Measuring the burden of disease is important not just for advocacy but is also a first step towards carrying out a full cost-utility analysis in order to prioritise technological interventions that are available to reduce air pollution (and subsequent morbidity and mortality) from industrial, power generating and vehicular sources. We calculated the average national exposure to particulate matter particles less than 2.5 μm (PM2.5) in diameter by weighting readings from 52 (non-roadside) monitoring stations by the population of the catchment area around the station. The PM2.5 exposure level was then multiplied by the gender and cause specific (Acute Lower Respiratory Infections, Asthma, Circulatory Diseases, Coronary Heart Failure, Chronic Obstructive Pulmonary Disease, Diabetes, Ischemic Heart Disease, Lung Cancer, Low Birth Weight, Respiratory Diseases and Stroke) relative risks and the national age, cause and gender specific mortality (and hospital utilisation which included neuro-degenerative disorders) rates to arrive at the estimated mortality and hospital days attributable to ambient PM2.5 pollution in Israel in 2015. We utilised a WHO spread-sheet model, which was expanded to include relative risks (based on more recent meta-analyses) of sub-sets of other diagnoses in two additional models. Mortality estimates from the three models were 1609, 1908 and 2253 respectively, in addition to 184,000, 348,000 and 542,000 days hospitalisation in general hospitals. Total costs from PM2.5 pollution (including premature burial costs) amounted to $544 million, $1030 million and $1749 million respectively (or 0.18 %, 0.35 % and 0.59 % of GNP). Subject to the caveat that our estimates were based on exposure data from a limited number of non-randomly sited monitoring stations, the mortality, morbidity and monetary burden of disease attributable to air pollution from particulate matter in Israel is of sufficient magnitude to warrant the consideration and prioritisation of technological interventions that are available to reduce air pollution from industrial, power generating and vehicular sources. The accuracy of our burden estimates would be improved if more precise estimates of population exposure were to become available in the future. According to the WHO, air pollution accounted in 2012 for around 7,000,000 deaths worldwide [1], of which 3,700,000 deaths were attributable to ambient air pollution (AAP) as opposed to household air pollution [2]. The major contributor to AAP is ambient particulate matter pollution (APMP), with ambient ozone pollution being a minor contributor [2]. In 2005 and 2010, it was estimated that there were around 565,000 and 500,000 deaths respectively in the WHO European region attributable to APMP, of which 2552 and 2452 deaths respectively occurred in Israel [1]. The WHO mortality calculations were primarily made by multiplying average pollution levels by cause specific relative risks (RR) based on the literature [3–6]. 
An unpublished study commissioned by the Israeli Ministry of Environmental Protection [7], based on aggregation of spatial emission rates from all pollutants, estimated the monetary costs of air pollution from transport, industrial and electricity generation sources, but did not estimate mortality. Measuring the burden of disease from air pollution is important not just for advocacy but is also a first step towards carrying out a full cost-utility analysis in order to prioritise technological interventions that are available to reduce air pollution (and subsequent morbidity and mortality) from industrial, electricity generating and vehicular sources. This paper aims to estimate mortality, serious morbidity (proxied by hospitalization days) and associated expenditures from APMP in Israel. Population-weighted PM2.5 exposure Annual average ambient PM2.5 and/or PM10 exposure data was calculated based on published monthly data for 2015 from 52 non-roadside monitoring stations [8]. Readings from stations that only recorded PM10 were converted to PM2.5 by a monthly specific PM2.5/PM10 ratio based on stations where both measurements were made in the same region, or on national data in the event no regional data existed. Mid-2015 population data by towns, cities and regions (by urban and rural status) were multiplied by the relevant local monitoring station's annual average PM2.5 level and divided by the national exposed population figure of 8,608,500 (which included 236,000 temporary migrants) in order to arrive at the national population weighted average PM2.5 exposure level [9, 10]. Where more than one monitoring station existed in a city, an average PM2.5 value was calculated and applied to that city's population. Separately weighted urban and rural regional average readings for each geographic region were calculated and applied to other urban and rural populations which were not covered by a monitoring station. Relative risks Age group (in five year increments) specific RR, based on the WHO burden of disease calculations from AAP [11], were obtained for ischemic heart disease (IHD) and cerebrovascular disease (stroke) mortality from PM2.5 in adults aged over 25 years. Non-age specific RR were obtained for chronic obstructive pulmonary disease (COPD), lung cancer (LC) as well as for acute lower respiratory infection (ALRI) in children under 5 years of age. We utilised a test version of a spread-sheet for estimating the burden of disease from ambient air pollution that we obtained from the WHO (based on the methods described in http://www.who.int/phe/health_topics/outdoorair/databases/AAP_BoD_methods_March2014.pdf?ua=1 and http://www.who.int/phe/health_topics/outdoorair/databases/en/). Values reported in terms of PM10 were converted to PM2.5 equivalents by multiplying by 0.73 [12]. Sensitivity analyses (Table 1) Table 1 Diagnostic composition of different models (ages 25+ unless otherwise stated) The WHO supplied RR values were only based on literature that was available up to mid-2013. We updated these RR by including recent papers and meta-analyses of incidence, utilization and mortality data and expanded the categories in the test tool model to include type 2 Diabetes in Adults [13], Asthma [14, 15], and Low Birth Weight [LBW] in the under-fives [16] in what we call our MAXI (category) model. 
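As an illustration of the population-weighted exposure calculation described under "Population-weighted PM2.5 exposure" above, the following Python sketch averages station readings weighted by catchment population. The station names, readings, populations and the single PM2.5/PM10 ratio are hypothetical placeholders, not data from the study, which used monthly, region-specific ratios.

```python
# Illustrative sketch of the population-weighted PM2.5 exposure calculation.
# All values below are hypothetical placeholders, not data from the study.
# Station records: (annual mean reading, pollutant measured, catchment population).
stations = {
    "city_A": (24.0, "PM2.5", 900_000),
    "city_B": (55.0, "PM10", 1_200_000),
    "rural_C": (18.5, "PM2.5", 400_000),
}

# The study used monthly, region-specific PM2.5/PM10 ratios; one hypothetical
# ratio stands in for them here.
PM25_PM10_RATIO = 0.6

def to_pm25(reading: float, pollutant: str) -> float:
    """Convert a PM10 reading to a PM2.5 equivalent where necessary."""
    return reading * PM25_PM10_RATIO if pollutant == "PM10" else reading

total_population = sum(pop for _, _, pop in stations.values())
weighted_exposure = sum(
    to_pm25(reading, pollutant) * pop
    for reading, pollutant, pop in stations.values()
) / total_population

print(f"Population-weighted PM2.5 exposure: {weighted_exposure:.1f} ug/m3")
```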
A recent study of 9.8 million subjects in the USA [17] reported that PM2.5 levels were positively related to elevated hospitalization risks for Alzheimer's disease, Parkinson's disease and dementia. The results indicated that long-term changes in PM2.5 accelerated neuro-degeneration, potentially after the disease onset, hence we included the attributable hospitalization days in our MAXI model. However, we did not include estimates of attributable mortality, since the study was unable to assess whether PM2.5 levels caused the onset of neuro-degeneration, for which age is a predominant risk factor [18]. We applied age-specific relative risks for IHD and stroke in proportion to the overall ratio of the RR calculated from the meta-analyses to the overall RR from the WHO model. We noticed that different meta-analyses of the long-term effect (short-term effects were excluded) of pollutants on a specific disease did not always include identical studies. Due to time constraints, in our calculation of updated relative risks, we included every individual study that had been included in meta-analyses, plus any published data since the latest meta-analysis. However, we took care not to include multiple studies based on the same temporal populations, and we preserved a hierarchy of inclusion based primarily on mortality, then hospitalisations, emergency room visits and incidence risks (which we assumed would reflect the proportionality of pollution-related risks). However, we excluded studies based in the Far East (China, South Korea, Japan etc.) as their risks (which were usually higher) were generally based on higher levels of air pollution than those of Israel, North America and Europe [19]. In addition we included a WIDE category model that included the broad areas of all circulatory and all respiratory diseases in addition to lung cancer, diabetes and LBW. Combined RR were calculated by applying weights inversely proportional to the square of the reported standard errors of the estimates of the diseases in the WIDE and MAXI categories. Population Attributable Fraction (PAF) Age, gender and cause specific PAFs for APMP were calculated according to the standard formula $$ \mathrm{PAF}=\frac{\mathrm{RR}-1}{(\mathrm{RR}-1)+1} $$ Attributable mortality and hospital days Age and cause specific mortality and days of hospital utilisation by primary cause of death and hospitalisation for 2009–2013 were obtained from the Ministry of Health's national mortality and hospitalisation databases. These raw data were adjusted upwards by 6.8 % [9] to take into account population growth until mid-2015. Finally we calculated mortality and hospital days attributable to PM2.5 by multiplying the age, gender and cause specific mortality and hospitalization data by the relevant PAF. Potential years of Life Lost (PYLL) Extrapolations of age and gender specific life expectancies to 2015 [10, 11] were multiplied by age-gender and cause specific mortality data in order to calculate the cause specific PYLL attributable to PM2.5. Disability adjusted life years (DALYs) lost Age- and gender-specific disability weights, used by the Ministry of Health, were applied to the life expectancies in order to calculate each individual's additional Healthy Adjusted Life Expectancy (HALE), using a 3 % per annum discount rate. These HALEs were subsequently multiplied by age-gender and cause specific mortality data in order to calculate the cause specific DALYs lost due to mortality. 
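Pulling together the PAF, attributable-mortality and life-year steps just described, the following Python sketch chains them for a single stratum. All numbers are hypothetical illustrative values, not figures from the study, and the exposure scaling of the per-10-μg/m3 RR is our own assumption about how such a scaling could be done.

```python
# Illustrative chain: RR -> PAF -> attributable deaths -> PYLL and discounted DALYs.
# All numbers are hypothetical placeholders, not results from the study.

def scaled_rr(rr_per_10: float, exposure: float) -> float:
    # Assumed log-linear scaling of a per-10-ug/m3 RR to the observed exposure.
    return rr_per_10 ** (exposure / 10.0)

def paf(rr: float) -> float:
    # Population attributable fraction, following the formula quoted above.
    return (rr - 1.0) / ((rr - 1.0) + 1.0)

def discounted_years(years: float, rate: float = 0.03) -> float:
    # Present value of a stream of life years at a 3 % per annum discount rate.
    return sum(1.0 / (1.0 + rate) ** t for t in range(int(round(years))))

rr10 = 1.11              # hypothetical RR per 10 ug/m3 increase in PM2.5
exposure = 21.6          # population-weighted exposure reported in the paper
baseline_deaths = 5000   # hypothetical cause-specific deaths in one stratum
life_expectancy = 15.0   # hypothetical remaining life expectancy at death
disability_weight = 0.9  # hypothetical weight standing in for the HALE adjustment

fraction = paf(scaled_rr(rr10, exposure))
attributable_deaths = baseline_deaths * fraction
pyll = attributable_deaths * life_expectancy
dalys = attributable_deaths * disability_weight * discounted_years(life_expectancy)

print(f"PAF = {fraction:.3f}, deaths = {attributable_deaths:.0f}, "
      f"PYLL = {pyll:.0f}, DALYs = {dalys:.0f}")
```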
Attributable direct costs of ambient PM2.5 pollution In 2015, Israel spent around $18.5 billion on health services [9, 10]. Around 57 % of this was spent on capital costs, medicines, equipment and ambulatory, emergency room and out-patient visits [9, 10]. This figure was in turn multiplied by the percentage of hospital days from APMP for each of our models. The general hospitalisation costs (accounting for a further 19.6 %) were then added, taking into account that per diem costs in the departments that cared for persons with diagnoses affected by PM2.5 were higher than the average hospital per diem cost [$916 vs $869] [20]. We included premature burial costs (based on discounting the $5263 average burial costs over the life years lost) as the only monetary cost (in contrast to "human costs" reflected in lost DALYs) attributable to mortality. In addition, we calculated a statistical value of life loss based on valuing each member of society [regardless of age and gender] according to the national average gross national product (GNP) per capita of $35,222 multiplied by their life expectancy, using a 3 % per annum discount rate. The hospital, health service and premature burial costs were also expressed in terms of their percentages of GNP. However, since the statistical value of life computation is based on "virtual" as opposed to real resource costs, this was not expressed in terms of percentage of GNP. The population weighted average PM2.5 exposure in Israel in 2015 was 21.6 μg/m3. The calculated diagnostic specific RR due to 10 μg/m3 changes in PM2.5 that we used for the non-WHO models are listed along with their diagnoses in Additional file 1: Appendix I. Risks for ALRI (RR = 1.10, 95 % CI 1.06–1.12), Alzheimer's (3.00, 2.40–3.70), Asthma (1.02, 1.01–1.03), Dementia (1.16, 1.10–1.22), Diabetes (1.05, 1.01–1.08), IHD (1.11, 1.08–1.15), Lung Cancer (1.11, 1.05–1.16), Parkinson's (1.88, 1.44–2.40) and Respiratory Diagnoses (1.04, 1.001–1.08) were all significant. COPD (1.03, 0.997–1.07) and LBW (1.06, 0.989–1.12) were marginally non-significant, whilst there was a non-significant elevated risk for Strokes [1.08, 0.93–1.24]. According to the WHO model, 1609 (95 % CI 863–2361) deaths (or 3.6 % of all fatalities) were attributable to ambient PM2.5. Around half were due to IHD and a quarter attributable to strokes (Table 2). Table 2 Mortality attributable to ambient air pollution from PM2.5 (Israel 2015) (WHO model) The WIDE list (containing wide circulatory and respiratory categorisations) estimated 15 % more deaths (1908, 95 % CI 1121–2804, being 4.3 % of all deaths) than the WHO model. Circulatory disorders accounted for 64 % of attributable mortality, with lung cancer and respiratory disorders accounting for 18 % and 14 % respectively (Table 3). Table 3 Mortality attributable to ambient air pollution by pollutant (Israel 2015) (WIDE list) The MAXI list (containing many more, but narrower, disease categories than the wide list) produced an estimate, 40 % higher than the WHO model, of 2253 (95 % CI 632–2904) deaths, being 5.1 % of all deaths. IHD, CHF, lung cancer and stroke accounted for 41 %, 18 %, 16 % and 14 % of all attributable deaths respectively (Table 4). Table 4 Mortality attributable to ambient air pollution from PM2.5 (Israel 2015) (MAXI list–Single Pollutant Models) Table 5 Deaths, hospital utilization and costs from PM2.5 (Israel 2015) Table 5 shows that PM2.5 pollution accounted for between 183,000–591,000 days in general hospitals, costing between $168 million–$592 million, 3.5–11.4 % of all general hospital costs. 
Total health costs from PM2.5 pollution were between $541 million and $1028 million, accounting for between 2.4–4.6 % of health expenditures in Israel. Total costs from PM2.5 pollution (including premature burial costs) amounted to between $544 million–$1749 million or 0.18 %–0.59 % of GNP. Using a statistical value of life based on GNP per capita methodology would add between $584 million–$797 million to the morbidity costs of PM2.5 pollution. In contrast to deaths which are clearly attributable to a given cause (such as automobile accidents, suicides, drowning), deaths due to air pollution and to personal behaviour, such as smoking, nutritional habits and physical exercise, are harder to identify. Despite this difficulty, ambient particulate matter pollution has been implicated as a factor in many causes of death [8]. The range of mortality from our three estimates of between 1609–2253 deaths from PM2.5 alone is between four and five times that of road accident fatalities (although road fatalities have a higher PYLL due to the younger age of deceased persons) and between 10–16 times that of homicides in Israel [10]. Mortality attributable to PM2.5 is however lower than deaths from smoking [21], obesity [22] and sedentariness [23]. Our estimated deaths from PM2.5 are lower than the 2452 estimated by the WHO European region in 2010 [1], partly due to our model taking into account the fact that the southern desert region of the country has higher particulate levels but a far lower population density. Particulate matter data in Israel are strongly impacted by synoptic phenomena such as the occurrence of "dust storms" from surrounding deserts. Our estimates were limited to pollution data from only 2015, when there was a below average incidence of such storms. Hence our overall estimates of mortality, hospitalizations and costs are more likely to be downwardly biased than if they were to have been based on multi-year pollution data. Our estimates were based on the 52 non-roadside monitoring stations, which fall far short of the currently infeasible goal of having monitoring stations in every neighbourhood or street. These stations are not distributed randomly in the urban space, but are located after careful thought, often in places of special interest (e.g. potential hot spots, town halls etc.). Thus, averaging PM concentrations over monitoring stations (for either a city or a region) does not necessarily give a very good estimate of the true population exposure. In addition, there might also be data quality issues that need to be assessed and corrected by air pollution experts. Nevertheless, we consider our estimates to be an acceptable pragmatic compromise for the purpose of an initial estimation of the mortality effects from particulates. We consider our estimation method to be preferable to estimates based on industrial and transport emission volumes, where wind direction and natural pollutant sources such as sand act as confounders. We consider the methodology for exposure assessment used in this paper to be valid and generally acceptable for the purpose of making a national estimate of mortality. However, future localized estimates could be based on improved methodologies utilizing spatial models of particulate matter based on integrating data from monitoring stations, meteorology, traffic and other inputs. 
A major limitation of our estimates is that, due to the lack of such studies in Israel, we employed, as an acceptable compromise, relative risk estimates from studies in countries where the PM2.5 is at a different exposure level. In the event of non-linearity between risk and exposure this would cause biased estimates. However, these biases were lessened by our exclusion of Asian-based studies, which tended to have higher PM2.5 levels. A further source of potential bias is that the sources and hence composition of PM2.5 and subsequent composition-specific relative risks [24, 25] in international studies are different from those in Israel. While relying on meta-analyses of risks might reduce any difference with Israel, an overall bias cannot be ruled out. It should be borne in mind that our estimates only relate to one pollutant, particulate matter. A companion article will estimate the mortality attributable to two other air pollutants (Ozone and Nitrogen Dioxide). Due to large negative and smaller positive correlations with particulate matter levels respectively, a simple addition of all three individual pollutant models will overestimate the total deaths attributable to ambient air pollution. Therefore adjustments will be made to the estimated total deaths by means of combining data from three studies [26–28] that have reported results of multi-pollutant models (i.e., models that adjusted for the other two pollutants). The WHO estimates have a great advantage in that they allow for uniform comparisons with other countries, and in that their relative risk information for IHD and Stroke was age-specific. However, their disadvantage is that their RR were based on information that was available three years ago, in 2013. Our WIDE and MAXI lists incorporated data from studies on Diabetes, which had a significant RR. However, it could be considered contentious that we included categories whose RR were marginally significant (COPD, LBW) or not significant (Strokes), although Strokes were considered significant in the WHO model. The inclusion of LBW did not affect the magnitude of the WIDE estimates, since LBW contributed close to zero attributable deaths. However, the inclusion of COPD and Strokes (in addition to LBW) in the MAXI list added 356 [95 % CI, −370, +860] deaths. The mortality, morbidity (between 3.5 %–11.4 % of general hospital days) and monetary burden (between $544–$1748 million annually) of diseases attributable to air pollution in Israel is of sufficient magnitude to warrant the consideration and prioritisation of technological interventions that are available to reduce air pollution from industrial and vehicular sources. While some interventions will be on a national scale (e.g., limits on vehicle emissions), others might be aimed at local hot spots of high industry or vehicular pollution where a significantly large population is being exposed. Thus further analysis of our data (at pollution station level) will be required to identify and prioritise high risk localities and search for possible supplementary interventions (to national level interventions). The data in this study provide estimates of mortality, DALYs and health costs that can form the basis of any future cost-utility analyses of interventions (with proven efficacy) to reduce the burden of disease from man-made sources of particulate matter pollution. 
Interventions will have the potential not only to reduce mortality (and morbidity) but also to generate reductions in attributable health service costs that account for between 2.4 %–7.8 % of all health expenditures in Israel. In the UK in 2005 [1, 29], road transport accounted for around 40 % of premature deaths from APMP, other transport (20 %), power generation (20 %) and other sectors (20 %). As long as twenty years ago, a considerable number of deaths from particulate matter in Tel-Aviv, Israel, were shown to be attributable to diesel fuels [30]. Ways have been suggested to almost eradicate these emissions and hence their related mortality and morbidity [31] by increasing the use of catalytic converters and moving over to hybrid, electrical and LPG powered vehicles, especially trucks and buses. Large desert areas account for the fact that the Middle East is the region with the highest percentage of PM2.5 pollutants from natural sources [32], being around 52 % compared with 42 % in Japan, 22 % in Africa, 21 % in India, 17 % in China, 10 % in the USA and 5 % in Western Europe. So the potential for decreasing the percentage of particulate mass concentrations (used in this paper) through technological improvements is lower in the Middle East than in other regions (both developed and developing). The effect of surrounding deserts on air pollutant levels in Israel was described almost a decade ago [33]. A natural experimental study on the Day of Atonement from 2000–2008, when nearly all industry and vehicular travel ceases, based on four stations in three cities, reported a reduction in particulate concentrations ranging from 11.4 %–21.7 % [34]. However, a similar study over a longer period (1998–2012) estimated a 74 % contribution by natural sources to PM2.5 pollution [35]. Assuming that 74 % of particulate pollution comes from natural sources in Israel, for every 10 % relative decrease in man-made PM2.5 attained through the implementation of intervention strategies [36], between 42 and 59 lives will be saved each year (in addition to between $14 million and $21 million in resource costs). The considerable mortality and morbidity burden attributable to ambient particulate matter pollution cries out for the establishment of an inter-ministerial plan to identify and implement those intervention strategies that are cost-effective, in order to decrease the considerable burden of mortality and morbidity, in both human and monetary terms, from ambient air pollution in Israel. AAP: Ambient air pollution ALRI: Acute Lower Respiratory tract Infection APMP: Ambient Particulate Matter Pollution COPD: Chronic Obstructive Pulmonary Disease DALY: Disability adjusted life year GNP: Gross National Product HALE: Healthy Adjusted Life Expectancy IHD: Ischemic Heart Disease LBW: Low Birth Weight LC: Lung Cancer PAF: Population Attributable Fraction PM10: Particulate Matter Particles less than 10 micrometers in diameter PM2.5: Particulate Matter Particles less than 2.5 micrometers in diameter PYLL: Potential years of Life Lost RR: Relative risk World Health Organization Regional Office for Europe, OECD. Economic cost of the health impact of air pollution in Europe: Clean air, health and wealth. Copenhagen: WHO Regional Office for Europe; 2015. World Health Organization. Burden of disease from household air pollution for 2012. Summary of results. World Health Organization: Geneva. http://www.who.int/phe/health_topics/outdoorair/databases/FINAL_HAP_AAP_BoD_24March2014.pdf. Accessed 16 Oct 2016. Lopez AD, Rodgers A, Vander Hoorn S, Murray CJ. Comparative Risk Assessment Collaborating Group. 
Selected major risk factors and global and regional burden of disease. Lancet. 2002;360:1347–60. Ezzati M, Hoorn SV, Lopez AD, Danaei G, Rodgers A, Mathers CD, Murray CJL. Comparative Quantification of Mortality and Burden of Disease Attributable to Selected Risk Factors. In: Lopez AD, Mathers CD, Ezzati M, Jamison DT, Murray CJL, editors. Global Burden of Disease and Risk Factors. Washington, D.C.: World Bank; 2006. Chapter 4. Lim SS, Vos T, Flaxman AD, Danaei G, Shibuya K, Adair-Rohani H, et al. A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet. 2012;380:2224–60. doi:10.1016/S0140-6736[12]61766-8. Smith KR, Bruce N, Balakrishnan K, Adair-Rohani H, Balmes J, Chafe Z, et al. Millions dead: how do we know and what does it mean? Methods used in the comparative risk assessment of household air pollution. Annu Rev Public Health. 2014;35:185–206. doi:10.1146/annurev-publhealth-032013-182356. Bakar N, Rosenthal G, Gabai N. Estimate of external costs due to air-pollution from transport and industry in Israel. Department of Environmental and Social Sciences. Tel-Hai Academic College; 2012. In Hebrew. Monthly Environmental Quality Data, Ministry of Environmental Protection. http://www.svivaaqm.net/Default.rtl.aspx. Accessed 16 Oct 2016. Central Bureau of Statistics, Monthly Bulletin of Statistics, November 2015, Jerusalem 2015. http://www.cbs.gov.il/webpub/pub/text_page_eng.html?publ=93&CYear=2015&CMonth=11. Accessed 16 Oct 2016. Central Bureau of Statistics, Statistical Annual of Israel No 66 2015, Table 27.9 Jerusalem 2015. http://www.cbs.gov.il/reader/shnaton/templ_shnaton_e.html?num_tab=st27_09&CYear=2015. Accessed 16 Oct 2016. Burnett RT, Pope A, Ezzati M, Olives C, Lim SS, Mehta S, et al. An integrated risk function for estimating the global burden of disease attributable to ambient fine particulate matter exposure. Environ Health Perspect. 2014;122:397–403. http://dx.doi.org/10.1289/ehp.1307049. Accessed 16 Oct 2016. Ostro B. Outdoor air pollution: assessing the environmental burden of disease at national and local levels. Geneva: World Health Organization; 2004. WHO. Environmental Burden of Disease Series. No. 5. http://www.who.int/quantifying_ehimpacts/publications/ebd5/en/. Accessed 16 Oct 2016. Wang B, Xu D, Jing Z, Liu D, Yan S, Wang Y. Effect of long-term exposure to air pollution on type 2 diabetes mellitus risk: a systemic review and meta-analysis of cohort studies. Eur J Endocrinol. 2014;171:R173–82. Jacquemin B, Siroux V, Sanchez M, Carsin A-e, Shilkowski T, Adam M, et al. Ambient Air Pollution and Adult Asthma Incidence in Six European Cohorts (ESCAPE). Environ Health Perspect. 2015;123:613–21. http://ehp.niehs.nih.gov/1408206/. Accessed 16 Oct 2016. Zheng X-Y, Ding H, Jiang L-N, Chen S-W, Zheng J-P, Qui M, et al. Association between Air Pollutants and Asthma Emergency Room Visits and Hospital Admissions in Time Series Studies: A Systematic Review and Meta-Analysis. PLoS ONE. 10(9):e0138146. doi:10.1371/journal.pone.0138146. Pedersen M, Giorgis-Allemand L, Bernard C, Aguilera I, Andersen AM, Ballester F, et al. Ambient air pollution and low birthweights: a European cohort study [ESCAPE]. The Lancet Respiratory Medicine. 2013;1:695–704. http://www.thelancet.com/pdfs/journals/lanres/PIIS2213-2600(13)70192-9.pdf. Accessed Oct 16 2016. 
Kiomourtzoglou M-A, Schwartz JD, Weisskopf MG, Melly SJ, Wang Y, Dominici F, et al. Long-term PM2.5 Exposure and Neurological Hospital Admissions in the Northeastern United States. Environ Health Perspect. 2016;124:23–9. http://ehp.niehs.nih.gov/wp-content/uploads/124/1/ehp.1408973.alt.pdf. Accessed Oct 16 2016. Brookmeyer R, Evans A, Hebert L, Langa M, Heeringa G, Plassman L, et al. National estimates of the prevalence of Alzheimer's disease in the United States. Alzheimers Dement. 2011;7:61–73. Faustini A, Rapp R, Forastiere F. Nitrogen dioxide and mortality: review and meta-analysis of long-term studies. Eur Respir J. 2014;44:744–53. doi:10.1183/09031936.00114713. Ministry of Health. Ambulatory and Hospitalization Price Data for 2015. http://www.health.gov.il/subjects/finance/taarifon/pages/pricelist.aspx. Accessed 16 Oct 2016. Ginsberg G, Geva H. The burden of smoking in Israel–attributable mortality and costs (2014). Isr J Health Policy Res. 2014;3:28. Ginsberg G, Rosen B, Rosenberg E. Cost-Utility Analyses of Interventions to Prevent and Treat Obesity in Israel. RR-550-10. Brookdale-Smokler Center for health policy research. http://brookdale.jdc.org.il/?CategoryID=192&ArticleID=211. Accessed 16 Oct 2016. Ginsberg G, Rosen B, Rosenberg E. Cost-Utility Analyses of Interventions to Increase Physical Activity in Israeli adults. RR-565-11, Brookdale-Smokler Center for health policy research. http://brookdale.jdc.org.il/?CategoryID=192&ArticleID=257. Accessed 16 Oct 2016. Wang M, Beelen R, Stafoggia M, Raaschou-Nielsen O, Andersen ZJ, Hoffman B, et al. Long-term exposure to elemental constituents of particulate matter and cardiovascular mortality in 19 European cohorts: Results from the ESCAPE and TRANSPHORM projects. Environ Int. 2014;66:97–108. Pedersen M, Gehring U, Beelen R, Wang M, Giorgis-Allmand L, Nybo A-M, et al. Elemental Constituents of Particulate Matter and newborn's Size in Eight European Cohorts. Environ Health Perspect. 2016;124:141–50. Jerrett M, Burnett RT, Beckerman BS, et al. Spatial analysis of air pollution and mortality in California. Am J Respir Crit Care Med. 2013;188:593–9. Crouse DL, Petera PA, Hystad P, Brook JR, van Donkelaar A, Randall V, et al. Ambient PM2.5, O3, and NO2 Exposures and Associations with Mortality over 16 Years of Follow-up in the Canadian Census Health and Environment Cohort [CanCHEC]. Environ Health Perspect. 2015;123:1180–6. Turner MC, Jerrett M, Pope III CA, Krewski D, Gatspur SM, Diver WR, et al. Long-Term Ozone Exposure and Mortality in a Large Prospective Study. In press. Am J Respir Crit Care Med. doi:10.1164/rccm.201508-1633OC. Posted online on 17 Dec 2015. Yim SHL, Barrett SRH. Public health impacts of consumption emissions in the United Kingdom. Environ Sci Technol. 2012;46:4291–6. Ginsberg GM, Seeri A, Fletcher E, Koutik D, Keresente E, Shemer Y. Mortality and morbidity from vehicular emissions in Tel-Aviv. World Transp Policy Prac. 1998;4:27–31. Ginsberg GM, Seeri A, Fletcher E, Tene M, Karsente E, Shemer Y. Mortality reductions as a result of changing to alternative powered vehicles in Tel-Aviv-Jafo. World Transp Policy Prac. 1998;4:4–9. Karagulian F, Belis C, Dor C, Pruss-Ustun A, Bonjour S, Adair-Rohani H, et al. Contributions to cities' ambient particulate matter [PM]: A systematic review of local source contributions at global level. Atmos Environ. 2015;120:475–83. Erel Y, Dayan U, Rabi R, Rudich Y, Stein M. Trans-boundary transport of pollutants by atmospheric mineral dust. 
Environ Sci Technol. 2006;40:2996–3005. Dayan U, Erel Y, Shpund J, Kordova L, Wanger A, Schauer JJ. The impact of local sources and meteorological factors on nitrogen oxide and particulate matter concentrations: A case study of the Day of Atonement in Israel. Atmos Environ. 2011;2011:3325–32. Levy I. A national day with near zero emissions and its effect on primary and secondary pollutants. Atmospheric Environment. 2013;77:202–12. World Health Organization. Reducing Global Health Risks through mitigation of Short-Lived Climate Pollutants. Scoping Report for Policy-makers. Scovronick N, editor. Switzerland; 2015. ISBN: 978 92 4 156508 0. To Dr. Annette Prüss-Ustün and Pierpaulo Mudu of the Department of Public Health and Environmental and Social Determinants, WHO, Geneva, for allowing us to use a test version of their spread-sheet for estimating the burden of disease from ambient air pollution. To Ziona Haklai and Nehama Goldberger of the Health Ministry's Statistical Unit for supplying the raw mortality and hospitalization data. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request. GMG designed the study, collected the data, carried out the data analysis, wrote the initial draft, and read and approved the final manuscript. EK contributed to the interpretation of the data, made critical revisions and wrote, read and approved the final manuscript. IG initiated the study and wrote, read and approved the final manuscript. All the authors are salaried staff of the Ministry of Health and there are no competing interests to declare. As the study is based on published literature and a built spreadsheet, no human subjects were involved; hence there is no need for ethical approval or consent to participate. Israel Ministry of Health, Public Health Services, Yirmiahu Street 39, Jerusalem, 9446724, Israel Gary M. Ginsberg, Ehud Kaliner & Itamar Grotto Correspondence to Gary M. Ginsberg. Appendix I. Studies contained in meta-analyses of RR due to 10 ug/m3 changes in PM2.5. (DOC 192 kb) Ginsberg, G.M., Kaliner, E. & Grotto, I. Mortality, hospital days and expenditures attributable to ambient air pollution from particulate matter in Israel. Isr J Health Policy Res 5, 51 (2016) doi:10.1186/s13584-016-0110-7 Attributable mortality Hospitalisations Health promotion and disease prevention
CommonCrawl
Does Heisenberg's uncertainty principle imply discretization of position and momentum? [closed] Want to improve this question? Update the question so it's on-topic for Physics Stack Exchange. Closed 5 months ago. Measuring the position and momentum of a particle is not simultaneously possible according to Heisenberg's uncertainty principle. Heisenberg's uncertainty principle gives us the uncertainty in measuring the position and momentum of a particle simultaneously. We can think of this as the highest possible resolution in measuring distance as $\Delta{x}$, and the highest possible resolution in measuring momentum as $\Delta{p}$. So momentum and position can be thought of as varying in units of $\Delta{p}$ and $\Delta{x}$ respectively, so they can't take on continuous values. This is my interpretation of Heisenberg's uncertainty principle. My interpretation is that not only can we not measure position and momentum simultaneously, but that momentum and position take on discrete values. For position, it would be $x+\Delta{x}$, $x-\Delta{x}$, $x+2\Delta{x}$, ..., $x + n\Delta{x}$, where $n$ is an integer. This implies discretization. Is my interpretation right? heisenberg-uncertainty-principle discrete Shashank V M $\begingroup$ There is nothing in QM saying that position or momentum can get only discrete values at a distance $\Delta x$. That is your interpretation, but it is based on a misunderstanding. $\endgroup$ – GiorgioP $\begingroup$ No, there is no difficulty in QM with a continuous quantity like $x$. Some difficulties may come in QFT, but they are not directly related to the continuous character of $x$. They are rather due to the way interactions depend on position. $\endgroup$ $\begingroup$ In any case, I would suggest you do not rely too much on Wikipedia. If you want to understand QM, it is much better to start with an introduction to the subject. You may find many, tuned for different backgrounds. $\endgroup$ No, just because a quantity has a minimum value doesn't mean it can't vary continuously. $\Delta x$ is continuously variable, as is $\Delta p$. EDIT TO ADDRESS YOUR ADDITIONAL CONTENT The information you've added to your question is entirely beside the point. Space may well be quantised, but that is not a conclusion that follows from the HUP. Your assertion that if $\Delta x$ has a minimum value then $x$ must be a discrete multiple of that minimum value is a non-sequitur. edited Jul 5 '21 at 8:33 Marco Ocram Definitely not: the uncertainty principle has nothing to do with the possible discreteness of space and time, or, even more unlikely, discreteness of energy-momentum. A way to see this is by looking at the modification of the uncertainty principle in the presence of a minimal length: see this article, equation (1). It turns out that the uncertainty principle becomes: \begin{equation} \Delta x \Delta p\geq\frac{\hbar}{2}\left( 1+\beta (\Delta p)^2\right) \end{equation} with $\beta$ a constant linked to a minimal length for space. Here, space acquires a minimal length scale, but momentum does not. The uncertainty principle is not only a principle but also a theorem in mathematics related to the Fourier transform, and hence has nothing to do with any discreteness of $\mathbb{R}^n$; such discreteness would in fact contradict it. To finish, I will say that at least for position and momentum (the general case is false, as pointed out in the comments), there are two possible forms of this uncertainty principle: one using the standard deviations of the canonical variables used, and one using the commutator. 
The latter is given by: \begin{equation} [\hat{x}_i,\hat{p}_j]\propto i\delta_{ij} \end{equation} This one is better suited for interpretation because it is straightforward: if you first measure the momentum and then the position you get a result $A$; if you do the converse you get a result $B$, and it turns out that $B\neq A$. Thus, it is impossible to measure simultaneously the position and the momentum with arbitrary resolution. Note that under this form, this uncertainty principle can't be interpreted as a discreteness of space and momentum. Jeanbaptiste Roux $\begingroup$ The commutator is not a "form of the uncertainty principle". You can derive the uncertainty relations of two operators from their commutation relations, but you cannot in general derive the commutation relations of two operators from their uncertainty relations. $\endgroup$ – ACuriousMind ♦ $\begingroup$ I was implicitly talking about the uncertainty relation of position and momentum, not the general case. But you are right and I will edit my answer to make it clearer. $\endgroup$ – Jeanbaptiste Roux $\Delta x$, the increment of $x$ (which exists because $x$ is a continuous variable), is also a continuous variable. Your taking discrete multiples of a particular $\Delta x$ is one instance of the possible values of the continuous $\Delta x$; it does not bind the physics to your $\Delta x$. You added: Heisenberg's uncertainty principle necessitating the treatment of fields as quantized, and thus space as discrete. This argument has little to do with Heisenberg's uncertainty principle, HUP. Loop quantum gravity is a proposal for a quantized space, but this does not change the HUP. The HUP is an envelope expression of the commutators of the position and momentum variables, which are operator relations. For the evaluation of the HUP it makes no difference whether the operators act on a discrete or a continuous space. anna v $\begingroup$ Yes, but if $\Delta{x}$ is fixed, then discretization applies, does it not? $\endgroup$ – Shashank V M $\begingroup$ It applies mathematically by your choice, not to the physics variables described by the uncertainty principle, as the continuum is open to them. $\endgroup$ – anna v
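As a worked illustration of the modified relation quoted in the answer above (a standard minimal-length estimate, sketched here under the assumption that $\beta>0$), the inequality only bounds $\Delta x$ from below; it does not put either variable on a lattice:
\begin{equation}
\Delta x \;\geq\; \frac{\hbar}{2}\left(\frac{1}{\Delta p}+\beta\,\Delta p\right) \;\geq\; \hbar\sqrt{\beta},
\end{equation}
where the minimum of the middle expression is attained at $\Delta p = 1/\sqrt{\beta}$. Both $\Delta x$ and $\Delta p$ remain continuous variables above this bound, which is the point made in the answers above.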
CommonCrawl
Computer Science Stack Exchange is a question and answer site for students, researchers and practitioners of computer science. For a binary tree of n nodes, there is a subtree with n/3 to 2n/3 nodes In my notes I have one fact: in a binary tree with $n$ elements ($n$ divisible by three) there is a node $u$ such that the number of nodes in the subtree with root $u$ is at least $\frac{n}{3}$ and at most $\frac{2n}{3}$. Does it work for every binary tree or only when $n$ is divisible by three? How can I quickly prove the fact in my notes? A simple idea? algorithms graphs data-structures binary-trees J. Steph. $\begingroup$ cstheory.stackexchange.com/q/48150/5038, math.stackexchange.com/q/3971772/14578, cs.stackexchange.com/q/133521/755 $\endgroup$ – D.W. ♦ $\DeclareMathOperator\s{size}\def\f#1{\lfloor#1\rfloor}\def\c#1{\lceil#1\rceil}$As already pointed out by gnasher729, the statement is not literally true when $n\equiv1\pmod3$: if $n=3k+1$, there are binary trees of size $n$ whose all subtrees have size either $\le k<n/3$ or $\ge2k+1>2n/3$. A version suitable for all $n$ can be proved as follows. Let $\s(u)$ denote the size of the subtree rooted at $u$. Lemma 1: If $1\le s\le n$, then every binary tree of size $n$ has a node $u$ such that $s\le\s(u)\le2s-1$. Proof: Let $u$ be a node with $\s(u)\ge s$ such that $\s(u)$ is minimal possible. By minimality, the child subtrees of $u$ have size $\le s-1$ each, hence $\s(u)\le2(s-1)+1=2s-1$. QED Balancing $\s(u)$ and its complement, we obtain the following. Corollary 2: In every binary tree of size $n>1$, there is a node $u$ such that $\f{(n+1)/3}\le\s(u)\le\f{(2n-1)/3}$, and there is a node $u$ such that $\c{n/3}\le\s(u)\le\f{(2n+1)/3}$. Proof: Using Lemma 1 with $s=\f{(n+1)/3}$, we have $2s-1\le\f{(2n+2)/3}-1=\f{(2n-1)/3}$. Likewise, $2\c{n/3}-1=2\f{(n+2)/3}-1\le\f{(2n+4)/3}-1=\f{(2n+1)/3}$. QED We can also reformulate it as follows: Corollary 3: Every binary tree of size $n>1$ has a subtree such that both the subtree and its complement have sizes between $\f{(n+1)/3}$ and $\f{(2n+1)/3}$. Proof: Taking $u$ with $\f{(n+1)/3}\le\s(u)\le\f{(2n-1)/3}$ by Corollary 2, the complement of the subtree rooted at $u$ has size between $n-\f{(2n-1)/3}=\c{(n+1)/3}=\f{n/3}+1$ and $n-\f{(n+1)/3}=\c{(2n-1)/3}=\f{(2n+1)/3}$. QED Using Corollary 2, the simple bound $n/3\le\s(u)<2n/3$ works if $n\equiv0,2\pmod3$. It also works for all $n$ if the tree is full: Corollary 4: Every binary tree of size $n>1$ such that all nodes have $0$ or $2$ children has a node $u$ such that $n/3\le\s(u)<2n/3$. Proof: In this case, the size of every subtree (including $n$ itself) is odd. By Corollary 2, there is $u$ with $\f{(n+1)/3}\le\s(u)\le\f{(2n-1)/3}<2n/3$. If $\f{(n+1)/3}\ge n/3$, we are done. The remaining case is that $\f{(n+1)/3}=(n-1)/3$. But here $n-1$ is even, hence $(n-1)/3$ is also even, while $\s(u)$ is odd. Thus $\s(u)\ge1+(n-1)/3>n/3$. QED Emil Jeřábek $\begingroup$ Yes. Even better: every binary tree has a subtree with size in $[\lfloor n/3\rfloor,\lfloor2n/3\rfloor]$. (Or if you prefer, in $[\lceil n/3\rceil,\lceil2n/3\rceil]$.) $\endgroup$ – Emil Jeřábek $\begingroup$ Yes, both are ceilings. $\endgroup$ Let us prove that in a binary tree with $n\gt1$ nodes, there is a node $u$ such that $\frac{n}3\le size(u)\le\frac{2n}3$. Proof: Run the following algorithm.
1. Let $u$ be the root node of the given binary tree.
2. Check the number of nodes in the left subtree and the right subtree of $u$.
2.1. If the number of nodes in the left subtree is between $\frac n3$ and $\frac{2n}3$ inclusive, return the left child node of $u$.
2.2. If the number of nodes in the right subtree is between $\frac n3$ and $\frac{2n}3$ inclusive, return the right child node of $u$.
2.3. If there are more nodes in the left subtree of $u$, assign the left child node of $u$ to $u$. Otherwise, assign the right child node of $u$ to $u$.
3. Go back to step 2.
We claim that whenever the algorithm is entering step 2, $size(u)\gt\frac{2n}3$. The first time when the algorithm is entering step 2, $size(u)=n\gt\frac{2n}3$. For the sake of induction, assume that $size(u)\gt \frac{2n}3$ when the algorithm is entering step 2 at some moment. We continue to run the algorithm. If it returns at step 2.1 or 2.2, there is nothing more to prove. Otherwise, the algorithm will be entering step 2.3. Then the number of nodes in each subtree of $u$ is not in $[\frac{n}3, \frac{2n}3]$. (Otherwise, the algorithm must have returned at step 2.1 or step 2.2.) If there are more nodes in the left subtree of $u$, the number of nodes in the right subtree of $u$ must be $\le\frac{size(u)}2\le \frac{n}2$. So, the number of nodes in the right subtree of $u$ must be $\lt\frac{n}3$. So the number of nodes in the left subtree of $u$ must be $\ge size(u)- \frac{n}3>\frac{n}3.$ That means the number of nodes in the left subtree of $u$ must be $\gt\frac{2n}3$. Otherwise, by similar reasoning, the number of nodes in the right subtree of $u$ must be $\gt\frac{2n}3$. So, after $u$ is updated at step 2.3, $size(u)>\frac{2n}3$. In other words, when the algorithm is re-entering step 2 the next time, $size(u)>\frac{2n}3$. The claim is proved. Note that $size(u)$ decreases by at least 1 whenever $u$ is updated. So $u$ cannot be updated forever. That means the algorithm must stop at some time. The only places it can stop are at step 2.1 or step 2.2, where a node $u$ such that $\frac{n}3\le size(u)\le\frac{2n}3$ is returned. Here are more direct answers to your questions. The conclusion works for every binary tree that has more than one node. It does not work for a binary tree with one node. The simple idea is checking whether one of the child nodes satisfies the condition. If not, then recurse to the subtree that has no fewer nodes than the other subtree. John L. $\begingroup$ If $n$ is divisible by 3, then the conclusion as well as the proof is easier to understand (by setting $n$ to, for example, 3 or 6), simply because $\frac n3$ and $\frac{2n}3$ are integers. $\endgroup$ – John L. $\begingroup$ Yes, $n$ should be $\gt1$. It is a huge typo of mine to write $n\ge1$ in my first statement. I did mention that the conclusion "does not work for a binary tree with one node". $\endgroup$ $\begingroup$ I had been preparing a totally different proof since 5 hours ago. I will update my answer once it is done. $\endgroup$ $\begingroup$ You asked, "how do you conclude from $\frac n2$ to $\frac n3$?" I just wrote before that paragraph, "the number of nodes in each subtree of $u$ is not in $[\frac{n}3, \frac{2n}3]$". So, if that number is $\le n/2$, then it must be $\lt\frac n3$; otherwise, it would be in $[\frac{n}3, \frac{n}2]$, which is contained in $[\frac{n}3, \frac{2n}3]$. $\endgroup$ $\begingroup$ I have to do something else. I will update before tomorrow. $\endgroup$ It's not true for trees with 3k+1 nodes. There is supposed to be a subtree with k+1/3 to 2k+2/3 nodes, that is k+1 to 2k nodes. 
Take a tree starting with only left children until the remaining subtree has 2k+1 nodes, with both of its subtrees having size k. The theorem fails. So assume n = 3k or n = 3k+2. n = 3k: We start with N = root of the tree. As long as N has a subtree with more than 2n/3 nodes, replace N with the root of that subtree. This will end eventually, and at that point the tree with root N has more than 2n/3 nodes, and both subtrees have at most 2n/3 nodes. The tree has at least 2k+1 nodes. If both subtrees had fewer than n/3 nodes, that's at most k-1 nodes each, so the subtrees plus the root node would have at most 2k-1 nodes, less than the 2k+1 required. n = 3k+2: We start with N = root of the tree. As long as N has a subtree with more than 2n/3 nodes, replace N with the root of that subtree. This will end eventually, and at that point the tree with root N has more than 2n/3 nodes, and both subtrees have at most 2n/3 nodes. The tree has at least 2k+2 nodes. If both subtrees had fewer than n/3 nodes, that's at most k nodes each, so the subtrees plus the root node would have at most 2k+1 nodes, less than the 2k+2 required. The two subtrees have at most 2n/3 nodes, and do not both have fewer than n/3 nodes, so one has between n/3 and 2n/3 nodes. gnasher729
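A minimal Python sketch of the descent described in the answers above. The Node class and the naively recomputed sizes are illustrative choices, not from the original posts, and the function assumes that a qualifying node exists (guaranteed by the arguments above for n = 3k and n = 3k+2).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def size(u: Optional[Node]) -> int:
    # Number of nodes in the subtree rooted at u (recomputed naively here;
    # memoising subtree sizes would give an O(n) version).
    return 0 if u is None else 1 + size(u.left) + size(u.right)

def balanced_subtree(root: Node) -> Node:
    # Descend as in the algorithm above: return a child whose subtree size lies
    # in [n/3, 2n/3]; otherwise move into the larger subtree and repeat.
    n = size(root)
    u = root
    while True:
        for child in (u.left, u.right):
            if child is not None and n / 3 <= size(child) <= 2 * n / 3:
                return child
        u = u.left if size(u.left) >= size(u.right) else u.right

# Tiny usage example: a 6-node tree; the returned subtree has 2 to 4 nodes.
tree = Node(Node(Node(Node(), Node()), Node()), None)
print(size(balanced_subtree(tree)))  # prints 3
```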
CommonCrawl
SciPost Physics Vol. 7 issue 3 (September 2019) Anisotropic scaling of the two-dimensional Ising model I: the torus Hendrik Hobrecht, Alfred Hucht SciPost Phys. 7, 026 (2019) · published 2 September 2019 We present detailed calculations for the partition function and the free energy of the finite two-dimensional square lattice Ising model with periodic and antiperiodic boundary conditions, variable aspect ratio, and anisotropic couplings, as well as for the corresponding universal free energy finite-size scaling functions. Therefore, we review the dimer mapping, as well as the interplay between its topology and the different types of boundary conditions. As a central result, we show how both the finite system as well as the scaling form decay into contributions for the bulk, a characteristic finite-size part, and - if present - the surface tension, which emerges due to at least one antiperiodic boundary in the system. For the scaling limit we extend the proper finite-size scaling theory to the anisotropic case and show how this anisotropy can be absorbed into suitable scaling variables. Anomalous dimensions of potential top-partners Diogo Buarque Franzosi, Gabriele Ferretti We discuss anomalous dimensions of top-partner candidates in theories of Partial Compositeness. First, we revisit, confirm and extend the computation by DeGrand and Shamir of anomalous dimensions of fermionic trilinears. We present general results applicable to all matter representations and to composite operators of any allowed spin. We then ask the question of whether it is reasonable to expect some models to have composite operators of sufficiently large anomalous dimension to serve as top-partners. While this question can be answered conclusively only by lattice gauge theory, within perturbation theory we find that such values could well occur for some specific models. In the Appendix we collect a number of practical group theory results for fourth-order invariants of general interest in gauge theories with many irreducible representations of fermions. Exact effective interactions and 1/4-BPS dyons in heterotic CHL orbifolds Guillaume Bossard, Charles Cosnier-Horeau, Boris Pioline Motivated by precision counting of BPS black holes, we analyze six-derivative couplings in the low energy effective action of three-dimensional string vacua with 16 supercharges. Based on perturbative computations up to two-loop, supersymmetry and duality arguments, we conjecture that the exact coefficient of the $\nabla^2(\nabla\phi)^4$ effective interaction is given by a genus-two modular integral of a Siegel theta series for the non-perturbative Narain lattice times a specific meromorphic Siegel modular form. The latter is familiar from the Dijkgraaf-Verlinde-Verlinde (DVV) conjecture on exact degeneracies of 1/4-BPS dyons. We show that this Ansatz reproduces the known perturbative corrections at weak heterotic coupling, including tree-level, one- and two-loop corrections, plus non-perturbative effects of order $e^{-1/g_3^2}$. We also examine the weak coupling expansions in type I and type II string duals and find agreement with known perturbative results, as well as new predictions for higher genus perturbative contributions. 
In the limit where a circle in the internal torus decompactifies, our Ansatz predicts the exact $\nabla^2 F^4$ effective interaction in four-dimensional CHL string vacua, along with infinite series of exponentially suppressed corrections of order $e^{-R}$ from Euclideanized BPS black holes winding around the circle, and further suppressed corrections of order $e^{-R^2}$ from Taub-NUT instantons. We show that instanton corrections from 1/4-BPS black holes are precisely weighted by the BPS index predicted from the DVV formula, including the detailed moduli dependence. We also extract two-instanton corrections from pairs of 1/2-BPS black holes, demonstrating consistency with supersymmetry and wall-crossing, and estimate the size of instanton-anti-instanton contributions. Anomalous phase ordering of a quenched ferromagnetic superfluid Lewis A. Williamson, P. Blair Blakie Coarsening dynamics, the canonical theory of phase ordering following a quench across a symmetry breaking phase transition, is thought to be driven by the annihilation of topological defects. Here we show that this understanding is incomplete. We simulate the dynamics of an isolated spin-1 condensate quenched into the easy-plane ferromagnetic phase and find that the mutual annihilation of spin vortices does not take the system to the equilibrium state. A nonequilibrium background of long wavelength spin waves remains at the Berezinskii-Kosterlitz-Thouless temperature, an order of magnitude hotter than the equilibrium temperature. The coarsening continues through a second much slower scale invariant process with a length scale that grows with time as $t^{1/3}$. This second regime of coarsening is associated with spin wave energy transport from low to high wavevectors, bringing about the eventual equilibrium state. Because the relevant spin waves are noninteracting, the transport occurs through a dynamic coupling to other degrees of freedom of the system. The transport displays features of a spin wave energy cascade, providing a potentially profitable connection with the emerging field of spin wave turbulence. Strongly coupling the system to a reservoir destroys the second regime of coarsening, allowing the system to thermalise following the annihilation of vortices. Gauged sigma models and magnetic Skyrmions Bernd J Schroers SciPost Phys. 7, 030 (2019) · published 10 September 2019 We define a gauged non-linear sigma model for a 2-sphere valued field and a $SU(2)$ connection on an arbitrary Riemann surface whose energy functional reduces to that for critically coupled magnetic skyrmions in the plane, with arbitrary Dzyaloshinskii-Moriya interaction, for a suitably chosen gauge field. We use the interplay of unitary and holomorphic structures to derive a general solution of the first order Bogomol'nyi equation of the model for any given connection. We illustrate this formula with examples, and also point out applications to the study of impurities. The self-consistent quantum-electrostatic problem in strongly non-linear regime Pacome Armagnat, A. Lacerda-Santos, Benoit Rossignol, Christoph Groth, Xavier Waintal The self-consistent quantum-electrostatic (also known as Poisson-Schrödinger) problem is notoriously difficult in situations where the density of states varies rapidly with energy. At low temperatures, these fluctuations make the problem highly non-linear, which renders iterative schemes deeply unstable. We present a stable algorithm that provides a solution to this problem with controlled accuracy.
The technique is intrinsically convergent including in highly non-linear regimes. We illustrate our approach with (i) a calculation of the compressible and incompressible stripes in the integer quantum Hall regime and (ii) a calculation of the differential conductance of a quantum point contact geometry. Our technique provides a viable route for the predictive modeling of the transport properties of quantum nanoelectronics devices. Invariants of winding-numbers and steric obstruction in dynamics of flux lines Olivier Cépas, Peter M. Akhmetiev We classify the sectors of configurations that result from the dynamics of 2d crossing flux lines, which are the simplest degrees of freedom of the 3-coloring lattice model. We show that the dynamical obstruction is the consequence of two effects: (i) conservation laws described by a set of invariants that are polynomials of the winding numbers of the loop configuration, (ii) steric obstruction that prevents paths between configurations, for lack of free space. We argue that the invariants fully classify the configurations in five, chiral and achiral, sectors and no further obstruction in the limit of low-winding numbers. The equilibrium landscape of the Heisenberg spin chain Enej Ilievski, Eoin Quinn We characterise the equilibrium landscape, the entire manifold of local equilibrium states, of an interacting integrable quantum model. Focusing on the isotropic Heisenberg spin chain, we describe in full generality two complementary frameworks for addressing equilibrium ensembles: the functional integral Thermodynamic Bethe Ansatz approach, and the lattice regularisation transfer matrix approach. We demonstrate the equivalence between the two, and in doing so clarify several subtle features of generic equilibrium states. In particular we explain the breakdown of the canonical Y-system, which reflects a hidden structure in the parametrisation of equilibrium ensembles. Event generation with Sherpa 2.2 Enrico Bothmann, Gurpreet Singh Chahal, Stefan Höche, Johannes Krause, Frank Krauss, Silvan Kuttimalai, Sebastian Liebschner, Davide Napoletano, Marek Schönherr, Holger Schulz, Steffen Schumann, Frank Siegert Sherpa is a general-purpose Monte Carlo event generator for the simulation of particle collisions in high-energy collider experiments. We summarize essential features and improvements of the Sherpa 2.2 release series, which is heavily used for event generation in the analysis and interpretation of LHC Run 1 and Run 2 data. We highlight a decade of developments towards ever higher precision in the simulation of particle-collision events. Curvature induced magnonic crystal in nanowires Anastasiia Korniienko, Volodymyr P. Kravchuk, Oleksandr V. Pylypovskyi, Denis D. Sheka, Jeroen van den Brink, Yuri Gaididei A new type of magnonic crystals, curvature induced ones, is realized in ferromagnetic nanowires with periodically deformed shape. A magnon band structure of such crystal is fully determined by its curvature: the developed theory is well confirmed by simulations. An application to nanoscale spintronic devices with the geometrically tunable parameters is proposed, namely, to filter elements. 
Reports of my demise are greatly exaggerated: $N$-subjettiness taggers take on jet images Liam Moore, Karl Nordström, Sreedevi Varma, Malcolm Fairbairn We compare the performance of a convolutional neural network (CNN) trained on jet images with dense neural networks (DNNs) trained on n-subjettiness variables to study the distinguishing power of these two separate techniques applied to top quark decays. We find that they perform almost identically and are highly correlated once jet mass information is included, which suggests they are accessing the same underlying information which can be intuitively understood as being contained in 4-, 5-, 6-, and 8-body kinematic phase spaces depending on the sample. This suggests both of these methods are highly useful for heavy object tagging and provides a tentative answer to the question of what the image network is actually learning. Twisted and untwisted negativity spectrum of free fermions Hassan Shapourian, Paola Ruggiero, Shinsei Ryu, Pasquale Calabrese A basic diagnostic of entanglement in mixed quantum states is known as the positive partial transpose (PT) criterion. Such criterion is based on the observation that the spectrum of the partially transposed density matrix of an entangled state contains negative eigenvalues, in turn, used to define an entanglement measure called the logarithmic negativity. Despite the great success of logarithmic negativity in characterizing bosonic many-body systems, generalizing the operation of PT to fermionic systems remained a technical challenge until recently when a more natural definition of PT for fermions that accounts for the Fermi statistics has been put forward. In this paper, we study the many-body spectrum of the reduced density matrix of two adjacent intervals for one-dimensional free fermions after applying the fermionic PT. We show that in general there is a freedom in the definition of such operation which leads to two different definitions of PT: the resulting density matrix is Hermitian in one case, while it becomes pseudo-Hermitian in the other case. Using the path-integral formalism, we analytically compute the leading order term of the moments in both cases and derive the distribution of the corresponding eigenvalues over the complex plane. We further verify our analytical findings by checking them against numerical lattice calculations. Equilibration towards generalized Gibbs ensembles in non-interacting theories Marek Gluza, Jens Eisert, Terry Farrelly Even after almost a century, the foundations of quantum statistical mechanics are still not completely understood. In this work, we provide a precise account on these foundations for a class of systems of paradigmatic importance that appear frequently as mean-field models in condensed matter physics, namely non-interacting lattice models of fermions (with straightforward extension to bosons). We demonstrate that already the translation invariance of the Hamiltonian governing the dynamics and a finite correlation length of the possibly non-Gaussian initial state provide sufficient structure to make mathematically precise statements about the equilibration of the system towards a generalized Gibbs ensemble, even for highly non-translation invariant initial states far from ground states of non-interacting models. Whenever these are given, the system will equilibrate rapidly according to a power-law in time as long as there are no long-wavelength dislocations in the initial second moments that would render the system resilient to relaxation. 
Our proof technique is rooted in the machinery of Kusmin-Landau bounds. Subsequently, we numerically illustrate our analytical findings by discussing quench scenarios with an initial state corresponding to an Anderson insulator observing power-law equilibration. We discuss the implications of the results for the understanding of current quantum simulators, both in how one can understand the behaviour of equilibration in time, as well as concerning perspectives for realizing distinct instances of generalized Gibbs ensembles in optical lattice-based architectures. Supercurrent-induced Majorana bound states in a planar geometry André Melo, Sebastian Rubbert, Anton R. Akhmerov We propose a new setup for creating Majorana bound states in a two-dimensional electron gas Josephson junction. Our proposal relies exclusively on a supercurrent parallel to the junction as a mechanism of breaking time-reversal symmetry. We show that combined with spin-orbit coupling, supercurrents induce a Zeeman-like spin splitting. Further, we identify a new conserved quantity---charge-momentum parity---that prevents the opening of the topological gap by the supercurrent in a straight Josephson junction. We propose breaking this conservation law by adding a third superconductor, introducing a periodic potential, or making the junction zigzag-shaped. By comparing the topological phase diagrams and practical limitations of these systems we identify the zigzag-shaped junction as the most promising option. Logarithmic correlation functions for critical dense polymers on the cylinder Alexi Morin-Duchesne, Jesper Lykke Jacobsen We compute lattice correlation functions for the model of critical dense polymers on a semi-infinite cylinder of perimeter $n$. In the lattice loop model, contractible loops have a vanishing fugacity whereas non-contractible loops have a fugacity $\alpha \in (0,\infty)$. These correlators are defined as ratios $Z(x)/Z_0$ of partition functions, where $Z_0$ is a reference partition function wherein only simple half-arcs are attached to the boundary of the cylinder. For $Z(x)$, the boundary of the cylinder is also decorated with simple half-arcs, but it also has two special positions $1$ and $x$ where the boundary condition is different. We investigate two such kinds of boundary conditions: (i) there is a single node at each of these points where a long arc is attached, and (ii) there are pairs of adjacent nodes at these points where two long arcs are attached. We find explicit expressions for these correlators for finite $n$ using the representation of the enlarged periodic Temperley-Lieb algebra in the XX spin chain. The resulting asymptotics as $n\to \infty$ are expressed as simple integrals that depend on the scaling parameter $\tau = \frac {x-1} n \in (0,1)$. For small $\tau$, the leading behaviours are proportional to $\tau^{1/4}$, $\tau^{1/4}\log \tau$, $\log \tau$ and $\log^2 \tau$. We interpret the lattice results in terms of ratios of conformal correlation functions. We assume that the corresponding boundary changing fields are highest weight states in irreducible, Kac or staggered Virasoro modules, with central charge $c=-2$ and conformal dimensions $\Delta = -\frac18$ or $\Delta = 0$. With these assumptions, we obtain differential equations of order two and three satisfied by the conformal correlation functions, solve these equations in terms of hypergeometric functions, and find a perfect agreement with the lattice results. 
We use the lattice results to compute structure constants and ratios thereof which appear in the operator product expansions of the boundary condition changing fields. The fusion of these fields is found to be non-abelian.
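The conformal dimensions quoted in this abstract, $\Delta = -\frac{1}{8}$ and $\Delta = 0$ at $c = -2$, are the standard Kac weights for that central charge. The paper is not being paraphrased here; the following is just a textbook check with the Kac formula, in a common parametrization:

\[
c = 1 - 6\left(b - b^{-1}\right)^2, \qquad
h_{r,s} = \frac{\left(r b - s b^{-1}\right)^2 - \left(b - b^{-1}\right)^2}{4}.
\]
\[
c = -2 \;\Rightarrow\; \left(b - b^{-1}\right)^2 = \tfrac12,\ \text{e.g. } b = \sqrt{2}, \qquad
h_{1,1} = 0, \qquad
h_{1,2} = \frac{\left(\sqrt{2} - \tfrac{2}{\sqrt{2}}\right)^2 - \tfrac12}{4} = -\frac18 .
\]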
CommonCrawl
"Human Knot" solvability probability Somewhat surprisingly, I don't see a question about this. There is a team-building (or just fun mathematical) game where a group of people hold hands with each other, usually trying not to hold hands with someone right next to you. The goal is then to "untangle the human knot" thus formed. Folk wisdom says this can always be "solved", in the sense of twisting and moving to demonstrate the knot formed is just one unknot, but this isn't so, since you can form all sorts of knots. Probably forming a simple non-trivial link of circles would be easiest to demonstrate this. But I am intrigued by a lack of easy-to-find references on the probability of such a configuration being the unknot. There is this MathOverflow question, which however has devolved into whether any link can be formed, which is NOT what I am asking. See also this Reddit thread and this Quora thread. In any event, not only do I feel like probably there is a known answer, it is also not possible to search on this site for questions on MO, so hopefully it is appropriate to ask on MSE this question: Given any reasonable set of definitions of this game and reasonable probability distribution given your definitions, what is the probability that such a link is the unknot, as a function of $n$ players? Presumably this will vary by some assumptions on arm length or the exact rules (can you grasp your neighbor's hand, how are people arranged), so there could be multiple answers. I suppose it's likely the parity of $n$ will be involved as well. As a hint, there is a comment to the MO question suggesting some possible references in somewhat difficult-to-access resources - but I don't care about references per se, I would like answers that are publicly available on a user-friendly and well-indexed site ... such as this one! Update: A review of one of the articles linked to on MO has some useful information about how many loops one can find in the link, though apparently not whether said loops are knotted or (k)not. recreational-mathematics puzzle knot-theory kcrisman kcrismankcrisman $\begingroup$ Do you look particularly for unknot, or can we have several unlinked unknots as well? Do we care whether players arms are twisted, or whether they (ultimately) are all facing the same way (inwards or outwards)? Or are these all included in the freedom granted by "any reasonable set of definitions"? $\endgroup$ – Arthur Apr 6 at 10:55 $\begingroup$ I suppose it's the freedom, though ordinarily the unknot (no links) is what people desire. All in/out shouldn't matter, as it doesn't in the team-building game inspiring the question. That would indeed be improbable, anecdotally! $\endgroup$ – kcrisman Apr 11 at 19:17 In a somewhat related MathOverflow question, where a closed loop is chosen at random as a polygonal path whose vertices lie on a sphere, there are some thoughts that the average crossing number of such a knot is something like $n^{3/2}$, where $n$ is the number of vertices. This would mean nontrivial knots are reasonably likely as the number of people grows, especially since the maximum crossing number of any such knot is bounded above by $n^2$. Even-Zohar has a paper on models for random knots. The random jump model is sort of like the human knot, but people are allowed to be placed anywhere in a unit sphere. Numerical experiments suggest the probability of encountering an unknot vanishes faster than $\exp(−O(n))$. 
In that paper, there is a description of a model that is much closer to the human knot game: random grid diagrams. If I understand it correctly, the difference is that the order in which people hold hands matters. Figure 15 has a graph showing the sampled distributions for the Casson invariant $c_2$ (the order-$2$ Vassiliev invariant, the second coefficient of the Alexander-Conway polynomial) of random knots from different models (including the grid model). The value of $c_2$ for the unknot is $0$. If I'm reading the graph correctly, with about eighty people the probability of getting an unknot happens no more than $55\%$ of the time. The actual probability is less since other knots also have $c_2=0$. The paper cites what appears to be a 2007 PhD thesis by Gilad Cohen in which the human knot game is numerically analyzed. However, I cannot find a copy or a reference to it anywhere. In one experimental analysis of the grid diagram model, they find the knotting probability approaches $1$ as $n$ increases. As an example, it looks like the human knot game (conditioned on people always forming a single closed loop) for ten people has nearly a $20\%$ chance of being unable to be detangled, though I can't say for certain since I don't know how well this random model actually maps to the human knot game. Anyway, the short of it is that it appears the answer is unknown right now, but there is numerical evidence to support the conjecture that, as the number of people playing the game increases, winning becomes arbitrarily improbable. answered Apr 6 at 19:28 Kyle MillerKyle Miller $\begingroup$ A little sleuthing uncovers that "Gilad Cohen, research high-school student, Weizmann." So probably this reference was something the author discovered in personal communication with the student's advisor. $\endgroup$ – kcrisman Apr 11 at 19:36 Not the answer you're looking for? Browse other questions tagged recreational-mathematics puzzle knot-theory or ask your own question. Tying knot theory with traveling salesman problem (TSP) Crossing number and Torus links Framed Cobordism Classes of links in $\mathbb R^3$ Knots as boundaries Finding the Jones polynomial of the $(2,q)$ torus knot Is unknot a composite knot? Understanding a Bill Thurston popularization of knot complements Have similar theories like knot theory been developed in higher dimensions? The projection of a turning knot Nomenclature for composite knots with hierarchies
CommonCrawl
Conformal geometry

In mathematics, conformal geometry is the study of the set of angle-preserving (conformal) transformations on a space. In two real dimensions, conformal geometry is precisely the geometry of Riemann surfaces. In more than two dimensions, conformal geometry may refer either to the study of conformal transformations of "flat" spaces (such as Euclidean spaces or spheres), or, more commonly, to the study of conformal manifolds which are Riemannian or pseudo-Riemannian manifolds with a class of metrics defined up to scale. Study of the flat structures is sometimes termed Möbius geometry, and is a type of Klein geometry.

Conformal manifolds

A conformal manifold is a differentiable manifold equipped with an equivalence class of (pseudo-)Riemannian metric tensors, in which two metrics g and h are equivalent (see also: Conformal equivalence) if and only if $h = \lambda^2 g$, where λ is a smooth real-valued function defined on the manifold. An equivalence class of such metrics is known as a conformal metric or conformal class. Thus a conformal metric may be regarded as a metric that is only defined "up to scale". Often conformal metrics are treated by selecting a metric in the conformal class, and applying only "conformally invariant" constructions to the chosen metric.

A conformal metric is conformally flat if there is a metric representing it that is flat, in the usual sense that the Riemann tensor vanishes. It may only be possible to find a metric in the conformal class that is flat in an open neighborhood of each point. When it is necessary to distinguish these cases, the latter is called locally conformally flat, although often in the literature no distinction is maintained. The n-sphere is a locally conformally flat manifold that is not globally conformally flat in this sense, whereas a Euclidean space, a torus, or any conformal manifold that is covered by an open subset of Euclidean space is (globally) conformally flat in this sense. A locally conformally flat manifold is locally conformal to a Möbius geometry, meaning that there exists an angle preserving local diffeomorphism from the manifold into a Möbius geometry. In two dimensions, every conformal metric is locally conformally flat. In dimension n > 3 a conformal metric is locally conformally flat if and only if its Weyl tensor vanishes; in dimension n = 3, if and only if the Cotton tensor vanishes.

Conformal geometry has a number of features which distinguish it from (pseudo-)Riemannian geometry. The first is that although in (pseudo-)Riemannian geometry one has a well-defined metric at each point, in conformal geometry one only has a class of metrics. Thus the length of a tangent vector cannot be defined, but the angle between two vectors still can. Another feature is that there is no Levi-Civita connection because if g and $\lambda^2 g$ are two representatives of the conformal structure, then the Christoffel symbols of g and $\lambda^2 g$ would not agree. Those associated with $\lambda^2 g$ would involve derivatives of the function λ whereas those associated with g would not. Despite these differences, conformal geometry is still tractable.
The Levi-Civita connection and curvature tensor, although only being defined once a particular representative of the conformal structure has been singled out, do satisfy certain transformation laws involving the λ and its derivatives when a different representative is chosen. In particular, (in dimension higher than 3) the Weyl tensor turns out not to depend on λ, and so it is a conformal invariant. Moreover, even though there is no Levi-Civita connection on a conformal manifold, one can instead work with a conformal connection, which can be handled either as a type of Cartan connection modelled on the associated Möbius geometry, or as a Weyl connection. This allows one to define conformal curvature and other invariants of the conformal structure.

Möbius geometry

Möbius geometry is the study of "Euclidean space with a point added at infinity", or a "Minkowski (or pseudo-Euclidean) space with a null cone added at infinity". That is, the setting is a compactification of a familiar space; the geometry is concerned with the implications of preserving angles.

At an abstract level, the Euclidean and pseudo-Euclidean spaces can be handled in much the same way, except in the case of dimension two. The compactified two-dimensional Minkowski plane exhibits extensive conformal symmetry. Formally, its group of conformal transformations is infinite-dimensional. By contrast, the group of conformal transformations of the compactified Euclidean plane is only 6-dimensional.

Two dimensions

Minkowski space

The conformal group for the Minkowski quadratic form q(x, y) = 2xy in the plane is the abelian Lie group
$CSO(1,1) = \left\{\left.\begin{pmatrix}e^{a}&0\\0&e^{b}\end{pmatrix}\right|\, a,b\in\mathbb{R}\right\}$
with Lie algebra cso(1, 1) consisting of all real diagonal 2 × 2 matrices.

Consider now the Minkowski plane: $\mathbb{R}^2$ equipped with the metric $g = 2\,dx\,dy$. A 1-parameter group of conformal transformations gives rise to a vector field X with the property that the Lie derivative of g along X is proportional to g. Symbolically, $\mathbf{L}_X\, g = \lambda g$ for some λ. In particular, using the above description of the Lie algebra cso(1, 1), this implies that

1. $\mathbf{L}_X\, dx = a(x)\, dx$
2. $\mathbf{L}_X\, dy = b(y)\, dy$

for some real-valued functions a and b depending, respectively, on x and y. Conversely, given any such pair of real-valued functions, there exists a vector field X satisfying 1. and 2. Hence the Lie algebra of infinitesimal symmetries of the conformal structure is infinite-dimensional.

The conformal compactification of the Minkowski plane is a Cartesian product of two circles $S^1 \times S^1$. On the universal cover, there is no obstruction to integrating the infinitesimal symmetries, and so the group of conformal transformations is the infinite-dimensional Lie group
$(\mathbb{Z}\rtimes\mathrm{Diff}(S^{1}))\times(\mathbb{Z}\rtimes\mathrm{Diff}(S^{1})),$
where $\mathrm{Diff}(S^1)$ is the diffeomorphism group of the circle. The conformal group CSO(1, 1) and its Lie algebra are of current interest in conformal field theory. See also Virasoro algebra.
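The converse direction stated above (any pair of functions a, b yields such a vector field) is a short computation that the article does not spell out. Writing $X = u(x)\,\partial_x + v(y)\,\partial_y$ with $u' = a$ and $v' = b$ (the names u and v are ours, not from the text), Cartan's formula gives

\[
\mathbf{L}_X\,dx = d\bigl(dx(X)\bigr) = u'(x)\,dx = a(x)\,dx, \qquad
\mathbf{L}_X\,dy = d\bigl(dy(X)\bigr) = v'(y)\,dy = b(y)\,dy,
\]
\[
\mathbf{L}_X\,g = \mathbf{L}_X(2\,dx\,dy) = 2\bigl(a(x)+b(y)\bigr)\,dx\,dy = \bigl(a(x)+b(y)\bigr)\,g,
\]

so X is an infinitesimal conformal symmetry with conformal factor $\lambda = a(x) + b(y)$, confirming that the symmetry algebra is parametrized by arbitrary pairs of functions of one variable.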
Euclidean space

[Figure: a coordinate grid before and after a Möbius transformation.]

The group of conformal symmetries of the quadratic form $q(z,\bar{z}) = z\bar{z}$ is the group $\mathrm{GL}_1(\mathbb{C}) = \mathbb{C}^*$ of non-zero complex numbers. Its Lie algebra is $\mathfrak{gl}_1(\mathbb{C}) = \mathbb{C}$. Consider the (Euclidean) complex plane equipped with the metric $g = dz\,d\bar{z}$. The infinitesimal conformal symmetries satisfy
$\mathbf{L}_X\,dz = f(z)\,dz$ and $\mathbf{L}_X\,d\bar{z} = f(\bar{z})\,d\bar{z},$
where f satisfies the Cauchy-Riemann equation, and so is holomorphic over its domain. (See Witt algebra.) The conformal isometries of a domain therefore consist of holomorphic self-maps. In particular, on the conformal compactification — the Riemann sphere — the conformal transformations are given by the Möbius transformations
$z \mapsto \frac{az+b}{cz+d}$
where ad − bc is nonzero.

Higher dimensions

In two dimensions, the group of conformal automorphisms of a space can be quite large (as in the case of Lorentzian signature) or variable (as with the case of Euclidean signature). The comparative lack of rigidity of the two-dimensional case with that of higher dimensions owes to the analytical fact that the asymptotic developments of the infinitesimal automorphisms of the structure are relatively unconstrained. In Lorentzian signature, the freedom is in a pair of real valued functions. In Euclidean, the freedom is in a single holomorphic function.

In the case of higher dimensions, the asymptotic developments of infinitesimal symmetries are at most quadratic polynomials.[1] In particular, they form a finite-dimensional Lie algebra. The pointwise infinitesimal conformal symmetries of a manifold can be integrated precisely when the manifold is a certain model conformally flat space (up to taking universal covers and discrete group quotients).[2]

The general theory of conformal geometry is similar, although with some differences, in the cases of Euclidean and pseudo-Euclidean signature.[3] In either case, there are a number of ways of introducing the model space of conformally flat geometry. Unless otherwise clear from the context, this article treats the case of Euclidean conformal geometry with the understanding that it also applies, mutatis mutandis, to the pseudo-Euclidean situation.

The inversive model

The inversive model of conformal geometry consists of the group of local transformations on the Euclidean space $E^n$ generated by inversion in spheres. By Liouville's theorem, any angle-preserving local (conformal) transformation is of this form.[4] From this perspective, the transformation properties of flat conformal space are those of inversive geometry.

The projective model

The projective model identifies the conformal sphere with a certain quadric in a projective space. Let q denote the Lorentzian quadratic form on $\mathbb{R}^{n+2}$ defined by
$q(x_{0},x_{1},\ldots,x_{n+1}) = -2x_{0}x_{n+1} + x_{1}^{2} + x_{2}^{2} + \cdots + x_{n}^{2}.$
In the projective space $P(\mathbb{R}^{n+2})$, let S be the locus of q = 0. Then S is the projective (or Möbius) model of conformal geometry. A conformal transformation on S is a projective linear transformation of $P(\mathbb{R}^{n+2})$ which leaves the quadric invariant.
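To see concretely how this quadric realizes "Euclidean space with a point added at infinity" (this unpacking is standard but not written out above), one can pass to the affine chart $x_0 = 1$:

\[
q = 0,\ x_0 = 1 \;\Longrightarrow\; x_{n+1} = \tfrac{1}{2}\bigl(x_1^2 + \cdots + x_n^2\bigr),
\]

so the points of S with $x_0 \neq 0$ are in bijection with $(x_1,\ldots,x_n) \in \mathbb{R}^n$. The remaining points have $x_0 = 0$, and then $q = 0$ forces $x_1 = \cdots = x_n = 0$, leaving the single projective point $[0:0:\cdots:0:1]$, which plays the role of the point at infinity.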
In a related construction, the quadric S is thought of as the celestial sphere at infinity of the null cone in the Minkowski space $\mathbb{R}^{n+1,1}$, which is equipped with the quadratic form q as above. The null cone is defined by
$N = \left\{(x_{0},\ldots,x_{n+1}) \mid -2x_{0}x_{n+1} + x_{1}^{2} + \cdots + x_{n}^{2} = 0\right\}.$
This is the affine cone over the projective quadric S. Let $N^+$ be the future part of the null cone (with the origin deleted). Then the tautological projection $\mathbb{R}^{n+1,1} - \{0\} \to P(\mathbb{R}^{n+2})$ restricts to a projection $N^+ \to S$. This gives $N^+$ the structure of a line bundle over S. Conformal transformations on S are induced by the orthochronous Lorentz transformations of $\mathbb{R}^{n+1,1}$, since these are homogeneous linear transformations preserving the future null cone.

The Euclidean sphere

Intuitively, the conformally flat geometry of a sphere is less rigid than the Riemannian geometry of a sphere. Conformal symmetries of a sphere are generated by the inversion in all of its hyperspheres. On the other hand, Riemannian isometries of a sphere are generated by inversions in geodesic hyperspheres (see the Cartan-Dieudonné theorem.) The Euclidean sphere can be mapped to the conformal sphere in a canonical manner, but not vice-versa.

The Euclidean unit sphere is the locus in $\mathbb{R}^{n+1}$
$z^{2} + x_{1}^{2} + x_{2}^{2} + \cdots + x_{n}^{2} = 1.$
This can be mapped to the Minkowski space $\mathbb{R}^{n+1,1}$ by letting
$x_{0} = \frac{z+1}{\sqrt{2}},\ x_{1} = x_{1},\ \ldots,\ x_{n} = x_{n},\ x_{n+1} = \frac{z-1}{\sqrt{2}}.$
It is readily seen that the image of the sphere under this transformation is null in the Minkowski space, and so it lies on the cone $N^+$. Consequently, it determines a cross-section of the line bundle $N^+ \to S$. Nevertheless, there was an arbitrary choice. In fact, if κ(x) is any positive function of $x = (z, x_{0}, \ldots, x_{n})$, then the assignment
$x_{0} = \frac{z+1}{\kappa(x)\sqrt{2}},\ x_{1} = x_{1},\ \ldots,\ x_{n} = x_{n},\ x_{n+1} = \frac{(z-1)\kappa(x)}{\sqrt{2}}$
also gives a mapping into $N^+$. The function κ is an arbitrary choice of conformal scale.

Representative metrics

A representative Riemannian metric on the sphere is a metric which is proportional to the standard sphere metric. This gives a realization of the sphere as a conformal manifold. The standard sphere metric is the restriction of the Euclidean metric on $\mathbb{R}^{n+1}$
$g = dz^{2} + dx_{1}^{2} + dx_{2}^{2} + \cdots + dx_{n}^{2}$
to the sphere
$z^{2} + x_{1}^{2} + x_{2}^{2} + \cdots + x_{n}^{2} = 1.$
A conformal representative of g is a metric of the form $\lambda^2 g$, where λ is a positive function on the sphere. The conformal class of g, denoted [g], is the collection of all such representatives:
$[g] = \left\{\lambda^{2}g \mid \lambda > 0\right\}.$
An embedding of the Euclidean sphere into $N^+$, as in the previous section, determines a conformal scale on S. Conversely, any conformal scale on S is given by such an embedding. Thus the line bundle $N^+ \to S$ is identified with the bundle of conformal scales on S: to give a section of this bundle is tantamount to specifying a metric in the conformal class [g].

Ambient metric model

Another way to realize the representative metrics is through a special coordinate system on $\mathbb{R}^{n+1,1}$.
Suppose that the Euclidean n-sphere S carries a stereographic coordinate system. This consists of the following map of $\mathbb{R}^n \to S \subset \mathbb{R}^{n+1}$:
$\mathbf{y} \in \mathbb{R}^{n} \mapsto \left(\frac{2\mathbf{y}}{|\mathbf{y}|^{2}+1}, \frac{|\mathbf{y}|^{2}-1}{|\mathbf{y}|^{2}+1}\right) \in S \subset \mathbb{R}^{n+1}.$
In terms of these stereographic coordinates, it is possible to give a coordinate system on the null cone $N^+$ in Minkowski space. Using the embedding given above, the representative metric section of the null cone is
$x_{0} = \sqrt{2}\,\frac{|\mathbf{y}|^{2}}{1+|\mathbf{y}|^{2}},\quad x_{i} = \frac{y_{i}}{|\mathbf{y}|^{2}+1},\quad x_{n+1} = \sqrt{2}\,\frac{1}{|\mathbf{y}|^{2}+1}.$
Introduce a new variable t corresponding to dilations up $N^+$, so that the null cone is coordinatized by
$x_{0} = t\sqrt{2}\,\frac{|\mathbf{y}|^{2}}{1+|\mathbf{y}|^{2}},\quad x_{i} = t\,\frac{y_{i}}{|\mathbf{y}|^{2}+1},\quad x_{n+1} = t\sqrt{2}\,\frac{1}{|\mathbf{y}|^{2}+1}.$
Finally, let ρ be the following defining function of $N^+$:
$\rho = \frac{-2x_{0}x_{n+1} + x_{1}^{2} + x_{2}^{2} + \cdots + x_{n}^{2}}{t^{2}}.$
In the t, ρ, y coordinates on $\mathbb{R}^{n+1,1}$, the Minkowski metric takes the form
$t^{2}g_{ij}(y)\,dy^{i}\,dy^{j} + 2\rho\,dt^{2} + 2t\,dt\,d\rho,$
where $g_{ij}$ is the metric on the sphere. In these terms, a section of the bundle $N^+$ consists of a specification of the value of the variable $t = t(y^{i})$ as a function of the $y^{i}$ along the null cone ρ = 0. This yields the following representative of the conformal metric on S:
$t(y)^{2}g_{ij}\,dy^{i}\,dy^{j}.$

The Kleinian model

Consider first the case of the flat conformal geometry in Euclidean signature. The n-dimensional model is the celestial sphere of the (n + 2)-dimensional Lorentzian space $\mathbb{R}^{n+1,1}$. Here the model is a Klein geometry: a homogeneous space G/H where G = SO(n + 1, 1) acting on the (n+2)-dimensional Lorentzian space $\mathbb{R}^{n+1,1}$ and H is the isotropy group of a fixed null ray in the light cone. Thus the conformally flat models are the spaces of inversive geometry. For pseudo-Euclidean of metric signature (p, q), the model flat geometry is defined analogously as the homogeneous space O(p + 1, q + 1)/H, where H is again taken as the stabilizer of a null line. Note that both the Euclidean and pseudo-Euclidean model spaces are compact.

The conformal Lie algebras

To describe the groups and algebras involved in the flat model space, fix the following form on $\mathbb{R}^{p+1,q+1}$:
$Q = \begin{pmatrix} 0 & 0 & -1 \\ 0 & J & 0 \\ -1 & 0 & 0 \end{pmatrix}$
where J is a quadratic form of signature (p, q). Then G = O(p + 1, q + 1) consists of (n + 2) × (n + 2) matrices stabilizing Q: $^{t}MQM = Q$.
The Lie algebra admits a Cartan decomposition
$\mathbf{g} = \mathbf{g}_{-1} \oplus \mathbf{g}_{0} \oplus \mathbf{g}_{1}$
with
$\mathbf{g}_{-1} = \left\{\left.\begin{pmatrix} 0 & {}^{t}p & 0 \\ 0 & 0 & J^{-1}p \\ 0 & 0 & 0 \end{pmatrix}\right|\, p \in \mathbb{R}^{n}\right\}, \qquad \mathbf{g}_{1} = \left\{\left.\begin{pmatrix} 0 & 0 & 0 \\ {}^{t}q & 0 & 0 \\ 0 & qJ^{-1} & 0 \end{pmatrix}\right|\, q \in (\mathbb{R}^{n})^{*}\right\},$
$\mathbf{g}_{0} = \left\{\left.\begin{pmatrix} -a & 0 & 0 \\ 0 & A & 0 \\ 0 & 0 & a \end{pmatrix}\right|\, A \in \mathfrak{so}(p,q),\ a \in \mathbb{R}\right\}.$
Alternatively, this decomposition agrees with a natural Lie algebra structure defined on $\mathbb{R}^n \oplus \mathfrak{cso}(p, q) \oplus (\mathbb{R}^n)^*$. The stabilizer of the null ray pointing up the last coordinate vector is given by the Borel subalgebra $\mathbf{h} = \mathbf{g}_0 \oplus \mathbf{g}_1$.

Computational conformal geometry

See also: Conformal equivalence, Conformal geometric algebra, Conformal gravity, Erlangen program

Notes: 1. Kobayashi (1972). 2. Due to a general theorem of Sternberg (1962). 3. Slovak (1993).

External link: http://www.euclideanspace.com/maths/geometry/space/nonEuclid/conformal/index.htm
CommonCrawl
Identification of New Microsatellite DNAs in the Chromosomal DNA of the Korean Cattle (Hanwoo) Kim, J.W.;Hong, J.M.;Lee, Y.S.;Chae, S.H.;Choi, C.B.;Choi, I.H.;Yeo, J.S. 1329 To isolate the microsatellites from the chromosomal DNA of the Korean cattle (Hanwoo) and to use those for the genetic selection, four bacteriophage genomic libraries containing the chromosomal DNA of six Hanwoo steers showing the differences in meat quality and quantity were used. Screening of the genomic libraries using $^{32}$P-radiolabeled 5'-(CA)$_{12}$-3' nucleotide as a probe, resulted in isolation of about 3,000 positive candidate bacteriophage clones that contain $(CA)_n$-type dinucleotide microsatellites. After confirming the presence of microsatellite in each positive candidate clone by Southern blot analysis, the DNA fragments that include microsatellite and flanking sequences possessing less than 2 kb in size, were subcloned into plasmid vector. Results from the analysis of microsatellite length polymorphism, using twenty-two PCR primers designed from flanking region of each microsatellite DNA, demonstrated that 208 and 210 alleles of HW-YU-MS#3 were closely related to the economic traits such as marbling score, daily gain, backfat thickness and M. longissimus dorsi area in Hanwoo. Interestingly, HW-YU-MS#3 microsatellite was localized in bovine chromosome 17 on which QTLs related to regulation of the body fat content and muscle hypertrophy locus are previously known to exist. Taken together, the results from the present study suggest the possible use of the two alleles as a DNA marker related to economic trait to select the Hanwoo in the future. Detection of Polymorphism of Growth Hormone Gene for the Analysis of Relationship between Allele Type and Growth Traits in Karan Fries Cattle Pal, Aruna;Chakravarty, A.K.;Bhattacharya, T.K.;Joshi, B.K.;Sharma, Arjava 1334 The present study was conducted to detect polymorphism at growth hormone gene in Karan Fries bulls. A 428 bp fragment of growth hormone gene spanning over $4^{th}$ exon, $4^{th}$ intron and $5^{th}$ exon was amplified and digested with AluI restriction enzyme to identify polymorphism at this locus. Karan Fries bulls were found to be polymorphic at this locus. Two genotypes LL and LV were identified in Karan Fries with higher allelic frequency for L allele. In Karan Fries males, the average birth weight, 3 months body weight and daily body weight gains of LL homozygotes were significantly higher than that of LV heterozygotes. Genetic distances of KF bulls with respect to genotype along with 3 months body weight and average daily body weight gain forms a single cluster of bulls with LL genotype, while individuals with LV genotype forms three distinct clusters indicating more influence of L allele on growth traits. Establishment and Identification of a Debao Pony Ear Marginal Tissue Fibroblast Cell Line Zhou, X.M.;Ma, Y.H.;Guan, W.J.;Zhao, D.M. 1338 The Debao pony ear marginal tissue fibroblast cell line (NDPEM 2/2) was successfully established using either primary explant technique or collagenase technique. The characterizations of the cell line were identified as follows: the cells were adherent and of density limitation; population doubling time (PDT) of cells made with the two techniques were 35.9 h and 48 h, respectively; chromosome analysis showed that the frequency of cell chromosome number to be 2n=64 was 91.3%-92.8%. Confirmed by isoenzyme analysis, this cell line had no cross-contamination.
Tests for microbial contamination from bacteria, fungi, virus or mycoplasma were negative. This newly established cell line meets all the standard quality controls of ATCC. It will provide a precious genetic resource for the conservation of the Debao pony breed, as well as effective experimental material for genetic studies on Debao ponies. Cloning and Characterization of Bovine Titin-cap (TCAP) Gene Yu, S.L.;Chung, H.J.;Jung, K.C.;Sang, B.C.;Yoon, D.H.;Lee, S.H.;Kata, S.R.;Womack, J.E.;Lee, J.H. 1344 Titin-cap (TCAP), one of the abundant transcripts in skeletal muscles, was investigated in this study in cattle because of its role in regulating the proliferation and differentiation of myoblasts by interacting with the myostatin gene. From the 5' and 3' RACE experiments, full-length TCAP coding sequence was identified, comprising 166 amino acids. The amino acid comparison showed high sequence similarities with previously identified human (95.8%) and mouse (95.2%) TCAP genes. The TCAP expression, addressed by northern blot, is limited in muscle tissues as indicated by Valle et al. (1997). The radiation hybrid analysis localized the gene on BTA19, where the comparative human and porcine counterparts are on HSA17 and SSC12. A few muscle-related genetic disorders were mapped on HSA17 and some growth-related QTLs were identified on SSC12. The bovine TCAP gene found in this study opens up new possibilities for the investigation of muscle-related genetic diseases as well as meat yield traits in cattle. Mapping of Quantitative Trait Loci on Porcine Chromosome 7 Using Combined Data Analysis Zuo, B.;Xiong, Y.Z.;Su, Y.H.;Deng, C.Y.;Lei, M.G.;Zheng, R.;Jiang, S.W.;Li, F.E. 1350 To further investigate the regions on porcine chromosome 7 that are responsible for economically important traits, phenotypic data from a total of 287 F2 individuals were collected and analyzed from 1998 to 2000. All animals were genotyped for eight microsatellite loci spanning the length of chromosome 7. QTL analysis was performed using interval mapping under the line-cross model. A permutation test was used to establish significance levels associated with QTL effects. Observed QTL effects were (chromosomewide significance, position of maximum significance in centimorgans): Birth weight (<0.01, 3); Carcass length (<0.05, 80); Longissimus muscle area (<0.01, 69); Skin percentage (<0.01, 69); Bone percentage (<0.01, 74); Fat depths at shoulder (<0.05, 54); Mean fat depth (<0.05, 81); Moisture in m. Longissimus Dorsi (<0.05, 88). Additional evidence was also found which suggested QTL for dressing percentage and fat depths at buttock. This study offers confirmation of several QTL affecting growth and carcass traits on SSC7 and provides an important step in the search for the actual major genes involved in the traits of economic interest. Genealogical Relationship between Pedigree and Microsatellite Information and Analysis of Genetic Structure of a Highly Inbred Japanese Black Cattle Strain Sasazaki, S.;Honda, T.;Fukushima, M.;Oyama, K.;Mannen, H.;Mukai, F.;Tsuji, S. 1355 Japanese Black cattle of Hyogo prefecture (Tajima strain) are famous for their ability to produce high-quality meat and have been maintained as a closed system for more than 80 years. In order to assess the usefulness of microsatellite markers in closed cattle populations, and evaluate the genetic structure of the Tajima strain, we analyzed representative dams of the Tajima strain comprised of the substrains Nakadoi and Kinosaki.
Genetic variability analyses indicated low genetic diversity in the Tajima strain. In addition, a recent genetic bottleneck, which could be accounted for by the high level of inbreeding, was detected in both substrains. In phylogenetic analyses, relationship coefficients and genetic distances between individuals were calculated using pedigree and microsatellite information. Two phylogenetic trees were constructed from microsatellite and pedigree information using the UPGMA method. Both trees illustrated that most individuals were distinguished clearly on the basis of the two substrains, although in the microsatellite tree some individuals appeared in clusters of different substrains. Comparing the two phylogenetic trees revealed good consistency between the microsatellite analysis tree and the pedigree information. The correlation coefficient between genetic distances derived from microsatellite and pedigree information was 0.686 with a high significance level (p<0.001). These results indicated that microsatellite information may provide data substantially equivalent to pedigree information even in unusually inbred herds of cattle, and suggested that microsatellite markers may be useful in revealing genetic structure without accurate or complete pedigree information. Genetic Differentiation among Sheep Populations from Near-sea Mainland in East Asia Lu, S.X.;Chang, H.;Du, L.;Tsunoda, K.;Ji, D.J.;Sun, W.;Yang, Z.P.;Chang, G.B.;Mao, Y.J.;Wang, Q.H.;Xu, M. 1360 Using the method of 'random sampling in typical colonies of the central area of the habitat', 60 Small-tailed Han sheep were obtained in Jining city, Shandong province.
The variations of Small-tailed Han sheep at 12 structural loci encoding blood proteins were detected by several electrophoresis techniques and their gene frequencies were then estimated. The same data of four other sheep populations from Near-sea Mainland in East Asia were cited for the analysis of genetic differentiation. The average heterozygosities of five populations, namely Kharkhorin sheep, Ulaanbaatar sheep, Small-tailed Han sheep, Hu sheep and Cham Tribe sheep were 0.3447, 0.3285, 0.3157, 0.3884 and 0.2300, respectively. The coefficient of gene differentiation among four populations, Kharkhorin sheep, Ulaanbaatar sheep, Small-tailed Han sheep and Hu sheep, was 0.045557, and that between these four breeds and Cham Tribe sheep was 0.088005, indicating that the level of gene differentiation among the former four sheep populations of Mongolian group was comparatively lower than that between Cham Tribe sheep and other four sheep populations. The origin of Cham Tribe sheep deserves further research. The documentary research on the evolution of Small-tailed Han sheep and Hu sheep from Mongolian sheep was further verified by the biochemical experiments in the study. It was reasonably deduced that Hu sheep, Small Tailed Han sheep and Cham Tribe sheep were decreasingly influenced by the bloodline of Mongolian sheep. Molecular Characterisation of the Mafriwal Dairy Cattle of Malaysia Using Microsatellite Markers Selvi, P.K.;Panandam, J.M.;Yusoff, K.;Tan, S.G. 1366 The Mafriwal dairy cattle was developed to meet the demands of the Malaysian dairy industry. Although there are reports on its production and reproductive performance, there has been no work on its molecular characterization. This study was conducted to characterize the Mafriwal dairy cattle using microsatellite markers. Fifty-two microsatellite loci were analysed for forty Mafriwal dairy cows kept at Institut Haiwan Kluang, Malaysia. The study showed two microsatellite loci to be monomorphic. Allele frequencies for the polymorphic loci ranged from 0.01 to 0.31. Genotype frequencies ranged from 0.03 to 0.33. The mean overall heterozygosity was 0.79. All polymorphic microsatellite loci deviated significantly (p<0.01) from Hardy-Weinberg equilibrium. The Mafriwal dairy cattle showed high genetic variability despite being a nucleus herd and artificial insemination being practiced. Liquid Boar Sperm Quality during Storage and In vitro Fertilization and Culture of Pig Oocytes Park, C.S.;Kim, M.Y.;Yi, Y.J.;Chang, Y.J.;Lee, S.H.;Lee, J.J.;Kim, M.C.;Jin, D.I. 1369 The percentages of sperm motility and normal acrosome of the liquid boar semen diluted and preserved at $4^{\circ}C$ with lactose hydrate, egg yolk and N-acetyl-D-glucosamine (LEN) diluent showed significant differences according to preservation day and incubation time, respectively. The sperm motility steadily declined from 96.9% at 0.5 h incubation to 78.8% at 6 h incubation at 1 day of preservation. However, the sperm motility rapidly declined after 4 day of preservation during incubation. The normal acrosome steadily declined from 93.3% at 0.5 h incubation to 73.8% at 6 h incubation at 1 day of preservation. However, the normal acrosome rapidly declined after 3 day of preservation during incubation. The rates of sperm penetration and polyspermy were higher in 5 and $10{\times}10^6$ sperm/ml than in 0.2 and $1{\times}10^6$ sperm/ml. Mean numbers of sperm in penetrated oocyte were highest in $10{\times}10^6$ sperm/ml compared with other sperm concentrations.
The rates of blastocysts from the cleaved oocytes (2-4 cell stage) were highest in $1{\times}10^6$sperm/ml compared with other sperm concentrations. In conclusion, we found out that liquid boar sperm stored at $4^{\circ}C$ could be used for in vitro fertilization of pig oocytes matured in vitro. Also, we recommend $1{\times}10^6$sperm/ml concentration for in vitro fertilization of pig oocytes. Estrus Behavior and Superovulatory Response in Black Bengal Goats (Capra hircus) Following Administration of Prostaglandin and Gonadotropins Mishra, O.P.;Gawande, P.G.;Nema, R.K.;Tiwari, S.K. 1374 The present study was conducted to explore the possibilities of estrus induction and superovulation in a native Indian breed of goats called 'Black Bengal'. Forty-two adult non-pregnant females were divided in two groups, of which 18 goats were subjected to a superovulatory treatment comprising of equine chorionic gonadotropin (eCG), Prostaglandin (PGF2$\alpha$) and human chorionic gonadotropin (hCG) to induce superovulation. The remaining 24 goats received no treatment and served as controls for the parameter under study as well as recipients for embryo transfer studies. The average duration of estrus was found to be significantly increased in treated goats (34.2${\pm}$3.4 h) compared to controls 3.0${\pm}$2.4 h). The average duration between PGF administration and occurrence of estrus was 2.0${\pm}$5.2 h. After mid ventral laparotomy, superovulatory responses indicated a significant increase in the number of follicles, which was 8.27${\pm}$0.37 in the treatment group compared to 4.16${\pm}$0.17 in the control group. The number of corpora lutea was also significantly increased in treated animals compared to control (2.90${\pm}$0.86 vs. 0.74${\pm}$0.04) respectively per ovary per goat. Effects of Zinc on Lipogenesis of Bovine Intramuscular Adipocytes Oh, Young Sook;Choi, Chang Bon 1378 Zinc (Zn) is a micromineral and functions as a cofactor of many enzymes and its deficiency induces retardation of growth and dysfunction of the immune system in animals. This study was conducted to determine lipogenic activity of Zn in bovine intramuscular adipocytes. Preadipocytes were isolated from intramuscular fat depots of 26 month old Korean (Hanwoo) steers and cultured in media containing Zn. At confluence, the cells were treated with insulin, dexamethasone, and 1-methyl-3-isobutyl-xanthine to induce differentiation (accumulation of lipid droplets in cells). The sources of Zn were zinc chloride (${ZnCl}_2$) and zinc sulfate (${ZnSO}_4$), and the final concentrations of both Zn sources were 0, 5, 25, 50 and 100 ${\mu}$M. Glycerol-3-phosphate dehydrogenase (GPDH) activity, an index of adipocyte differentiation, was increased as the concentration of Zn in media increased showing the highest activity (25.74 ng/min/mg protein) at 25 ${\mu}$M of ${ZnSO}_4$. Supplementation of Zn during differentiation of bovine intramuscular adipocytes tended to decrease the production of nitric oxide (NO). Peroxisome proliferator-activated receptor gamma 2(PPAR$\gamma$2) gene expression was increased 10 days after differentiation induction. The current results indicate that Zn has a strong lipogenic activity in cultured bovine intramuscular adipocytes with remarkable suppression of NO production. Estimation of Nutritive Value of Whole Crop Rice Silage and Its Effect on Milk Production Performance by Dairy Cows Islam, M.R.;Ishida, M.;Ando, S.;Nishida, T.;Yoshida, N. 
1383 The nutritive value and utilization of whole crop rice silage (WCRS), Hamasari, at yellow mature stage was determined by three studies. In the first study, chemical composition, in vivo digestibility and metabolizable energy (ME) content of WCRS was determined by Holstein steers. WCRS contains 6.23% CP, its digestibility is 48.4% and estimated TDN is 56.4%. Its ME content was 1.91 Mcal/kg DM. Gross energy (GE) retention (% of GE intake) in steers is only 22.7% most of which was lost through feces (44.7% of GE intake). It takes 81 minutes to chew a kg of WCRS by steers. In another study, the effect of Hamasari at yellow mature stage at three stages of lactation (early, mid and late lactation) and two levels of concentrate (40 or 60%) on voluntary intake, ME content and ME intake, milk yield and composition using lactating Holstein dairy cows were investigated. Total intake increased with the concentrate level in early and mid lactation, but was similar irrespective of concentrate level in late lactation. WCRS intake was higher with 40% concentrate level than with 60% concentrate. ME intake by cows increased with the concentrate level and WCRS in early lactating cows with 40% concentrate can support only 90% of the ME requirement. Milk production in accordance with ME intake increased with the increase in concentrate level in early and mid lactating cows but was similar in late lactating cows irrespective of concentrate level. Fat and protein percent of milk in mid and late lactating cows were higher for 60% concentrate than for 40%, but the reverse was in early lactating cows. Solids-not-fat was higher for 60% concentrate than for 40% concentrate. Finally in situ degradability of botanical fractions such as leaf, stem, head and whole WCRS, Hamasari at yellow mature stage was incubated from 0 to 96 h in Holstein steers to determine DM and N degradability characteristics of botanical fractions and whole WCRS. Both DM and N solubility, rate of degradation and effective degradability of leaf of silage was lower, but slowly degradable fraction was higher compared to stem and head. Solubility of DM and N of stem was higher than other fractions. The 48 h degradability, effective degradability and rate of degradation of leaf were always lower than stem or head. In conclusion, voluntary intake of silage ranged from 5 to 12 kg/d and was higher with low levels of concentrate, but milk yield was higher with high levels of concentrate. Fat corrected milk yield ranged from 19 to 37 kg per day. For consistency of milk, early lactating cows should not be allowed more than 40% whole crop rice silage in the diet, but late lactating cows may be allowed 60% whole crop rice silage. Development of Transgenic Tall Fescue Plants from Mature Seed-derived Callus via Agrobacterium-mediated Transformation Lee, Sang-Hoon;Lee, Dong-Gi;Woo, Hyun-Sook;Lee, Byung-Hyun 1390 We have achieved an efficient transformation system for forage-type tall fescue plants by Agrobacterium tumefaciens. Mature seed-derived embryogenic calli were infected and co-cultivated with each of three A. tumefaciens strains, all of which harbored a standard binary vector pIG121Hm encoding the neomycin phosphotransferase II (NPTII), hygromycin phosphotransferase (HPT) and intron-containing $\beta$-glucuronidase (intron-GUS) genes in the T-DNA region. Transformation efficiency was influenced by the A. tumefaciens strain, addition of the phenolic compound acetosyringone and duration of vacuum treatment. Of the three A.
tumefaciens strains tested, EHA101/pIG121Hm was found to be the most effective, followed by GV3101/pIG121Hm and LBA4404/pIG121Hm, for transient GUS expression after 3 days of co-cultivation. Inclusion of 100 $\mu$M acetosyringone in both the inoculation and co-cultivation media led to an improvement in transient GUS expression observed in targeted calli. Vacuum treatment during infection of calli with A. tumefaciens strains increased transformation efficiency. The highest stable transformation efficiency of transgenic plants was obtained when mature seed-derived calli were infected with A. tumefaciens EHA101/pIG121Hm in the presence of 100 $\mu$M acetosyringone and with vacuum treatment for 30 min. Southern blot analysis indicated integration of the transgene into the genome of tall fescue. The transformation system developed in this study would be useful for Agrobacterium-mediated genetic transformation of tall fescue plants with genes of agronomic importance. The Effect of Dietary Fat Inclusion on Nutrient Intake and Reproductive Performance in Postpartum Awassi Ewes Oqla, H.M.;Kridli, R.T.;Haddad, S.G. 1395 The objective of this study was to evaluate the effect of dietary fat inclusion on nutrient intake, body weight, milk production, return to estrus, pregnancy and lambing of winter-lambing, postpartum Awassi ewes. Thirty multiparous, winter-lambing Awassi ewes (body weight=51${\pm}$7.0 kg) were randomly assigned to three dietary treatments (n=10) for 62 days using a completely randomized design. Experimental diets were isonitrogenous and were formulated to contain 0 (CON), 2.5 (MF), and 5% (HF) added fat, and 33% of the dietary crude protein (CP) as undegradable intake protein (UIP). On day 26 postpartum (day 0=parturition), ewes and their lambs were housed in individual pens for 28 days. Feed offered and refused was recorded daily. At the end of this period, ewes and their lambs within each treatment were combined into one group and fed their respective diet ad libitum. One fertile Awassi ram fitted with a marking harness was allowed with each group for 34 days. No significant (p>0.05) differences in dry matter intake, organic matter intake, and crude protein intake were observed for ewes fed the three experimental diets. No difference was observed in metabolizable energy intake (MEI) between ewes fed the CON and the MF diets (average 8.3 Mcal/d). However, ewes fed the HF diet had greater (p<0.05) MEI compared with the rest of the treatments. Ewe body weights increased throughout the study, unaffected by the experimental diets. No significant differences in milk production were found among ewes fed the three experimental diets. No significant differences were observed in pregnancy rate (6/10, 5/10, 6/10 for CON, MF and HF diets, respectively), lambing rate and the number of lambs per ewe among the three treatments. These results suggest that dietary fat inclusion did not affect the postpartum reproductive performance of well-fed, winter-lambing Awassi ewes. Effect of Synchronizing Starch Sources and Protein (NPN) in the Rumen on Feed Intake, Rumen Microbial Fermentation, Nutrient Utilization and Performance of Lactating Dairy Cows Chanjula, P.;Wanapat, M.;Wachirapakorn, C.;Rowlinson, P. 1400 Eight crossbred (75% Holstein Friesian) cows in mid-lactation were randomly assigned to a switchback design with a $2{\times}2$ factorial arrangement to evaluate two nonstructural carbohydrate (NSC) sources (corn meal and cassava chips) with different rumen degradability, used at two levels of NSC (55 vs. 75%) with a protein source (supplied by urea in the concentrate mix).
The treatments were: 1) low degradable, low level of corn (55%); 2) low degradable, high level of corn (75%); 3) high degradable, low level of cassava (55%); and 4) high degradable, high level of cassava (75%). The cows were offered the treatment concentrate at a concentrate-to-milk-yield ratio of 1:2. Urea-treated rice straw was offered ad libitum as the roughage and supplemented with 1 kg/hd/d cassava hay. The results revealed that total DM intake, BW and digestion coefficients of DM were not affected by either level or source of energy. Rumen fermentation parameters (NH3-N), blood urea nitrogen and milk urea nitrogen were unaffected by source of energy, but were dramatically increased by level of NSC. Rumen microorganism populations were not affected (p>0.05) by source of energy, but fungal zoospores were greater for cassava-based concentrate than corn-based concentrate. Milk production and milk composition were not affected significantly by either source or level of NSC; however, values were higher on cassava-based than on corn-based concentrate (4.4 and 4.2, respectively). Likewise, income over feed, as estimated from 3.5% FCM, was higher on cassava-based concentrate than corn-based concentrate (averaging 54.0 and 51.4 US$/mo, respectively). These results indicate that cassava-based diets and/or a higher level of concentrate (up to 75% of DM) with NPN (supplied by urea up to 4.5% of DM) can be used in dairy rations without altering rumen ecology or animal performance compared with corn-based concentrate. Screening and Characterization of Lactate Dehydrogenase-producing Microorganism Sung, Ha Guyn;Lee, Jae Heung;Shin, Hyung Tai 1411 The objective of this work was to isolate a microorganism able to produce high lactate dehydrogenase (LDH) activity for use as a microbial feed additive. LDH is an important enzyme for lactate conversion in the rumen, thereby possibly overcoming lactic acidosis owing to sudden increases of cereal in the diets of ruminants. In the present study, various bacterial strains were screened from a variety of environments. Among the isolated microorganisms, strain FFy 111-1, isolated from Kimchi, a Korean traditional fermented vegetable food, showed the highest enzyme activity, along with retaining strong enzyme activity even in rumen fluid in vitro. Based on morphological and biochemical characteristics as well as the composition of cellular fatty acids plus API analyses, this strain was identified as Lactobacillus sp. The optimum temperature and pH for growth were found to be 30$^{\circ}C$ and pH 6.5, respectively. A maximum cell growth of 2.2 at $A_{650}$ together with an LDH activity of 2.08 U per mL was achieved after 24 h of incubation. Initial characterization of FFy 111-1 suggested that it could be a potential candidate for use as a direct-fed microbial in ruminant animals. Effects of Season, Housing and Physiological Stage on Drinking and Other Related Behavior of Dairy Cows (Bos taurus) Lainez, Marielena Moncada;Hsia, Liang Chou 1417 The objective of this paper was to study the drinking and other related behavior of dairy cows (Bos taurus). A total of 142 Holstein dairy cows were observed and compared in this study. The experiment was designed on the basis of two different housing systems (wet pad with forced ventilation cooling house and open house); two different seasons (winter and summer); four different stages (high milk yielding cows, low milk yielding cows, dry cows, and heifers); and grouping (home and visitor animals).
All cows had free access to water. Dairy cows spent 13.8 min/day drinking in the wet-pad house and 11.7 min/day in the open house. However, there was no significant difference in the duration of water drinking between these two housing systems (p>0.05). Water consumption was significantly higher in wet-pad housed animals (68 L/day) than in open-housed animals (31.5 L/day) (p<0.05). A significant interaction between housing and grouping (p<0.05) was found. Home animals spent more time drinking in the open house and visitor animals in the wet-pad house. A highly significant interaction was found between housing and drinking time during the day (p<0.001). Animals in the open house drank more during the morning (6:00 to 10:00 h), whereas wet-pad housed animals drank in the afternoon (14:00 to 15:00 h) and evening (18:00 to 20:00 h). The average time a cow spent drinking in summer was not significantly different from that in winter. However, water intake was significantly higher in summer (61.9 L/day) than in winter (38.6 L/day) (p<0.05). Drinking activity showed a highly significant interaction between season and physiological stage (p<0.01). High milk yield cows spent more time drinking in summer than in winter, whereas cows in all other stages followed the opposite drinking pattern. Grouping exchange did not influence the drinking behavior of dairy cows in either season (p>0.05); both home and visitor animals spent almost the same time drinking water. A strongly significant interaction between season and time during the day was found (p<0.01), suggesting that the animals' drinking frequency was high during the daytime in both seasons, with a peak at midday in winter and two peaks, at 10:00 h in the morning and at 19:00 h, in summer. Thus, drinking behavior was associated with the cooler time of day in summer and with the warmer hours of day in winter. High and low milk yielding cows and heifers spent 15.3 min/day, 14.3 min/day, and 12.8 min/day, respectively, in water drinking activity, but there was no significant difference among them (p>0.05). There was, however, a significant difference in water drinking activity for dry cows, which spent less time drinking, at 8.2 min/day (p<0.05). Comparison of Different Alkali Treatment of Bagasse and Rice Straw Suksombat, W. 1430 A study was conducted to determine the effect of different alkali treatments on changes in chemical composition and on degradability of bagasse and rice straw. The study was divided into 2 experiments, the first with bagasse and the second with rice straw. Each experiment comprised 9 treatments: untreated control; 3% NaOH; 6% NaOH; 3% urea; 6% urea; 3% NaOH/3% urea; 3% NaOH/6% urea; 6% NaOH/3% urea; 6% NaOH/6% urea. In both experiments, crude protein contents were increased from 2.0 to 12.5 units for bagasse and 3.1 to 13.7 units for rice straw by the urea treatments. Ash contents of the treated bagasse and rice straw were increased over the untreated control (1.5-9.7 units for bagasse; 4.2-8.8 units for rice straw). The effects on ether extract, crude fiber, neutral detergent fiber and acid detergent fiber of the treated bagasse and rice straw were variable. Nylon bag degradability of dry matter and crude fiber was increased by treatments applying NaOH and NaOH plus urea, but not urea alone. In contrast, the degradability of neutral detergent fiber and acid detergent fiber was reduced compared with the untreated control.
From these degradability studies, it can be concluded that the most efficient treatments of bagasse were those with 6% NaOH, followed by treatments with 6% NaOH plus 3% or 6% urea and 3% NaOH plus 3% or 6% urea, respectively. However, when comparing the cost of the chemicals used to treat the agricultural by-products, particularly in the case of rice straw, 3-6% urea would be appropriate. Effects of Montmorillonite Nanocomposite on Mercury Residues in Growing/Finishing Pigs Lin, Xianglin;Xu, Zirong;Zou, Xiaoting;Wang, Feng;Yan, Xianghua;Jiang, Junfang 1434 The study was conducted to evaluate the effects of montmorillonite nanocomposite (MNC) on mercury residues in growing/finishing pigs. A total of 96 crossbred pigs ($Duroc{\times}Landrace{\times}large$ white; 48 barrows and 48 gilts), with similar initial weight (27.87${\pm}$1.15 kg), were used in this study. The animals were randomly assigned to two concentrations of mercury (0.1 and 0.3 ppm from $HgCl_2$) and two levels (0 and 0.3%) of MNC in a $2{\times}2$ factorial arrangement of treatments. Each group had 3 pens (replications), and each pen had 8 pigs (4 barrows and 4 gilts). The experiment lasted for 90 days. The results showed that pig growth performance was not affected significantly by inclusion of Hg or addition of MNC (p$\geq$0.05). This indicated that the extent of intoxication in these pigs was not severe enough to impair growth performance. At both the 0.1 ppm and 0.3 ppm levels of mercury supplementation, addition of 0.3% MNC markedly decreased mercury levels in blood, muscle, kidney and liver tissue (p<0.05). These results imply that the addition of the non-nutritive sorptive material MNC could effectively reduce the gastrointestinal absorption of mercury via its specific adsorption, with a consequent reduction of mercury residues in body tissues. MNC thus offers an encouraging means of producing safe animal products from mercury-contaminated feed. Effects of Dietary Zinc on Performance and Immune Response of Growing Pigs Inoculated with Porcine Reproductive and Respiratory Syndrome Virus and Mycoplasma hyopneumoniae Roberts, E.S.;Heugten, E. van;Spears, J.W.;Routh, P.A.;Lloyd, K.L.;Almond, G.W. 1438 The objective of this study was to determine the effects of dietary Zn level on performance, serum Zn concentrations, alkaline phosphatase activity (ALP), and immune response of pigs inoculated with Porcine Reproductive and Respiratory Syndrome virus (PRRSv) and Mycoplasma hyopneumoniae. A $2{\times}4$ factorial arrangement of treatments was used in a randomized design. Factors included: 1) PRRSv and M. hyopneumoniae inoculation (n=36 pigs) or sham inoculation (n=36 pigs) with media when pigs entered the grower facility (d 0) at 9 weeks of age and 2) 10, 50 or 150 ppm supplemental Zn sulfate (${ZnSO}_4$) from weaning until the completion of the study, or 2,000 ppm supplemental ${ZnSO}_4$ for two weeks in the nursery and then supplementation with 150 ppm ${ZnSO}_4$ for the remainder of the trial. The basal diet contained 34 ppm Zn. Pigs were weighed on d 0, 10, 17, 24 and 31 and blood samples were collected on d 0, 7, 14, 21 and 28. Pigs inoculated with PRRSv were serologically positive at d 28 and control pigs remained negative to PRRSv. In contrast, the M. hyopneumoniae inoculation was inconsistent, with 33.3% and 52.8% of pigs serologically positive at d 28 in the control and infected groups, respectively. A febrile response was observed for approximately one week after inoculation with PRRSv.
Feed intake (p<0.01) and gain (p<0.1) were lower in PRRSv-infected pigs than in control pigs over the 31 d study. However, performance did not differ among pigs fed the four levels of ${ZnSO}_4$. Assessments of immune responses failed to show an unequivocal influence of either PRRSv inoculation or ${ZnSO}_4$ level. These data suggest that PRRSv and M. hyopneumoniae act to produce some performance deficits and that Zn supplementation of nursery-age pigs does not have a clear effect in grower pigs affected by disease. Effects of Dietary Crude Protein on Growth Performance, Nutrient Utilization, Immunity Index and Protease Activity in Weaner to 2 Month-old New Zealand Rabbits Lei, Q.X.;Li, F.C.;Jiao, H.C. 1447 An experiment was conducted to determine the effects of different dietary crude protein (CP) levels on growth performance, nutrient utilization, small intestine protease activity and immunity index of weaner to 2 month-old New Zealand rabbits. Eighty weaner rabbits were allocated, in individual cages, to five treatments in which they were fed diets with CP at 14%, 16%, 18%, 20% and 22%, respectively. The growth performance and nutrient digestibility of the rabbits first increased as dietary CP increased, then decreased. The average daily gain was highest and the feed conversion rate lowest when dietary CP reached 20%, namely 34.9 g/d and 2.74:1, respectively. Maximum CP digestibility was 72.1% in the 18% CP group, and maximum crude fiber digestibility of 28.4% occurred in the 16% CP group and was significantly different from the other treatments (p<0.01). Apparent digestibility of Lys and Val followed the same trend as CP digestibility and reached its maximum when dietary CP was 18%. Apparent digestibility of Cys, Tyr, Leu and Thr also showed a trend similar to CP digestibility. Nitrogen retention (RN) increased with CP level (p>0.05) and was highest for the 20% CP treatment (1.5 g/d). The effect of CP level on the rate at which digestible nitrogen (DN) was converted to RN was small. The spleen index, thymus index, and chymotrypsin and trypsin activities in the small intestine were highest when dietary CP was 16%, at 1.0, 2.8, 15.7 U/g and 125.7 U/g, respectively. There was no significant difference among treatments (p>0.05). According to the above results, the appropriate dietary CP level for weaner to 2 month-old meat rabbits is 18-20%. Effects of Sex and Market Weight on Performance, Carcass Characteristics and Pork Quality of Market Hogs Piao, J.R.;Tian, J.Z.;Kim, B.G.;Choi, Y.I.;Kim, Y.Y.;Han, In K. 1452 An experiment was conducted to examine the effects of sex and market weight on performance, carcass characteristics and pork quality. A total of 224 crossbred pigs (initially 26.64 kg BW) were allotted in a $2{\times}4$ factorial arrangement in a randomized complete block (RCB) design. The variables were sex (gilts and barrows) and different market weights (100, 110, 120 and 130 kg). Average daily gain (ADG) and average daily feed intake (ADFI) were significantly higher (p<0.01) in barrows than in gilts, and ADFI and feed conversion ratio (FCR) increased as body weight increased (p<0.05). Gender differences were observed in carcass characteristics. Backfat thickness and drip loss were greater in barrows (p<0.01), while loin eye area (p<0.01), flavor score (p<0.05) and lean content (p<0.001) were higher in gilts. Carcass grade and water holding capacity were highest in 110 kg market weight pigs.
The 100 kg market weight pigs showed lower juiciness, tenderness, shear force and total palatability than the other market weights (p<0.01). Hunter values (L*, a* and b*) increased as market weight increased (p<0.05). Hunter a* value was greater in gilts (p<0.01), but L* and b* values were not affected by the sex of the pigs. Net profit [(carcass weight${\times}$price by carcass grade)-(total feed cost+cost of purchased pig)] was higher in gilts than in barrows (p<0.01), and was higher (p<0.05) in pigs marketed at 110 and 120 kg compared with 100 kg market weight. These results demonstrated that gilts showed better carcass characteristics and pork quality, as well as higher feed cost per kg body weight gain and higher net profit, compared with barrows. Moreover, 110 or 120 kg body weight would be the recommended market weight for swine producers based on pork quality and net profit. Calcium-binding Peptides Derived from Tryptic Hydrolysates of Cheese Whey Protein Kim, S.B.;Lim, J.W. 1459 The purpose of this research was to investigate the potential use of cheese whey protein (CWP), a cheese by-product. Owing to their physiological activity, calcium-binding peptides in CWP may be used as a food additive to help prevent bone disorders. This research also examined the characteristics of the calcium-binding peptides. After the CWP was heat treated, it was hydrolyzed by trypsin. Calcium-binding peptides were then separated and purified by ion-exchange chromatography and reverse phase HPLC, respectively. To examine the characteristics of the purified calcium-binding peptides, their amino acid composition and amino acid sequence were analyzed. Calcium-binding peptides with a small molecular weight of about 1.4 to 3.4 kDa were identified in the fraction eluted with a 0.25 M NaCl step gradient during ion-exchange chromatography of the tryptic hydrolysates. The results of the amino acid analysis revealed that glutamic acid accounted for the largest part of the amino acids at the calcium-binding site, together with considerable amounts of proline, leucine and lysine. The amino acid sequences of the calcium-binding peptides were Phe-Leu-Asp-Asp-Asp-Leu-Thr-Asp and Ile-Leu-Asp-Lys from $\alpha$-LA and Ile-Pro-Ala-Val-Phe-Lys and Val-Tyr-Val-Glu-Glu-Leu-Lys from ${\beta}$-LG. Effect of Claw Abrasives in Cages on Claw Condition, Feather Cover and Mortality of Laying Hens Glatz, P.C. 1465 A trial was conducted to determine the effect of abrasive strips and abrasive paint in layer cages on claw length and claw sharpness, foot condition, feather cover and mortality of hens. During the preparation of the cages for the experiment, it was simpler and took less time to apply the pre-prepared paint with a spatula to the egg guard compared to sticking the abrasive strips onto the egg guard. Fitting the strips took longer because they had to be cut from a 25 mm roll into the appropriate lengths, the tape backing removed and the strips then stuck onto the egg guard section. Abrasive paint was more effective as a claw shortener than abrasive strips. The birds using the abrasive paint had the shortest (p<0.05) claw length and lowest (p<0.05) claw sharpness. One of the original reasons for reducing claw length with claw shorteners was to reduce mortality by minimising skin abrasions caused by the claws. Surprisingly, hen mortality from prolapse and cannibalism was higher (p<0.05) in cages fitted with abrasives. There are no other reports in the literature showing an increase in prolapse and cannibalism in hens using abrasives.
Production of Biogenic Amines by Microflora Inoculated in Meats Min, Joong-seok;Lee, Sang-ok;Jang, Aera;Lee, Mooha;Kim, Yangha 1472 The effects of microorganisms inoculated into beef, pork and chicken on the production of various biogenic amines (BA) were examined. Acinetobacter haemolyticus, Aeromonas hydrophila subsp. hydrophila, Alcaligenes faecalis subsp. faecalis, Bacillus cereus, Bacillus subtilis, Enterobacter aerogenes, Enterobacter cloacae, Escherichia coli, Lactobacillus alimentarius, Lactobacillus curvatus, Leuconostoc mesenteroides subsp. mesenteroides, Proteus mirabilis, Proteus vulgaris, Pseudomonas aeruginosa, Salmonella enteritidis and Salmonella typhimurium were inoculated into beef, pork and chicken and incubated for 24 h at the optimum temperature of each bacterium. In ground beef, the total amount of amines (TAA) produced was highest in the sample inoculated with Bacillus cereus, followed by Enterobacter cloacae. In ground pork, TAA was highest in the sample inoculated with Alcaligenes faecalis, followed by Enterobacter cloacae, Proteus vulgaris and Bacillus cereus. TAA of chicken breast was highest in the sample inoculated with Alcaligenes faecalis, followed by Bacillus cereus and Lactobacillus alimentarius, while in chicken leg it was highest in the sample inoculated with Proteus vulgaris, followed by Enterobacter aerogenes, Enterobacter cloacae and Alcaligenes faecalis. Among the biogenic amines produced, cadaverine (CAD) was detected at the highest level, followed by putrescine (PUT) and tyramine (TYM), their order being reversed depending on the kind of microorganism in beef and pork. In chicken breast and leg, the CAD level was still the highest, but PUT, TYM or PHM was the second highest, depending upon the kind of microorganism inoculated. In total, Alcaligenes faecalis, Enterobacter cloacae and Bacillus cereus were the ones that produced larger amounts of BAs, regardless of the meat source.
CommonCrawl
Prove that negative absolute temperatures are actually hotter than positive absolute temperatures Could someone provide me with a mathematical proof of why, a system with an absolute negative Kelvin temperature (such that of a spin system) is hotter than any system with a positive temperature (in the sense that if a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system). thermodynamics statistical-mechanics temperature equilibrium spin-chains $\begingroup$ Obligatory movie title $\endgroup$ – Qmechanic♦ Oct 12 '17 at 19:37 From a fundamental (i.e., statistical mechanics) point of view, the physically relevant parameter is coldness = inverse temperature $\beta=1/k_BT$. This changes continuously. If it passes from a positive value through zero to a negative value, the temperature changes from very large positive to infinite (with indefinite sign) to very large negative. Therefore systems with negative temperature have a smaller coldness and hence are hotter than systems with positive temperature. Some references: D. Montgomery and G. Joyce. Statistical mechanics of "negative temperature" states. Phys. Fluids, 17:1139–1145, 1974. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730013937_1973013937.pdf E.M. Purcell and R.V. Pound. A nuclear spin system at negative temperature. Phys. Rev., 81:279–280, 1951. http://prola.aps.org/abstract/PR/v81/i2/p279_1 Section 73 of Landau and E.M. Lifshits. Statistical Physics: Part 1, Example 9.2.5 in my online book Classical and Quantum Mechanics via Lie algebras. Arnold NeumaierArnold Neumaier $\begingroup$ "From a fundamental (i.e., statistical mechanics) point of view, the physically relevant parameter is coldness". I am afraid, that is not correct. It is energy, as shown in this paper. For instance, (inverse) temperature does generally not allow determining the direction of heat flow, because it is only a derivative of $S$. $\endgroup$ – jkds Oct 15 '18 at 11:26 $\begingroup$ @jkds: Of course, internal energy, temperature, pressure, etc. are all physically relevant. What I had meant is that coldness (inverse) temperature is more relevant than temperature itself. $\endgroup$ – Arnold Neumaier Oct 15 '18 at 12:18 $\begingroup$ Sure, but what the authors showed was that temperature is not in one-to one correspondence to a system's macrostate. The same system can have the same temperature at completely different internal energies. So temperature, unlike $E/N$, can be a misleading descriptor of the system. $\endgroup$ – jkds Oct 22 '18 at 9:36 $\begingroup$ @jkds: In the canonical ensemble, the macrostate is determined by the temperature; in other ensembles (such as the grand canonical one), one needs of course additional parameters. Then temperature and internal energy are no longer in 1-1 correspondence but related by an equation of state involving the other parameters. But my answer is anyway independent of heat flow. $\endgroup$ – Arnold Neumaier Oct 22 '18 at 13:29 $\begingroup$ @jkds: Temperature is a property of the thermodynamic limit where the microcanonical ensemble is equivalent to the canonical ensemble. In the canonical ensemble the 1-1 correspondence is self-evident. Moreover one can prove convexity. Thus if you assume a non-convex entropy functional you are in the thermodynamic situation only after performing the Maxwell construction (corresponding here to taking the convex envelope). 
$\endgroup$ – Arnold Neumaier Oct 23 '18 at 16:40 Arnold Neumaier's comment about statistical mechanics is correct, but here's how you can prove it using just thermodynamics. Let's imagine two bodies at different temperatures in contact with one another. Let's say that body 1 transfers a small amount of heat $Q$ to body 2. Body 1's entropy changes by $-Q/T_1$, and body 2's entropy changes by $Q/T_2$, so the total entropy change is $$ Q\left(\frac{1}{T_2}-\frac{1}{T_1}\right). $$ This total entropy change must be positive (according to the second law), so if $1/T_1>1/T_2$ then $Q$ has to be negative, meaning that body 2 can transfer heat to body 1 rather than the other way around. It's the sign of $\frac{1}{T_2}-\frac{1}{T_1}$ that determines the direction that heat can flow. Now let's say that $T_1<0$ and $T_2>0$. Now it's clear that $\frac{1}{T_2}-\frac{1}{T_1}>0$ since both $1/T_2$ and $-1/T_1$ are positive. This means that body 1 (with a negative temperature) can transfer heat to body 2 (with a positive temperature), but not the other way around. In this sense body 1 is "hotter" than body 2. NathanielNathaniel $\begingroup$ This is right, and central point can be stated like this: when heat energy leaves a body at negative temperature, the entropy of that body increases. $\endgroup$ – Andrew Steane Oct 30 '18 at 10:01 $\begingroup$ Your thermodynamic proof is wrong, because in thermodynamics $T<0$ breaks the consistency of thermodynamics, see this Nature Physics paper Consistent thermostatistics forbids negative absolute temperatures $\endgroup$ – jkds Oct 30 '18 at 14:48 Take a hydrogen gas in a magnetic field. The nuclei can be aligned with the field, low energy, or against it, high energy. At low temperature most of the nuclei are aligned with the field and no matter how much I heat the gas I can never make the population of the higher energy state exceed the lower energy state. All I can do is make them almost equal, as described by the Boltzmann distribution. Now I take another sample of hydrogen where I have created a population inversion, maybe by some method akin to that used in a laser, so there are more nuclei aligned against the field than with it. This is my negative temperature material. What happens when I mix the samples. Well I would expect the population inverted gas to "cool" and the normal gas to "heat" so that my mixture ends up with the Boltzmann distribution of aligned and opposite nuclei. John RennieJohn Rennie Ah, but who says that negative absolute temperatures exist at all? This is not without its controversies. There's a nature paper here which challenges the very existence of negative absolute temperatures, arguing that negative temperatures come about due to a poor method of defining the entropy, which in turn is used to calculate the temperature. Other people insist that these negative temperatures are "real". So, depending on which side of this debate you align yourself with, these systems can be described with positive temperatures (and behave accordingly), or negative temperatures which have very exotic properties. Matt ThompsonMatt Thompson $\begingroup$ This does not answer the question (the proof that is asked for does not rely on whether such systems actually exist or not). $\endgroup$ – ACuriousMind♦ Jun 30 '15 at 10:26 $\begingroup$ The one thing that everyone agrees on is that their behavior is a bit surprising, and that is to be expected as we don't encounter systems with temperature ceilings in day-to-day life. 
In any case, that paper is cited in the comments on most of our "negative absolute temperature" questions. I can assure you that most of the answer authors are aware of it. But the question presupposes the definition of temperature which generates 'negative' values and this post doesn't really address it. $\endgroup$ – dmckee♦ Jul 1 '15 at 3:04 $\begingroup$ @ACuriousMind: What of E=-mcc? Matt Thompson's answer is to claim the negative temperatures are the similar beast of spurious mathematical solutions and have no meaning whatsoever. $\endgroup$ – Joshua May 22 '16 at 16:20 $\begingroup$ @matt-thompson: you are spot on. In fact, "temperature" as opposed to energy is only a derived quantity (a derivative of $S$) and nowhere near as fundamental. By looking at non-monotonously growing densities of states it is easy to construct paradoxa, like systems in which heat is flowing from the colder to the hotter bath, regardless of which entropy definition is used, see the authors' follow-up paper $\endgroup$ – jkds Oct 15 '18 at 6:36 $\begingroup$ For negative temperature, you require a thermal equilibrium in which dS/dU < 0. This can happen, but only in a metastable sense. However, much of equilibrium thermal physics can apply to long-lived metastable equilibria. The concept of negative temperature is consistent with this. (And by the way, if it were true that someone had found a way for heat to flow from a colder to a hotter bath (correctly defined) without entropy increasing elsewhere, then we would all know about it because they would be rich and our energy problems would be over.) $\endgroup$ – Andrew Steane Oct 30 '18 at 10:13 For the visually inclined, this article explains it simply. The maximum hotness definition is the middle image instead of the expected right image: Due to the unintuitive definition of heat, a sample that only includes hot particles is negative kelvin / beyond infinite hot, and as clear from the image would give energy to colder particles. Cees TimmermanCees Timmerman Negative temperature - yes I encountered that once: I seem to recall that it's the state that arises when, say, you have a system of magnetic dipoles in a magnetic field, and they have arrived at an equilibrium distribution of orientations ... and then the magnetic field is suddenly reversed and the distribution is momentariy backwards - basically the distribition given by substituting a negative value of T. Other scenarios can probably be thought of or actually brought into being that would similarly occasion this notion. I think possibly the answer is that the system is utterly out of thermodynamic equilibrium, whence the 'temperature' is just the variable that formerly was truly a temperature, and is now merely an artifact that gives this non-equilibrium distribution when rudely plugged into the distribution formula. So heat is transferred because you now have a highly excited system utterly out of equilibrium impinging upon a system that approximates a heat reservoir. I think there's no question really of accounting for the heat transfer by the usual method, ie when both temperatures are positive, of introducing the temperature difference as that which drives the transfer. And would it even be heat transfer atall if the energy is proceeding from a source utterly out of thermodynamic equilibrium? It's more that the transferred energy is becoming heat, I would say. AmbretteOrriseyAmbretteOrrisey $\begingroup$ Just to say, in the spin example the system is not "utterly out of equilibrium". 
Surprising as it may seem, the situation with spins more "up" than "down" is a metastable equilibrium, because the second derivative of the entropy is negative. This means that after a small fluctuation the system will move back or 'relax' to the negative temperature state, and this is the sense in which we can speak of thermal equilibrium here. $\endgroup$ – Andrew Steane Oct 30 '18 at 10:20 $\begingroup$ Really!? It's metastable is it? That's really quite remarkable! I feel a need to look at that more closely. Thankyou. $\endgroup$ – AmbretteOrrisey Oct 30 '18 at 10:23 None of the answers above are correct. Matt Thompson's answer is close. The OP asks for a mathematical proof that if a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system There is no proof for this statement because it is incorrect In statistical mechanics temperature is defined as \begin{equation} \frac{1}{T} = \frac{\partial S}{\partial E} \end{equation} i.e. a derivative of $S$. For $\it normal$ systems, like ideal gases, etc. $S(E)$ is a highly convex function of $E$ and there is a 1-to-1 relation between the system's macrostate and its temperature. However, in cases where $S$ is not a convex function of $E$, $\frac{\partial S}{\partial E}$ can take the same numerical value at different energies $E$ and therefore the same temperature. In other words, $T$, unlike $E$ does --in general-- not uniquely describe a system's macrostate. This situation occurs in systems that have a negative Boltzmann temperature (detail: for a negative Boltzmann temperature $S$ needs to be non-monotonous in $E$). An isolated system 1 with a negative Boltzmann temperature $T_B<0$ can have either higher or lower internal energy $E_1/N$ than another isolated system, system 2, that it gets coupled to. Depending on which system has a higher $E_i/N, i=1,2$ heat flows either from system 1 to system 2 or vice versa, regardless of the temperatures of the two systems before coupling. For details, see Thermodynamics in isolated systems Below I have attached Fig. 1, taken from the arxiv version of this work to illustrate this fact. I am not an author of any of the cited papers. Thermodynamics is compatible with the use of the Gibbs entropy, but not with the Boltzmann entropy. Showing this is a four line proof, see this Nature Physics paper Consistent thermostatistics forbids negative absolute temperatures. The Gibbs temperature (unlike Boltzmann temperature) is always positive, $T>0$. The attempt above by @Nathaniel at a purely thermodynamic proof of the OP's statement relies on the premise that $T<0$ is compatible with thermodynamics. This is not the case, see point 2. The proof given is invalid. For normal systems the distinction between Gibbs and Boltzmann temperature is practically irrelevant. The difference becomes drastic though, when edge cases are considered, e.g. truncated Hamiltonians or systems with non-monotonous densities of states. In fact, in most calculations in statistical mechanics textbooks the Gibbs entropy is used instead of the Boltzmann entropy. Remember calculating "all states up to energy $E$" instead of "all states in an $\epsilon$ shell at energy $E$"? That's all the difference. There is a whole series of attempts to publish comments on the Nature Physics article by Dunkel and Hilbert, but all got rejected. 
These all follow the pattern of trying to create a contradiction, but none were able to punch a hole into Dunkel and Hilbert's short mathematical argument. jkds $\begingroup$ It is not necessary for $S$ to be nonconvex in order to have a negative temperature. The canonical ensemble for a simple 2-state system has a negative temperature regime, but $S(E)$ is convex in that case. It is surely the case that if you move to the microcanonical ensemble then nonconvexity can make things more complicated, but that's tangential to this question. $\endgroup$ – Nathaniel Oct 30 '18 at 9:51 $\begingroup$ I had a quick look at the paper just in case, but I didn't change my mind. The proof in my answer really is a mathematical proof - it says that (i) if temperature is defined as $1/T=\frac{\partial S}{\partial E}$, and (ii) if the first and second laws hold, then (iii) heat must always flow from lower $1/T$ to higher $1/T$. If it doesn't then you're using the wrong ensemble or have made some other mistake - there is no other possibility. Neither non-convexity of the entropy nor non-uniqueness of $E(T)$ can change this. $\endgroup$ – Nathaniel Oct 30 '18 at 10:13 $\begingroup$ @Nathaniel the research result I quoted here, including an specific example, is precisely that temperature (regardless which entropy is used) does not allow to deduce the direction of heat flow. My answer is specific to the OPs question and short, because I did not want to go into all the details. Please see the linked paper and others by the same authors for answers to your questions. $\endgroup$ – jkds Oct 30 '18 at 11:10 $\begingroup$ Yes, I read the paper, albeit briefly, as I said. They review multiple statistical definitions of the entropy and temperature, and claim that for some of them the temperature doesn't predict the direction of heat flow. But that implies a violation of the second law, so it just means those definitions are not the correct ones for the system in question. I do agree with them that the temperature doesn't uniquely determine the thermodynamic state if the entropy isn't convex, but they seem to say this implies it can't predict the direction of heat flow, which doesn't actually follow at all. $\endgroup$ – Nathaniel Oct 30 '18 at 14:13 $\begingroup$ @Nathaniel "But that implies a violation of the second law ". Not correct. The second law is discussed in the paper in Sec. V. Of the reviewed entropy definitions only one --the Gibbs entropy-- satisfies the second law strictly. $\endgroup$ – jkds Nov 2 '18 at 8:42
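A quick numerical sketch of the entropy-balance argument debated above, using Python and placeholder temperatures: for a small amount of heat $Q$ leaving body 1 at temperature $T_1$ and entering body 2 at $T_2$, the total entropy change is $\Delta S = Q(1/T_2 - 1/T_1)$, and the second law only allows the direction of flow for which $\Delta S > 0$.

def total_entropy_change(q, t1, t2):
    # Entropy change when heat q leaves body 1 (temperature t1, in kelvin)
    # and enters body 2 (temperature t2); negative temperatures are allowed.
    return q * (1.0 / t2 - 1.0 / t1)

# Body 1 at T1 = -300 K (e.g. a population-inverted spin system), body 2 at T2 = +300 K.
# q > 0 means heat flows from body 1 to body 2.
for q in (+1.0, -1.0):
    print(q, total_entropy_change(q, t1=-300.0, t2=+300.0))
# Only q > 0 gives a positive entropy change, so heat can only flow from the
# negative-temperature body to the positive-temperature one, which is the
# sense in which the negative-temperature body is "hotter".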
CommonCrawl
arXiv.org > astro-ph > arXiv:1912.08338 Astrophysics > Solar and Stellar Astrophysics Title:A stripped helium star in the potential black hole binary LB-1 Authors:Andreas Irrgang, Stephan Geier, Simon Kreuzer, Ingrid Pelisoli, Ulrich Heber (Submitted on 18 Dec 2019 (v1), last revised 3 Jan 2020 (this version, v2)) Abstract: The recently claimed discovery of a massive ($M_\mathrm{BH}=68^{+11}_{-13}\,M_\odot$) black hole in the Galactic solar neighborhood has led to controversial discussions because it severely challenges our current view of stellar evolution. A crucial aspect for the determination of the mass of the unseen black hole is the precise nature of its visible companion, the B-type star LS V+22 25. Because stars of different mass can exhibit B-type spectra during the course of their evolution, it is essential to obtain a comprehensive picture of the star to unravel its nature and, thus, its mass. To this end, we study the spectral energy distribution of LS V+22 25 and perform a quantitative spectroscopic analysis that includes the determination of chemical abundances for He, C, N, O, Ne, Mg, Al, Si, S, Ar, and Fe. Our analysis clearly shows that LS V+22 25 is not an ordinary main sequence B-type star. The derived abundance pattern exhibits heavy imprints of the CNO bi-cycle of hydrogen burning, that is, He and N are strongly enriched at the expense of C and O. Moreover, the elements Mg, Al, Si, S, Ar, and Fe are systematically underabundant when compared to normal main-sequence B-type stars. We suggest that LS V+22 25 is a stripped helium star and discuss two possible formation scenarios. Combining our photometric and spectroscopic results with the Gaia parallax, we infer a stellar mass of $1.1\pm0.5\,M_\odot$. Based on the binary system's mass function, this yields a minimum mass of $2-3\,M_\odot$ for the compact companion, which implies that it may not necessarily be a black hole but a massive neutron- or main sequence star. The star LS V+22 25 has become famous for possibly having a very massive black hole companion. However, a closer look reveals that the star itself is a very intriguing object. Further investigations are necessary for complete characterization of this object. Comments: Accepted for publication in A&A (Astronomy and Astrophysics) Subjects: Solar and Stellar Astrophysics (astro-ph.SR) Journal reference: A&A 633, L5 (2020) DOI: 10.1051/0004-6361/201937343 Cite as: arXiv:1912.08338 [astro-ph.SR] (or arXiv:1912.08338v2 [astro-ph.SR] for this version) From: Andreas Irrgang [view email] [v1] Wed, 18 Dec 2019 01:58:48 UTC (709 KB) [v2] Fri, 3 Jan 2020 14:10:56 UTC (714 KB) astro-ph.SR
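For readers who want to see where the quoted minimum companion mass comes from: for a single-lined spectroscopic binary the mass function satisfies $f(M) = M_2^3 \sin^3 i / (M_1 + M_2)^2$, and the minimum $M_2$ follows by setting $\sin i = 1$ and solving for $M_2$ given the visible star's mass $M_1$. The sketch below does this numerically; the mass function value used is a placeholder, since the abstract does not quote it.

from scipy.optimize import brentq

def minimum_companion_mass(f_msun, m1_msun):
    # Solve f = m2**3 / (m1 + m2)**2 for m2 with sin(i) = 1 (edge-on orbit),
    # i.e. the smallest companion mass compatible with the mass function.
    return brentq(lambda m2: m2**3 / (m1_msun + m2)**2 - f_msun, 1e-6, 1e3)

# Placeholder mass function of ~1.2 solar masses combined with the 1.1 solar mass
# stripped-star estimate from the abstract; this returns roughly 2.5 M_sun,
# in line with the 2-3 M_sun minimum quoted for the compact companion.
print(minimum_companion_mass(f_msun=1.2, m1_msun=1.1))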
CommonCrawl
Who gains the most from improving working conditions? Health-related absenteeism and presenteeism due to stress at work Beatrice Brunner ORCID: orcid.org/0000-0002-6010-51841, Ivana Igic2, Anita C. Keller3 & Simon Wieser1 The European Journal of Health Economics volume 20, pages 1165–1180 (2019) Work stress-related productivity losses represent a substantial economic burden. In this study, we estimate the effects of social and task-related stressors and resources at work on health-related productivity losses caused by absenteeism and presenteeism. We also explore the interaction effects between job stressors, job resources and personal resources and estimate the costs of work stress. Work stress is defined as exposure to an unfavorable combination of high job stressors and low job resources. The study is based on a repeated survey assessing work productivity and workplace characteristics among Swiss employees. We use a representative cross-sectional data set and a longitudinal data set and apply both OLS and fixed effects models. We find that an increase in task-related and social job stressors increases health-related productivity losses, whereas an increase in social job resources and personal resources (measured by occupational self-efficacy) reduces these losses. Moreover, we find that job stressors have a stronger effect on health-related productivity losses for employees lacking personal and job resources, and that employees with high levels of job stressors and low personal resources will profit the most from an increase in job resources. Productivity losses due to absenteeism and presenteeism attributable to work stress are estimated at 195 Swiss francs per person and month. Our study has implications for interventions aiming to reduce health absenteeism and presenteeism. A loss of work productivity can be a result of health impairments and arise from absenteeism (being away from work due to illness or disability) and presenteeism (being present at work but constrained in certain aspects of job performance by health problems) [1]. Maintaining a healthy and productive workforce is increasingly challenging due to the continuing structural changes in the working environment, an aging workforce and an increasing number of employees affected by stress at work [2]. Gaining better knowledge of the stress-related causes of absenteeism and presenteeism is therefore of high social and economic importance. A detailed analysis of the drivers of work stress-related productivity losses may be particularly useful to understand which employees are most at risk of incurring stress-related productivity losses and to identify those who might profit the most from interventions that improve work conditions. Productivity losses are determined by multiple factors [3], but work-related factors have often been proposed as especially important [4]. According to the models developed in occupational health psychology, such as the Job-Demands Control model (JDC) [5, 6] and the Job-Demands Resources model (JDR) [7], unfavorable job conditions are associated with high levels of job stressors and a lack of job resources. Exposure to such job conditions can lead to stress among employees, resulting in decreased performance and motivation and, over time, in serious health problems [8].
However, only a handful of empirical studies have analyzed these propositions in relation to productivity losses caused by absenteeism and presenteeism (e.g., [4, 9, 10]). For example, a lack of job control, which is a well-established work resource [11] defined as the ability to determine when and where work is done, has been shown to increase the risk of presenteeism [12]. Another study found a similar relation with sickness absence but only for women [13]. Additionally, high time demands and physical demands at work have been shown to be associated with presenteeism and absenteeism [14, 15]. In this study, we estimate the effects of work stressors and resources on health-related productivity losses caused by absenteeism and presenteeism, and add to the current literature in three ways. First, we estimate the effects of task-related and social job stressors and resources on health-related productivity losses, whereas the current literature mainly focused on task-related factors. Empirical evidence suggests that social stressors may be especially harmful to employee health and well-being, even more so than other job stressors [16]. Furthermore, the recent "Stress-as-Offense-to-Self" model underlines the relevance of social job resources, such as appreciation at work, and highlights its absence as particularly stressful for employees [17]. Absenteeism and presenteeism can also be explained by the social exchange perspective [18]. According to this approach, the employee–organization relationship is a trade of effort and loyalty for benefits such as pay, social support and recognition [19]. When employees are satisfied with this mutual exchange, they will be engaged in their jobs. However, when employees perceive the benefits received as too low compared to their contribution, they may withdraw from the relationship. Absenteeism and presenteeism can thus be seen as a method of restoring equity in the employee–organization relationship [20]. Some previous studies support these assumptions, although evidence is still scarce. Injustice at work [20], low organizational support [18] and low workgroup cohesiveness [21] have, for example, been shown to increase the risk of absenteeism. Similarly, negative relationships with colleagues [22], role ambiguity [23], and workplace bullying [24] have been shown to increase the risk of presenteeism. A few studies also provide evidence of the relevance of positive social aspects at work. Employees working under a supportive supervisor [25] who demonstrated strong integrity [26] showed less presenteeism and absenteeism. Second, in addition to social and task-related stressors and resources at work, we consider personal resources. According to the JDR model, job and personal resources can affect health and organizational outcomes both directly and indirectly; personal resources might enable employees not only to deal with job demands in a resilient way but also to make better use of available job resources [7]. Previous studies have shown that personal resources are related to absenteeism; however, studies on presenteeism rarely consider personal factors [27]. We included occupational self-efficacy as a relevant personal resource for individuals in organizations [28, 29], expecting self-efficacy to act as a buffer for the negative effects of job stressors [7]. Occupational self-efficacy is defined as the belief or confidence in one's ability to successfully fulfill a task or cope with difficult tasks or problems [30]. 
Previous research has shown direct beneficial effects of occupational self-efficacy on productivity [28, 29], work-related behavior [31] and job attitudes [32] and demonstrated its moderating effects in the stressors–strain relationship [33, 34]. However, to date, no studies have explored its effects on health-related productivity losses. Based on the conservation of resources theory (COR, [35]), according to which individuals who lack resources are more vulnerable to resource loss and less capable of resource gain (negative spiral), we expected that employees lacking both job and personal resources are most at risk of experiencing health-related productivity losses due to absenteeism and presenteeism when job stressors increase. Furthermore, in line with the "gain paradox principle" [35], according to which resource gains become more important when resource loss is high, we expect that an increase in job resources is especially important for employees with low self-efficacy and high stressors. Third, we contribute to the literature on the economic burden of work stress. Although work stress and its consequences for employees and employers are high on the political agendas of European institutions and policy-makers [36], evidence of the economic burden of work stress is scant, especially regarding stress-related presenteeism [37]. The few available studies suggest that the costs of work stress are substantial [38]. We add to previous studies on the productivity losses caused by work stress by estimating the cost of employees' health-related productivity losses due to presenteeism and absenteeism caused by exposure to an imbalance between job stressors and job resources. Such an imbalance, according to occupational stress models (e.g., JDC, JDCR [39]), results in work stress and has a high probability of leading to serious health problems. We calculated the total health-related productivity loss due to working under unfavorable job conditions per employee and month, considering both absenteeism and presenteeism. The aim of this study was threefold. First, we estimated the effects of task-related and social stressors and resources on health-related productivity losses due to absenteeism and presenteeism. We assessed stressors and resources at work based on six indices measuring (1) task-related work stressors (time pressure, task uncertainty, performance constraints, and mental and qualitative overload), (2) social work stressors (social stressors from supervisor and co-workers), (3) task-related work resources (job control and task significance), (4) social work resources (social support from supervisor, appreciation at work), as well as (5) overall work stressors and (6) overall work resources. We controlled for a wide range of confounding factors, such as socio-economic characteristics, job characteristics, private demands, and personal characteristics (self-efficacy). Second, we explored the interaction effects between job stressors, job resources, and personal resources. We aimed to understand which employees are most at risk if job stress increases and which employees would benefit the most from interventions improving the balance between job stressors and resources. Third, we built an economic model estimating the productivity losses caused by employee exposure to an imbalance between job stressors and resources. We used data from a Swiss workforce survey carried out in two measurement waves.
The survey consisted of two datasets: a representative cross-sectional dataset based on the first wave and a longitudinal dataset based on both waves. We used both datasets because they have different strengths and weaknesses. The cross-sectional wave 1 dataset is representative of the Swiss workforce regarding gender, age, region and industry branch. Moreover, it contains information on occupational self-efficacy, allowing us to explore the interaction effects between job stressors, job resources and occupational self-efficacy. However, due to its cross-sectional nature, it could not be used to identify causal effects. The longitudinal wave 1–2 dataset allowed us to overcome this weakness, as it permits the application of methodologically superior panel data estimation methods. However, wave 1–2 suffers from considerable attrition in the second wave and does not allow us to explore interaction effects because occupational self-efficacy was not assessed in the second wave. Wave 1 The first wave was conducted in February 2014. The recruitment of participants was based on a large Swiss Internet panel including full- and part-time employees. The sample was stratified by gender, age, region and industry branch. Participants were recruited randomly from the sample by phone and e-mail to complete the online questionnaire [40]. A total of 3758 employees completed the questionnaire. Of these, 59 were excluded because of timing and response patterns and 318 because of missing or implausible information. The final cross-sectional sample consisted of 3381 employees who are representative of the Swiss workforce regarding gender, age, region and industry branch. Wave 1–2 The second wave was conducted in February 2015. Of the 3381 participants of wave 1, 352 had left the panel and 196 were no longer economically active and were therefore excluded. Hence, 2833 individuals were re-contacted, of whom 2125 (75%) participated and 1759 (62%) completed the questionnaire. We excluded 93 individuals because of timing and response patterns and 153 because of missing information on industry branch or work productivity. Plausibility checks on income and hours worked led to the exclusion of an additional 14 individuals. Our final longitudinal sample included N = 1513 individuals who had participated in both waves. The longitudinal wave 1–2 data set was used to test the robustness of the cross-sectional estimations. We accounted for selective attrition by estimating and applying inverse-probability-of-attrition weights. Our dependent variable was individual health-related productivity loss, corresponding to the sum of the percentage of absenteeism (percentage of work time missed due to health) and percentage of presenteeism (percentage of work time affected by productivity impairment due to health problems while working). These data were collected with the Work Productivity and Activity Impairment-General Health (WPAI-GH) questionnaire. The WPAI-GH is a psychometrically tested instrument measuring absenteeism, presenteeism and overall health-related work productivity losses (corresponding to the sum of absenteeism and presenteeism) with good reliability, validity, generalizability and practicability [41,42,43]. The WPAI-GH questionnaire is composed of five questions: Q1 = currently employed; Q2 = hours missed due to health problems; Q3 = hours missed due to other reasons (e.g., vacation); Q4 = hours actually worked; Q5 = degree to which health affected productivity while working (using a 0–10 Visual Analogue Scale). 
Following the coding and scoring rules of the WPAI developers, we obtained the percentage of health-related work productivity losses [Q2/(Q2 + Q4) + ((1 − Q2/(Q2 + Q4)) × Q5/10)], which constitutes our dependent variable. The main advantages of the WPAI over other productivity questionnaires [e.g., health and work questionnaire (HWQ) or health and work performance questionnaire (HPQ)] are the possibility of transforming outcomes into monetary values, as outcomes are expressed as impairment percentages. Furthermore, it uses a 1-week rather than a 4-week recall period, which significantly reduces recall bias [42].

Main explanatory variables

Our main explanatory variables are six indices measuring task-related and social stressors and resources at work. The indices were constructed based on several task-related and social work conditions proposed by the theoretical and empirical literature to be relevant regarding health and key organizational variables such as productivity and motivation (e.g., JDC, JDCR [7, 8, 11]). We included four well-established task-related stressors (time pressure, task uncertainty, performance constraints, and mental and qualitative overload) and two resources (job control and task significance). In addition to the task-related factors, we included two social resources (social support from supervisor, appreciation at work) [25, 44, 45] and two social stressors (social stressors from supervisor and co-workers) [46, 47] as suggested by theory [8] and empirical research [48, 49]. We also included occupational self-efficacy, a personal resource, to test proposed interaction effects [33, 34].

Job stressors

We assessed four task-related stressors with items from the Instrument for Stress-Oriented Task Analysis (ISTA; [50]), including time pressure (e.g., "How often must you finish work later because of having too much to do?"), task uncertainty (e.g., "How often do you receive contradictory instructions from different supervisors?"), performance constraints (e.g., having to work with inadequate devices or obsolete information) [50], and mental and qualitative overload at work (three items, e.g., having to perform tasks that exceed one's skills) [51]. With the exception of the last scale, which has three items, each of the scales contains four items. We assessed social stressors with the social stressors scale by Frese and Zapf, which includes two scales each with five items. One scale focuses on conflicts or animosities and negative group climate among co-workers (e.g., "With some colleagues there is often conflict"), and the other focuses on conflicts with supervisors (e.g., "I often quarrel with my boss") [52].

We assessed job control using the ISTA [50]. The five items measured job control by evaluating respondents' freedom to choose the time (e.g., "To what degree are you able to decide on the amount of time you will be working on a certain task?") and method (e.g., "Can you decide yourself which way to carry out your work?") for accomplishing tasks at work. The second task-related resource was task significance ("In my job, one can produce something or carry out an assignment from A to Z"), which was measured with one item from the Salutogenetische Subjektive Arbeitsanalyse (SALSA; [51]) instrument.
Social job resources were measured by four items evaluating supportive behavior from supervisors (e.g., a line manager lets a worker know how well a job was done), which was also measured with items from the SALSA [51], and appreciation at work, which was assessed with a single item based on the Appreciation at Work Scale ("I feel generally appreciated in my job") [53]. With the exception of appreciation, all items were answered on a 5-point Likert scale, with responses ranging from 1 (very little/not at all) to 5 (very much). Appreciation, originally answered on a 7-point Likert scale, was transformed into a 5-point scale. We assessed occupational self-efficacy with a four-item scale from Rigotti, Schyns, and Mohr [54]. Work-related self-efficacy measures the belief in one's ability to cope with difficult tasks and problems at work (e.g., "I can remain calm when facing difficulties in my job because I can rely on my abilities") and was assessed only in the first wave of the survey (wave 1). We created six indices measuring the level of job demands and job resources. First, to test the overall effects of job stressors and resources on employees' health-related productivity losses, we built an overall job stressors and an overall job resources measure. These measures were constructed by averaging over all six stressors and four resources described above, representing demands and resources from the JDC model. This procedure has been previously used [55]. Second, to test the distinct productivity effects of task-related and social stressors and resources, we constructed four additional indices measuring (1) task-related stressors, (2) social stressors, (3) task-related resources and (4) social resources. The four measures were constructed similarly, by averaging over the single task-related and social job stressors and resources. Table A.1 presents the Cronbach alpha values for the single stressors and resources as well as for the indices. For the analysis, we used the standardized values of the six job stressor and resource measures. We considered a variety of potential confounders that, based on previous evidence, were expected to be associated with work productivity as well as job stressors and resources. First, we controlled for several demographic and socio-economic characteristics (gender, age, number of children, marital status, educational level, and whether the respondent had Swiss citizenship) [56, 57]. Second, we controlled for labor market and job characteristics such as industry branch, occupation, company size, job tenure, average number of working hours, shiftwork, part-time employment, and managerial function, as has been done in previous studies [56, 58, 59]. Third, we controlled for chronic physical health conditions such as asthma, allergies, cancer, chronic bronchitis or emphysema, diabetes, kidney disease, osteoarthritis or rheumatoid arthritis, osteoporosis, and permanent injury after an accident, as the negative relationship between health problems and work productivity is well established [60, 61]. We did not, however, consider diseases with often psychosomatic causes, such as migraines or depression, as they may be a part of the outcome [11, 62] and therefore represent bad control variables for our research question [63]. Finally, we controlled for family-to-work conflict [64] to account for the potential productivity effects of mood spillovers, which have been identified in previous studies [65]. 
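As a rough illustration of the index construction described earlier in this section (averaging the single scales into overall, task-related and social indices and then standardizing them), one possible sketch is shown below. The column names are hypothetical placeholders; this is not the authors' code.

import pandas as pd

# Hypothetical column names holding the scale scores for each stressor and resource
task_stressors   = ["time_pressure", "task_uncertainty", "performance_constraints", "overload"]
social_stressors = ["social_stress_supervisor", "social_stress_coworkers"]
task_resources   = ["job_control", "task_significance"]
social_resources = ["supervisor_support", "appreciation"]

def build_indices(df: pd.DataFrame) -> pd.DataFrame:
    """Average the single scales into six indices and return their standardized values."""
    idx = pd.DataFrame(index=df.index)
    idx["stressors_task"]    = df[task_stressors].mean(axis=1)
    idx["stressors_social"]  = df[social_stressors].mean(axis=1)
    idx["resources_task"]    = df[task_resources].mean(axis=1)
    idx["resources_social"]  = df[social_resources].mean(axis=1)
    idx["stressors_overall"] = df[task_stressors + social_stressors].mean(axis=1)
    idx["resources_overall"] = df[task_resources + social_resources].mean(axis=1)
    return (idx - idx.mean()) / idx.std()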
Econometric framework

Since our analysis was based on a cross-sectional dataset (wave 1) and on a longitudinal dataset (wave 1–2), we applied both cross-sectional and panel-data estimation methods. Cross-sectional methods were used to explore the association of health-related productivity loss with job stressors and job resources as well as to explore the interaction between job stressors, job resources, and personal resources in wave 1. The wave 1 dataset is representative of the Swiss workforce and holds information on occupational self-efficacy, which the second wave does not. However, cross-sectional estimation methods require the key regressors to be strictly exogenous conditional on covariates in order to have a causal interpretation. Panel data methods allow for relaxing this strong assumption by controlling for unobserved time-invariant heterogeneity. We used the wave 1–2 panel data set to test the robustness of the cross-sectional estimation results, estimating fixed effects models while accounting for selective attrition using inverse-probability-of-attrition weights.

Cross-sectional estimation

The associations between health-related productivity losses and job stressors and resources were examined based on five model specifications with hierarchical adjustment and estimated by ordinary least squares (OLS). The fully specified model takes the following form:

$$\begin{aligned} Y_{i} & = \alpha + \beta_{1} \dot{R}_{i}^{j} + \beta_{2} \dot{S}_{i}^{j} + \gamma {\mathbf{X}}'_{i} + \delta {\mathbf{J}}'_{i} \\ & \quad + \varphi_{\text{r}} + \mu \dot{S}_{i}^{p} + \theta {\mathbf{H}}'_{i} + \vartheta \dot{R}_{i}^{p} + \varepsilon_{i}, \\ & \quad {\text{with}}\quad \dot{R}_{i}^{j} \in \left( \dot{R}_{i}^{j}, (\dot{R}_{i}^{j,{\text{social}}}, \dot{R}_{i}^{j,{\text{task}}}) \right) \\ & \quad\quad {\text{and}}\quad \dot{S}_{i}^{j} \in \left( \dot{S}_{i}^{j}, (\dot{S}_{i}^{j,{\text{social}}}, \dot{S}_{i}^{j,{\text{task}}}) \right). \end{aligned}$$

\(Y_{i}\) denotes the percentage productivity losses of individual i due to sickness absenteeism and presenteeism. \(\dot{R}_{i}^{j}\) and \(\dot{S}_{i}^{j}\) represent the level of resources and stressors at individual i's current job (the dots representing standardized values). The fully specified model distinguishes between task-related and social job stressors and resources.

The hierarchical adjustment involves the following five model specifications. We started by estimating simple correlations with Model 1 (CS-1), including only job resources \((\dot{R}_{i}^{j})\) and job stressors \((\dot{S}_{i}^{j})\). Model 2 (CS-2) additionally included known confounding variables related to socio-economic and job characteristics (\({\mathbf{X^{\prime}}}\) and \({\mathbf{J^{\prime}}}\), see covariates section for more details). This model also included regional fixed effects, denoted by \(\varphi_{\text{r}}\) with r indexing the canton of residence of individual i. Model 3 (CS-3) additionally included family-related stressors (\(\dot{S}_{i}^{p}\)), as previous literature suggests that mood disturbances can spill over from the family domain to the work domain [65]. Model 4 (CS-4) added a set of nine dummy variables indicating chronic health conditions (\({\mathbf{H^{\prime}}}\), see "Covariates" section) to account for the relationship between chronic conditions, such as asthma and diabetes, and work productivity [60, 61]. Finally, our fully specified Model 5 (CS-5) included occupational self-efficacy (\(\dot{R}_{i}^{p}\)).
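A hedged sketch of how such a hierarchical set of OLS specifications could be coded is given below, using Python and statsmodels. The column names and the covariate subsets are illustrative placeholders, not the authors' actual variable list or code.

import statsmodels.formula.api as smf

# Hypothetical column names; each block adds one layer of adjustment (CS-1 ... CS-5)
blocks = [
    "",                                                            # CS-1: stressors and resources only
    " + age + female + C(education) + C(industry) + C(canton)",    # CS-2: socio-economic/job controls, region FE
    " + family_work_conflict",                                     # CS-3: family-related stressors
    " + asthma + diabetes + arthritis + osteoporosis",             # CS-4: chronic conditions (subset shown)
    " + self_efficacy_z",                                          # CS-5: occupational self-efficacy
]

formula = "productivity_loss ~ resources_z + stressors_z"
models = []
for extra in blocks:
    formula += extra
    models.append(smf.ols(formula, data=wave1).fit(cov_type="HC1"))  # robust standard errors

for name, m in zip(["CS-1", "CS-2", "CS-3", "CS-4", "CS-5"], models):
    print(name, m.params[["stressors_z", "resources_z"]].round(2).to_dict())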
Under the assumption of strict exogeneity, \(\beta_{1}\) and \(\beta_{2}\) represent the percentage-point change in health-related productivity losses due to a one-standard-deviation change in job resources and job stressors. Note that there is a potential issue of reverse causality. However, this problem is likely to be mitigated as the dependent variable referred to the week before the interview, while the key regressors referred to the current work situation in general and may therefore be considered predetermined.

Panel data estimation and inverse-probability-of-attrition weighting

We assessed the robustness of the fully specified cross-sectional model (CS-5) by estimating a fixed effects model based on the longitudinal wave 1–2 dataset while accounting for selective attrition using inverse-probability-of-attrition weighting. The fixed effects model differed from the fully specified cross-sectional model in Eq. (1) in three ways. First, it included individual fixed effects. This allowed us to relax the assumption of strict exogeneity as the model controlled for unobserved time-invariant heterogeneity. Second, it excluded occupational self-efficacy because it was not observed in the second wave. While the time-invariant component of occupational self-efficacy was captured by the individual fixed effect, we could not control for its time-variant component, i.e., potential productivity effects resulting from changes in occupational self-efficacy. However, self-efficacy is considered to be stable over time [66]. Third, for the fixed effects model to provide unbiased estimates of \(\beta_{1}\) and \(\beta_{2}\), it is essential to avoid attrition bias; thus, the fixed effects model weights observations by inverse-probability-of-attrition weights.

Inverse-probability-of-attrition weighting involved two steps. First, for each period with potential selective attrition (in our case, wave 2), a dummy variable indicating second-wave participation was regressed on a series of covariates in wave 1, and probabilities \(\hat{P}_{i2}\) were fitted using logistic regression. The covariates also included variables on attitudes, character traits, mental health and well-being, and many other variables not used in Eq. (1) (see Appendix A.2 for specification details). In the second step, the objective function was weighted by the inverse probability weights 1/\(\hat{P}_{i2}\). The intuition behind these weights was that respondents with characteristics similar to those of individuals missing due to attrition are up-weighted in the analysis and vice versa. The method of inverse-probability-of-attrition weighting corrects for selection bias under the assumption that, conditional on observables in the first wave, second-wave participation is independent of health-related productivity and job stressors and resources in the second wave [67].

We estimated the costs of job stress based on the representative wave 1 dataset. Using the results of Eq. (1), we proceeded in four steps: first, job stress was defined as a binary variable taking the value 1 if job stressors exceeded job resources, which was the case if the net effect of workplace conditions on productivity losses was positive, and 0 otherwise (\({\text{job}}\;{\text{stress}}_{i} = 1[ \beta_{1} \dot{R}_{i}^{j} + \beta_{2} \dot{S}_{i}^{j} > 0 ]\)). Second, we converted the individual percentage productivity losses into monetary values by multiplying them with monthly earnings.
This yielded the observed monthly production loss in Swiss francs (CHF) caused by health problems for an average employee in February 2014. The third step involved a counterfactual prediction. We predicted the health-related production loss that would have been observed if each employee experiencing job stress had a net workplace condition effect of zero, i.e., would not have been exposed to job stress. This yielded the predicted monthly production loss caused by health problems for an average employee in the absence of job stress. The fourth and final step consisted of taking the difference between the observed and the predicted production losses, which yielded the part of the health-related production loss attributable to job stress.

Descriptive statistics of wave 1

Of the 3381 wave 1 participants, 54% were female, and the average age was 42.3 years (Table 1). Almost two-thirds were employed full-time (64%), and approximately one-fifth performed shift work (20%). In terms of job category, approximately 18% were self-employed, were firm owners or worked in independent professions, 31% were executive employees, 37% were non-executive employees, 17% were skilled workers, and 2.3% were unskilled manual workers.

Table 1 Descriptive statistics of the cross-sectional data (wave 1)

The average health-related productivity losses amounted to 14.3% of the working time, corresponding to 6 h per week for a full-time employee. At 10.9%, presenteeism had a more important role than absenteeism (3.4%). These findings are in line with those of other studies using the WPAI (e.g., [68]). Moreover, Fig. 1a shows that 65% of the participants reported health-related productivity losses of zero. Of those with a non-zero loss, the majority reported a loss between 10 and 20%, corresponding to 4–8 h per week. On average, job resources (M = 3.85, SD = 0.66) were higher than job stressors (M = 2.03, SD = 0.51), and this difference was more pronounced for social than for task-related job stressors and resources. Furthermore, both job stressors and job resources exhibited a distinctive asymmetrical distribution with opposite skewness, with the majority of employees reporting above-average resources and below-average stressors (Fig. 1b).

Fig. 1 Distribution of key variables

Correcting for selective attrition in wave 2

Table 2 reports the scale of selective attrition in the wave 1–2 subsample and shows how well the inverse-probability-of-attrition weights performed in adjusting for it. Comparing the characteristics between the wave 1 sample (column 1) and the unweighted wave 1–2 subsample (column 2) suggests that attrition was non-random as participation in the second wave was significantly related to age, office size, working in the art sector, and absenteeism. In particular, participants in the second wave were on average younger (43 vs. 42.3 years), worked in smaller offices (9.7 vs. 3.3 co-workers), showed higher absenteeism (2.5% vs. 3.4%) and were more likely to work in the art sector (2.6% vs. 6%) than participants in the first wave.

Table 2 Attrition and the results of inverse-probability-of-attrition weighting

The comparison of the characteristics between the wave 1 sample (column 1) and the weighted wave 1–2 subsample (column 3) illustrates that inverse-probability-of-attrition weighting is capable of reducing the differences between the two samples considerably. Small differences only remain with respect to office size and the probability of working in the art sector.
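A minimal sketch of the two-step weighting procedure described above might look as follows. The wave-1 covariates shown are an illustrative subset, not the full attrition model reported in Appendix A.2, and the data frame and column names are assumptions.

import statsmodels.formula.api as smf

# Step 1: model second-wave participation on wave-1 characteristics and fit probabilities
attrition_model = smf.logit(
    "participated_wave2 ~ age + female + absenteeism + office_size + C(industry)",
    data=wave1,
).fit()
p_hat = attrition_model.predict(wave1)

# Step 2: weight panel observations by the inverse participation probability,
# so respondents who resemble the drop-outs are up-weighted
wave1["ipw"] = 1.0 / p_hat
panel = panel.merge(wave1[["person_id", "ipw"]], on="person_id", how="left")
# The fixed effects regression is then estimated with weights = panel["ipw"]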
Productivity effect of job stressors and job resources

Table 3 presents the main regression results using cross-sectional wave 1 data, with the first five models building up from simple correlations (CS-1) to the fully specified model presented in column five (CS-5). Column six (FE-5) shows the results of the fully specified fixed effects model based on longitudinal data (waves 1–2). Finally, columns seven and eight (CS-6 and FE-6) present the effects of task-related and social job stressors and resources on productivity separately.

Table 3 Effects of job stressors and resources on health-related productivity losses

The results for the cross-sectional data show that (CS-1) a one-standard-deviation increase in job stressors is associated with an increase in health-related productivity losses of 4.4 percentage points (95% CI 3.2–5.7), and a one-standard-deviation increase in job resources is associated with a decrease in health-related productivity losses of 1.7 percentage points (95% CI − 0.5 to − 2.9). Adding socio-economic and job characteristics (CS-2) increases the point estimate of job stressors to 4.8 (95% CI 3.6–6.1) but decreases the point estimate of job resources to − 1.5 (95% CI − 0.3 to − 2.8). As expected, including family conflicts (CS-3) drives down the point estimate of job stressors to 4.1 (95% CI 2.7–5.4), while the point estimate of job resources remains unchanged. Adjusting for chronic diseases (CS-4) leaves both coefficients nearly unchanged, implying that chronic conditions are related neither to job stressors nor to resources. Finally, adding occupational self-efficacy (CS-5) reduces the point estimate of job resources to − 1.3 (95% CI − 0.1 to − 2.5) and, to a lesser extent, the point estimate of job stressors to 4 (95% CI 2.7–5.3). This implies a positive correlation between occupational self-efficacy and both job stressors and resources. One explanation might be that individuals with a high level of occupational self-efficacy may also seek job tasks (or jobs) that are more challenging and demanding, as they feel capable of mastering them, and such jobs are typically combined with high job resources (e.g., [69]). Also note that occupational self-efficacy significantly reduces productivity losses due to absenteeism and presenteeism.

The point estimates of the fixed effects regression based on wave 1–2 (FE-5) appear statistically equivalent to those of the fully specified model using wave 1 (CS-5). A one-standard-deviation increase in job stressors leads to an increase in productivity losses of 3.8 percentage points (95% CI 1.4–6.2), whereas a one-standard-deviation increase in job resources leads to a decrease in productivity losses of 1.2 percentage points (95% CI 1.3 to − 3.7). While the point estimates are statistically equivalent, the standard errors have nearly doubled due to the smaller sample size, which renders the coefficient of job resources insignificant. Converted into elasticities, our results suggest an elasticity of health-related productivity losses of 1.09 with respect to job stressors and an elasticity of − 0.53 with respect to job resources (as shown in the last two rows).

The results of distinguishing between social and task-related job stressors and job resources are presented in the last two columns of Table 3: CS-6 reports the OLS results and FE-6 the fixed effects results. The coefficients on both task-related and social job stressors are positive and statistically significant in both models.
Moreover, although productivity losses seem to be slightly more affected by task-related than by social job stressors, the hypothesis of equal effects cannot be rejected (see Table 3, notes). A similar pattern emerges for job resources, as we cannot reject the hypothesis that social and task-related job resources affect productivity equally. However, in the fixed effects model (Table 3, FE-6), neither social nor task-related resources are statistically significant. The elasticity of lost productivity with respect to social job stressors ranged between 0.30 and 0.48, and that with respect to task-related stressors ranged between 0.72 and 0.8. In terms of the comparability between the wave 1 and wave 1–2 results (OLS vs. fixed effects regression), it should be kept in mind that although the weights reduced sample differences considerably (see Table 2), we cannot rule out the possibility that a certain attrition bias is still present. Nonetheless, the fact that both models yield statistically equivalent results can be interpreted as strong evidence that omitted time-invariant variables in the cross-sectional data hardly bias the results.

Robustness checks

We performed a set of robustness checks. The first robustness check was related to the fact that the dependent variable is strictly non-negative and contains a large mass of zeros, which might lead to biased estimates when estimated by OLS. We therefore transformed the dependent variable into a count variable corresponding to the weekly number of hours lost due to health problems, and we tested the robustness of the baseline results with respect to the use of a negative binomial model (NBM), which is especially suited to account for zero-inflated over-dispersed count data [70]. The results are shown in the first two columns of Table 4. Column 1 (C1) replicates our baseline estimates (Table 3, CS-5) using the transformed dependent variable. C2 presents the NBM results based on the same covariate specification. The comparison shows that the models lead to very similar marginal effects. The OLS point estimates lie between the marginal effects estimated by the NBM at mean characteristics and the average marginal effects. This is strong evidence that our results are not driven by the choice of the model.

The next robustness check refers to the specification of both the covariates and the main explanatory variables. The specification in C3 differs from the baseline model, as it allows for interaction effects between gender, age, education and industry sector and includes the gender-age-education-sector distribution with sixteen sectors and five education categories. Again, the resulting point estimates are virtually identical to our baseline estimates. The last two robustness checks refer to the functional form of the relationship between the dependent variable and job stressors and resources. Model C4 estimates different effects for below- and above-average job resources and stressors, and C5 tests for a quadratic form of the relationships, providing no evidence for a significant non-linear relationship.

Table 4 Robustness checks

Interaction effects

In our baseline model (Eq. 1), we assumed that job stressors affect health-related productivity losses independently of the level of job resources and vice versa and that both the impact of job stressors and the impact of job resources do not depend on the level of occupational self-efficacy. We relaxed these assumptions and estimated two additional models.
In Model 1 (CS-7), we added an interaction term between job stressors and resources to the baseline model (CS-5). In Model 2 (CS-8), we additionally included interaction terms between job stressors, job resources, and occupational self-efficacy (Table 5).

Table 5 Heterogeneous effects

Comparing CS-7 (Table 5; Fig. 2) with our baseline results in CS-5 (Table 3) shows that the impact of job resources on productivity, when estimated at average stress levels, is similar, although somewhat smaller than the constant resource effect estimated by the baseline model. The same applies to the effect of job stressors. However, the coefficient of the interaction term turns out negative (and only narrowly misses the 10% significance level), indicating a decreasing marginal effect of job stressors on health-related productivity losses with increasing levels of job resources. Furthermore, the productivity effect of job stressors is significant at all levels of job resources and approximately twice as large at the minimum level of job resources as at the maximum level (Fig. 2a). By contrast, job resources affect work productivity only at higher levels of job stressors (> fourth decile). In summary, the productivity effect of a change in job stressors is larger for individuals with low, rather than high, job resources. In contrast, the effect of a change in job resources is larger for individuals with high compared to low stressors.

Fig. 2 Marginal effects of job stressors and resources allowing for interaction effects. Notes: a The marginal effects of job stressors on lost productivity depending on job resources. b The marginal effects of job resources on lost productivity depending on job stressors. The estimates and the 90% CI are based on the results shown in column 1 of Table 5

The results of Model 2 (CS-8) are presented in Table 5 and, for easier interpretation, in Fig. 3. Graph (a) shows the marginal effects of job stressors on lost productivity depending on job resources at low (first decile), medium (mean) and high (ninth decile) levels of occupational self-efficacy. Graph (a) shows that at low levels of occupational self-efficacy, the marginal effects of job stressors heavily depend on the level of job resources: the lower the job resources, the larger the negative productivity effect of job stressors. With increasing levels of occupational self-efficacy, however, this relationship becomes weaker until it disappears at about the sixth decile of occupational self-efficacy. Graph (b) shows the marginal effects of job resources. In contrast to job stressors, job resources do not affect every individual's productivity loss. Positive effects of job resources are found for individuals who have above-average job stressors and below-average occupational self-efficacy. The effects are largest for individuals with low occupational self-efficacy who face high job stressors. In sum, individuals with low occupational self-efficacy are the most vulnerable in the sense that negative and positive changes in job stressors and resources have the biggest impact on work productivity in the expected direction.

Fig. 3 Marginal effects of job stressors and job resources depending on occupational self-efficacy. a The marginal effects of job stressors on health-related productivity losses depending on job resources and at low (1st decile), average and high (9th decile) values of occupational self-efficacy.
b The marginal effects of job resources on health-related productivity losses depending on job stressors at low (1st decile), average and high (9th decile) values of occupational self-efficacy. The estimates and the 90% CI are based on the results shown in column 2 of Table 5

Costs of work stress

Table 6 presents our estimates on the costs of job stress due to absenteeism and presenteeism. Our results suggest that job stress accounts for 23.8% of the total health-related production losses, which, in monetary terms, corresponds to CHF 195 per person and month. This corresponds to 3.2% of the average monthly earnings in Switzerland.

Table 6 Average monthly per capita costs of job stress

We estimated the impact of job stressors and job resources on productivity losses due to sickness absenteeism and presenteeism based on a representative survey of Swiss employees conducted in 2014 and 2015. First, we found that health-related productivity losses increase with an increase in job stressors and decrease with an increase in job resources, with social and task-related stressors and resources being equally important determinants. Second, the analysis of heterogeneous effects revealed that an increase in job stressors is especially harmful if job resources are low. These effects are even more pronounced if occupational self-efficacy is low as well. On the other hand, an increase in job resources is most effective in reducing health-related productivity losses if job stressors are high and occupational self-efficacy is low. Third, the results of a counterfactual analysis suggest that job stress (defined as job stressors exceeding job resources) accounts for 23% of the total health-related productivity losses due to absenteeism and presenteeism. This corresponds to CHF 195 per person and month.

Our findings contribute to studies on the effects of positive and negative social aspects of work on presenteeism and absenteeism. In line with research showing that social aspects of work may be especially relevant to employee health and organizational behavior [25, 46], we found that social stressors and resources at work are important determinants of health-related productivity losses due to absenteeism and presenteeism in addition to task-related job stressors and resources. Moreover, we found that social and task-related stressors have direct and equal effects on health-related productivity losses, and while social resources remain a significant predictor, task-related resources do not. If employees work under unfavorable work conditions characterized by high levels of job demands, do not feel appreciated or respectfully treated at work, or lack social support, health-related productivity losses due to absenteeism and presenteeism might increase. This behavior can be seen as a method of employees restoring equity in the employee-organization relationship, as proposed by the social exchange perspective [18]. These results have scientific and practical implications. Our findings suggest that both social and task-related factors should be considered in future studies and in planning interventions aiming to reduce health-related productivity losses by improving workplace conditions. As expected, our results confirm that job resources buffer the negative effects of job stressors on productivity losses.
These findings are in line with the buffering hypothesis of the JDC model as well as with the postulation that high-strain jobs, characterized by a combination of high job demands and low resources, should see the most harmful effects, while the combination of high demands and high resources is considered to be the most beneficial (active job) [8]. Moreover, our results show that an increase of 1% in job stressors results in a larger effect on health-related productivity losses than a decrease of 1% in job resources. These results are in line with those of previous studies showing that negative conditions and events typically have stronger effects than good conditions [71]. This implies that an increase in demands at work should always be accompanied by an even larger increase in job resources in order to prevent the negative consequences regarding health-related productivity impairments.

Our results also show that not only job resources but also occupational self-efficacy buffers the negative effects of job stressors on health-related productivity losses. Furthermore, we find that employees with a simultaneous lack of personal and job resources are the most vulnerable with respect to an increase in job stressors. This finding is in line with the vicious cycle postulated by the COR model: individuals who lack resources are more vulnerable to resource loss and less capable of resource gain. We also find that employees with low personal resources facing high job stressors are the ones who would profit the most from an increase in job resources. This is in line with the "gain paradox principle" of the COR model [35], stating that resources are even more important when resource losses are high. We do not find a significant productivity effect of an increase in job resources for employees with high personal resources and a low level of job stressors. Therefore, an increase in job resources without a reduction in job stressors may not always be sufficient to reduce health-related productivity losses.

We add to the economic literature by estimating the total health-related productivity loss due to unfavorable job conditions. Our estimated productivity loss of CHF 195 per person and month may seem modest at first. However, extrapolation indicates that job stress may have cost Swiss companies up to CHF 10 billion in 2014, corresponding to 1.7% of the gross domestic product. This emphasizes the economic importance of interventions aiming to improve work conditions in general and the balance between work demands and resources.

Our study has several methodological and theoretical strengths. First, the cross-sectional data were representative of Swiss employees with respect to gender, age, region, and industry branch. Second, we tested the robustness of our cross-sectional results using longitudinal data, as this allowed the application of methodologically superior panel-data estimation methods. Third, we included several task-related and social work conditions and explored the relevance of positive and negative social aspects at work beyond the task-related aspect. Fourth, in addition to job resources, we considered a personal resource (occupational self-efficacy) and explored interaction effects with job stressors and job resources.

Several limitations need to be taken into account. First, self-reported measures such as the WPAI-GH may suffer from social desirability and recall bias.
While a recall bias is unlikely, given the 1-week recall period of the WPAI-GH, a social desirability bias is likely to be present. Studies comparing self-reported with company-registered absenteeism show that employees tend to underreport absenteeism [22]. If this were due to social desirability, we would also expect employees to underreport presenteeism. While this would lead to an underestimate of the magnitude of health-related productivity losses, it would not necessarily bias the estimated associations between workplace conditions and health-related productivity losses.

A second shortcoming related to the WPAI-GH is that it has not (yet) been validated against objective work productivity data. We thus do not know whether an employee-reported productivity impairment of 10% translates into a 10% loss of an employee's value to the employer. A study comparing self-reported measures from the Work Limitations Questionnaire (WLQ) with objective productivity outcomes found that a self-reported 10% health-related limitation at work translated into a 4–5% reduction in work output. However, the generalizability of these results is unclear because the study was carried out in a single work setting and did not consider quality of work [14]. If this overestimation in self-reporting of productivity losses applied to our data, it would imply an overestimation in our job stress-induced productivity losses. There is a clear need for more research on the extent to which employee-reported productivity measures translate into production losses for employers.

A third limitation relates to the high dropout rate in the second wave of the survey. Although we show that the inverse-probability-of-attrition weights are capable of correcting for selective attrition to a large extent, we cannot rule out the possibility that our panel data estimations are still biased.

Our results suggest that improvements in work conditions could help organizations to reduce previously undetected productivity losses by implementing programs targeting an improved balance between job stressors and job resources. We also show that an increase in job demands affects employees to different degrees depending on their levels of job and personal resources and that not everyone benefits from increased job resources. This finding highlights the need for organizations to take a tailored approach by providing additional attention to the most vulnerable employees. Moreover, our data suggest that job stressors and resources as well as health-related productivity losses vary greatly across occupations. Our sample size prevents the estimation of occupation-specific effects, though, offering an opportunity for future research.

References

Schultz, A.B., Edington, D.W.: Employee health and presenteeism: a systematic review. J. Occup. Rehabil. 17(3), 547–579 (2007) Parent-Thirion, A., et al.: Eurofound, Sixth European Working Conditions Survey—Overview Report (2017 update). Publications Office of the European Union, Luxembourg (2017) Caverley, N., Cunningham, J.B., MacGregor, J.N.: Sickness presenteeism, sickness absenteeism, and health following restructuring in a public service organization. J. Manag. Stud. 44(2), 304–319 (2007) Demerouti, E., et al.: Present but sick: a three-wave study on job demands, presenteeism and burnout. Career Dev. Int. 14(1), 50–68 (2009) Karasek Jr., R.A.: Job demands, job decision latitude, and mental strain: Implications for job redesign. Admin. Sci. Q.
24, 285–308 (1979) Theorell, T., Karasek, R.A.: Current issues relating to psychosocial job strain and cardiovascular disease research. J. Occup. Health Psychol. 1(1), 9 (1996) Bakker, A.B., Demerouti, E.: Job demands—resources theory: taking stock and looking forward. J. Occup. Health Psychol. 22(3), 273 (2017) Theorell, T., Karasek, R.A., Eneroth, P.: Job strain variations in relation to plasma testosterone fluctuations in working men—a longitudinal study. J. Intern. Med. 227(1), 31–36 (1990) Harter Griep, R., et al.: Beyond simple approaches to studying the association between work characteristics and absenteeism: combining the DCS and ERI models. Work Stress 24(2), 179–195 (2010) Jourdain, G., Vézina, M.: How psychological stress in the workplace influences presenteeism propensity: a test of the Demand–Control–Support model. Eur. J. Work Organ. Psychol. 23(4), 483–496 (2014) Sonnentag, S., Frese, M.: Stress in organizations. In: Schmitt, N.W., Highhouse, S. (eds.) Industrial and organizational psychology (Handbook of Psychology, 2nd ed.), vol. 12, pp. 560–592. Wiley Online Library, New York (2013) Aronsson, G., Gustafsson, K.: Sickness presenteeism: prevalence, attendance-pressure factors, and an outline of a model for research. J. Occup. Environ. Med. 47(9), 958–966 (2005) Johansson, G., Lundberg, I.: Adjustment latitude and attendance requirements as determinants of sickness absence or attendance. Empirical tests of the illness flexibility model. Soc. Sci. Med. 58(10), 1857–1868 (2004) Lerner, D., et al.: Relationship of employee-reported work limitations to work productivity. Med. Care 41(5), 649–659 (2003) Alavinia, S.M., Molenaar, D., Burdorf, A.: Productivity loss in the workforce: associations with health, work demands, and individual characteristics. Am. J. Ind. Med. 52(1), 49–56 (2009) Bowling, N.A., Beehr, T.A.: Workplace harassment from the victim's perspective: a theoretical model and meta-analysis. J. Appl. Psychol. 91(5), 998 (2006) Semmer, N.K., et al.: Illegitimate tasks as a source of work stress. Work Stress 29(1), 32–56 (2015) Van Knippenberg, D., Van Dick, R., Tavares, S.: Social identity and social exchange: identification, support, and withdrawal from the job. J. Appl. Soc. Psychol. 37(3), 457–477 (2007) Rhoades, L., Eisenberger, R.: Perceived organizational support: a review of the literature. J. Appl. Psychol. 87(4), 698–714 (2002) Johns, G., Nicholson, N.: The meanings of absence—new strategies for theory and research. Res. Organ. Behav. 4, 127–172 (1982) Xie, J.L., Johns, G.: Interactive effects of absence culture salience and group cohesiveness: a multi-level and cross-level analysis of work absenteeism in the Chinese context. J. Occup. Organ. Psychol. 73(1), 31–52 (2000) Hansen, C.D., Andersen, J.H.: Going ill to work—what personal circumstances, attitudes and work-related factors are associated with sickness presenteeism? Soc. Sci. Med. 67(6), 956–964 (2008) Zhou, Q., et al.: Supervisor support, role ambiguity and productivity associated with presenteeism: a longitudinal study. J. Bus. Res. 69(9), 3380–3387 (2016) Conway, P.M., et al.: Workplace bullying and sickness presenteeism: cross-sectional and prospective associations in a 2-year follow-up study. Int. Arch. Occup. Environ. Health 89(1), 103–114 (2016) Schmid, J.A., et al.: Associations between supportive leadership behavior and the costs of absenteeism and presenteeism: an epidemiological and economic approach. J. Occup. Environ. Med. 
59(2), 141–147 (2017) Nyberg, A., et al.: Managerial leadership is associated with self-reported sickness absence and sickness presenteeism among Swedish men and women. Scand. J. Public Health 36(8), 803–811 (2008) Johns, G.: Presenteeism in the workplace: a review and research agenda. J. Organ. Behav. 31(4), 519–542 (2010) Judge, T.A., et al.: Self-efficacy and work-related performance: the integral role of individual differences. J. Appl. Psychol. 92(1), 107 (2007) Stajkovic, A.D., Luthans, F.: Self-efficacy and work-related performance: a meta-analysis. Psychol. Bull. 124(2), 240 (1998) Bandura, A.: Self-efficacy: toward a unifying theory of behavioral change. Psychol. Rev. 84(2), 191–215 (1977) Sadri, G., Robertson, I.T.: Self-efficacy and work-related behaviour: a review and meta-analysis. Appl. Psychol. 42(2), 139–152 (1993) McNatt, D.B., Judge, T.A.: Self-efficacy intervention, job attitudes, and turnover: a field experiment with employees in role transition. Hum. Relat. 61(6), 783–810 (2008) Grau, R., Salanova, M., Peiro, J.M.: Moderator effects of self-efficacy on occupational stress. Psychol. Spain 5, 63–75 (2001) Jex, S.M., et al.: The impact of self-efficacy on stressor–strain relations: coping style as an explanatory mechanism. J. Appl. Psychol. 86(3), 401 (2001) Hobfoll, S.E., et al.: Conservation of resources in the organizational context: the reality of resources and their consequences. Annu. Rev. Organ. Psychol. Organ. Behav. 5, 103–128 (2018) Eurofound: Work-related stress. European Foundation for the Improvement of Living and Working Conditions (2010). https://www.eurofound.europa.eu/publications/report/2010/work-related-stress Hassard, J., et al.: The cost of work-related stress to society: a systematic review. J. Occup. Health Psychol. 23(1), 1 (2018) Hassard, J., et al.: Calculating the Cost of Work-related Stress and Psychosocial Risks. Publications Office of the European Union, Luxembourg (2014) Siegrist, J., et al.: The measurement of effort–reward imbalance at work: European comparisons. Soc. Sci. Med. 58(8), 1483–1499 (2004) LINK, Link Internet-Panel; 2018. https://research.link.ch/home/participate. Accessed 10 May 2019 Reilly, M.C., Zbrozek, A.S., Dukes, E.M.: The validity and reproducibility of a work productivity and activity impairment instrument. Pharmacoeconomics 4(5), 353–365 (1993) Prasad, M., et al.: A review of self-report instruments measuring health-related work productivity. Pharmacoeconomics 22(4), 225–244 (2004) Wahlqvist, P., et al.: Validity of a Work Productivity and Activity Impairment Questionnaire for Patients with Symptoms of Gastro-Esophageal Reflux Disease (WPAI-GERD)—results from a cross-sectional study. Value Health 5(2), 106–113 (2002) Stocker, D., et al.: Appreciation at work in the Swiss armed forces. Swiss J. Psychol. 69, 117–124 (2010) Van Vegchel, N., et al.: Testing global and specific indicators of rewards in the Effort–Reward Imbalance Model: does it make any difference? Eur. J. Work Organ. Psychol. 11(4), 403–421 (2002) Zapf, D., Frese, M.: Soziale Stressoren am Arbeitsplatz. Psychischer Stress am Arbeitsplatz, pp. 168–184 (1991) Johnson, J.V., Hall, E.M., Theorell, T.: Combined effects of job strain and social isolation on cardiovascular disease morbidity and mortality in a random sample of the Swedish male working population. Scand J. Work Environ. 
Health 15, 271–279 (1989) Humphrey, S.E., Nahrgang, J.D., Morgeson, F.P.: Integrating motivational, social, and contextual work design features: a meta-analytic summary and theoretical extension of the work design literature. J Appl Psychol 92(5), 1332 (2007) Viswesvaran, C., Sanchez, J.I., Fisher, J.: The role of social support in the process of work stress: a meta-analysis. J. Vocat. Behav. 54(2), 314–334 (1999) Semmer, N., Zapf, D., Dunckel, H.: Assessing stress at work: a framework and an instrument. Work and health: Scientific basis of progress in the working environment, pp. 105–113 (1995) Rimann, M., Udris, I.: Fragebogen "Salutogenetische Subjektive Arbeits analyse"(SALSA), pp. 404–419. Handbuch psychologischer Arbeitsanalyseverfahren, Zürich (1999) Frese, M., Zapf, D.: Eine Skala zur Erfassung von sozialen Stressoren am Arbeitsplatz. Z. Arbeitswissenschaft 3, 134–141 (1987) Jacobshagen, N., et al.: Appreciation at work: Measurement and associations with well-being (2008) Rigotti, T., Schyns, B., Mohr, G.: A short version of the occupational self-efficacy scale: structural and construct validity across five countries. J. Career Assess. 16(2), 238–255 (2008) Igic, I., et al.: Ten-year trajectories of stressors and resources at work: cumulative and chronic effects on health and well-being. J. Appl. Psychol. 102(9), 1317 (2017) North, F., et al.: Explaining socioeconomic differences in sickness absence: the Whitehall II study. Br. Med. J. 306(6874), 361–366 (1993) Soares, J., Grossi, G., Sundin, O.: Burnout among women: associations with demographic/socio-economic, work, life-style and health factors. Arch. Womens Ment. Health 10(2), 61–71 (2007) Benavides, F.G., et al.: How do types of employment relate to health indicators? Findings from the Second European Survey on Working Conditions. J. Epidemiol. Community Health 54(7), 494–501 (2000) Reynolds, J.R.: The effects of industrial employment conditions on job-related distress. J. Health Soc. Behav. 38(2), 105–116 (1997) Collins, J.J., et al.: The assessment of chronic health conditions on work performance, absence, and total economic impact for employers. J. Occup. Environ. Med. 47(6), 547–557 (2005) Tunceli, K., et al.: The impact of diabetes on employment and work productivity. Diabetes Care 28(11), 2662–2667 (2005) Tennant, C.: Work-related stress and depressive disorders. J. Psychosom. Res. 51(5), 697–704 (2001) Angrist, J.D., Pischke, J.-S.: Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press, Princeton (2008) Netemeyer, R.G., Boles, J.S., McMurrian, R.: Development and validation of work–family conflict and family–work conflict scales. J. Appl. Psychol. 81(4), 400 (1996) Williams, K.J., Alliger, G.M.: Role stressors, mood spillover, and perceptions of work-family conflict in employed parents. Acad. Manag. J. 37(4), 837–868 (1994) Judge, T.A., Bono, J.E.: Relationship of core self-evaluations traits—self-esteem, generalized self-efficacy, locus of control, and emotional stability—with job satisfaction and job performance: a meta-analysis. J. Appl. Psychol. 86(1), 80 (2001) Wooldridge, J.M.: Econometric Analysis of Cross Section and Panel Data. MIT Press, Cambridge (2010) Langley, P.C., et al.: The association of pain with labor force participation, absenteeism, and presenteeism in Spain. J. Med. Econ. 14(6), 835–845 (2011) Sexton, T.L., Tuckman, B.W.: Self-beliefs and behavior: the role of self-efficacy and outcome expectation over time. Pers. Individ. Differ. 
12(7), 725–736 (1991) Cameron, A.C., Trivedi, P.K.: Regression Analysis of Count Data. Cambridge University Press, Cambridge (2013) Baumeister, R.F., et al.: Bad is stronger than good. Rev. Gen. Psychol. 5(4), 323 (2001)

Winterthur Institute of Health Economics, Zurich University of Applied Sciences, Gertrudstrasse 15, 8401, Winterthur, Switzerland: Beatrice Brunner & Simon Wieser
Department of Work and Organizational Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland: Ivana Igic
Department of Organizational Psychology, University of Groningen, Grote Kruisstraat 2/1, 9712 TS, Groningen, The Netherlands: Anita C. Keller
Correspondence to Beatrice Brunner.

Supplementary material 1 (DOCX 23 kb)

Brunner, B., Igic, I., Keller, A.C. et al. Who gains the most from improving working conditions? Health-related absenteeism and presenteeism due to stress at work. Eur J Health Econ 20, 1165–1180 (2019). https://doi.org/10.1007/s10198-019-01084-9. Issue Date: November 2019.

Health-related productivity losses; Task-related and social stressors and resources at work; Absenteeism
Optimizing a function of a given matrix

Question (Deepak Sarma): Let us consider the $4\times 4$ symmetric matrix
$$ A_x=\left(\begin{array}{rrrr} 0 & 1 & 1 & 1 \\ 1 & 0 & 2^x & 2^x \\ 1 & 2^x & 0 & 2^x \\ 1 & 2^x & 2^x & 0 \end{array}\right) $$
Here I need to find $\min \{\, x>0 : \det(A_x)=0 \ \text{or}\ \|A_x^{-1}\|=0 \,\},$ where by $\|M\|$ we mean the sum of all entries of the matrix $M$. I'm looking for a general Sage program where my input will be a matrix with entries as functions of an indeterminate (like the matrix $A_x$ above), which will give me the unique $x$ corresponding to my matrix. If no such real value exists, it should return $\infty$. Can anyone help me? Thank you in advance.

Answer (slelievre): One could use Sage to explore the problem. Define the matrix $A$:

sage: A = matrix(SR, 4, [0, 1, 1, 1, 1, 0, 2^x, 2^x, 1, 2^x, 0, 2^x, 1, 2^x, 2^x, 0])
sage: A
[ 0 1 1 1]
[ 1 0 2^x 2^x]
[ 1 2^x 0 2^x]
[ 1 2^x 2^x 0]

Compute its determinant:

sage: det(A)
-3*2^(2*x)

The sum of its entries:

sage: sum(A.list())
6*2^x + 6

Its inverse:

sage: ~A
[-2/3*2^x 1/3 1/3 1/3]
[ 1/3 -2/3/2^x 1/3/2^x 1/3/2^x]
[ 1/3 1/3/2^x -2/3/2^x 1/3/2^x]
[ 1/3 1/3/2^x 1/3/2^x -2/3/2^x]

The sum of the entries of its inverse:

sage: sum((~A).list())
-2/3*2^x + 2

Now solve $\det(A) = 0$ and $\lVert A^{-1} \rVert = 0$, extract positive solutions, take the minimum, and compute a numerical approximation.

sage: S = solve(det(A) == 0, x); S
sage: T = solve(sum((~A).list()) == 0, x); T
[x == (log(6) - log(2))/log(2)]
sage: min([e.rhs() for e in S + T if e.rhs() > 0]).n()

This can all be put in a function which, given a matrix A as above, returns this minimum positive solution.

Comment (Deepak Sarma): Actually, I was stuck at this step. I tried the "solve" code to solve the two systems, but I don't get a numerical value from this. I also tried the "find_root" code with the interval, but one of the systems (the first one) does not have a solution, and so it gives a long list of errors. Therefore I cannot evaluate the minimum of all solutions (min() doesn't work). For this particular matrix, I can find it by direct calculation. But what I need is a general method to get the required numerical value whenever I input a matrix as a function of $x$. I need something like this:

X = det(A); Y = sum((~A).list()); S = solve([X, x>0], x); T = solve([Y, x>0], x); min(S+T)

This doesn't give me a numerical value. Note the last step, I want Sage to calculate the minimum value of the ...

Comment (slelievre): Edited question to add last step.

Comment (Deepak Sarma): Thank you for your solution. It works with that particular matrix. But it doesn't work for all matrices. For example consider
$$A=\left(\begin{array}{rrrrr} 0 & 2^{x} & 1 & 1 & 1 \\ 2^{x} & 0 & 1 & 1 & 1 \\ 1 & 1 & 0 & 2^{x} & 2^{x} \\ 1 & 1 & 2^{x} & 0 & 2^{x} \\ 1 & 1 & 2^{x} & 2^{x} & 0 \end{array}\right)$$
$$A=\left(\begin{array}{rrrr} 0 & 1 & 2^{x} & 1 \\ 1 & 0 & 1 & 2^{x} \\ 2^{x} & 1 & 0 & 1 \\ 1 & 2^{x} & 1 & 0 \end{array}\right)$$
Here I get an error "cannot evaluate symbolic expression numerically". Note: For all my matrices, it is known that the solution is at most equal to 2. So if something like min(find_root(X,0,2), find_root(Y,0,2)) could be used, that is also enough for me.
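Building on the answer above, one possible way to wrap the whole computation into a function (returning infinity when neither equation has a positive real root, with a numerical fallback on a bounded interval as suggested in the comments) is sketched below. This is only an illustrative sketch meant to be run inside Sage, where x, solve, det, find_root and oo are available; it is not part of the original thread.

def min_positive_solution(A, v=x):
    """Smallest v > 0 with det(A) = 0 or sum of entries of A^-1 = 0; oo if there is none."""
    candidates = []
    for expr in [det(A), sum((~A).list())]:
        for sol in solve(expr == 0, v):
            root = sol.rhs()
            try:
                val = root.n()                      # may fail for purely symbolic solutions
                if abs(val.imag()) < 1e-9 and val.real() > 0:
                    candidates.append(val.real())
            except (TypeError, ValueError):
                pass
    return min(candidates) if candidates else oo

def min_root_on_interval(A, a=0, b=2, v=x):
    """Numerical fallback: search each equation for a root on (a, b) with find_root."""
    roots = []
    for expr in [det(A), sum((~A).list())]:
        try:
            roots.append(find_root(expr, a, b))
        except (RuntimeError, ValueError):          # raised when no root is found on the interval
            pass
    return min(roots) if roots else oo

For the 4x4 example above, min_positive_solution(A) should return approximately 1.585, the numerical value of (log(6) - log(2))/log(2).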
1 Minute Calculus: X-Ray and Time-Lapse Vision

We usually take shapes, formulas, and situations at face value. Calculus gives us two superpowers to dig deeper:

X-Ray Vision: You see the hidden pieces inside a pattern. You don't just see the tree, you know it's made of rings, with another growing as we speak.

Time-Lapse Vision: You see the future path of an object laid out before you (cool, right?). "Hey, there's the moon. For the next few days it'll be white, but on the sixth it'll be low in the sky, in a color I like. I'll take a photo then."

So how is Calculus useful? Well, just imagine having X-Ray or Time-Lapse vision to use at will. For an object or scenario we care about, how was it put together? What will happen to it down the line? (Strangely, my letters to Marvel about Calculus-man have been ignored to date.)

Calculus In 10 Minutes: See Patterns Step-By-Step

What do X-Ray and Time-Lapse vision have in common? They examine patterns step-by-step. An X-Ray shows the individual slices inside, and a Time-Lapse puts each future state next to the other. This seems pretty abstract. Let's look at the equations for circumference, area, surface area, and volume:

We have a vague feeling these formulas are connected, right? Let's turn on our X-Ray vision and see where this leads. Suppose we know $\textit{circumference} = 2 \pi r$ and we want to figure out the equation for area. What can we do?

This is a tough question. Squares are easy to measure, but how do we work out the size of an ever-curving shape? Calculus to the rescue. Let's use our X-Ray vision to realize a disc is really just a bunch of rings put together. Similar to a tree trunk, here's a "step-by-step" view of a circle's area:

How does this viewpoint help? Well, let's unroll those curled-up rings into straight lines, so they're easier to measure:

Whoa! We have a bunch of straightened-out rings that form a triangle, which is much easier to measure. (Wikipedia has an animation[^1].)

The height of each ring depends on its original distance from the center; the ring 3 inches from the center would have a height of $2 \pi \cdot 3$ inches. The smallest ring is a pinpoint, more or less, without any height at all. The height of the largest ring is the full circumference ($2 \pi r$).

And because triangles are easier to measure than circles, finding the area isn't too much trouble. The area of the "ring triangle" = $\frac{1}{2} \textit{base} \cdot \textit{height} = \frac{1}{2} r (2 \pi r) = \pi r^2$, which is the formula for a circle's area!

Our X-Ray vision revealed a simple, easy-to-measure structure within a curvy shape. We realized a circle and a set of glued-together rings were really the same. From another perspective, a filled-in disc is really just the "time lapse" of a single ring that grew larger.

So… What Can I Do With Calculus?

Remember learning arithmetic? You learned how to count out a number, and how to combine it with others (add/subtract, multiply/divide, take exponents/roots). Technically, counting isn't necessary, as our caveman ancestors did "fine" (survived) without it. But, having a specific notion of quantity makes navigating the world easier. You don't have a "big" and "small" pile of rocks: you have an exact count. You know how many arrows can be given to each hunter, or whether the gathered berries are enough for the tribe.
Even better, arithmetic gives us metaphors that go beyond strict calculations. It has sharpened our descriptions of everything, letting us clarify everything from spiciness levels and movie ratings (1 to 5) to our mood (1 to 10). Specific measurements are a useful idea, and hard to give up once seen.

Calculus trains us in two new metaphors: splitting apart and gluing together. A pattern can be separated into parts, and the parts can be progressively combined into the full pattern. Is this viewpoint necessary for survival? Nope. But it is interesting.

Numbers and equations describe what we have, but Calculus explains the steps that got us there. Instead of just the cookie, we can see the recipe. Sure, Calculus appears in science because a step-by-step blueprint is more useful than being handed a final result. But in everyday scenarios, we have a nice perspective to turn on: What steps got us here? Are there any pros or cons to that approach? And based on these steps, where are we going next?

Let's feel what a Calculus perspective is like.

[^1]: Visit http://betterexplained.com/calculus/book for clickable links to extra resources.
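The ring picture lends itself to a quick numerical check: add up thin rings and compare the total against $\pi r^2$. Here is a minimal sketch; the radius and ring thickness are arbitrary values chosen for illustration, not anything from the lesson:

```python
import math

r = 3.0     # radius of the disc (arbitrary choice for the example)
dr = 0.001  # thickness of each ring

# Each ring at distance x from the center unrolls into a thin strip
# of length 2*pi*x and width dr; add up all the strips.
area = 0.0
x = 0.0
while x < r:
    area += 2 * math.pi * x * dr
    x += dr

print(area)              # roughly 28.26, a hair under the exact value
print(math.pi * r ** 2)  # 28.2743..., the formula pi * r^2
```

Making the rings thinner pushes the sum ever closer to the exact $\pi r^2$, which is the X-Ray idea made concrete.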
What's your favourite function? Part II
We have received loads of feedback from people telling us their favourite function.
By Matthew Wright. Published on 26 November 2015.

A few weeks ago on this blog we featured some of the Chalkdust team's favourite functions. Since then we have received loads of feedback from people telling us their favourite function, so in part II of this blog we're featuring some of the best of the contributions we have received. Our reddit thread got many other excellent contributions, from the Cantor function to the zeta function, and there was a heated debate about whether the Dirac delta function is actually a function (it is).

The Bessel Function (Ian Stewart)

First up is eminent maths populariser, Professor Ian Stewart. Last week we had the privilege of interviewing him for Chalkdust Issue 3, and the chance to ask him about his favourite function. His immediate response was the Bessel function. He thinks "they are fascinating because they turn up in all sorts of problems in circular symmetry." They were first introduced to him at school through a popular maths book by W W Sawyer, which accomplished the task of introducing hypergeometric functions to a general audience. Sawyer described Bessel functions as "like trigonometric functions, but much more interesting", which intrigued Ian Stewart. He appreciates them because they have all sorts of interesting applications in many different fields: they are connected to representation theory of groups in addition to coming up in optics and astronomy; in fact Bessel himself was an astronomer. Ian Stewart has in the past conducted research involving Bessel functions, looking at a particular type of spiral formation in nature. The proposed hypotheses at the time were that these were Archimedean spirals or involute spirals. Instead Stewart found they were in fact parametrized by a certain combination of Bessel functions. Look out for the rest of our fascinating interview with Ian Stewart, coming up next year in Issue 3 of Chalkdust!

Next up is Luciano, who has a challenge for you:

The Taboo Cubic (Luciano Rila)

Before you read this, please sketch a cubic graph. I bet you chose a cubic with two stationary points. Possibly a cubic with one stationary point, but you would be in a minority here. I have yet to meet someone who would sketch a cubic with no stationary points (I'll buy you a coffee if you did). Sadly they seem to have vanished from our collective mathematical conscience, or at least from our classroom. They are beautiful though, such a simple shape, and I love that their gradient function is a quadratic with complex roots. As we never talk about them, I call them The Taboo Cubic.

David Silvester got in touch to suggest an improvement on Matthew Scroggs's nomination of the linear hat function from Part I. I agree with him: this is far better than the linear hat function.

Piecewise Quadratic Hat Function (David Silvester)

My favourite function is the piecewise quadratic hat function.
$$ f_2(x)= \begin{cases} 0 &\mbox{if } x \leq x_{i-2}, \\ {(x-x_{i-1})(x-x_{i-2})\over (x_i-x_{i-1})(x_i-x_{i-2})} &\mbox{if } x_{i-2} \leq x < x_i ,\\ {(x-x_{i+1})(x-x_{i+2})\over (x_i-x_{i+1})(x_i-x_{i+2})} &\mbox{if } x_i \leq x < x_{i+2} ,\\ 0 &\mbox{if } x\geq x_{i+2} .\end{cases} $$
The quadratic hat function is more attractive than the linear hat function (highlighted in the last issue).
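For readers who like to see a definition run, here is a small sketch that evaluates $f_2$ on equally spaced knots; the spacing and the centre $x_i$ below are arbitrary choices made for the example, not anything from the article:

```python
def quadratic_hat(x, xi, h):
    """Piecewise quadratic hat function centred at xi, with knot spacing h."""
    xm2, xm1, xp1, xp2 = xi - 2 * h, xi - h, xi + h, xi + 2 * h
    if x <= xm2 or x >= xp2:
        return 0.0
    if x < xi:  # rising quadratic piece on [x_{i-2}, x_i)
        return (x - xm1) * (x - xm2) / ((xi - xm1) * (xi - xm2))
    # falling quadratic piece on [x_i, x_{i+2})
    return (x - xp1) * (x - xp2) / ((xi - xp1) * (xi - xp2))

print(quadratic_hat(0.0, 0.0, 1.0))   # 1.0 at the centre knot x_i
print(quadratic_hat(1.0, 0.0, 1.0))   # 0.0 at x_{i+1}
print(quadratic_hat(1.5, 0.0, 1.0))   # -0.125: a small dip between the outer knots
print(quadratic_hat(2.0, 0.0, 1.0))   # 0.0 at and beyond x_{i+2}
```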
It is also more useful for practical finite element applications like computing the deflection of an elastic structure when loaded (such as a suspension bridge or a tall building), or computing the motion of a fluid flowing in an axisymmetric pipe. The flow profile that can be observed coming out of a tap is called Poiseuille flow—it can be represented exactly using a small set of quadratic hat functions defined over a radial cross section of the pipe. The quadratic hat function is also a little rough around the edges (because it does not have a derivative at the joins). The cubic B-spline function $S_3(x)$ is a smooth relative. Cubic spline functions are used every day in computer graphics and drawing applications where more smoothness is needed. It is just a little too smooth for my taste.

Chris Smith sent us this contribution via Twitter:

Super-Logarithm (Chris Smith)

The Super-Logarithm function is ok but hard work…it can be a bit of a 'slog' to use…

Although in all seriousness the super-logarithm is actually a pretty interesting function. Chris is correct that it is hard to use; check out its intimidating recursive definition on Wikipedia…

Chaitin's Constant / Omega Number (Mike Lynch)

My favourite function is usually referred to as Chaitin's constant or omega number. For a computer program $F$, the omega number is an infinite sum over all of its halting inputs $p$, where $|p|$ denotes the length of $p$:
$$ \Omega_F=\sum_{p\in P_F} 2^{-|p|} $$
Omega can be interpreted as the probability that the program $F$ will eventually terminate, rather than go into an infinite loop: computing this is equivalent to solving the halting problem, which was shown to be undecidable by Alan Turing. This gives the omega numbers the perversely intriguing quality that they cannot be computed, even in principle. Some properties of all omega numbers have been proven – they converge to a value between 0 and 1, their digits are random and evenly distributed, and they are all transcendental – but only for very simple programs can the first few dozen digits be calculated.

Rob Beckett's favourite is a classic:

$e^x$ (Rob Beckett)

My favourite function is the exponential function $f(x)=e^x$. This is primarily because $\frac{d}{dx}(f(x))=e^x$; in other words, the function is growing at a rate equal to its current value. This is a really interesting property which comes from the fact that $e=\lim\limits_{n\rightarrow \infty}{(1+\frac{1}{n})^n}$. Just like pi, the number $e$ has a special property. It is the base rate of growth, which means it crops up whenever things are growing or decaying continuously and exponentially. This exponential function, and variations, appear when considering bacterial growth rates, populations, and radioactive decay, and can be seen in the Black-Scholes formula used in financial markets.

This contribution from Chris Budd is certainly something to think about:

Unevaluable function (Chris Budd)

My favourite function is something like the following:
$$ f(x) = \begin{cases} 1 &\mbox{ if Father Christmas exists} \\ 0 &\mbox{otherwise. } \end{cases} $$
It is nice because (like many functions) it is perfectly well defined, but we cannot evaluate it…

Stuart Price has a fantastic example of a really simple function which can exhibit incredibly interesting and complicated behaviour:

Logistic map (Stuart Price)

The logistic map refers to the family of functions $f_k(x)=kx(1-x)$. I have chosen this as my favourite function as it was my first introduction to chaos theory when I was an A level student.
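A few lines of code are enough to see the behaviour described below for yourself; this is a minimal sketch, and the particular values of $k$ are just illustrative choices:

```python
def iterate_logistic(k, x0=0.2, n_transient=500, n_keep=8):
    """Iterate x_{n+1} = k * x_n * (1 - x_n) and return some long-term iterates."""
    x = x0
    for _ in range(n_transient):  # discard the transient behaviour
        x = k * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = k * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

print(iterate_logistic(2.5))  # settles at the fixed point (k-1)/k = 0.6
print(iterate_logistic(3.2))  # oscillates between two values
print(iterate_logistic(3.5))  # oscillates between four values
print(iterate_logistic(3.9))  # no apparent pattern: chaos
```

Discarding a few hundred iterates first matters: it is the long-term behaviour, not the transient, that the period-doubling story below describes.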
It also appealed to me as I was learning computer programming at the time, and it was satisfying to be able to produce my own bifurcation diagram for the parameter space. This model gained popularity through the work of the mathematical biologist Robert May in the 1970s. A significant amount of interesting behaviour is exhibited by this function, and much of it can be analysed with relatively straightforward algebra. The variable $x$ lies between zero and one and represents, as a proportion, the population of a species. In this context the mapping is interpreted as a discrete model with $x_{n+1}=kx_n(1-x_n)$. It turns out that for values of $k\in[0,1]$ the iterates will converge to zero, indicating extinction of the species. However, for $k\in[1,3]$ the long-term population will approach a fixed value, $\frac{k-1}{k}$. For values of $k$ greater than 3, we see what is called a "period doubling cascade". The population oscillates between two values, then four, eight, sixteen… ultimately resulting in chaotic behaviour for $k\approx 3.57$ and beyond. Beautifully, however, for a range of values beginning at $k=1+\sqrt{8}$ there is an oscillation between three values, which then exhibits its own period doubling cascade. There is also a fundamental connection between the mathematics of this model and the Mandelbrot set. Although they parameterise the family of quadratics in slightly different ways, the line of symmetry of the Mandelbrot set exhibits the same dynamical behaviour as the logistic map.

And finally, we also had an interesting function from a German contributor, clearly inspired by one of the choices from part I:

Funky Funktion (Mattheas Recht)

Let $\mathbb{S}$ be the set of all strings of letters from the Roman alphabet. Then meine lieblingsfunktion ist the map $F: \mathbb{S}\rightarrow \mathbb{S}$ such that
$$ F(x)=\begin{cases} \textrm{funk}x &\textrm{ if } x=\textrm{tions} \\ x &\textrm{ otherwise} \end{cases} $$
Why do I like this funktion? Because it is the funktion which puts the funk into funktions! Funky.

Matthew Wright
Matt is a PhD student at UCL, working in the fields of general relativity and cosmology. ucl.ac.uk/~ucahawr
A Fishy Function
Sunday, July 16, 2017 · 17 min read

I started thinking about these ideas in late May, but haven't gotten a chance to write about them until now…

If you want to take a boat from the Puget Sound to Lake Washington, you need to go across the Ballard Locks, which separate the Pacific Ocean's saltwater from the freshwater lakes. The locks are an artificial barrier, built in the early 1900s to facilitate shipping. Today, the locks have a secondary function: they are a picnic spot. A while back, I visited the locks on a sunny and warm day. A band was playing in the park by the water, and there were booths with lemonade and carnival games. Every few minutes, a boat would enter the locks, be raised or lowered, and continue on its way.

If you walk across the locks, you can check out the fish ladder, a series of raised steps designed to help fish — in this case, salmon — migrate, since the locks cut off their natural path between the water bodies. There is usually a crowd around the fish ladder. Around once a minute, a salmon leaps out of the water and goes up a step; the children gasp and cheer as they watch over the railing.

This is the idyllic scene that we will soon destroy with the heavy hammer of mathematical statistics. You see, it turns out that a little bit of thought about these salmon can give us a way to use historical earthquake data to approximate $e$. But I'm getting ahead of myself. Let's start at the beginning.

What is the probability that a fish jumps out of the water right now?

This is a tricky question to answer. Suppose there's a 10% chance that a fish jumps out of the water right now. That means the probability that a fish doesn't jump is 90%. In the next instant of time, there's again a 10% chance that the fish jumps. So, the laws of probability tell us that over the course of $n$ instants, there's a $0.90^n$ probability that no fish-jumps occur. But there's an infinite number of instants in every second! Time is continuous: you can subdivide it as much as you want. So the probability that no fish-jumps occur in a one-second period is $0.90^\infty$, which is… zero! Following this reasoning, a fish must jump at least every second. And this is clearly a lie: empirically, the average time between fish-jumps is closer to a minute.

Okay, so "probability that a fish jumps right now" is a slippery thing to define. What can we do instead? Since the problem seems to be the "right now" part of the definition, let's try to specify a time interval instead of an instant. For example, what is the probability that we will observe $n$ fish-jumps in the next $t$ seconds?

Well, we're going to need some assumptions. For simplicity, I'm going to assume from now on that fish jump independently, that is, if one fish jumps, then it does not affect the behavior of any other fish. I don't know enough about piscine psychology to know whether or not this is a valid assumption, but it doesn't sound too far-fetched.

While we're on the subject of far-fetchedness: the math that follows is going to involve a lot of handwaving and flying-by-the-seat-of-your-pants. We're going to guess at functions, introduce constants whenever we feel like it, evaluate things that may or may not converge, and, throwing caution and continuity to the wind, take derivatives of things that might be better left underived. I think it's more fun this way. Yes, we could take the time to formalize the ideas with lots of definitions and theorems and whatnot.
There's a lot to be said about mathematical rigor, and it's really important for you, the reader, to be extremely skeptical of anything I say. In fact, I encourage you to look for mistakes: the reasoning I'm about to show you is entirely my own, and probably has some bugs here and there. (The conclusions, for the record, match what various textbooks say; they just derive them in a slightly different way.) A couple of lemmas here and there might make the arguments here much more convincing. But they will also make this post tedious and uninspiring, and I don't want to go down that road. If you're curious, you can look up the gnarly details in a book. Until then, well, we've got bigger fish to fry!

Okay, back to math. We can model the probability we're talking about with a function that takes $n$ and $t$ as inputs and tells you the probability, $P(n, t)$, that you see $n$ fish-jumps in the time period $t$. What are some things we know about $P$? Well, for starters, $P(n, 0) = 0$ for any $n \geq 1$, since in no time, there's no way anything can happen.

What about $P(n, a + b)$? That's the probability that there are $n$ fish-jumps in $a + b$ seconds. We can decompose this based on how many of the fish-jumps occurred in the "$a$" and "$b$" periods:
\begin{align} P(n, a+b) & = P(0, a)P(n, b) \\ & + P(1, a)P(n-1, b) \\ & + \ldots \\ & + P(n, a)P(0, b) \end{align}
Hmm. This looks familiar… perhaps… Yes! Isn't this what you do to the coefficients of polynomials when you multiply them? The coefficient of $x^n$ in $a(x)b(x)$ is a similar product, in terms of the coefficients of $x^i$ and $x^{n-i}$ in $a(x)$ and $b(x)$, respectively. This can't be a coincidence. In fact, it feels appropriate to break out this gif again:

Let's try to force things into polynomial form and see what happens. Let $p_t(x)$ be a polynomial where the coefficient of $x^n$ is the probability that $n$ fish-jumps occur in time $t$:
\begin{align} p_t(x) &= P(0, t)x^0 + P(1, t)x^1 + \ldots \\ &= \sum_{n=0}^\infty P(n, t)x^n \end{align}
(Yes, fine, since $n$ can be arbitrarily large, it's technically a "power series", which is just an infinitely long polynomial. Even more technically, it's a generating function.)

We know that $p_0(x) = 1$, because nothing happens in no time, i.e. the probability of zero fish-jumps is "1" and the probability of any other number of fish-jumps is "0". So $p_0(x) = 1x^0 + 0x^1 + \ldots$, which is equal to just "1". What else do we know? It should make sense that $p_t(1) = 1$, since if you plug in "1", you just add up the coefficients of each term of the polynomial. Since the coefficients are the probabilities, they have to add up to "1" as well.

Now, taking a leap of faith, let's say that $p_{a+b}(x) = p_a(x)p_b(x)$, because when the coefficients multiply, they work the same way as when we decomposed the probabilities above.

Why is this property interesting? We're turning a property about addition into a property about multiplication. That sounds awfully like something else we're used to: logarithms! Forgetting for a moment that $p$ is a power series, maybe we can "solve" for the function $p_t(x)$ by messing around with something like this:
\[ p_t(x) = e^{tx} \]
Okay, $e^{tx}$ doesn't quite work because we want $p_t(1) = 1$. Maybe $e^{t(x-1)}$ will work? It seems to have all the properties we want…

Let's take a moment to stop and think. At this point, it's not even clear what we're doing.
The whole point of defining $p_t(x)$ was to look at the coefficients, but when we "simplify" it into $e^{t(x-1)}$ we no longer have a power series. Or do we? Recall from calculus class that you can expand out some functions using their Taylor Series approximation, which is a power series. In particular, you can show using some Fancy Math that
\begin{align} e^x &= \frac{x^0}{0!} + \frac{x^1}{1!} + \frac{x^2}{2!} + \ldots \\ &= \sum_{n=0}^\infty \frac{x^n}{n!} \end{align}
If you haven't taken calculus class yet, I promise this isn't black magic. It's not even plain magic. It's just a result of a clever observation about what happens to $e^x$ when you increase $x$ by a little bit. If you have taken calculus, bet you didn't think this "series approximation" stuff would ever be useful! But it is, because a quick transformation gives us the series representation for $p_t(x)$:
\[ e^{t(x-1)} = e^{tx}/e^t = \sum_{n=0}^\infty \frac{(tx)^n}{n!e^t} \]
and so the coefficient of $x^n$ gives us $P(n, t) = t^n/(e^t n!)$.

Now we have a new problem: this formula doesn't depend at all on the type of events we're observing. In particular, the formula doesn't "know" that the salmon at Lake Washington jump around once a minute. We never told it! Fish at other lakes might jump more or less frequently, but the formula gives the same results. So the formula must be wrong. Sad.

But it might be salvageable! Let's go back and see if we can add a new constant to represent the lake we're in. Perhaps we can call it $\lambda$, the Greek letter "L" for lake. Where could we slip this constant in? Our solution for $p_t(x)$ was:
\[ p_t(x) = e^{t(x-1)} \]
but in retrospect, the base $e$ was pretty arbitrarily chosen. We could make the base $\lambda$ instead of $e$, but that would mess up the Taylor Series, which only works with base $e$. That would be inconvenient. However, we know that we can "turn" $e$ into any number by raising it to a power, since $e^{\log b} = b$. If we want base $b$, we can replace $e$ with $e^{\log b}$. This suggests that $\lambda = \log b$ could work, making our equation:
\[ p_t(x) = \left(e^\lambda\right)^{t(x-1)} = e^{(\lambda t) (x-1)} \]
This seems to fit the properties we wanted above (you can check them if you want). Going back to our Taylor Series expansion, we can just replace $t$ with $\lambda t$ to get:
\[ P(n, t) = \frac{\left(\lambda t\right)^n}{e^{\lambda t} n!} \]
Let's step back and think about what we're claiming. Knowing only that fish jump randomly, and roughly independently, we claim to have an expression for the probability that $n$ fish-jumps occur in a time interval $t$.

"Okay, hold up," you say, "something smells fishy about this. This is pretty bold: we know nothing about how fish think, or fluid dynamics, or whatever other factors could influence a fish's decision to jump. And yet we have this scary-looking expression with $e$ and a factorial in there!"

That's a fair point. I'm just as skeptical as you are. It would be good to back up these claims with some data. Sadly, I didn't spend my time in Seattle recording fish-jumping times. But, in a few more sections, I promise there will be some empirical evidence to assuage your worries. Until then, let's press on, and see what else we can say about fish.

We have a way to get the probability of some number of fish-jumps in some amount of time. What's next?
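Before moving on, the claimed formula is easy to poke at numerically: the probabilities over all $n$ should add up to 1, and plugging in a rate should give sensible numbers. A minimal sketch; the value of $\lambda$ below is the one used for Lake Washington a little later, and the rest is just a sanity check:

```python
import math

def p_jumps(n, t, lam):
    """Probability of n jumps in time t: (lam*t)^n / (e^(lam*t) * n!)."""
    return (lam * t) ** n / (math.exp(lam * t) * math.factorial(n))

lam = 1 / 60  # the "lake constant" used later for Lake Washington
t = 30        # a thirty-second window

# The probabilities over n = 0, 1, 2, ... should sum to 1.
print(sum(p_jumps(n, t, lam) for n in range(50)))  # 1.0 (up to rounding)

# Probability of exactly two jumps in thirty seconds.
print(p_jumps(2, t, lam))  # about 0.076
```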
One thing we can do is compute the average number of fish-jumps in that time interval, using expected value. Recall that to find expected value, you multiply the probabilities by the values. In this case, we want to find:
\[ E_t[n] = \sum_{n=0}^\infty P(n, t)n \]
This looks hard… but also oddly familiar. Remember that
\[ p_t(x) = \sum_{n=0}^\infty P(n, t)x^n \]
because, y'know, that's how we defined it. Using some more Fancy Math ("taking the derivative"), this means that
\[ \frac{dp_t(x)}{dx} = \sum_{n=0}^\infty P(n, t)nx^{n-1} \]
and so $E_t[n] = p^\prime_t(1)$. That… still looks hard. Derivatives of infinite sums are no fun. But remember from the last section that we also have a finite way to represent $p_t(x)$: what happens if we take its derivative?
\begin{align} p_t(x) &= e^{(\lambda t) (x-1)} \\ p^\prime_t(x) &= (\lambda t)e^{(\lambda t) (x-1)} \\ p^\prime_t(1) &= E_t[n] = \lambda t \end{align}
Aha! The average number of fish-jumps in time $t$ is $\lambda t$. If $t$ has units of time and $\lambda t$ has units of fish-jumps, this means that $\lambda$ has units of fish-jumps-per-time. In other words, $\lambda$ is just the rate of fish-jumps in that particular lake! For Lake Washington, $\lambda_w = 1/60 \approx 0.0167$ fish-jumps-per-second, which means that the probability of seeing two fish-jumps in the next thirty seconds is:
\[ P(2, 30) = \frac{(0.0167\times30)^2}{e^{0.0167\times30}\,2!} \approx 0.076 \]
I think that's pretty neat.

What about the standard deviation of the number of fish-jumps? That sounds ambitious. But things have been working out pretty well so far, so let's go for it. Standard deviation, or $\sigma$, the Greek letter "sigma", is a measure of "how far, on average, are we from the mean?" and as such seems easy to define:
\[ \sigma = E[n-\lambda t] \]
Well, this isn't hard to evaluate. Knowing that expected values add up, we can do some quick math:
\begin{align} \sigma &= E[n] - E[\lambda t] \\ &= \lambda t - \lambda t = 0 \end{align}
Oops. We're definitely off by a little bit on average, so there's no way that the standard deviation is 0. What went wrong? Well, $n - \lambda t$ is negative if $n$ is lower than expected! When you add the negative values to the positive ones, they cancel out. This is annoying. But there's an easy way to turn negative numbers positive: we can square them. Let's try that.
\begin{align} \sigma^2 &= E[(n-\lambda t)^2] \\ &= E[n^2 - 2n\lambda t + (\lambda t)^2] \end{align}
Now what? We don't know anything about how $E[n^2]$ behaves. Let's go back to how we figured out $E[n]$ for inspiration. The big idea was that taking a derivative of $p_t(x)$ pulls a factor of $n$ out of each coefficient. Hmm. What if we take another derivative?
\[ \frac{d^2p_t(x)}{dx^2} = \sum_{n=0}^\infty P(n, t)n(n-1)x^{n-2} \]
We get an $n(n-1)$ term, which isn't quite $n^2$, but it's degree-two. Let's roll with it.
Following what we did last time,
\begin{align} p_t(x) &= e^{(\lambda t)(x - 1)} \\ p^\prime_t(x) &= (\lambda t)e^{(\lambda t)(x - 1)} \\ p^{\prime\prime}_t(x) &= (\lambda t)(\lambda t)e^{(\lambda t)(x - 1)} \\ E[n(n-1)] &= p^{\prime\prime}_t(1) \\ &= (\lambda t)^2 \end{align}
And now we have to do some sketchy algebra to make things work out:
\begin{align} \sigma^2 &= E[(n-\lambda t)^2] \\ &= E[n^2 - 2n\lambda t + (\lambda t)^2] \\ &= E[n^2 - n - 2n\lambda t + n + (\lambda t)^2] \\ &= E[(n^2 - n) - 2n\lambda t + n + (\lambda t)^2] \\ &= E[n^2 - n] - E[2n\lambda t] + E[n] + E[(\lambda t)^2] \\ &= (\lambda t)^2 - 2(\lambda t)(\lambda t) + \lambda t + (\lambda t)^2 \\ &= \lambda t \end{align}
…which means $\sigma = \sqrt{\lambda t}$. Seems like magic.

Okay, fine, we have this fancy function to model these very specific probabilities about fish-jump-counts over time intervals. But the kids watching the fish ladder don't care! They want to know what's important: "how long do I need to wait until the next fish jumps?" Little do they know, this question opens up a whole new can of worms…

Until now, we've been playing with $n$ as our random variable, with $t$ fixed. Now, we need to start exploring what happens if $t$ is the random variable. This needs some new ideas.

Let's start with an easier question to answer. What is the probability that you need to wait longer than five minutes (300 seconds) to see a fish-jump? (Five minutes is way longer than my attention span when looking at fish. But whatever.) It turns out that we already know how to answer that question. We know the probability that no fish jump in five minutes: that's equal to $p_{300}(0)$. Why? Well, when we plug in $x = 0$, all the $x$ terms go away in the series representation, and we're only left with (the coefficient of) the $x^0$ term, which is what we want. Long story short, the probability that you need to wait longer than five minutes is $e^{0.0167\times300(0-1)} = 0.00674$. This means that the probability that you will see a fish-jump in the next five minutes is $1 - e^{0.0167\times300(0-1)}$, which is around 0.9932. This is the probability that you have to wait less than five minutes to see a fish-jump.

For an arbitrary time interval $T$, we have $P(t<T) = 1 - e^{-\lambda T}$, where $t$ is the actual time you have to wait. Sanity check time! This gets close to 1 as $T$ gets higher, which sounds about right: the longer you're willing to wait, the likelier it is that you'll see a fish jump. Similarly, if fish jump at a higher rate, $\lambda$ goes up, and the probability gets closer to 1, which makes sense. Indeed, encouragingly enough, this equation looks very close to the equation we use for half-lives and exponential radioactive decay…

Now things are going to get a bit hairy. What is the probability that you have to wait exactly $T$, that is, $P(t = T)$? This should be zero: nothing happens in no time. But let's be reasonable: when we say "exactly" $T$, we really mean a tiny window between, say, $T$ and $T + dT$, where $dT$ is a small amount of time, say, a millisecond. The question then is, what is $P(T < t < T + dT)$, which isn't too hard to answer: it's just $P(t < T + dT) - P(t < T)$, that is, you need to wait more than $T$ but less than $T + dT$. In other words,
\[ P(t \approx T, dT) = P(t < T+dT) - P(t < T) \]
where $dT$ is an "acceptable margin of error". This looks awfully like a derivative!
We're expressing the change in probability as a function of change in time: if I wait $dT$ longer, how much likelier am I to see a fish-jump? Let's rewrite our above equation to take advantage of the derivativey-ness of this situation.
\begin{align} P(t \approx T, dT) &= \left(\frac{P(t < T+dT) - P(t < T)}{dT}\right)dT\\ &= \left(\frac{d P(t < T)}{dT}\right)dT \\ &= \left(\frac{d (1-e^{-\lambda T})}{dT}\right)dT \\ &= \lambda e^{-\lambda T} dT \end{align}
By the way, this might give a simpler-but-slightly-less-satisfying answer to our initial question, "what is the probability that a fish jumps out right now?" If we set $T$ to 0, then we get $P(t \approx 0, dT) = \lambda dT$. In other words, if fish jump out of the water at a rate $\lambda$, then for a tiny period of time $dT$, the probability of seeing a fish jump in that time is $\lambda dT$. This is one of those facts that seems really straightforward one day, and completely mindblowing the next day. Anyway.

Now that we have an approximation for the probability that you need to wait a specific time $T$, we can find an expected value for $t$ by taking the sum over discrete increments of $dT$:
\[ E[t] = \sum^\infty_{k=0} P(t \approx T, dT) \times T \]
where $T = k\times dT$. Since we're talking about the limit as $dT$ gets smaller and smaller, it seems reasonable to assume that this thing turns into
\begin{align} E[t] &= \int^\infty_0 P(t \approx T, dT) \times T \\ &= \int^\infty_0 \lambda e^{-\lambda T} dT \times T \end{align}
You can integrate that by parts, or just use WolframAlpha, which tells you that $E[t] = \lambda^{-1}$.

…which is kind of obvious, isn't it? Remember that $\lambda$ was the rate at which our fish jumped. If fish jump once a minute, shouldn't we expect to have to wait a minute to see a fish jump? Isn't this similar to the way wavelength and frequency are related?

The answer is, "yes and no". "Yes", the value $\lambda^{-1}$ is indeed pretty sensible in retrospect. A simpler way to derive it might have been to note that for any time period $T$, the expected number of fish-jumps is $\lambda T$ (as we found out above), and so the average time interval between fish-jumps would be $T / (\lambda T) = \lambda^{-1}$. The fact that the average interval between fish-jumps corresponds to the expected interval is captured by the surprisingly well-known acronym "PASTA": Poisson Arrivals See Time Averages (I'm not making this up!).

But "no", it's not "obvious" that you should have to wait the average inter-fish time! Suppose that, like Rip Van Winkle, you woke up after a very long sleep, and you wanted to know "how much longer until Monday morning?" Well, Monday mornings happen every 7 days, and so if you set $\lambda = 1/7$, you should expect to have to wait 7 days until Monday. But that's silly! You definitely need to wait fewer than 7 days on average! In fact, most people would intuitively say that you need to wait 7/2 = 3.5 days on average: and they would be right. (The intuition is that on average, you'd wake up halfway between two Monday mornings.)

This is the so-called "Hitchhiker's Paradox": if cars on a highway through the desert appear roughly once an hour, how long does a hitchhiker who just woke up need to wait until he sees a car? It seems reasonable to say "half an hour", since on average, you'd wake up halfway between two cars. On the other hand, with $\lambda = 1$, you'd expect to wait an hour until you see a car. So which one is right?
And why are the answers different? Well, the "Rip Van Winkle" interpretation assumes that cars on a desert highway — like Mondays — come at regular intervals. In reality, cars on a desert highway — like the salmon of Seattle — are usually independent. They might come in a cluster a few minutes after you wake up, or a lone car might come the next day. Crucially, the next car doesn't "know" anything about previous cars, and so it doesn't matter when you wake up: we call this property "memorylessness". It turns out that since there's a nonzero probability of having to wait a very long time for a car, the average gets pulled up from half an hour. With that in mind, it's really quite surprising that the true mean turns out to be exactly $1/\lambda$.

And now, the aftermath.

Very little of the above discussion was fish-specific. The only properties of salmon that mattered here were that salmon jump randomly and independently of each other, at some rate $\lambda$. But our calculations work for any such process (let's call such processes Poisson processes). Poisson processes were studied as early as 1711 by de Moivre, who came up with the cool theorem about complex numbers. However, they're named after Siméon Denis Poisson, who in 1837 studied (not fish, but) the number of wrongful convictions in court cases.

Today, Poisson processes model all sorts of things. Managers use them to model customers arriving at a grocery checkout. Programmers use them to model packets coming into a network. Both of these are examples of queueing theory, wherein Little's Law relates $\lambda$ to how long things have to wait in queues. You could probably use a Poisson process to model how frequently bad things happen to good people, and use that to create a statistical model of how unfair the world is.

The upshot is this: even though I didn't record any fish-jumping data back in Seattle, we can definitely try out these ideas on other "sporadic" processes. Wikipedia, it turns out, maintains a list of earthquakes that happened in the 21st century. Earthquakes are pretty sporadic, so let's play with that dataset.

I scraped the date of each earthquake, and wrote a small script to count the number of earthquakes in each month-long interval. That is, $t$ is 2,592,000 seconds. By "binning" my data by month, I got lots of samples of $n$. This gives an easy way to compute $P(n, t)$ "empirically". On the other hand, taking the total number of earthquakes and dividing by the total time range (around 17 years, since we're in 2017) gives us the rate $\lambda$, which in this case works out to about $1.06\times10^{-6}$ earthquakes per second. This gives a way to compute $P(n, t)$ "theoretically" by using our fancy formula with the factorial and whatnot. Comparing the results gives us this pretty plot! They match up surprisingly well.

What else can we say? Well, the average inter-earthquake time works out to $1/\lambda$, or around 940,000 seconds. That's about eleven days. On average, a reader of this blog post can expect to wait eleven days until the next earthquake of magnitude 7 or above hits.

And for those of you who have been wondering, "can we do these calculations in reverse to approximate $e$?" the answer is, yes! We just solve the above equation for $e$.
\[ e\approx\left(\frac{P(n, t)n!}{(\lambda t)^n}\right)^{-(\lambda t)^{-1}} \]
In my case, using earthquake data for $n = 1$, I got $e \approx 2.75$.
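For concreteness, here is roughly what that computation looks like in code. This is only a sketch: the monthly counts below are made-up stand-ins for the scraped Wikipedia data, so the numbers it prints will not match the post's.

```python
import math
from collections import Counter

# Hypothetical earthquake counts per month-long bin (stand-ins for the real data).
monthly_counts = [0, 1, 0, 2, 1, 0, 0, 1, 3, 0, 1, 1, 0, 2, 0, 1, 0, 0, 1, 1]

t = 2_592_000                                           # one month, in seconds
lam = sum(monthly_counts) / (len(monthly_counts) * t)   # empirical rate, quakes per second

def poisson(n, t, lam):
    return (lam * t) ** n / (math.exp(lam * t) * math.factorial(n))

# Empirical P(n, t): the fraction of months with exactly n quakes.
freq = Counter(monthly_counts)
for n in range(4):
    empirical = freq[n] / len(monthly_counts)
    print(n, round(empirical, 3), round(poisson(n, t, lam), 3))

# Solving the formula for e, using the empirical P(1, t).
p1 = freq[1] / len(monthly_counts)
e_estimate = (p1 * math.factorial(1) / (lam * t) ** 1) ** (-1 / (lam * t))
print(e_estimate)
```

With the real scraped dates in place of the stand-in counts, this is essentially the whole pipeline: bin by month, estimate $\lambda$, compare empirical and theoretical $P(n, t)$, and back out $e$.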
I'd say that's pretty good for an algorithm that relies on geology for accuracy (in reality, $e$ is around 2.718).

In many ways, it is quite incredible that the Poisson process conditions — randomness, independence, constant rate — are all you need to derive conclusions for any Poisson process. Knowing roughly that customers at a burger place are random, act independently, and arrive around once a minute at lunchtime — and knowing nothing else — we can predict the probability that four customers arrive in the next three minutes. And, magically, this probability will have $e$ and a factorial in it. Humans don't evaluate expressions involving $e$ and factorials when they decide when to get a burger. They are subject to the immense complexity of human life, much like how salmon are subject to the immense complexity of the fluid mechanics that govern Lake Washington, much like how earthquakes are subject to the immense complexity of plate tectonics. And yet, somehow statistics unites these vastly different complexities, finding order and meaning in what is otherwise little more than chaos.

~ Fin. ~

Assorted references below.
https://en.wikipedia.org/wiki/Poisson_distribution
https://en.wikipedia.org/wiki/Exponential_field
https://www.stat.auckland.ac.nz/~fewster/325/notes/ch4.pdf
http://pages.cs.wisc.edu/~dsmyers/cs547/lecture_11_pasta.pdf
https://www.netlab.tkk.fi/opetus/s383143/kalvot/E_poisson.pdf

Postscript, two weeks later.

This morning at the coffee shop I realized that the Poisson distribution is a lot like the binomial distribution with a lot of trials: the idea is that you have lots of little increments of time, and a fish either jumps or doesn't jump in each increment — this is called a Bernoulli process. Presumably, over a long period of time, this should even out to a Poisson process…

Recall that the probability of a fish-jump happening in some small time period $dt$ turned out to be $\lambda dt$ for our definition of $\lambda$ as the rate of fish-jumps. Can we go the other way, and show that if the probability of something happening is $\lambda dt$ for a small period of time $dt$, then it happens at a rate of $\lambda$? Turns out, yes!

The binomial distribution is a way to figure out, say, what the probability is that if I flip 100 coins, then exactly 29 of them land "heads" (a coin toss is another example of a Bernoulli process). More abstractly, the binomial distribution gives you the probability $B(N, k)$ that if something has probability $p$ of happening, then it happens $k$ times out of $N$ trials. The formula for $B(N, k)$ can be derived pretty easily, and you can find very good explanations in a lot of high-school textbooks. So, if you don't mind, I'm just going to give it to you for the sake of brevity:
\[ B(N, k) = \binom{N}{k} p^k (1-p)^{N-k} \]
Now, can we apply this to a Poisson process? Well, let's say $k = n$, the number of times our event happens in time $t$. Then we have
\[ \binom{N}{n} p^n (1-p)^{N-n} \]
What next? We know that $p = \lambda dt$. Also, for time period $t$, there are $t / dt$ intervals of $dt$, so $N = t / dt$. That means we can substitute $dt = t / N$, and thus $p = \lambda (t / N)$.
This gives us
\[ \binom{N}{n} (\lambda t / N)^n (1-\lambda t / N)^{N-n} \]
Oh, and of course to approximate a Poisson process, this is the limit as $N$ approaches infinity:
\[ \lim_{N\to\infty} \binom{N}{n} (\lambda t / N)^n (1-\lambda t / N)^{N-n} \]
This isn't a hard limit to take if we break apart the product.
\[ \lim_{N\to\infty} \frac{N! (\lambda t)^n}{n!(N-n)! N^n} \lim_{N\to\infty}(1-\lambda (t / N))^{N-n} \]
The right half is, surprisingly enough, the definition of $e^{-\lambda t}$, since the $- n$ in the exponent doesn't really matter. The left half is trickier: it turns out that $N! / (N-n)!$ is the product $N(N-1)\ldots(N-n+1)$. As a polynomial, it is degree $n$, and the leading term is $N^n$. But look! In the denominator, we have an $N^n$ term as well, so in the limit, those both go away. We're left with what simplifies to our expression for the Poisson distribution.
\begin{align} \lim_{dt\to 0} B(N=t/dt, p=\lambda dt) &= \frac{(\lambda t)^n}{n!}e^{-\lambda t} \\ &= \frac{(\lambda t)^n}{e^{\lambda t}n!} \\ &= P(n, t) \end{align}
which I think is literally magic.
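The limit is easy to watch happen numerically: fix $n$, $\lambda$, and $t$, and let $N$ grow. A minimal sketch, where the rate and window reuse the fish numbers from earlier (one jump per minute, a five-minute window) and $n = 2$ is an arbitrary choice:

```python
import math

lam, t, n = 1 / 60, 300, 2  # rate, time window in seconds, number of events

def binomial(N, n, p):
    return math.comb(N, n) * p ** n * (1 - p) ** (N - n)

poisson = (lam * t) ** n / (math.exp(lam * t) * math.factorial(n))

# Split the window into N Bernoulli increments, each with success probability lam*t/N.
for N in (10, 100, 1_000, 10_000, 100_000):
    print(N, binomial(N, n, lam * t / N))

print(poisson)  # the binomial values above creep toward this number
```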
Probability and Statistical Physics Seminar

The probability seminar is organized by Vivian Healey, Steven Lalley, Gregory Lawler and Xinyi Li. It takes place on Fridays at 2:30 pm in Eckhart 202, unless otherwise specified.

Summer 2019 Seminars

Friday, July 5, 3:00pm-3:50pm, in Eckhart 206: Eveliina Peltola - University of Geneva
Title: Multiple SLEs, discrete interfaces, and crossing probabilities
Abstract: Multiple SLEs are conformally invariant measures on families of curves, that naturally correspond to scaling limits of interfaces in critical planar lattice models with alternating ("generalized Dobrushin") boundary conditions. I discuss classification of these measures and how the convergence for discrete interfaces in many models is obtained as a consequence. When viewed as measures with total mass, the multiple SLEs can also be related to probabilities of crossing events in lattice models. The talk is based on joint works with Hao Wu (Yau Mathematical Sciences Center, Tsinghua University) and Vincent Beffara (Université Grenoble Alpes, Institut Fourier).

Spring 2019 Seminars

Friday, Apr 5: Edwin Perkins - UBC
Title: On the range of lattice models in high dimensions.
Abstract: We investigate the scaling limit of the range (the set of visited vertices) for a general class of critical lattice models, starting from a single initial particle at the origin. Conditions are given on the random sets and an associated "ancestral relation" under which, conditional on long term survival, the rescaled ranges converge weakly to the range of super-Brownian motion as random sets. These hypotheses also give precise asymptotics for the limiting behaviour of the probability of exiting a large ball. Applications include voter models, contact processes, oriented percolation and lattice trees. This is joint work with Mark Holmes and also features work of Akira Sakai and Gordon Slade.

Friday, Apr 12: Jonathan Novak - UCSD
Title: A Tale of Two Integrals
Abstract: I will discuss the asymptotic behavior of two matrix integrals: the Harish-Chandra/Itzykson-Zuber integral, and its additive counterpart, the Brezin-Gross-Witten integral. Both are integrals over the group U(N), and their behavior in the large N limit is the subject of a pair of conjectures formulated by physicists in 1980 in connection with the large N limit in lattice gauge theory. I will discuss a proof of these conjectures which involves relating them to fundamental combinatorial structures, in particular Hurwitz numbers and increasing subsequences in permutations.

[SPECIAL TIME AND PLACE] Wednesday, Apr 17, 2:00PM @ Eckhart 207: Daisuke Shiraishi - Kyoto University
Title: Scaling limit of uniform spanning tree in three dimensions
Abstract: In the talk, we will show the existence of the scaling limit of the three-dimensional uniform spanning tree (UST) with respect to the Gromov-Hausdorff-Prohorov type topology and obtain several properties of the limiting tree. Moreover, we will prove that the rescaled simple random walk on the 3D UST converges weakly to a diffusion on the limit tree above. Detailed transition density estimates for the limiting process will be derived. These are ongoing works with O. Angel (UBC), D. Croydon (Kyoto University) and S. Hernandez Torres (UBC).
Friday, Apr 19: Atilla Yilmaz - Temple
Title: Homogenization of a class of one-dimensional nonconvex viscous Hamilton-Jacobi equations with random potential
Abstract: I will present joint work with Elena Kosygina and Ofer Zeitouni in which we prove the homogenization of a class of one-dimensional viscous Hamilton-Jacobi equations with random Hamiltonians that are nonconvex in the gradient variable. Due to the special form of the Hamiltonians, the solutions of these PDEs with linear initial conditions have representations involving exponential expectations of controlled Brownian motion in a random potential. The effective Hamiltonian is the asymptotic rate of growth of these exponential expectations as time goes to infinity and is explicit in terms of the tilted free energy of (uncontrolled) Brownian motion in a random potential. The proof involves large deviations, construction of correctors which lead to exponential martingales, and identification of asymptotically optimal policies.

Friday, Apr 26: Joe Chen - Colgate University
Title: Laplacian growth and sandpiles on the Sierpinski gasket: limit shape universality, fluctuations, and beyond
Abstract: Given a locally finite connected graph with a distinguished vertex $o$, start $m$ particles at $o$, and let them aggregate according to one of the four following discrete Laplacian growth models: internal diffusion-limited aggregation (IDLA), rotor-router aggregation (RRA), divisible sandpiles, and abelian sandpiles. We are interested in describing the limit shapes and radial growth of the cluster in each model. Do they coincide for all 4 models? And how sharp a radial bound can one get? On the Euclidean lattice: No, though there are many results on radial bounds. On the Sierpinski gasket graph, where $o$ is the corner vertex: Yes, and sharp radial bounds are now available. My talk will address the gasket story and consists of two parts:
1) By solving the divisible sandpile problem, one can gain access to information about the harmonic measure and obtain limit shape results for IDLA and RRA. I will describe how the proofs work on the gasket, incorporating ideas from the earliest IDLA proofs (Lawler-Bramson-Griffeath), random walks on graphs (Harnack inequality), and a fast simulation algorithm (Friedrich-Levine).
2) The abelian sandpile problem poses an entirely different set of challenges. Quite luckily, on the gasket we have solved the abelian sandpile growth problem EXACTLY. The key idea is to exploit the cellular structure, the cut point structure, and axial symmetries to perform systematic topplings in waves. This allows us to inductively establish tiling patterns in the sandpile configurations, in particular, the identity elements of the associated sandpile groups. It also leads to the enumeration of all the radial jumps in the growing cluster, and implies, via the renewal theorem, a radial asymptotic formula in the form of a power law modulated by log-periodic oscillations---the best possible result on such a state space.
Based on joint works with Wilfried Huss, Ecaterina Sava-Huss (TU Graz), Alexander Teplyaev (UConn), and Jonah Kudler-Flam (UChicago).

Friday, May 3: Xin Sun - Columbia
Title: Dynamical percolation on random triangular lattices
Abstract: Dynamical (site) percolation on a graph is a Markov process where the state space is the set of possible black/white colorings of the vertices of the graph. Each vertex is associated with an independent Poisson clock, and the color of a vertex is resampled every time its clock rings.
Dynamical percolation on the regular triangular lattice was thoroughly studied by Garban, Pete and Schramm. In this talk we will discuss the case when the graph is a uniformly sampled triangulation. In particular, we will explain how to describe the scaling limit of this process and show its ergodicity. Time permitting, we will also explain the role it played in the study of the conformal structure of uniform triangulations. Based on joint work with Christophe Garban, Nina Holden and Avelio Sepulveda.

Friday, May 10: Erik Bates - Stanford
Title: Localization phenomena of directed polymers
Abstract: On the d-dimensional integer lattice, directed polymers are paths of a random walk that have been reweighted according to a random environment that refreshes at each time step. The qualitative behavior of the system is governed by a temperature parameter; if this parameter is small, the environment has little effect, meaning all possible paths are close to equally likely. If the parameter is made large, however, the system undergoes a phase transition beyond which the path "localizes," meaning the polymer measure concentrates. In this talk, I will discuss different quantitative statements of this phenomenon, methods of proving these statements, and natural connections to other statistical mechanical systems. (Joint work with Sourav Chatterjee)

Friday, May 17: Joan Lind - University of Tennessee, Knoxville
Title: The effect of random modifications on Loewner hulls
Abstract: Loewner hulls are determined by their real-valued driving functions, via the Loewner differential equation. We will discuss two projects which study the geometric effect on the Loewner hulls of random modifications. In the first project, which is joint work with Kei Kobayashi and Andrew Starnes, the driving function is composed with a random time change, such as the inverse of an $\alpha$-stable subordinator. In contrast to Schramm-Loewner evolution (SLE), we show that for a large class of random time changes, the time-changed Brownian motion process does not generate a simple curve. Further we develop criteria which can be applied in many situations to determine whether the Loewner hull generated by a time-changed driving function is simple or non-simple. In the second project, joint with Nathan Albin and Pietro Poggi-Corradini, we look at the scaling limit of Peano curves associated to so-called fair spanning trees. In contrast to the convergence of the uniform spanning tree Peano curve to SLE(8), we show that the scaling limit is a deterministic object.

Friday, May 24: Yu Zhang - Colorado
Title: A geometric property for optimal paths and its applications in first passage percolation
Abstract: We consider the first passage percolation model in Z^d with a weight distribution F for 0 < F(0) < p_c. In this paper, we derive a geometric property for optimal paths to show that all of them have to pass an M-exit. By this property, we show that the shape is strictly convex, and we solve the height problem.

Friday, May 31: Charles Smart - Chicago
Title: Unique continuation and localization on the planar lattice
Abstract: I will discuss joint work with Jian Ding in which we establish localization near the edge for the Anderson Bernoulli model on the two dimensional lattice. Our proof follows the program of Bourgain-Kenig and uses a new unique continuation result inspired by Buhovsky-Logunov-Malinnikova-Sodin.
Friday, June 7: Jun Yin - UCLA
Title: Universality and delocalization of band random matrices
Abstract: In this talk, we discuss some recent work related to a main conjecture in random matrix theory, namely the phase-transition conjecture for band random matrices. The prediction is that the phase transition occurs at band width $W \sim N^{1/2}$. For high-dimensional matrices, i.e. $H_{xy}$ with $x, y \in Z^d$, there are similar simulation results. Based on developments in the study of the resolvent $G=(H-z)^{-1}$, we have obtained results in the low- and high-dimensional cases. In this talk, we will introduce this work and the main ideas and tools used in it. This is joint work with Laci Erdos, Paul Bourgade, H.T. Yau, Yang Fan, and others.

Winter 2019 Seminars

Monday, Jan 14, 1:30pm, Eckhart 207: Huy V. Tran - TU Berlin
Title: A support theorem for SLE curves
Abstract: SLE curves are an important family of random curves in the plane. They share many similarities with solutions of SDE (in particular, with Brownian motion). Any question asked for the latter can be asked for the former. Inspired by that, Yizheng and I investigate the support of SLE curves. In this talk, I will explain our theorem with more motivation and idea.

Friday, Jan 25: Ilia Binder - University of Toronto
Title: Large concentration of harmonic measure: discrete and continuous case.
Abstract: I will discuss the sharp bounds on the multifractal spectrum of planar harmonic measure and their continuous analogues. In particular, I will talk about the refinements of Beurling estimates on the concentration of harmonic measure.

Friday, Feb 1: Jiaoyang Huang - Harvard
Title: Invertibility of adjacency matrices for random d-regular graphs
Abstract: The singularity problem of random matrices asks for the probability that a given discrete random matrix is singular. The first such result was obtained by Komlós in 1967. He showed a Bernoulli random matrix is singular with probability o(1). This question can be reformulated for the adjacency matrices of random graphs, either directed or undirected. The most challenging case is when the random graph is sparse. In this talk, I will prove that for random directed and undirected d-regular graphs, their adjacency matrices are invertible with high probability for all d>=3. The idea is to study the adjacency matrices over a finite field, and the proof combines a local central limit theorem and a large deviation estimate.

Friday, Feb 8: Balint Virag - University of Toronto
Title: The directed landscape
Abstract: The longest increasing subsequence in a random permutation, the second class particle in TASEP, and semi-discrete polymers at zero temperature have the same scaling limit: a random function with Holder exponent 2/3-. This limit can be described in terms of the directed landscape, a random metric at the heart of the Kardar-Parisi-Zhang universality class. Joint work with Duncan Dauvergne and Janosch Ortmann.

Friday, Feb 15: Mateo Wirth - University of Pennsylvania
Title: Chemical distance for level-set percolation of planar metric-graph Gaussian free field
Abstract: We consider percolation of the level-sets of a metric-graph Gaussian free field on a box of the planar integer lattice. We provide an upper bound for the length of the shortest path joining the boundary components of a macroscopic annulus. The bound holds with high probability conditioned on connectivity and is sharp up to a poly-logarithmic factor with exponent one-quarter. This is joint work with Jian Ding.
Friday, Feb 22: Elton Hsu - Northwestern
Title: Bismut's Interpolation Between Riemannian Brownian Motion and Geodesic Flow
Abstract: A few years ago Bismut found a natural family of diffusion processes that interpolates between Brownian motion and the geodesic flow on a compact Riemannian manifold. The convergence to Brownian motion (together with an attendant Gaussian field) at one end of the parameter interval is nontrivial and proved in the weak sense of finite dimensional marginal distributions. By slightly extending the traditional setting for stochastic calculus on manifolds, we show that the convergence can be realized as a strong one in the path space. Such a more precise convergence to Brownian motion will help us in understanding the asymptotic behavior of some classical functional inequalities for path spaces.

Friday, March 1: Lionel Levine - Cornell
Title: Random walks with local memory
Abstract: The theme of this talk is walks in a random environment of "signposts" altered by the walker. I'll focus on three related examples:
1. Rotor walk on Z^2. Your initial signposts are independent with the uniform distribution on {North, East, South, West}. At each step you rotate the signpost at your current location clockwise 90 degrees and then follow it to a nearest neighbor. Priezzhev et al. conjectured that in n such steps you will visit order n^{2/3} distinct sites. I'll outline an elementary proof of a lower bound of this order. The upper bound, which is still open, is related to a famous question about the path of a light ray in a grid of randomly oriented mirrors. This part is joint work with Laura Florescu and Yuval Peres.
2. p-rotor walk on Z. In this walk you flip the signpost at your current location with probability 1-p and then follow it. I'll explain why your scaling limit will be a Brownian motion perturbed at its extrema. This part is joint work with Wilfried Huss and Ecaterina Sava-Huss.
3. p-rotor walk on Z^2. Rotate the signpost at your current location clockwise with probability p and counterclockwise with probability 1-p, and then follow it. This walk "organizes" its environment by destroying cycles of signposts. A native environment -- stationary in time, from your perspective as the walker -- is an orientation of the uniform spanning forest, plus one additional edge. This part is joint work with Swee Hong Chan, Lila Greco, and Peter Li.

Friday, March 8: Qiang Zeng - CUNY Queens College
Title: 2-RSB for spherical mixed p-spin models at zero temperature
Abstract: Historically, the method of replica symmetry breaking (RSB) was introduced by Parisi to study mean field spin glass models. This method has played an indispensable role in the physical deduction of the Parisi formula. Mathematically, the level of RSB corresponds to the number of points in the support of the Parisi measure. Replica symmetric, 1-step RSB and full-step RSB "natural" spin glass models are all known. However, k-step RSB for finite k>1 is less well understood. In this talk, I will show that certain spherical mixed p-spin glass models are 2-step RSB at zero temperature and present consequences for the energy landscape. This talk is based on joint work with Antonio Auffinger.

Fall 2018 Seminars

Friday, Aug 31: Daisuke Shiraishi - Kyoto University
Title: Natural parametrization for the scaling limit of loop-erased random walk in three dimensions
Abstract: We will consider loop-erased random walk (LERW) and its scaling limit in three dimensions.
Fall 2018 Seminars

Friday, Aug 31: Daisuke Shiraishi - Kyoto University.
Title: Natural parametrization for the scaling limit of loop-erased random walk in three dimensions
Abstract: We will consider loop-erased random walk (LERW) and its scaling limit in three dimensions. Gady Kozma (2007) showed that as the lattice spacing becomes finer, LERW in three dimensions converges weakly to a random compact set with respect to the Hausdorff distance. We will show that 3D LERW parametrized by renormalized length converges in the lattice size scaling limit to Kozma's scaling limit parametrized by some suitable measure on it with respect to the uniform norm. This is based on joint work with Xinyi Li (University of Chicago).

Friday, Oct 5: Robin Pemantle - University of Pennsylvania
Title: Percolation on random trees (joint work with Marcus Michelen and Josh Rosenberg)
Abstract: Let T be a tree chosen from Galton-Watson measure and let {U_v} be IID uniform [0,1] random variables associated with the edges between each vertex and its parent. These define coupled Bernoulli percolation processes, as well as an invasion percolation process. We study quenched properties of these percolations: properties conditional on T that hold for almost every T. The invasion cluster has a backbone decomposition which is Markovian if you put on the right blinders. Under suitable moment conditions, the law of the (a.s. unique) backbone ray is absolutely continuous with respect to limit uniform measure. The quenched survival probabilities are smooth in the supercritical region p > p_c. Their behavior as p -> p_c depends on moment assumptions for the offspring distribution.

Friday, Oct 12: Jonathon Peterson - Purdue University
Title: Quantitative CLTs for random walks in random environments
Abstract: The Berry-Esseen estimates give quantitative error estimates on the CLT for sums of i.i.d. random variables, and the polynomial decay rate for the error depends on moment bounds of the i.i.d. random variables, with the optimal $1/\sqrt{n}$ rate of convergence obtained under a third moment assumption. In this talk we will prove quantitative error bounds for CLTs of random walks in random environments (RWRE). That is, for certain models of RWRE it is known that the position of the random walk has a Gaussian limiting distribution, and we obtain quantitative error estimates on the rate of convergence to the Gaussian distribution for such RWRE. This talk is based on joint works with Sungwon Ahn and Xiaoqin Guo.

October 18-20: (no seminar, but) Midwest Probability Colloquium @ Northwestern University.

Friday, Oct 26: Will Perkins - University of Illinois at Chicago
Title: Algorithmic Pirogov-Sinai theory
Abstract: We develop efficient algorithms to approximate the partition function and sample from the hard-core and Potts models on lattices at sufficiently low temperatures in the phase coexistence regime. In contrast, the Glauber dynamics are known to take exponential time to mix in this regime. Our algorithms are based on the cluster expansion and Pirogov-Sinai theory, classical tools from statistical physics for understanding phase transitions, as well as Barvinok's approach to polynomial approximation. Joint work with Tyler Helmuth and Guus Regts.

Friday, Nov 2: Jason Schweinsberg - UCSD
Title: Yaglom-type limit theorems for branching Brownian motion with absorption
Abstract: We consider one-dimensional branching Brownian motion in which particles are absorbed at the origin. We assume that when a particle branches, the offspring distribution is supercritical, but the particles are given a critical drift towards the origin so that the process eventually goes extinct with probability one.
We establish precise asymptotics for the probability that the process survives for a large time t, improving upon a result of Kesten (1978) and Berestycki, Berestycki, and Schweinsberg (2014). We also prove a Yaglom-type limit theorem for the behavior of the process conditioned to survive for an unusually long time, which also improves upon results of Kesten (1978). An important tool in the proofs of these results is the convergence of branching Brownian motion with absorption to a continuous state branching process.

Thursday, Nov 8: Billingsley lecture by Allan Sly - Princeton University
Title: Phase transitions of random constraint satisfaction problems
Abstract: Random constraint satisfaction problems encode many interesting questions in the study of random graphs such as the chromatic and independence numbers. Ideas from statistical physics provide a detailed description of phase transitions and properties of these models. We will discuss the one step replica symmetry breaking transition that many such models undergo.

Friday, Nov 9: Allan Sly - Princeton University
Title: Coalescence of polymers in Last Passage Percolation
Abstract: We will discuss new bounds on the time for polymers in last passage percolation to coalesce. As a consequence we will prove that there are no non-trivial bi-geodesics. The methods will combine bounds given by exactly solvable calculations with tools from percolation theory.

Friday, Nov 16: Martin Barlow - University of British Columbia
Title: Stability of the elliptic Harnack Inequality
Abstract: Following the work of Moser, as well as de Giorgi and Nash, Harnack inequalities have proved to be a powerful tool in PDE as well as in probability. In the early 1990s Grigor'yan and Saloff-Coste gave a characterisation of the parabolic Harnack inequality (PHI). This characterisation implies that the PHI is stable under bounded perturbation of weights, as well as rough isometries. In this talk we prove the stability of the elliptic Harnack inequality (EHI). The proof uses the concept of a quasi-symmetric transformation of a metric space, and the introduction of these ideas to Markov processes suggests a number of new problems. This is joint work with Mathav Murugan (UBC).

Friday, Nov 30: Renming Song - University of Illinois Urbana-Champaign
Title: Factorizations and estimates of Dirichlet heat kernels for non-local operators with critical killings
Abstract: In this talk I will discuss heat kernel estimates for critical perturbations of non-local operators. To be more precise, let $X$ be the reflected $\alpha$-stable process in the closure of a smooth open set $D$, and $X^D$ the process killed upon exiting $D$. We consider potentials of the form $\kappa(x)=C\delta_D(x)^{-\alpha}$ with positive $C$ and the corresponding Feynman-Kac semigroups. Such potentials do not belong to the Kato class. We obtain sharp two-sided estimates for the heat kernel of the perturbed semigroups. The interior estimates of the heat kernels have the usual $\alpha$-stable form, while the boundary decay is of the form $\delta_D(x)^p$ with non-negative $p\in [\alpha-1, \alpha)$ depending on the precise value of the constant $C$. Our result recovers the heat kernel estimates of both the censored and the killed stable process in $D$. Analogous estimates are obtained for the heat kernel of the Feynman-Kac semigroup of the $\alpha$-stable process in ${\mathbf R}^d\setminus \{0\}$ through the potential $C|x|^{-\alpha}$.
All estimates are derived from a more general result described as follows: Let $X$ be a Hunt process on a locally compact separable metric space in a strong duality with $\widehat{X}$. Assume that transition densities of $X$ and $\widehat{X}$ are comparable to the function $\widetilde{q}(t,x,y)$ defined in terms of the volume of balls and a certain scaling function. For an open set $D$ consider the killed process $X^D$, and a critical smooth measure on $D$ with the corresponding positive additive functional $(A_t)$. We show that the heat kernel of the Feynman-Kac semigroup of $X^D$ through the multiplicative functional $\exp(-A_t)$ admits a factorization of the form ${\mathbf P}_x(\zeta >t)\widehat{\mathbf P}_y(\widehat{\zeta}>t)\widetilde{q}(t,x,y)$. This is joint work with Soobin Cho, Panki Kim and Zoran Vondracek.

Friday, Nov 30: Yuri Bakhtin - NYU
Title: Ergodic theory of the stochastic Burgers equation
Abstract: The stochastic Burgers equation is one of the basic evolutionary SPDEs related to fluid dynamics and KPZ, among other things. The ergodic properties of the system in the compact space case were understood in the 2000's. With my coauthors, Eric Cator, Kostya Khanin, and Liying Li, I have been studying the noncompact case. The one force - one solution principle has been proved for positive and zero viscosity. The analysis is based on long-term properties of action minimizers and polymer measures. The latest addition to the program is the convergence of infinite volume polymer measures to Lagrangian one-sided minimizers in the limit of vanishing viscosity (or, temperature), which results in the convergence of the associated global solutions and invariant measures.

Friday, Dec 7, 3:00 PM: Cheng Ouyang - University of Illinois at Chicago
Title: Local density estimate for a hypoelliptic SDE
Abstract: In a series of three papers in the 80's, Kusuoka and Stroock developed a probabilistic program in order to obtain sharp bounds for the density function of a hypoelliptic SDE driven by a Brownian motion. We aim to investigate how their method can be used to study rough SDEs driven by fractional Brownian motions. In this talk, I will outline Kusuoka and Stroock's approach and point out where the difficulties are in our current setting. The talk is based on an ongoing project with Xi Geng and Samy Tindel.

Friday, Mar 30: Alan Hammond - University of California at Berkeley
Title: The weight, geometry and coalescence of scaled polymers in Brownian last passage percolation
Abstract: In last passage percolation (LPP) models, a random environment in the two-dimensional integer lattice consisting of independent and identically distributed weights is considered. The weight of an upright path is said to be the sum of the weights encountered along the path. A principal object of study is the collection of polymers, which are the upright paths whose weight is maximal given the two endpoints. Polymers move in straight lines over long distances with a two-thirds exponent dictating fluctuation. It is natural to seek to study collective polymer behaviour in scaled coordinates that take account of this linear behaviour and the two-thirds-exponent fluctuation. We study Brownian LPP, a model whose integrable properties find an attractive probabilistic expression.
Building on a study arXiv:1609.02971 concerning the decay in probability for the existence of several near polymers with common endpoints, we demonstrate that the probability that there exist k disjoint polymers across a unit box in scaled coordinates has a superpolynomial decay rate in k. This result has implications for the Brownian regularity of the scaled polymer weight profile begun from rather general initial data.

Friday, Apr 13: Yuan Zhang - Texas A & M University
Title: Stationary Harmonic Measure and DLA in the Upper Half Plane
Abstract: In this talk, we introduce the stationary harmonic measure in the upper half plane. By bounding this measure, we are able to define both the discrete and continuous time diffusion limited aggregation (DLA) in the upper half plane with absorbing boundary conditions. We prove that for the continuous model the growth rate is bounded from above by $o(n^{2+\epsilon})$. When time is discrete, we also prove a better upper bound of $o(n^{2/3+\epsilon})$ on the maximum height of the aggregate at time $n$.

Friday, Apr 20: Christopher Hoffman - University of Washington
Title: Geodesics in First-Passage Percolation
Abstract: First-passage percolation is a classical random growth model which comes from statistical physics. We will discuss recent results about the relationship between the limiting shape in first passage percolation and the structure of the infinite geodesics. This includes a solution to the midpoint problem of Benjamini, Kalai and Schramm. This is joint work with Daniel Ahlberg.

Friday, April 27: Timo Seppalainen - University of Wisconsin-Madison
Title: Shifted weights and restricted path length in first-passage percolation
Abstract: First-passage percolation has remained a challenging field of study since its introduction in 1965 by Hammersley and Welsh. There are many outstanding open problems. Among these are properties of the limit shape and the Euclidean length of geodesics. This talk describes a convex duality between a shift of the edge weights and the length of the geodesic, together with related results on the regularity of the limit shape as a function of the shift. The talk is based on joint work with Arjun Krishnan (Rochester) and Firas Rassoul-Agha (Utah).

Friday, May 4: Laura Eslava - Georgia Tech
Title: The giant component in a degree-bounded process
Abstract: Graph processes $(G(i),i\ge 0)$ are usually defined as follows. Starting from the empty graph on $n$ vertices, at each step $i$ a random edge is added from a set of available edges. For the $d$-process, edges are chosen uniformly at random among all edges joining vertices of current degree at most $d-1$. The fact that, during the process, vertices become 'inactive' when reaching degree $d$ makes the process depend heavily on its history. However, it shares several qualitative properties with the classical Erdos-Renyi graph process. For example, there exists a critical time $t_c$ at which a giant component emerges, whp (that is, the largest component in $G(tn)$ goes from logarithmic to linear order). In this talk we consider $d\ge 3$ fixed and describe the growth of the size of the giant component. In particular, we show that whp the largest component in $G((t_c+\eps)n)$ has asymptotic size $cn$, where $c\sim c_d \eps$ is a function of time $\eps$ as $\eps \to 0+$. The growth, linear in $\eps$, is a new common qualitative feature shared with the Erdos-Renyi graph process and can be generalized to hypergraph processes with different max-allowed degree sequences. This is work in progress, joint with Lutz Warnke.
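To make the $d$-process in Laura Eslava's May 4 abstract concrete, here is a small simulation sketch (an illustration added to this page, not code from the talk; it draws each new edge by rejection sampling among pairs of distinct, not-yet-joined vertices of degree at most $d-1$, a simplification of the model). It tracks the largest component size as edges are added.

import random

def d_process_largest_component(n, d, t, seed=0):
    """Run a d-process sketch on n vertices for floor(t*n) edge insertions and
    return the size of the largest connected component (via union-find)."""
    rng = random.Random(seed)
    degree = [0] * n
    parent = list(range(n))
    size = [1] * n
    edges = set()

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]

    active = set(range(n))          # vertices of degree <= d-1
    for _ in range(int(t * n)):
        if len(active) < 2:
            break
        while True:
            u, v = rng.sample(sorted(active), 2)
            if (min(u, v), max(u, v)) not in edges:
                break
        edges.add((min(u, v), max(u, v)))
        union(u, v)
        for w in (u, v):
            degree[w] += 1
            if degree[w] >= d:
                active.discard(w)
    return max(size[find(v)] for v in range(n))

# Largest component size at several times t (t*n edges inserted), with d = 3.
for t in (0.5, 0.75, 1.0, 1.25):
    print(t, d_process_largest_component(2000, 3, t))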
Friday, May 18: Zhongyang Li - University of Connecticut
Title: Phase transitions in the 1-2 model
Abstract: A configuration in the 1-2 model is a subgraph of the hexagonal lattice, in which each vertex is incident to 1 or 2 edges. By assigning weights to configurations at each vertex, we can define a family of probability measures on the space of these configurations, such that the probability of a configuration is proportional to the product of weights of configurations at vertices. We study the phase transition of the model by investigating the probability measures with varying weights. We explicitly identify the critical weights, in the sense that the edge-edge correlation decays to 0 exponentially in the subcritical case, and converges to a non-zero constant in the supercritical case, under the limit measure obtained from torus approximation. These results are obtained by a novel measure-preserving correspondence between configurations in the 1-2 model and perfect matchings on a decorated graph, which appears to be a more efficient way to solve the model, compared to the holographic algorithm used by computer scientists to study the model. The major difficulty here is the absence of stochastic monotonicity.

Friday, May 25: Jinho Baik - University of Michigan
Title: Fluctuations of the free energy of spherical Sherrington-Kirkpatrick model
Abstract: Consider the question of finding the maximum of a random polynomial defined on a closed manifold or a finite graph. The spherical Sherrington-Kirkpatrick (SSK) model is a finite temperature version of this question when the underlying space is a sphere. The free energy is the finite temperature version of the maximum value. The limit of the free energy as the dimension of the sphere becomes infinite is known by the works of Parisi, Crisanti, Sommers, and Talagrand. In this talk we consider the fluctuations when the polynomial is a symmetric quadratic function. We use a connection to random matrices and obtain limit theorems. This is a joint work with Ji Oon Lee and Hao Wu.

Wednesday, May 30 (3pm-4pm): Colloquium by Greg Lawler - The University of Chicago.
Title: Conformally Invariant Paths and Loops
Abstract: There has been incredible progress in the last twenty years in the rigorous understanding of two-dimensional critical systems in statistical physics. I will give an overview with an emphasis on several related models: loop-erased random walk, spanning trees and corresponding loop soup; fractal paths and loops arising in critical systems (Schramm-Loewner evolution); the Gaussian free field and functions thereof (quantum gravity). I will also discuss some challenges for the future. This talk is intended for a general audience - it is not assumed that the audience is familiar with these terms.

Friday, June 1: Alexander Drewitz - University of Cologne
Title: Phase transitions in some percolation models with long-range correlations on general graphs
Abstract: We consider two fundamental percolation models with long-range correlations on reasonably general and well-behaved transient graphs: the Gaussian free field and (the vacant set of) Random Interlacements. Both models have been the subject of intensive research during the last years and decades, on $\Z^d$ as well as on some more general graphs.
We consider their percolation phase transition and investigate a couple of interesting properties of their critical parameters, in particular the existence of a phase transition. This talk is based on joint works with A. Prevost (Koeln) and P.-F. Rodriguez (Los Angeles).

[CANCELLED, RESCHEDULED TO May 18] Friday, Jan 5: Zhongyang Li - University of Connecticut

Friday, Jan 12: Shirshendu Ganguly - UC Berkeley (NB: this seminar takes place exceptionally at Eckhart 133!)
Title: Understanding rare events in models of statistical mechanics
Abstract: Statistical mechanics models are ubiquitous at the interface of probability theory, information theory, and inference problems in high dimensions. In this talk we will focus on sparse graphs and polymers on lattices, two canonical models in the natural sciences. The study of large deviations is intimately related to the understanding of such models. We will consider the rare events that a sparse random network has an atypical number of certain local structures and that a polymer in random media has atypical weight. Conditioning on such events can produce different geometric effects, ranging from local to more global ones. We will discuss some such results, obtained by relying on a variety of entropy-theoretic, combinatorial, and analytic tools.

[CANCELLED, RESCHEDULED TO April 13] Friday, Jan 26: Yuan Zhang - Texas A & M University

Friday, Feb 2: Samuel Watson - Brown University
Title: Relating a classical planar map embedding algorithm to Liouville quantum gravity and SLE(16)
Abstract: In 1990, Walter Schnyder introduced a class of 3-spanning-tree decompositions of a simple triangulation to describe a combinatorially natural grid embedding algorithm for planar maps. It turns out that a uniformly sampled Schnyder-wood-decorated triangulation on n vertices converges as n tends to infinity to a random fractal surface, called a Liouville quantum gravity (LQG) surface, together with a triple of intertwined fractal curves known as SLE(16). We will motivate this result by describing Schnyder's algorithm and discussing some history of random planar map convergence results, and we will also introduce LQG and SLE and explain their role in the story.

Friday, Feb 16: Omer Angel - University of British Columbia
Title: New perspectives on Mallows permutations
Abstract: I will discuss two projects concerning Mallows permutations, with Ander Holroyd, Tom Hutchcroft and Avi Levy. First, we relate the Mallows permutation to stable matchings, and percolation on bipartite graphs. Second, we study the scaling limit of the cycles in the Mallows permutation, and relate it to diffusions and continuous trees.

Friday, Feb 23: Julian Gold - Northwestern University
Title: Dynamical freezing in a spin glass system with logarithmic correlations
Abstract: We consider a continuous time random walk on the two-dimensional discrete torus, whose motion is governed by the discrete Gaussian free field on the corresponding box acting as a potential. More precisely, at any vertex the walk waits an exponentially distributed time with mean given by the exponential of the field and then jumps to one of its neighbors, chosen uniformly at random. We prove that throughout the low-temperature regime and at in-equilibrium timescales, the process admits a scaling limit as a spatial K-process driven by a random trapping landscape, which is explicitly related to the limiting extremal process of the field.
Alternatively, the limiting process is a supercritical Liouville Brownian motion with respect to the continuum Gaussian free field on the box. Joint work with Aser Cortines (University of Zurich) and Oren Louidor (Technion).

Friday, Mar 2: Alexander Fribergh - Universite de Montreal
Title: The ant in high dimensional labyrinths
Abstract: One of the most famous open problems in random walks in random environments is to understand the behaviour of a simple random walk on a critical percolation cluster, a model known as the ant in the labyrinth. We will present new results on the scaling limit for the simple random walk on the critical branching random walk in high dimension, which converges, after scaling, to the Brownian motion on the integrated super-Brownian motion. In light of the lace expansion, we believe that the limiting behaviour of this model should be universal for simple random walks on critical structures in high dimension. In particular, recent progress shows that similar results hold for lattice trees.

Friday, September 29: Eviatar Procaccia - Texas A&M University
Title: Stationary aggregation processes
Abstract: In this talk I'll introduce stationary versions of known aggregation models, e.g. DLA, Hastings-Levitov, IDLA and Eden. Using the additional symmetry and ergodic theory, one obtains new geometric insight on the aggregation processes.

Thursday, Oct 5: Billingsley lecture by Yuval Peres - Microsoft Research and UC Berkeley
Title: Gravitational allocation to uniform points on the sphere
Abstract: Given n uniform points on the surface of a two-dimensional sphere, how can we partition the sphere fairly among them? "Fairly" means that each region has the same area. It turns out that if the given points apply a two-dimensional gravity force to the rest of the sphere, then the basins of attraction for the resulting gradient flow yield such a partition, with exactly equal areas, no matter how the points are distributed. (See the cover of the AMS Notices at http://www.ams.org/publications/journals/notices/201705/rnoti-cvr1.pdf.) Our main result is that this partition minimizes, up to a bounded factor, the average distance between points in the same cell. I will also present an application to almost optimal matching of n uniform blue points to n uniform red points on the sphere, connecting to a classical result of Ajtai, Komlos and Tusnady (Combinatorica 1984). Joint work with Nina Holden and Alex Zhai.

Friday, Oct 6: Yuval Peres - Microsoft Research and UC Berkeley
Title: The strange geometry of high-dimensional random spanning forests.
Abstract: The uniform spanning forest (USF) in the lattice Z^d, first studied by Pemantle (Ann. Prob. 1991), is defined as a limit of uniform spanning trees in growing finite boxes. Although the USF is a limit of trees, it might not be connected. Indeed, Pemantle proved that the USF in Z^d is connected if and only if d<5. In later work with Benjamini, Kesten, and Schramm (Ann. Math 2004) we extended this result, and showed that the component structure of the USF undergoes a phase transition every 4 dimensions: For dimensions d between 5 and 8 there are infinitely many trees, but any two trees are adjacent; for d between 9 and 12 this fails, but for every two trees in the USF there is an intermediary tree, adjacent to each of them. And this pattern continues, with the number of intermediary trees required increasing by 1 every 4 dimensions.
In this talk, I will show that this is not the whole story, and for d>8 the USF geometry undergoes a qualitative change every time the dimension increases by 1. (Joint work with Tom Hutchcroft.)

Friday, October 20: Paul Bourgade - NYU
Title: Random matrices, the Riemann zeta function and trees
Abstract: Fyodorov, Hiary & Keating have conjectured that the maximum of the characteristic polynomial of random unitary matrices behaves like extremes of log-correlated Gaussian fields. This allowed them to predict the typical size of local maxima of the Riemann zeta function along the critical axis. I will first explain the origins of this conjecture, and then outline the proof for the leading order of the maximum, for unitary matrices and the zeta function. This talk is based on joint works with Arguin, Belius, Radziwill and Soundararajan.

Friday, November 10: Vadim Gorin - MIT and Russian Academy of Sciences
Title: Local limits of Random Sorting Networks
Abstract: A sorting network is a shortest path between 12..n and n..21 in the Cayley graph of the symmetric group spanned by swaps of adjacent letters. We will discuss the bulk local limit of the swap process of uniformly random sorting networks and encounter universal distributions of random matrix theory, including the celebrated Gaudin-Mehta law, which describes the energy level spacings in heavy nuclei.

[CANCELLED] Friday, Nov 17: Christopher Hoffman - University of Washington
Abstract: First-passage percolation is a classical random growth model which comes from statistical physics. We will discuss recent results about the relationship between the limiting shape in first passage percolation and the structure of the infinite geodesics. This includes a solution to the midpoint problem of Benjamini, Kalai and Schramm. This is joint work with Gerandy Brito and Daniel Ahlberg.

Friday, Dec 1: Wenpin Tang - UCLA
Title: Regenerative permutations: Mallows(q) and Riemann zeta function
Abstract: In this talk we discuss regenerative permutations on integers, with emphasis on two particular models: p-shifted and P-biased permutations. When p is the geometric distribution, the p-shifted permutations appear to be the limit of the Mallows permutation model. We generalize and simplify previous work of Gnedin and Olshanski. The P-biased permutations are reminiscent of successive sampling in Bayesian statistics. Interestingly, some zeta formulas appear in the evaluation of renewal quantities of GEM-biased permutations. This is based on joint work with Jean-Jil Duchamps and Jim Pitman. (A small sampler for the Mallows(q) measure appears below, after the March 29 colloquium listing.)

Tuesday, March 28, 3pm - 4pm, Ryerson 251 (Combinatorics and Theoretical Computer Science Seminar): James Lee - University of Washington.
Title: Extremal metrics, eigenvalues, and graph separators

Wednesday, March 29, 3pm - 4pm, Eckhart 206 (Colloquium).
Title: Discrete conformal metrics and spectral geometry on distributional limits
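Since the Mallows measure features in both Omer Angel's Feb 16 abstract and Wenpin Tang's Dec 1 abstract above, here is a minimal Mallows(q) sampler on S_n (an illustration added to this page, not code from either talk). It uses the standard sequential construction in which the rank of each successive value among the remaining ones is truncated-geometric, which gives P(pi) proportional to q^inv(pi).

import random
from collections import Counter

def mallows_sample(n, q, rng=random):
    """Sample a permutation of 1..n from the Mallows measure P(pi) ~ q^inv(pi)."""
    remaining = list(range(1, n + 1))
    pi = []
    while remaining:
        m = len(remaining)
        weights = [q ** j for j in range(m)]   # P(rank = j) proportional to q^j
        u = rng.random() * sum(weights)
        j, acc = 0, weights[0]
        while u > acc:
            j += 1
            acc += weights[j]
        pi.append(remaining.pop(j))
    return pi

def inversions(pi):
    return sum(1 for i in range(len(pi)) for k in range(i + 1, len(pi)) if pi[i] > pi[k])

# Sanity check on S_3: empirical frequencies should be proportional to q^inversions.
counts = Counter(tuple(mallows_sample(3, 0.5)) for _ in range(30000))
for perm, c in sorted(counts.items()):
    print(perm, inversions(perm), round(c / 30000, 3))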
Friday, March 31: Antonio Auffinger - Northwestern University.
Title: The SK model is FRSB at zero temperature
Abstract: In the early 80's, the physicist Giorgio Parisi wrote a series of groundbreaking papers where he introduced the notion of replica symmetry breaking. His powerful insight allowed him to predict a solution for the SK model by breaking the symmetry of replicas infinitely many times. In this talk, we will prove Parisi's prediction at zero temperature for the mixed p-spin model, a generalization of the SK model. We will show that at zero temperature the functional order parameter is full-step replica symmetry breaking (FRSB). We will also describe the importance of this result for the description of the energy landscape. Based on recent work with Wei-Kuo Chen (U. of Minnesota) and Qiang Zeng (Northwestern U.).

Friday, April 7: Joseph Yukich - Lehigh University.
Title: Limit theory for statistics of random geometric structures
Abstract: Questions arising in stochastic geometry and applied geometric probability are often understood in terms of the behavior of statistics of large random geometric structures. Such structures arise in diverse settings and include: (i) point processes of dependent points in R^d, including determinantal, permanental, and Gibbsian point sets, as well as the zeros of Gaussian analytic functions, (ii) simplicial complexes in topological data analysis, (iii) graphs on random vertex sets in Euclidean space, (iv) random polytopes generated by random data. Global features of geometric structures are often expressible as a sum of local contributions. In general the local contributions have short range spatial interactions but complicated long range dependence. In this survey talk we review ``stabilization'' methods for establishing the limit theory for statistics of geometric structures. Stabilization provides conditions under which the behavior of a sum of local contributions is similar to that of a sum of independent identically distributed random variables.

Friday, April 14: Lisa Hartung - New York University.
Title: The Structure of Extreme Level Sets in Branching Brownian Motion
Abstract: We study the structure of extreme level sets of a standard one dimensional branching Brownian motion, namely the sets of particles whose height is within a fixed distance from the order of the global maximum. It is well known that such particles congregate at large times in clusters of order-one genealogical diameter around local maxima which form a Cox process in the limit. We add to these results by finding the asymptotic size of extreme level sets and the typical height and shape of those clusters which carry such level sets. We also find the right tail decay of the distribution of the distance between the two highest particles. These results confirm two conjectures of Brunet and Derrida (joint work with A. Cortines, O. Louidor).

Friday, April 21: Qingsan Zhu - University of British Columbia
Title: Branching capacity and critical branching random walks
Abstract: In this talk, I will introduce branching capacity for any finite subset of Z^d (d>=5). It turns out to be an important subject in the study of critical branching random walks. I will discuss its connections with critical branching random walks from the following three perspectives: 1) the hitting probability of a set by critical branching random walk; 2) branching recurrence and branching transience; 3) the local limit of critical branching random walk in the torus.

Friday, April 28: Dapeng Zhan - Michigan State University
Title: SLE loop measures
Abstract: An SLE loop measure is a $\sigma$-finite measure on the space of loops, which locally looks like a Schramm-Loewner evolution (SLE) curve. In this work, we use Minkowski content (i.e., natural parametrization) of SLE to construct several types of SLE$_\kappa$ loop measures for $\kappa\in(0,8)$. First, we construct rooted SLE$_\kappa$ loop measures in the Riemann sphere $\widehat{\mathbb C}$, which satisfy M\"obius covariance, conformal Markov property, reversibility, and space-time homogeneity, when the loop is parameterized by its $(1+\frac \kappa 8)$-dimensional Minkowski content.
Second, by integrating rooted SLE$_\kappa$ loop measures, we construct the unrooted SLE$_\kappa$ loop measure in $\widehat{\mathbb C}$, which satisfies M\"obius invariance and reversibility. Third, we extend the SLE$_\kappa$ loop measures from $\widehat{\mathbb C}$ to subdomains of $\widehat{\mathbb C}$ and to Riemann surfaces using Brownian loop measures, and obtain conformal invariance or covariance of these measures. Finally, using a similar approach, we construct SLE$_\kappa$ bubble measures in simply/multiply connected domains rooted at a boundary point. The SLE$_\kappa$ loop measures for $\kappa\in(0,4]$ give examples of Malliavin-Kontsevich-Suhov loop measures for all $c\le 1$. The space-time homogeneity of rooted SLE$_\kappa$ loop measures in $\widehat{\mathbb C}$ answers a question raised by Greg Lawler.

Friday, May 19: Wei Wu - New York University
Title: Extremal and local statistics for gradient field models
Abstract: We study the gradient field models with uniformly convex potential (also known as the Ginzburg-Landau field) in two dimensions. This is a log-correlated (but generally non-Gaussian) random field that arises in different branches of mathematical physics. Previous results (Naddaf-Spencer, and Miller) were focused on the CLT for the linear functionals of the field. In this talk I will describe more precise results on the marginal distribution and the extreme values of the field. Based on joint work with David Belius and Ron Peled.

Friday, May 26: Rongfeng Sun - National University of Singapore
Title: Scaling limit of the directed polymer on Z^{2+1} in the critical window
Abstract: The directed polymer model on Z^{d+1} is the Gibbs transform of a directed random walk on Z^{d+1} in an i.i.d. random potential (disorder). It is known that the model undergoes a phase transition as the disorder strength varies, and disorder is relevant in d=1 and 2 in the sense that the presence of disorder, however weak, alters the qualitative behavior of the underlying random walk, with d=2 being the marginal case. For d=1, Alberts-Khanin-Quastel have shown that if the disorder strength tends to zero as a^{1/4} as the lattice spacing a tends to zero, then the partition functions converge to the solution of the Stochastic Heat Equation. We show that in the marginal dimension d=2, the partition functions admit non-trivial limits if the disorder strength scales as b/\sqrt{log 1/a}, with a transition at a critical point b_c. I will also discuss ongoing work in understanding the limit of the partition functions at b_c. Based on joint work with F. Caravenna and N. Zygouras.

Friday, June 2: Vivian Healey - Brown University
Title: The Loewner Equation with Branching and the Continuum Random Tree
Abstract: In its most well-known form, the Loewner equation gives a correspondence between curves in the upper half-plane and continuous real functions (called the "driving function" for the equation). We consider the generalized Loewner equation, where the driving function has been replaced by a time-dependent real measure. In the first part of the talk, we investigate the delicate relationship between the driving measure and the generated hull, specifying a class of discrete random driving measures that generate hulls in the upper half-plane that are embeddings of trees. In the second part of the talk, we consider the scaling limit of these measures as the trees converge to the continuum random tree, with the goal of constructing an embedding of the CRT.
We describe progress in this direction that has been obtained by analyzing the driving measures from an analytic standpoint, and we conclude by describing connections to the complex Burgers equation.

Friday, Feb 3: Tobias Johnson - New York University.
Title: Galton-Watson fixed points, tree automata, and interpretations
Abstract: Consider a set of trees such that a tree belongs to the set if and only if at least two of its root child subtrees do. One example is the set of trees that contain an infinite binary tree starting at the root. Another example is the empty set. Are there any other sets satisfying this property other than trivial modifications of these? I'll demonstrate that the answer is no, in the sense that any other such set of trees differs from one of these by a negligible set under a Galton-Watson measure on trees, resolving an open question of Joel Spencer's. This follows from a theorem that allows us to answer questions of this sort in general. All of this is part of a bigger project to understand the logic of Galton-Watson trees, which I'll tell you more about. Joint work with Moumanti Podder and Fiona Skerman.

Friday, Feb 10: Daisuke Shiraishi - Kyoto University.
Title: On loops of Brownian motion
Abstract: We provide a decomposition of the trace of the Brownian motion into a simple path and an independent Brownian soup of loops that intersect the simple path. More precisely, we prove that any subsequential scaling limit of the loop erased random walk is a simple path (a new result in three dimensions), which can be taken as the simple path of the decomposition. In three dimensions, we also prove that the Hausdorff dimension of any such subsequential scaling limit lies in (1, 5/3]. We conjecture that our decomposition characterizes uniquely the law of the simple path. If so, our results would give a new strategy for proving the existence of the scaling limit of the loop erased random walk and its rotational invariance. (A minimal chronological loop-erasure sketch appears below, after the Feb 17 entry.)

Friday, Feb 17: Nina Holden - MIT.
Title: How round are the complementary components of planar Brownian motion?
Abstract: Consider a Brownian motion W in the complex plane started from 0 and run for time 1. Let A(1), A(2),... denote the bounded connected components of C-W([0,1]). Let R(i) (resp.\ r(i)) denote the out-radius (resp.\ in-radius) of A(i) for i \in N. Our main result is that E[\sum_i R(i)^2|\log R(i)|^\theta ]<\infty for any \theta<1. We also prove that \sum_i r(i)^2|\log r(i)|=\infty almost surely. These results have the interpretation that most of the components A(i) have a rather regular or round shape. Based on joint work with Serban Nacu, Yuval Peres, and Thomas S. Salisbury.
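Loop-erased random walk appears in Daisuke Shiraishi's Feb 10 abstract above (and in his Fall 2018 talk earlier on this page). The following short sketch, added here purely as an illustration, implements chronological loop erasure of a simple random walk path on Z^3, which is the discrete operation whose scaling limit those talks concern.

import random

def simple_random_walk_z3(n, seed=0):
    """Simple random walk path of n steps on Z^3, as a list of lattice points."""
    rng = random.Random(seed)
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    path = [(0, 0, 0)]
    for _ in range(n):
        dx, dy, dz = rng.choice(steps)
        x, y, z = path[-1]
        path.append((x + dx, y + dy, z + dz))
    return path

def loop_erase(path):
    """Chronological loop erasure: whenever the walk revisits a site already on
    the erased path, the loop created since the first visit is removed."""
    erased = []
    index = {}                  # site -> its position in the erased path
    for site in path:
        if site in index:
            # Erase the loop: truncate back to the first visit of this site.
            cut = index[site]
            for s in erased[cut + 1:]:
                del index[s]
            erased = erased[:cut + 1]
        else:
            index[site] = len(erased)
            erased.append(site)
    return erased

walk = simple_random_walk_z3(100000)
lerw = loop_erase(walk)
print(len(walk), "->", len(lerw), "points after loop erasure")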
Friday, Mar 3: Wei Qian - ETH Zurich.
Title: Decomposition of Brownian loop-soup clusters
Abstract: We study the structure of Brownian loop-soup clusters in two dimensions. The first part of the talk is based on joint work with Wendelin Werner. Among other things, we obtain the following decomposition of the clusters with critical intensity: When one conditions a loop-soup cluster by its outer boundary $l$ (which is known to be an SLE4-type loop), then the union of all excursions away from $l$ by all the Brownian loops in the loop-soup that touch $l$ is distributed exactly like the union of all excursions of a Poisson point process of Brownian excursions in the domain enclosed by $l$. In the second part of the talk, we condition a Brownian loop-soup cluster (of any intensity) on a portion $p$ of its boundary and show that the union of loops that touch $p$ satisfies the restriction property. This result implies that a phase transition occurs at c = 14/15 for the connectedness of the loops that touch $p$.

Friday, Mar 10: Pierre-Francois Rodriguez - UCLA.
Title: Correlation inequalities for gradient fields and percolation
Abstract: We consider a class of massless gradient Gibbs measures, in dimension greater or equal to three, with uniformly convex potential (and non-convex perturbations thereof). A well-known example in this class is the Gaussian free field, which has received considerable attention in recent years. We derive a so-called decoupling inequality for these fields, which yields detailed information about their geometry, and the percolative and non-percolative phases of their level sets. An important aspect is the development of a suitable sprinkling technique, interesting in its own right, which we will discuss in some detail. Roughly speaking, it allows one to dominate the strong correlations present in the model, and crucially relies on a particular representation of these correlations in terms of a random walk in a dynamic random environment, due to Helffer and Sjöstrand.

Thursday, Oct 6: Billingsley lecture @ 4:30pm, Eckhart 133, by Jean-Francois Le Gall - University Paris-Sud Orsay.
Title: Random planar geometry

Friday, Oct 7: Jean-Francois Le Gall - University Paris-Sud Orsay.
Title: First-passage percolation in random planar lattices

October 13-15: (no seminar, but) Midwest Probability Colloquium @ Northwestern University.

Zygmund-Calderon Lectures by Martin Hairer - University of Warwick.
Lecture 1: Taming infinities. Monday, October 24, 2016, 4:30pm–5:30pm, Ryerson 251
Abstract: Some physical and mathematical theories have the unfortunate feature that if one takes them at face value, many quantities of interest appear to be infinite! Various techniques, usually going under the common name of "renormalisation" have been developed over the years to address this, allowing mathematicians and physicists to tame these infinities. We will dip our toes into some of the mathematical aspects of these techniques and we will see how they have recently been used to make precise analytical statements about the solutions of some equations whose meaning was not even clear until now.
Lectures 2 and 3: The BPHZ theorem for stochastic PDEs. Tuesday, October 25, 2016, 4:30pm–5:30pm, Eckhart 202; Wednesday, October 26, 2016, 4pm–5pm, Eckhart 202
Abstract: The Bogoliubov-Parasiuk-Hepp-Zimmermann theorem is a cornerstone of perturbative quantum field theory: it provides a consistent way of "renormalising" the diverging integrals appearing there to turn them into bona fide distributions. Although the original article by Bogoliubov and Parasiuk goes back to the late 50s, it took about four decades for it to be fully understood. In the first lecture, we will formulate the BPHZ theorem as a purely analytic question and show how its solution arises very naturally from purely algebraic considerations. In the second lecture, we will show how a very similar structure arises in the context of singular stochastic PDEs and we will present some very recent progress on its understanding, both from the algebraic and the analytical point of view.

Charles Amick Memorial Lectures by Jennifer Chayes - Microsoft Research.
First Lecture: Modeling and Estimating Massive Networks: Overview. October 28, 4PM, Ryerson 251
Second Lecture: Limits and Stochastic Models for Sparse Massive Networks. October 31, 4PM, Eckhart 202
Third Lecture: Exchangeability and Estimation of Sparse Massive Networks. November 1, 4PM, Eckhart 206

Friday, Nov 4: Ramon van Handel - Princeton University.
Title: The Borell-Ehrhard game
Abstract: A precise description of the convexity of Gaussian measures is provided by a remarkable Brunn-Minkowski type inequality due to Ehrhard and Borell. The delicate nature of this inequality has complicated efforts to develop more general geometric inequalities in Gauss space that mirror the rich family of results in the classical Brunn-Minkowski theory. In this talk, I will aim to shed some new light on Ehrhard's inequality by showing that it arises from a somewhat unexpected game-theoretic mechanism. This insight makes it possible to identify new results, such as an improved form of Barthe's reverse Brascamp-Lieb inequality in Gauss space. If time permits, I will also outline how probabilistic ideas enabled us (in work with Yair Shenfeld) to settle the equality cases in the Ehrhard-Borell inequalities.

Friday, Nov 18: Xinyi Li - University of Chicago.
Title: Percolative properties of Brownian interlacements and its vacant set
Abstract: In this talk, I will give a brief introduction to Brownian interlacements, and investigate various percolative properties regarding this model. Roughly speaking, Brownian interlacements can be described as a certain Poissonian cloud of doubly-infinite continuous trajectories in the d-dimensional Euclidean space, d greater or equal to 3, with the intensity measure governed by a level parameter. We are interested in both the interlacement set, which is an enlargement ("the sausages") of the union of the trace in the aforementioned cloud of trajectories, and the vacant set, which is the complement of the interlacement set. I will talk about the following results: 1) The interlacement set is "well-connected", i.e., any two "sausages" in d-dimensional Brownian interlacements can be connected via no more than ceiling((d-4)/2) intermediate sausages almost surely. 2) The vacant set undergoes a non-trivial percolation phase transition when the level parameter varies.

Friday, April 1: Stephane Benoist - Columbia University.
Title: Conformally invariant loop measures
Abstract: We will discuss several aspects of a conjecture by Kontsevich and Suhov regarding existence and uniqueness of a one parameter family of conformally invariant measures on simple loops (conjecturally related to the SLE family). The most natural case (zero central charge, i.e. SLE parameter kappa=8/3) was understood in a paper of Werner predating the conjecture. In work in progress, Dubédat and I construct loop measures in the whole conjectural range of existence (i.e. parameters kappa for which SLE is a simple curve).

Friday, April 8: Krzysztof Burdzy - University of Washington.
Title: Twin peaks
Abstract: I will discuss some questions and results on random labelings of graphs conditioned on having a small number of peaks (local maxima). The main open question is to estimate the distance between two peaks on a large discrete torus, assuming that the random labeling is conditioned on having exactly two peaks. Joint work with Sara Billey, Soumik Pal, Lerna Pehlivan and Bruce Sagan.

Friday, April 15: Hao Shen - Columbia University.
Title: Regularity structure theory and its applications
Abstract: Stochastic PDEs arise as important models in probability and mathematical physics. They are typically nonlinear, driven by very singular random forces. Due to lack of regularity it is typically very challenging to even interpret what one means by a solution. In this talk I will explain the solution theories for some of these equations, with a focus on the theory of regularity structures recently developed by Martin Hairer. As applications of these theories, one can make sense of the solutions to these stochastic PDEs, and once their solution theories are established various convergence or approximation problems can be tackled.

Friday, April 29: Alex Dunlap - Stanford University.
Title: First passage percolation on the exponential of two-dimensional branching random walk: subsequential scaling limit at high temperature
Abstract: Let \{\eta_{N, v}: v\in V_N\} be a branching random walk in a two-dimensional box V_N of side length N, that is, a 4-ary BRW with Gaussian increments indexed by lattice points (with approximately log-correlated covariances). We study the first passage percolation metric where each vertex v is given a random weight of e^{\gamma \eta_{N, v}}. I will show that for sufficiently small but fixed \gamma>0, for any sequence of \{N_k\} there exists a subsequence along which the appropriately scaled FPP metric converges in the Gromov-Hausdorff sense to a random metric on the unit square in R^2. In addition, all possible (conjecturally unique) scaling limits are non-trivial and are continuous with respect to the Euclidean metric. Joint work with J. Ding.

Friday, May 6: no seminar. Conference on New Developments in Probability @ Northwestern University.

Friday, May 13: no seminar. Statistics Department anniversary conference.

Friday, May 20: Jun Yin - University of Wisconsin.
Title: Delocalization and Universality of band matrices
Abstract: In this talk we introduce our new work on band matrices, whose eigenvectors and eigenvalues are widely believed to have the same asymptotic behaviors as those of Wigner matrices. We proved that this belief is true as long as the bandwidth is wide enough.

Friday, May 27: no seminar. Workshop on percolation, spin glasses and random media @ Northwestern University.

Friday, Jan 8: Tom Hutchcroft - University of British Columbia.
Title: Circle packing and uniform spanning forests of planar graphs
Abstract: The Koebe-Andreev-Thurston Circle Packing Theorem lets us draw planar graphs in a canonical way, so that the geometry of the drawing reveals analytic properties of the graph. Circle packing has proven particularly effective in the study of random walks on planar graphs, where it allows us to estimate various quantities in terms of their counterparts for Brownian motion in the plane. In this talk, I will introduce the theory of circle packing and discuss work with Asaf Nachmias in which we use circle packing to study uniform spanning forests of planar graphs, a probability model closely related to random walk. We prove that the free uniform spanning forest of any bounded degree, proper planar graph is connected almost surely, answering positively a question of Benjamini, Lyons, Peres and Schramm.
Our proof is quantitative, and also shows that uniform spanning forests exhibit some of the same behaviour universally for all bounded degree transient triangulations, provided that one measures distances and areas in the triangulation using the hyperbolic geometry of its circle packing rather than with the usual graph metric and counting measure.

Friday, Jan 15: Ewain Gwynne - MIT.
Title: An almost sure KPZ relation for SLE and Brownian motion
Abstract: I will discuss a KPZ-type formula which relates the Hausdorff dimension of any set associated with SLE, CLE, or related processes; and the Hausdorff dimension of a corresponding set associated with a correlated two-dimensional Brownian motion. In many cases, the dimension of the Brownian motion set is already known or easy to compute. This gives rise to new proofs of the dimensions of several sets associated with SLE, including the SLE curve; the double points and cut points of SLE; and the intersection of two flow lines of a Gaussian free field. The formula is based on the peanosphere construction of Duplantier, Miller, and Sheffield (2014), which encodes a Liouville quantum gravity (LQG) surface decorated with an independent space-filling SLE curve by means of a correlated two-dimensional Brownian motion. I will give a moderately detailed overview of this construction. Based on a joint work with Nina Holden and Jason Miller http://arxiv.org/abs/1512.01223.

Friday, Jan 22: Aukosh Jagannath - NYU.
Title: The Parisi variational problem
Abstract: The Parisi Variational Problem is a challenging non-local, strictly convex variational problem over the space of probability measures whose analysis is of great interest to the study of mean field spin glasses. In this talk, I present a conceptually simple approach to the study of this problem using techniques from PDEs, stochastic optimal control, and convex optimization. We begin with a new characterization of the minimizers of this problem whose origin lies in the first order optimality conditions for this functional. As a demonstration of the power of this approach, we study a prediction of de Almeida and Thouless regarding the validity of the 1-atomic ansatz. We generalize their conjecture to all mixed p-spin glasses and prove that their condition is correct in the entire temperature-external field plane except for a compact set whose phase is unknown at this level of generality. A key element of this analysis is a new class of estimates regarding Gaussian integrals in the large noise limit called ``Dispersive Estimates of Gaussians''. This is joint work with Ian Tobasco (NYU Courant).

Friday, Jan 29: Hao Wu - Universite de Geneve.
Title: Arm Exponents for SLE
Abstract: In the study of lattice models, arm exponents play an important role. In this talk, we first discuss the arm exponents for critical percolation, explain how they are derived and why they are important. Second, we introduce the arm exponents for chordal SLE and explain the application to the critical Ising and FK-Ising model. Finally, we give a brief idea on deriving these exponents and some related open questions.

Friday, Feb 12: Greg Lawler - University of Chicago.
Title: Convergence of naturally parametrized loop-erased random walk to the Schramm-Loewner evolution parametrized by Minkowski content
Abstract: The main goal of this talk is to explain the title. I will define the terms (type of convergence, naturally parametrized, loop-erased random walk, Schramm-Loewner evolution, Minkowski content) as well as the result.
This is based on work with Fredrik Viklund.

Friday, Feb 19: Robin Pemantle - University of Pennsylvania.
Title: Evolution of one-cells on a line
Abstract: We consider systems with the following description. At time zero, the real line is partitioned into intervals. The original partition, which may be random, evolves according to a deterministic rule whereby the interface between each consecutive pair of cells moves so that the larger cell grows and the smaller cell shrinks. When a cell shrinks to zero it disappears and the two bounding points coalesce. I will discuss one such system: a somewhat degenerate one-dimensional version of a two (and higher) dimensional mean-curvature flow model about which almost nothing rigorous is known. In joint work with Emanuel Lazar, we prove that the Poisson measure is invariant for this evolution, provided that space is rescaled exponentially. We do this by introducing the dual process (time-reversal). This process, unlike the forward process, contains some randomness and may be exactly analyzed. A number of questions remain open, such as uniqueness of trajectories, convergence to Poisson from other initial conditions, and stability under perturbation. Finally, I will discuss other one-dimensional models with similar descriptions about which even less is known.

Cancelled! Friday, March 4: Brian Rider - Temple University.

Friday, Oct 2: Elchanan Mossel - University of Pennsylvania and U.C. Berkeley.
Title: Correlation distillation in probability spaces
Abstract: Given a finite exchangeable collection of random variables in a probability space, the correlation distillation problem asks for the partition of the space into sets of a given measure so as to maximize the probability that all random variables lie in the same set. This problem is closely related to isoperimetric problems and is motivated by applications in voting, theoretical computer science and information theory. In the talk I will survey some older and some recent results on correlation distillation. Many open problems will be presented.

(Local probability events) Oct 3-4: AMS meeting in probability at Loyola.

(No Seminar!) Oct 8-10: Midwest Probability Colloquium at Northwestern University.

Friday, Oct 16: Jelani Nelson - Harvard University.
Title: Dimensionality Reduction Via Sparse Matrices
Abstract: This talk will discuss sparse Johnson-Lindenstrauss transforms, i.e. sparse linear maps into much lower dimension which preserve the Euclidean geometry of a set of vectors. Both upper and lower bounds will be presented, as well as applications to certain domains such as numerical linear algebra and compressed sensing. Based on various joint works with Jean Bourgain, Sjoerd Dirksen, Daniel M. Kane, and Huy Le Nguyen.
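To give a concrete feel for the objects in Jelani Nelson's Oct 16 abstract, here is a small sketch of one standard sparse-JL-style construction (added to this page for illustration; it is not the specific transform analyzed in the talk): each column of the k x d matrix has s nonzero entries equal to +/-1/sqrt(s) placed in random rows, and the embedding approximately preserves Euclidean norms.

import numpy as np

rng = np.random.default_rng(0)

def sparse_jl_matrix(k, d, s=4):
    """A k x d sparse sign matrix: each column has s nonzeros equal to +/-1/sqrt(s)."""
    m = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)
        signs = rng.choice([-1.0, 1.0], size=s)
        m[rows, j] = signs / np.sqrt(s)
    return m

# Distortion check: norms of projected vectors should concentrate around the originals.
d, k = 2000, 200
x = rng.standard_normal((20, d))
A = sparse_jl_matrix(k, d)
ratios = np.linalg.norm(x @ A.T, axis=1) / np.linalg.norm(x, axis=1)
print(ratios.min(), ratios.max())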
Friday, Oct 23: Tianyi Zheng - Stanford University.
Title: Speed of random walks on Cayley graphs of finitely generated groups
Abstract: In this talk I will discuss a new construction of a family of groups. We show that, up to an absolute constant factor, any function $f$ satisfying $f(1)=1$ and with $f(n)/\sqrt{n}$, $n/f(n)$ both non-decreasing can be realized as the speed function of simple random walk on some finitely generated group. In particular, it implies that any number in [1/2,1] can be realized as the speed exponent of simple random walk on some group. The construction is very flexible and allows us to answer positively a recent conjecture of Gideon Amir regarding the joint behavior of speed and entropy. We evaluate the Hilbert compression exponents of the groups under consideration. In particular, we show that for any $\alpha\in[2/3,1]$, there exists a 3-step solvable group with Hilbert compression exponent $\alpha$. It follows that there exist uncountably many pairwise non-quasi-isometric finitely generated 3-step solvable groups. Joint work with Jeremie Brieussel.

Friday, Nov 6: Charles Smart - University of Chicago.
Title: SPDE techniques for the random conductance model
Abstract: I will survey some of the recent work applying techniques from partial differential equations to the random conductance model on the lattice. This will include some work of mine with Armstrong and some work of Armstrong-Kuusi-Mourrat and Gloria-Otto. There are now two approaches to obtaining optimal rates in stochastic homogenization in divergence form. The first obtains Green's function estimates by appealing to the Efron-Stein concentration inequality. The second uses regularity theory to localize the dependence of the solution on the coefficients. I will discuss both of these methods.

Friday, Nov 13: Xin Sun - MIT.
Title: Almost Sure Multifractal Spectrum of SLE
Abstract: 15 years ago B. Duplantier predicted the multifractal spectrum of Schramm-Loewner Evolution (SLE), which encodes the fine structure of the harmonic measure of SLE curves. In this talk, I will report our recent rigorous derivation of this prediction. As a byproduct, we also confirm a conjecture of Beliaev and Smirnov for the a.s. bulk integral means spectrum of SLE. The proof uses various couplings of SLE and Gaussian free field, which are developed in the theory of imaginary geometry and Liouville quantum gravity. (Joint work with E. Gwynne and J. Miller.)

Friday, Nov 20: Rodrigo Bañuelos - Purdue University.
Title: The Hardy-Littlewood-Sobolev inequality via martingale transforms
Abstract: We outline a martingale proof of the classical Hardy-Littlewood-Sobolev (HLS) inequality which naturally extends to the setting of Markovian semigroups that have finite dimension in the sense of Varopoulos. The motivation for this approach comes from efforts to employ probabilistic techniques to study (extend) the sharp HLS inequality of E. H. Lieb.

Friday, Dec 4: Mykhaylo Shkolnikov - Princeton University.
Title: On multilevel Dyson Brownian motions
Abstract: I will discuss how Dyson Brownian motions describing the evolution of eigenvalues of random matrices can be extended to multilevel Dyson Brownian motions describing the evolution of eigenvalues of minors of random matrices. The construction is based on intertwining relations satisfied by the generators of Dyson Brownian motions of different dimensions. Such results allow one to connect general beta random matrix theory to particle systems with local interactions, and to obtain novel results even in the case of the classical GOE, GUE and GSE random matrix models. Based on joint work with Vadim Gorin.
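For context alongside Mykhaylo Shkolnikov's Dec 4 abstract, here is one common normalization of the Dyson Brownian motion SDE for eigenvalues $\lambda_1 < \dots < \lambda_N$ (a standard textbook formula added here, not taken from the talk):

\[
d\lambda_i(t) = \sqrt{\tfrac{2}{\beta}}\, dB_i(t) + \sum_{j \neq i} \frac{dt}{\lambda_i(t) - \lambda_j(t)}, \qquad i = 1, \dots, N,
\]

where the $B_i$ are independent standard Brownian motions and $\beta = 1, 2, 4$ corresponds to the GOE, GUE and GSE cases; general $\beta > 0$ gives the beta ensembles referred to in the abstract.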
We derive corresponding Dynkin isomorphism theorems for space-time random walks and we prove for some specific models the onset of the so-called Bose-Einstein condensation.

Friday, April 3rd: Yimin Xiao - Michigan State University. Title: Discrete Fractal Dimensions and Large Scale Multifractals Abstract: Ordinary fractal dimensions such as Hausdorff dimension and packing dimension are useful for analyzing the (microscopic) geometric structures of various thin sets and measures. For studying (macroscopic or global) fractal phenomena of discrete sets, Barlow and Taylor (1989, 1992) introduced the notions of discrete Hausdorff and packing dimensions. In this talk we present some recent results on macroscopic multifractal properties of random sets associated with the Ornstein-Uhlenbeck process and the mild solution of the parabolic Anderson model. (Joint work with Davar Khoshnevisan and Kunwoo Kim.)

(Cancelled!) Friday, April 10th: Mykhaylo Shkolnikov - Princeton University. Title: On multilevel Dyson Brownian motions.

(Special time and location!) Thursday, April 16th, Billingsley Lecture by Wendelin Werner - ETH.

Friday, April 17th: Wendelin Werner - ETH. Title: A simple renormalization flow setup for FK-percolation models Abstract: We will present a simple setup in which one can make sense of a renormalization flow for FK-percolation models in terms of a simple Markov process on a state-space of discrete weighted graphs. We will describe how to formulate the universality conjectures in this framework (in terms of stationary measures for this Markov process), and how to prove this statement in the very special case of the two-dimensional uniform spanning tree (building on existing results on this model). This is based in part on joint work with Stéphane Benoist and Laure Dumaz.

Friday, April 24th: Soumik Pal - University of Washington. Title: Dynamics on random regular graphs: Dyson Brownian motion and the Poisson free field Abstract: A single permutation, seen as a union of disjoint cycles, represents a regular graph of degree two. Consider d many independent random permutations and superimpose their graph structures. It is a common model of a random regular (multi-) graph of degree 2d. Consider the problem of eigenvalue fluctuations of the adjacency matrix of such a graph. We consider the following dynamics. The 'dimension' of each permutation grows by coupled Chinese Restaurant Processes, while in 'time' each permutation evolves according to the random transposition Markov chain. Asymptotically in the size of the graph one observes a remarkable evolution of short cycles and linear eigenvalue statistics in dimension and time. We give a Poisson random surface description in dimension and time of the limiting cycle counts for every d. As d grows to infinity, these Poisson random surfaces converge to the Gaussian Free Field preserved in time by the Dyson Brownian motion. Part of this talk is based on a joint work with Tobias Johnson and the rest is based on a joint work with Shirshendu Ganguly (Cambridge).

Friday, May 1st: Louis-Pierre Arguin - University of Montreal. Title: Maxima of log-correlated Gaussian fields and of the Riemann Zeta function Abstract: A recent conjecture of Fyodorov, Hiary & Keating states that the maxima of the Riemann Zeta function on a bounded interval of the critical line behave similarly to the maxima of a specific class of Gaussian fields, the so-called log-correlated Gaussian fields.
These include important examples such as branching Brownian motion and the 2D Gaussian free field. In this talk, we will highlight the connections between the number theory problem and the probabilistic models. We will outline the proof of the conjecture in the case of a randomized model of the Zeta function. We will discuss possible approaches to the problem for the function itself. This is joint work with D. Belius (NYU) and A. Harper (Cambridge).

Friday, May 8th: Van Vu - Yale University. Title: Random matrices have simple spectrum Abstract: A symmetric matrix has simple spectrum if all eigenvalues are different. Babai conjectured that random graphs have simple spectrum with probability tending to 1. Confirming this conjecture, we prove the simple spectrum property for a large class of random matrices. If time allows, we will discuss the harder problem of bounding the spacings between consecutive eigenvalues, with applications in mathematical physics, computer science, and numerical linear algebra. Several open questions will also be presented. Joint work with H. Nguyen (OSU) and T. Tao (UCLA).

Friday, May 15th: Pierluigi Contucci - U. Bologna. Title: Exactly solvable mean-field monomer-dimer models Abstract: The seminar will introduce some mean-field models used to describe monomer-dimer systems. In particular the solution for the diluted case and the random impurity case will be shown and the absence of a phase transition proved.

Friday, May 22nd: Dan Romik - UC Davis. Title: A local central limit theorem for random representations of SU(3) Abstract: The number p(n) of integer partitions of n is given approximately for large n by a famous asymptotic formula proved by Hardy and Ramanujan in 1918. This can be interpreted as a statement about the number of inequivalent representations of dimension n of the group SU(2). In this talk I will discuss my recent proof of an analogous result for the asymptotic number of n-dimensional representations of the group SU(3). A key step is to prove a local central limit theorem in a probabilistic model for random representations, which requires some ideas from the theory of modular forms. I will explain these ideas, as well as connections to a mysterious "Witten zeta function" associated with SU(3), and additional applications to understanding the limit shape of random n-dimensional representations of SU(3). No knowledge of representation theory will be assumed or needed.

Friday, May 29th: Eyal Lubetzky - NYU. Title: Effect of initial conditions on mixing for the Ising Model Abstract: Recently, the ``information percolation'' framework was introduced as a way to obtain sharp estimates on mixing for the high temperature Ising model, and in particular, to establish cutoff in three dimensions up to criticality from a worst starting state. I will describe how this method can be used to understand the effect of different initial states on the mixing time, both random (``warm start'') and deterministic. Joint work with Allan Sly.

(Double talk!) 2:30-3:30 Friday, June 5th: Maury Bramson - UMN. Title: Proportional Switching in FIFO Networks Abstract: A central problem in queueing theory is the development of policies that efficiently allocate available resources. Many standard policies have a fixed capacity at individual sites, rather than the ability to allocate resources across sites.
We discuss here the proportional switching policy, where the amount of service at different sites is dependent and the corresponding service vector is required to lie in a convex region. We also assume that packets are served in the FIFO (first-in, first-out) order. Past work on the stability of proportional switching networks has focused on networks with elementary routing structure (such as immediate departure after service at a site). Here, we consider the stability problem for general routing structures. The talk is based on joint work with B. D'Auria and N. Walton.

3:35-4:35 Friday, June 5th: Paul Jung - University of Alabama at Birmingham. Title: Levy Khintchine random matrices and the Poisson weighted infinite skeleton tree Abstract: We study a class of Hermitian random matrices which includes Wigner matrices, heavy-tailed random matrices, and sparse random matrices such as adjacency matrices of Erdos-Renyi graphs with p=1/N. Our matrices have real entries which are i.i.d. up to symmetry. The distribution of entries depends on N, and we require sums of rows to converge in distribution; it is then well-known that the limit must be infinitely divisible. We show that a limiting empirical spectral distribution (LSD) exists, and via local weak convergence of associated graphs, the LSD corresponds to the spectral measure associated to the root of a graph which is formed by connecting infinitely many Poisson weighted infinite trees using a backbone structure of special edges. One example covered is matrices with i.i.d. entries having infinite second moments, but normalized to be in the Gaussian domain of attraction. In this case, the LSD is a semi-circle law.

Friday, January 9th: Sebastien Roch - UW-Madison. Title: Recent results on the multispecies coalescent Abstract: The multispecies coalescent is a variant of Kingman's coalescent in which several populations are stitched together on a base tree. Increasingly, it plays an important role in phylogenetics where it can be used to model the joint evolution of a large number of genes across multiple species. Motivated by information-theoretic questions, I will present a recent probabilistic analysis of the multispecies coalescent which establishes fundamental limits on the inference of this model from molecular sequence data. No biology background is required. This is joint work with Gautam Dasarathy, Elchanan Mossel, Rob Nowak, and Mike Steel.

Friday, January 16th: Steve Lalley - Univ. Chicago. Title: Nash Equilibria for a Quadratic Voting Game Abstract: Voters making a binary decision purchase votes from a centralized clearing house, paying the square of the number of votes purchased. The net payoff to an agent with utility u who purchases v votes is \Psi(S)u - v^2, where \Psi is an odd, monotone function taking values between -1 and +1 and S is the sum of all votes purchased by the n voters participating in the election. The utilities of the voters are assumed to arise by random sampling from a probability distribution F with compact support; each voter knows her own utility, but not those of the other voters, although she does know the sampling distribution F. Nash equilibria for this game are described.

Friday, January 23rd: Shuwen Lou - UIC. Title: Brownian motion on spaces with varying dimension Abstract: The model can be pictured as the random movement of an insect on the ground with a pole standing on it. That is, part of the state space has dimension 2, and the other part of the state space has dimension 1.
We define such a process as a ``darning process'' in terms of Dirichlet forms, because 2-dimensional Brownian motion does not hit any singleton. We show that the behavior of this process switches between 1-dimensional and 2-dimensional, depending on both the time and the positions of the points. An open ongoing project will also be introduced: Can we approximate such a process by random walks? The main results of this talk are based on my joint work with Zhen-Qing Chen.

Friday, February 6th: Yury Makarychev - TTIC. Title: Constant Factor Approximation for Balanced Cut in the PIE Model Abstract: We propose and study a new semi-random semi-adversarial model for Balanced Cut, a planted model with permutation-invariant random edges (PIE). Our model is much more general than planted and stochastic models considered previously. Consider a set of vertices V partitioned into two clusters L and R of equal size. Let G be an arbitrary graph on V with no edges between L and R. Let E_random be a set of edges sampled from an arbitrary permutation-invariant distribution (a distribution that is invariant under permutation of vertices in L and in R). Then we say that G + E_random is a graph with permutation-invariant random edges. We present an approximation algorithm for the Balanced Cut problem that finds a balanced cut of cost O(|E_random|) + n polylog(n) in this model. In the regime when there are at least \Omega(n polylog(n)) random edges, this is a constant factor approximation with respect to the cost of the planted cut. Joint work with Konstantin Makarychev and Aravindan Vijayaraghavan.

Friday, February 13th: James R. Lee - University of Washington. Title: Regularization under diffusion and Talagrand's convolution conjecture Abstract: It is a well-known phenomenon that functions on Gaussian space become smoother under the Ornstein-Uhlenbeck semigroup. For instance, Nelson's hypercontractive inequality shows that if p > 1, then L^p functions are sent to L^q functions for some q > p. In 1989, Talagrand conjectured* that quantitative smoothing is achieved even for functions which are only L^1, in the sense that under the semigroup, such functions have tails that are strictly better than those predicted by Markov's inequality and preservation of mass. Ball, Barthe, Bednorz, Oleszkiewicz, and Wolff (2010) proved that this holds in fixed dimensions. We resolve Talagrand's conjecture positively (with no dimension dependence). The key insight is to study a subset of Gaussian space at various granularities by approaching it as "efficiently" as possible. To this end, we employ an Ito process that arose in the context of optimal control theory. Efficiency is measured by the average "work" required to couple the approach process to a Brownian motion. *Talagrand's full conjecture is for functions on the discrete cube. Here we address the Gaussian limiting case. This is joint work with Ronen Eldan.

Friday, February 20th: Philippe Sosoe - Harvard. Title: On the chemical distance in critical percolation Abstract: In two-dimensional critical percolation, the works of Aizenman-Burchard and Kesten-Zhang imply that macroscopic distances inside percolation clusters are bounded below by a power of the Euclidean distance greater than 1+ε, for some positive ε. No more precise lower bound has been given so far.
Conditional on the existence of an open crossing of a box of side length n, there is a distinguished open path which can be characterized in terms of arm exponents: the lowest open path crossing the box. This clearly gives an upper bound for the shortest path. The lowest crossing was shown by Zhang and Morrow to have volume $n^{4/3+o(1)}$ on the triangular lattice. Following a question of Kesten and Zhang, we compare the length of the shortest circuit in an annulus to that of the innermost circuit (defined analogously to the lowest crossing). I will explain how to show that the ratio of the expected length of the shortest circuit to the expected length of the innermost circuit tends to zero as the size of the annulus grows. Joint work with Jack Hanson and Michael Damron.

Friday, February 27th: Partha Dey - UIUC. Title: High temperature limits for $(1+1)$-d directed polymer with heavy-tailed disorder. Abstract: The directed polymer model in the intermediate disorder regime was introduced by Alberts-Khanin-Quastel (2012). It was proved that at inverse temperature $\beta n^{-\gamma}$ with $\gamma=1/4$ the partition function, centered appropriately, converges in distribution and the limit is given in terms of the solution of the stochastic heat equation. This result was obtained under the assumption that the disorder variables possess exponential moments, but its universality was also conjectured under the assumption of six moments. We show that this conjecture is valid and we further extend it by exhibiting classes of different universal limiting behaviors in the case of less than six moments. We also explain the behavior of the scaling exponent for the log-partition function under different moment assumptions and values of $\gamma$. Based on joint work with Nikos Zygouras.

Friday, March 6th: Renming Song - UIUC. Title: Stochastic flows for Levy processes with Holder drifts Abstract: In this talk I will present some new results on the following SDE in $R^d$: $$ dX_t=b(t, X_t)dt+dZ_t, \quad X_0=x, $$ where $Z$ is a Levy process. We show that for a large class of Levy processes $Z$ and Holder continuous drift $b$, the SDE above has a unique strong solution for every starting point $x\in R^d$. Moreover, these strong solutions form a $C^1$-stochastic flow. In particular, we show that, when $Z$ is a symmetric $\alpha$-stable process with $\alpha\in (0, 1]$ and $b$ is $\beta$-Holder continuous with $\beta\in (1-\alpha/2, 1)$, the SDE above has a unique strong solution.

Friday, March 13th: Wei-Kuo Chen - Univ. Chicago. Title: Universality in spin glasses Abstract: This talk is concerned with some universal properties of the Parisi solution in spin glass models. We will show universality of chaos phenomena and ultrametricity in the mixed p-spin model under mild moment assumptions on the environment. We will explain that the results also extend to quenched self-averaging of some physical observables in the mixed p-spin model as well as in different spin glass models including the Edwards-Anderson model and the random field Ising model.

Friday, Oct 3rd (Special time: 1:30-2:30!): Prasad Tetali - Georgia Institute of Technology. Title: Displacement convexity of entropy and curvature in discrete settings Abstract: Inspired by exciting developments in optimal transport and Riemannian geometry (due to the work of Lott-Villani and Sturm), several independent groups have formulated a (discrete) notion of curvature in graphs and finite Markov chains.
I will describe some of these approaches briefly, and mention some related open problems of potential independent interest.

Friday, Oct 10th: No seminar. Midwest Probability Colloquium.

Friday, Oct 17th: Thomas Liggett - UCLA. Title: Finitely Dependent Coloring on Z and other Graphs Abstract: In 2008, Oded Schramm asked the following question: For what values of $k$ and $q$ does there exist a stationary, proper, $k$-dependent $q$-coloring of the integers? Schramm had a substantial amount of evidence, which I will describe, that convinced him that such a coloring does not exist for any values of $k$ and $q$. In fact, it turns out that such an object does exist for many values of $k$ and $q$. I will tell you exactly which ones work, and will describe colorings with these properties. No knowledge of advanced probability is needed to follow the lecture. There are several connections with combinatorics, but again, no specialized knowledge is needed. This is joint work with A. Holroyd.

Friday, Oct 24th: Tonći Antunović - UCLA. Title: Stationary Eden Model on amenable groups Abstract: We consider stationary versions of the Eden model, on a product of a Cayley graph G of an amenable group and the positive integers. The process results in a collection of disjoint trees rooted at G, each of which consists of geodesic paths in a corresponding first passage percolation model on the product graph. Under weak assumptions on the weight distribution and by relying on ergodic theorems, we prove that almost surely all trees are finite. This generalizes certain known results on the two-type Richardson model, in particular of Deijfen and Haggstrom on the Euclidean lattice. This is a joint work with Eviatar Procaccia.

Friday, Nov 7th: Jonathan Novak - MIT. Title: Random tilings and Hurwitz numbers Abstract: This talk is about random tilings of a special class of planar domains, which I like to call "sawtooth domains." Sawtooth domains have the special feature that their tilings are in bijective correspondence with Gelfand-Tsetlin patterns, aka semistandard Young tableaux. Consequently, many observables can be expressed in terms of special functions of representation-theoretic origin. In particular, the distribution of tiles of one type along a horizontal slice through a uniformly random tiling is encoded by the Harish-Chandra/Itzykson-Zuber integral, a familiar object from random matrix theory which also happens to be a generating function for a desymmetrized version of the Hurwitz numbers from enumerative algebraic geometry. I will explain how this fact allows one to prove that tiles along a slice fluctuate like the eigenvalues of a Gaussian random matrix.

Friday, Nov 14th: Nayantara Bhatnagar - University of Delaware. Title: Lengths of Monotone Subsequences in a Mallows Permutation Abstract: The longest increasing subsequence (LIS) of a uniformly random permutation is a well studied problem. Vershik-Kerov and Logan-Shepp first showed that asymptotically the typical length of the LIS is $2\sqrt{n}$. This line of research culminated in the work of Baik-Deift-Johansson who related this length to the Tracy-Widom distribution. We study the length of the LIS and LDS of random permutations drawn from the Mallows measure, introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation p in S_n is proportional to q^Inv(p) where q is a real parameter and Inv(p) is the number of inversions in p.
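As a concrete illustration of the Mallows measure just defined, a minimal sampler sketch (using the standard inversion-table construction; the function name and parameters are illustrative and this is not code from the talk) might look as follows. For q = 1 it reduces to a uniformly random permutation, the setting of the Vershik-Kerov and Logan-Shepp result quoted above.

    import random

    def sample_mallows(n, q, rng=random):
        """Sample a permutation of {1, ..., n} with P(perm) proportional to q**Inv(perm)."""
        perm = []
        for i in range(1, n + 1):
            # c = number of inversions created when i is placed among 1, ..., i-1;
            # a truncated geometric: P(c = j) proportional to q**j for j = 0, ..., i-1.
            weights = [q ** j for j in range(i)]
            c = rng.choices(range(i), weights=weights)[0]
            # Inserting i at position i-1-c (0-indexed) creates exactly c new inversions.
            perm.insert(i - 1 - c, i)
        return perm

    # Example: for q < 1 the sample concentrates near the identity permutation.
    print(sample_mallows(10, 0.5))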
We determine the typical order of magnitude of the LIS and LDS, large deviation bounds for these lengths and a law of large numbers for the LIS for various regimes of the parameter q. This is joint work with Ron Peled.

Friday, Nov 21st: Antonio Auffinger - Northwestern University. Title: Rate of convergence of the mean for sub-additive ergodic sequences Abstract: For a subadditive ergodic sequence {X_{m,n}}, Kingman's theorem gives convergence for the terms X_{0,n}/n to some non-random number g. In this talk, I will discuss the convergence rate of the mean EX_{0,n}/n to g. This rate turns out to be related to the size of the random fluctuations of X_{0,n}; that is, the variance of X_{0,n}, and the main theorems I will present give a lower bound on the convergence rate in terms of a variance exponent. The main assumptions are that the sequence is not diffusive (the variance does not grow linearly) and that it has a weak dependence structure. Various examples, including first and last passage percolation, bin packing, and longest common subsequence, fall into this class. This is joint work with Michael Damron and Jack Hanson.

Friday, Dec 5th: Brent M. Werness - University of Washington. Title: Hierarchical approximations to the Gaussian free field and fast simulation of Schramm-Loewner evolutions Abstract: The Schramm--Loewner evolutions (SLE) are a family of stochastic processes which describe the scaling limits of curves which occur in two-dimensional critical statistical physics models. SLEs have found great success in this task, greatly enhancing our understanding of the geometry of these curves. Despite this, it is rather difficult to produce large, high-fidelity simulations of the process due to the significant correlation between segments of the simulated curve. The standard simulation method works by discretizing the construction of SLE through the Loewner ODE, which provides a quadratic-time algorithm in the length of the curve. Recent work of Sheffield and Miller has provided an alternate description of SLE, where the curve generated is taken to be a flow line of the vector field obtained by exponentiating a Gaussian free field. In this talk, I will describe a new hierarchical method of approximately sampling a Gaussian free field, and show how this allows us to more efficiently simulate an SLE curve. Additionally, we will briefly discuss questions of the computational complexity of simulating SLE which arise naturally from this work.

Friday, April 18th: Shankar Bhamidi - UNC. Title: Limited choice and randomness in the evolution of networks Abstract: The last few years have seen an explosion in network models describing the evolution of real world networks. In the context of mathematical probability, one aspect which has seen an intense focus is the interplay between randomness and limited choice in the evolution of networks, ranging from the description of the emergence of the giant component, the new phenomenon of ``explosive percolation'', and the power of two choices. I will describe ongoing work in understanding such dynamic network models, their connections to classical constructs such as the standard multiplicative coalescent, and applications of these simple models in fitting retweet networks in Twitter.

Friday, May 2nd: Tai Melcher - University of Virginia.
Title: An example of hypoellipticity in infinite dimensions Abstract: A collection of vector fields on a manifold satisfies H\"{o}rmander's condition if any two points are connected by a path whose tangent vectors only lie in the given directions. It is well-known that a diffusion which is allowed to travel only in these directions is smooth, in the sense that its transition probability measure is absolutely continuous with respect to the volume measure and has a strictly positive smooth density. Smoothness results of this kind in infinite dimensions are typically not known, the first obstruction being the lack of an infinite-dimensional volume measure. We will discuss recent results on a particular class of infinite-dimensional spaces, where we have shown that vector fields satisfying H\"{o}rmander's condition generate a diffusion which has a strictly positive smooth density with respect to an appropriate reference measure.

Wednesday, May 7th: Alice Guionnet - MIT. (Mathematics Colloquium, 2:00-3:00 pm @ Eckhart 206) Title: Free probability and random matrices; from isomorphisms to universality Abstract: Free probability is a probability theory for non-commutative variables introduced by Voiculescu about thirty years ago. It is equipped with a notion of freeness very similar to independence. It is a natural framework to study the limit of random matrices with size going to infinity. In this talk, we will discuss these connections and how they can be used to adapt ideas from classical probability theory to operator algebra and random matrices. We will in particular focus on how to adapt classical ideas on transport maps following Monge and Ampere to construct isomorphisms between algebras and prove universality in matrix models. This talk is based on joint works with F. Bekerman, Y. Dabrowski, A. Figalli and D. Shlyakhtenko.

Friday, May 9th: Antonio Auffinger - University of Chicago. Title: Strict Convexity of the Parisi Functional Abstract: Spin glasses are magnetic systems exhibiting both quenched disorder and frustration, and have often been cited as examples of "complex systems." As mathematical objects, they provide several fascinating structures and conjectures. This talk will cover recent progress that sheds more light on the mysterious and beautiful solution proposed 30 years ago by G. Parisi. We will focus on properties of the free energy of the famous Sherrington-Kirkpatrick model and we will explain a recent proof of the strict convexity of the Parisi functional. Based on a joint work with Wei-Kuo Chen.

Friday, May 16th -- Double Talk!: (2:30-3:30) Elton P. Hsu - Northwestern University. Title: Brownian Motion and Gradient Estimates of Positive Harmonic Functions Abstract: Many gradient estimates in differential geometry can be naturally treated by stochastic methods involving Brownian motion on a Riemannian manifold. In this talk, we discuss Hamilton's gradient estimate of bounding the gradient of the logarithm of a positive harmonic function in terms of its supremum from this point of view. We will see how naturally this gradient estimate follows from Ito's formula and extend it to manifolds with boundary by considering reflecting Brownian motion. Furthermore, we will show that in fact Hamilton's gradient estimate can be embedded as the terminal case of a family of gradient estimates which can be treated just as easily by the same stochastic method.

(4:00-5:00) Marek Biskup - UCLA.
Title: Isoperimetry for two dimensional supercritical percolation Abstract: Isoperimetric problems have been around since ancient history. They play an important role in many parts of mathematics as well as sciences in general. Isoperimetric inequalities and the shape of isoperimetric sets are generally well understood in Euclidean or other "nice" settings but are still a subject of research in random domains, graphs, manifolds, etc. In my talk I will address the isoperimetric problem for one example of a random setting: the unique infinite connected component of supercritical bond percolation on the square lattice. In particular, I will sketch a proof of the fact that, as the volume of a (properly defined) isoperimetric set tends to infinity, its asymptotic shape can be characterized by an isoperimetric problem in the plane with respect to a particular (continuum) norm. As an application I will conclude that the anchored isoperimetric profile with respect to a given point as well as the Cheeger constant of the giant component in finite boxes scale to deterministic quantities. This settles a conjecture of Itai Benjamini for the plane. Based on joint work with O. Louidor, E. Procaccia and R. Rosenthal.

Friday, May 23rd: Arnab Sen - University of Minnesota. Title: Continuous spectra for sparse random graphs Abstract: The limiting spectral distributions of many sparse random graph models are known to contain atoms. But a more interesting question is when they also have some continuous part. In this talk, I will give an affirmative answer to this question for several widely studied models of random graphs, including the Erdos-Renyi random graph G(n,c/n) with c > 1, random graphs with certain degree distributions, and supercritical bond percolation on Z^2. I will also present several open problems. This is joint work with Charles Bordenave and Balint Virag.

Thursday, May 29th (Billingsley lecture @ 4:30 PM, Eckhart 133): Scott Sheffield - MIT.

Friday, May 30th (regular probability seminar time and location): Scott Sheffield - MIT. Title: Snowflakes, slot machines, Chinese dragons, and QLE Abstract: What is the right way to think of a "random surface" or a "random planar graph"? How can one explain the dendritic patterns that appear in snowflakes, coral reefs, lightning bolts, and other physical systems, as well as in toy mathematical models inspired by these systems? How are these questions related to random walks and random fractal curves (in particular the famous SLE curves)? To begin to address these questions, I will introduce and explain the "quantum Loewner evolution", which is a family of growth processes closely related to SLE. I will explain, through pictures and animations and some discrete arguments, how QLE is defined and what role it might play in addressing the questions raised above. In a continuation of the talk on Friday afternoon (at the probability seminar), I will present a more analytic, continuum construction of QLE and discuss its relationship to the so-called Brownian map. Joint work with Jason Miller.

Friday, June 6th: Laurence Field - University of Chicago. Title: Two-sided radial SLE and length-biased chordal SLE Abstract: Models in statistical physics often give measures on self-avoiding paths. We can restrict such a measure to the paths that pass through a marked point, obtaining a "pinned measure". The aggregate of the pinned measures over all possible marked points is just the original measure biased by the path's length.
Does the analogous result hold for SLE curves, which appear in the scaling limits of many such models at criticality? We show that it does: the aggregate of two-sided radial SLE is length-biased chordal SLE, where the path's length is measured in the natural parametrization.

Friday, Jan 10th: Fredrik Viklund - Columbia University / Uppsala University. Title: Planar growth models and conformal mapping Abstract: Random, fractal-like growth can be seen in several places in nature. Several mathematical models based in one way or another on harmonic measure exist, but despite significant efforts little is known about these models. I will survey some of the models and problems, focusing in particular on constructions based on conformal maps. Towards the end I will discuss some recent joint work with Sola and Turner on one of these models.

Friday, Jan 24th: Double talks! Edward Waymire - Oregon State University. (2:30-3:30) Title: Tree Polymers Under Strong Disorder Abstract: Tree polymers are simplifications of 1+1 dimensional lattice polymers made up of polygonal paths of a (nonrecombining) binary tree having random path probabilities. The path probabilities are (normalized) products of i.i.d. positive weights. As such, they reside in the more general framework of multiplicative cascades and branching random walk. The probability laws of these paths are of interest under weak and strong types of disorder. Some recent results, speculation and conjectures will be presented for this class of models under both weak and strong disorder conditions. This is based on various joint papers with Partha Dey, Torrey Johnson, or Stan Williams.

Wei Wu - Brown University. (3:35-4:35) Title: Random fields from uniform spanning trees Abstract: The uniform spanning tree (UST) is a fundamental combinatorial object. In two dimensions, using conformal invariance and planar duality, it has been shown that the scaling limit of the UST is given by an SLE path. We discuss the random field approach, and study the scaling limit of certain random fields coupled with USTs. This approach works on general graphs, and may help to understand the scaling limits of the UST in higher dimensions. This talk is based on several joint works with Adrien Kassel, Richard Kenyon and Xin Sun.

Friday, Jan 31st: Hao Wu - MIT. Title: Intersections of SLE paths Abstract: SLE curves were introduced by Oded Schramm as candidates for the scaling limits of discrete models. In this talk, we first describe basic properties of SLE curves and their relation with discrete models. Then we summarize the Hausdorff dimension results related to SLE curves, in particular the new results about the dimension of cut points and double points. Third, we introduce Imaginary Geometry, and from there give the idea of the proof of the dimension results.

Friday, Feb 7th: Wei-Kuo Chen - Univ Chicago. Title: On Gaussian inequalities for product of functions Abstract: Gaussian inequalities have played important roles in various scientific areas. In this talk, we will present simple algebraic criteria that yield sharp Holder-type inequalities for the product of functions of Gaussian random vectors with arbitrary covariance structure. As an application, we will explain how our results yield several famous inequalities in functional geometry, such as the Brascamp-Lieb inequality, the sharp Young inequality, etc. This part of the talk is based on the recent joint work with N. Dafnis and G. Paouris.
Along this direction, we will discuss a conjecture on the convexity of the Parisi functional arising from the study of the Sherrington-Kirkpatrick model in spin glasses.

Friday, Feb 14th: Greg Lawler - Univ Chicago. Title: Conformal invariance of the Green's function for loop-erased random walk Abstract: The planar loop-erased random walk (LERW) is obtained from the usual random walk by erasing loops. The LERW is related to a number of other models such as the uniform spanning tree. We consider a fixed simply connected domain in C containing the origin, and two distinct boundary points a and b. For a fixed lattice spacing, we consider the probability that a LERW from a to b goes through an edge containing the origin. We show that, suitably normalized, this probability converges to a conformally covariant quantity, the Green's function for the Schramm-Loewner evolution. This is joint work with Christian Benes and Fredrik Viklund.

Friday, Feb 21st: Asaf Nachmias - UBC. Title: Random walks on planar graphs via circle packings Abstract: I will describe two results concerning random walks on planar graphs and the connections with Koebe's circle packing theorem (which I will not assume any knowledge of):
1. A bounded degree planar triangulation is recurrent if and only if the set of accumulation points of its circle packing is a polar set (that is, has zero logarithmic capacity). This extends a result of He and Schramm who proved recurrence (transience) when the set of accumulation points is empty (a closed Jordan curve). Joint work with Ori Gurel-Gurevich and Juan Souto.
2. The Poisson boundary (the space of bounded harmonic functions) of a transient bounded degree triangulation of the plane is characterized by the topological boundary obtained by circle packing the graph in the unit disk. In other words, any bounded harmonic function on the graph is the harmonic extension of some measurable function on the boundary of the unit disc. Joint work with Omer Angel, Martin Barlow and Ori Gurel-Gurevich.

Friday, Feb 28th: Ronen Eldan - Microsoft Research, Redmond. Title: A Two-Sided Estimate for the Gaussian Noise Stability Deficit Abstract: The Gaussian Noise Stability of a set A in Euclidean space is the probability that for a Gaussian vector X conditioned to be in A, a small Gaussian perturbation of X will also be in A. Borell's celebrated inequality states that a half-space maximizes the noise stability among all possible sets having the same Gaussian measure. We present a novel short proof of this inequality, based on stochastic calculus. Moreover, we prove an almost tight, two-sided, dimension-free robustness estimate for this inequality: We show that the deficit between the noise stability of a set A and an equally probable half-space H can be controlled by a function of the distance between the corresponding centroids. As a consequence, we prove a conjecture of Mossel and Neeman, who used the total-variation distance as a metric.

Friday, Mar 7th: Nikolaos Dafnis - Texas A&M University. Title: Asymptotic behavior of log-concave probability measures Abstract: A probability measure $\mu$ in ${\mathbb R}^n$ is called log-concave if $\mu\big(\lambda A + (1-\lambda) B\big) \geq \mu(A)^\lambda\,\mu(B)^{1-\lambda}$, for every $\lambda\in[0,1]$ and every $A,B$ Borel subsets of ${\mathbb R}^n$. Two basic examples are the uniform measure restricted to a convex body in ${\mathbb R}^n$ with volume $1$ (Brunn-Minkowski inequality) and the normal Gaussian measure in ${\mathbb R}^n$.
We study the asymptotic behavior of some random geometric quantities, such as the volume and the radius of a random polytope generated by sampling with respect to a log-concave probability measure. We will show that asymptotically (as the dimension $n$ goes to infinity) they behave as if we had sampled with respect to the Gaussian measure.

Friday, Oct 4th: Ofer Zeitouni - Courant Institute and Weizmann Institute of Science. Title: Performance of the Metropolis algorithm on a disordered tree: the Einstein relation. Abstract: Consider a d-ary rooted tree (d>2) where each edge e is assigned an i.i.d. (bounded) random variable X(e) of negative mean. Assign to each vertex v the sum S(v) of X(e) over all edges connecting v to the root, and assume that the maximum S_n* of S(v) over all vertices v at distance n from the root tends to infinity (necessarily, linearly) as n tends to infinity. We analyze the Metropolis algorithm on the tree and show that under these assumptions there always exists a temperature of the algorithm so that it achieves a linear (positive) growth rate in linear time. This confirms a conjecture of Aldous (Algorithmica, 22(4):388-412, 1998). The proof is obtained by establishing an Einstein relation for the Metropolis algorithm on the tree. Joint work with Pascal Maillard.

Friday, Oct 11th: Thirty-fifth Midwest Probability Colloquium.

Friday, Oct 18th: Amir Dembo - Stanford University. Title: Persistence Probabilities. Abstract: Persistence probabilities concern how likely it is that a stochastic process has a long excursion above a fixed level, and what the relevant scenarios for this behavior are. Power law decay is expected in many cases of physical significance and the issue is to determine its power exponent parameter. I will survey recent progress in this direction (jointly with Sumit Mukherjee), dealing with stationary Gaussian processes that arise from random algebraic polynomials of independent coefficients and from the solution to the heat equation initiated by white noise. If time permits, I will also discuss the relation to joint works with Jian Ding and Fuchang Gao, about persistence for iterated partial sums and other auto-regressive sequences, and to the work of Sakagawa on persistence probabilities for the height of certain dynamical random interface models.

Friday, Oct 25th: Erik Lundberg - Purdue University. (This talk was rescheduled to Dec. 13th!) Title: Statistics on Hilbert's Sixteenth Problem Abstract: The first part of Hilbert's sixteenth problem concerns real algebraic geometry: We are asked to study the number and possible arrangements of the connected components of a real algebraic curve (or hypersurface). I will describe a probabilistic approach to studying the topology, volume, and arrangement of the zero set (in real projective space) of a random homogeneous polynomial. The outcome depends on the definition of "random". A popular Gaussian ensemble uses monomials as a basis, but we will favor eigenfunctions on the sphere (spherical harmonics) as a basis. As we will see, this "random wave" model produces a high expected number of components (a fraction of the Harnack bound that was an inspiration for Hilbert's sixteenth problem). This is joint work with Antonio Lerario.

Friday, Nov 1st: Yashodhan Kanoria - Columbia Business School. Title: A Dynamic Graph Model of Barter Exchanges Abstract: Motivated by barter exchanges, we study average waiting time in a dynamic random graph model. A node arrives at each time step.
A directed edge is formed independently with probability p with each node currently in the system. If a cycle is formed, of length no more than 3, then that cycle of nodes is removed immediately. We show that the average waiting time for a node scales as 1/p^{3/2} for small p, for this policy. Moreover, we prove that we cannot achieve better delay scaling by batching. Our results throw new light on the operation of kidney exchange programs. The insight offered by our analysis is that the benefit of waiting for additional incompatible patient-donor pairs to arrive (batching) into kidney exchange clearinghouses is not substantial and is outweighed by the cost of waiting. Joint work with Ross Anderson, Itai Ashlagi and David Gamarnik.

Friday, Nov 8th: Cris Moore - Santa Fe Institute. Title: Epsilon-biased sets, the Legendre symbol, and getting by with a few random bits Abstract: Subsets of F_2^n that are p-biased, meaning that the parity of any set of bits is even or odd with probability close to 1/2, are useful tools in derandomization. They also correspond to optimal error-correcting codes, i.e. meeting the Gilbert-Varshamov bound, with distance close to n/2. A simple randomized construction shows that such sets exist of size O(n/p^2); recently, Ben-Aroya and Ta-Shma gave a deterministic construction of size O((n/p^2)^(5/4)). I will review deterministic constructions of Alon, Goldreich, Haastad, and Peralta of sets of size O(n/p^3) and O(n^2/p^2), and discuss the delightful pseudorandom properties of the Legendre symbol along the way. Then, rather than derandomizing these sets completely in exchange for making them larger, we will try moving in a different direction on the size-randomness plane, constructing sets of optimal size O(n/p^2) with as few random bits as possible. The naive randomized construction requires O(n^2/p^2) random bits. I will show that this can be reduced to O(n log(n/p)) random bits. Like Alon et al., our construction uses the Legendre symbol and Weil sums, but in a different way to control high moments of the bias. I'll end by saying a few words about Ramsey graphs and random polynomials. This is joint work with Alex Russell.

Friday, Nov 15th: Shannon Starr - University of Alabama at Birmingham. Title: Quantum spin systems and graphical representations Abstract: Quantum spin systems are mathematical models for magnetism. But the quantum nature is a source of difficulty. For some models there are graphical representations, which relate to interacting particle processes (with some changes). I will discuss one application done jointly with Nick Crawford and Stephen Ng, called the emptiness formation probability, where this approach works.

Friday, Nov 22nd: Roman Vershynin - University of Michigan. Title: Delocalization of eigenvectors of random matrices Abstract: Eigenvectors of random matrices are much less studied than eigenvalues, despite their importance. The simplest question is whether the eigenvectors are delocalized, i.e. all of their coordinates are as small as can be, of order n^{-1/2}. Even this simple-looking problem has been open until very recently. Currently there are two approaches to delocalization - spectral (via local eigenvalue statistics) and geometric (via high dimensional probability). This talk will explain these approaches and popularize related open problems. Based on joint work with Mark Rudelson (Michigan).
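The delocalization scale mentioned in the last abstract is easy to see numerically. The following minimal sketch (an illustration only, assuming a Wigner-type normalization; it is not code from the talk) compares the largest coordinate of a bulk eigenvector with n^{-1/2}:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    A = rng.standard_normal((n, n))
    H = (A + A.T) / np.sqrt(2 * n)          # symmetric Wigner-type random matrix
    eigvals, eigvecs = np.linalg.eigh(H)
    v = eigvecs[:, n // 2]                  # a unit eigenvector from the bulk of the spectrum
    print("largest |coordinate|:", np.abs(v).max())   # typically a small multiple of n**(-1/2)
    print("n**(-1/2)           :", n ** -0.5)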
Friday, Nov 29th: Thanksgiving.

Friday, Dec 6th: Shirshendu Chatterjee - Courant Institute. Title: Multiple Phase Transitions for long range first-passage percolation on lattices Abstract: Given a graph G with non-negative edge weights, the passage time of a path is the sum of weights of the edges in the path, and the first-passage time to reach u from v is the minimum passage time of a path joining them. We consider a long range first-passage model on Z^d in which the weight w(x,y) of the edge joining x and y has an exponential distribution with mean |x-y|^a for some fixed a > 0, and the edge weights are independent. We analyze the growth of the set of vertices reachable from the origin within time t, and show that there are four different growth regimes depending on the value of a. Joint work with Partha Dey.

Friday, Dec 13th: Erik Lundberg - Purdue University.

Friday, Apr 5th: Alex Fribergh - Universite de Toulouse. Title: On the monotonicity of the speed of biased random walk on a Galton-Watson tree without leaves. Abstract: We will present different results related to the speed of biased random walks in random environments. Our focus will be on a recent paper by Ben Arous, Fribergh and Sidoravicius proving that the speed of the biased random walk on a Galton-Watson tree without leaves is increasing for high biases. This partially solves a question asked by Lyons, Pemantle and Peres.

Friday, Apr 12th: Yuval Peres - Microsoft Research. Title: Search Games, The Cauchy process and Optimal Kakeya Sets Abstract: A planar set that contains a unit segment in every direction is called a Kakeya set. These sets have been studied intensively in geometric measure theory and harmonic analysis since the work of Besicovitch (1928); we find a new connection to game theory and probability via a search game first analyzed by Adler et al. (2003). A hunter and a rabbit move on the n-vertex cycle without seeing each other. At each step, the hunter moves to a neighboring vertex or stays in place, while the rabbit is free to jump to any node. Thus they are engaged in a zero-sum game, where the payoff is the capture time. The known optimal randomized strategies for hunter and rabbit achieve expected capture time of order n log n. We show that every rabbit strategy yields a Kakeya set; the optimal rabbit strategy is based on a discretized Cauchy random walk, and it yields a Kakeya set K consisting of 4n triangles that has minimal area among such sets (the area of K is of order 1/log(n)). Passing to the scaling limit yields a simple construction of a random Kakeya set with zero area from two Brownian motions. (Joint work with Y. Babichenko, R. Peretz, P. Sousi and P. Winkler.)

Friday, Apr 12th (4:30-5:00): Yuval Peres - Microsoft Research. Tutorial Seminar: What is the mixing time for random walk on a graph? Abstract: Consider a simple random walk on a finite graph. The mixing time is the time it takes the walk to reach a position that is approximately independent of the starting point; it has been studied intensively by combinatorialists, computer scientists and probabilists; the mixing time arises in statistical physics as well. Applications of mixing times range from random sampling and card shuffling, to understanding convergence to equilibrium in the Ising model. It is closely related to expansion and eigenvalues.
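The notion described in this tutorial abstract can be computed directly for small examples. A minimal sketch (an illustration only, using the lazy walk on an n-cycle and the conventional 1/4 total-variation threshold; it is not taken from the tutorial) is:

    import numpy as np

    n = 50
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5                       # lazy step avoids periodicity
        P[i, (i - 1) % n] += 0.25
        P[i, (i + 1) % n] += 0.25

    pi = np.full(n, 1.0 / n)                # stationary (uniform) distribution
    mu = np.zeros(n); mu[0] = 1.0           # start at a fixed vertex
    t = 0
    while 0.5 * np.abs(mu - pi).sum() > 0.25:
        mu = mu @ P                         # one step of the walk's distribution
        t += 1
    print("mixing time estimate:", t)       # grows like n**2 for the cycle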
Besides introducing this topic, I will also describe the open problem of understanding which random walks exhibit "cutoff", a sharp transition to stationarity first discovered by Diaconis, Shahshahani and Aldous in the early 1980s but still mysterious.

Wednesday, Apr 24th, 4pm-5pm, at the CAMP seminar: Grigorios Pavliotis - Imperial College London. Title: Convergence to equilibrium for nonreversible diffusions. Abstract: The problem of convergence to equilibrium for diffusion processes is of theoretical as well as applied interest, for example in nonequilibrium statistical mechanics and in statistics, in particular in the study of Markov Chain Monte Carlo (MCMC) algorithms. Powerful techniques from analysis and PDEs, such as spectral theory and functional inequalities (e.g. logarithmic Sobolev inequalities), can be used in order to study convergence to equilibrium. Quite often, the diffusion processes that appear in applications are degenerate (in the sense that noise acts directly on only some of the degrees of freedom of the system) and/or nonreversible. The study of convergence to equilibrium for such systems requires the study of non-selfadjoint, possibly non-uniformly elliptic, second order differential operators. In this talk we show how the recently developed theory of hypocoercivity can be used to prove exponentially fast convergence to equilibrium for such diffusion processes. Furthermore, we will show how the addition of a nonreversible perturbation to a reversible diffusion can speed up convergence to equilibrium. This is joint work with M. Ottobre, K. Pravda-Starov, T. Lelievre and F. Nier.

Thursday, May 2nd: Persi Diaconis - Stanford University. This is a special event: Billingsley Lectures on Probability in honor of Professor Billingsley.

Friday, May 3rd: Persi Diaconis - Stanford University. Title: Random Walk with Reinforcement Abstract: Picture a triangle, with vertices labeled A, B, C. A random walker starts at A and chooses a random nearest neighbor. At each stage, the walker adds 1 to the weight of each crossed edge and chooses the next step with probability proportional to the current edge weights. The question is 'what happens?'. This simple problem leads into interesting corners: to Bayesian analysis of the transition mechanism of Markov chains (and protein folding) and to the hyperbolic sigma model of statistical physics. Work of (and with) Billingsley, Baccalado, Freedman, Tarres, and Sabot will be reviewed.

Friday, May 10th: Tim Austin - New York University. Title: Exchangeable random measures Abstract: Classical theorems of de Finetti, Aldous-Hoover and Kallenberg describe the structure of exchangeable probability measures on spaces of sequences or arrays. Similarly, one can add an extra layer of randomness, and ask after exchangeable random measures on these spaces. It turns out that those classical theorems, coupled with an abstract version of the `replica trick' from statistical physics, give a structure theorem for these random measures also. This leads to a new proof of the Dovbysh-Sudakov Theorem describing exchangeable positive semi-definite matrices.

Friday, May 17th: Nike Sun - Stanford University. Title: Maximum independent sets in random d-regular graphs Abstract: Satisfaction and optimization problems subject to random constraints are a well-studied area in the theory of computation. These problems also arise naturally in combinatorics, in the study of sparse random graphs.
While the values of limiting thresholds have been conjectured for many such models, few have been rigorously established. In this context we study the size of maximum independent sets in random d-regular graphs. We show that for d exceeding a constant d(0), there exist explicit constants A, C depending on d such that the maximum size has constant fluctuations around A*n - C*log n, establishing the one-step replica symmetry breaking heuristics developed by statistical physicists. As an application of our method we also prove an explicit satisfiability threshold in random regular k-NAE-SAT. This is joint work with Jian Ding and Allan Sly.

Friday, May 24th: Lionel Levine - Cornell University. Title: Scaling limit of the abelian sandpile Abstract: Which functions of two real variables can be expressed as limits of superharmonic functions from $(1/n)\mathbb{Z}^2$ to $(1/n^2)\mathbb{Z}$? I'll discuss joint work with Wesley Pegden and Charles Smart on the case of quadratic functions, where this question has a surprising and beautiful answer: the maximal such quadratics are classified by the circles in a certain Apollonian circle packing of the plane. I'll also explain where the question came from (the title is a hint!).

Friday, May 31st: Jonathan Weare - University of Chicago. Title: The relaxation of a family of broken bond crystal surface models Abstract: We study the continuum limit of a family of kinetic Monte Carlo models of crystal surface relaxation that includes both the solid-on-solid and discrete Gaussian models. With computational experiments and theoretical arguments we are able to derive several partial differential equation (PDE) limits identified (or nearly identified) in previous studies and to clarify the correct choice of surface tension appearing in the PDE and the correct scaling regime giving rise to each PDE. We also provide preliminary computational investigations of a number of interesting qualitative features of the large scale behavior of the models.

Friday, Jun 14th: Firas Rassoul-Agha - University of Utah. Title: Random polymers and last passage percolation: variational formulas, Busemann functions, geodesics, and other stories Abstract: We give variational formulas for random polymer models, both in the positive- and zero-temperature cases. We solve these formulas in the oriented two-dimensional zero-temperature case. The solution comes via proving almost-sure existence of the so-called Busemann functions. We then use these results to prove existence, uniqueness, and coalescence of semi-infinite directional geodesics, for exposed points of differentiability of the limiting shape function.

Friday, July 19th: Louigi Addario-Berry - McGill University. Title: The scaling limit of simple triangulations and quadrangulations Abstract: A graph is simple if it contains no loops or multiple edges. We establish Gromov--Hausdorff convergence of large uniformly random simple triangulations and quadrangulations to the Brownian map, answering a question of Le Gall (2011). In proving the preceding fact, we introduce a labelling function for the vertices of the triangulation. Under this labelling, distances to a distinguished point are essentially given by vertex labels, with an error given by the winding number of an associated closed loop in the map. The appearance of a winding number suggests that a discrete complex-analytic approach to the study of random triangulations may lead to further discoveries. Joint work with Marie Albenque.
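For readers unfamiliar with the abelian sandpile named in the May 24th title above, a minimal stabilization sketch may help fix ideas (the toppling rule below is the standard one; the grid size and chip count are arbitrary illustrative choices, and this is not code from the talk):

    import numpy as np

    def stabilize(chips):
        """Topple an N x N sandpile until every site holds at most 3 chips.
        A site with >= 4 chips sends one chip to each lattice neighbour; chips
        falling off the boundary are lost. By the abelian property, the final
        stable configuration does not depend on the order of topplings."""
        chips = chips.copy()
        N = chips.shape[0]
        unstable = True
        while unstable:
            unstable = False
            for i in range(N):
                for j in range(N):
                    if chips[i, j] >= 4:
                        unstable = True
                        t = chips[i, j] // 4          # topple this site t times at once
                        chips[i, j] -= 4 * t
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if 0 <= ni < N and 0 <= nj < N:
                                chips[ni, nj] += t
        return chips

    N = 41
    config = np.zeros((N, N), dtype=int)
    config[N // 2, N // 2] = 2000                     # drop a pile in the centre
    print(np.bincount(stabilize(config).ravel()))     # counts of sites holding 0, 1, 2, 3 chips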
Friday, Feb 1st (1:30pm to 2:30pm): Marek Biskup - UCLA. Title: Law of the extremes for the two-dimensional discrete Gaussian Free Field Abstract: A two-dimensional discrete Gaussian Free Field (DGFF) is a centered Gaussian process over a finite subset (say, a square) of the square lattice with covariance given by the Green function of the simple random walk killed upon exit from this set. Recently, much effort has gone into the study of the concentration properties and tail estimates for the maximum of the DGFF. In my talk I will address the limiting extreme-order statistics of the DGFF as the square-size tends to infinity. In particular, I will show that for any sequence of squares along which the centered maximum converges in law, the (centered) extreme process converges in law to a randomly-shifted Gumbel Poisson point process which is decorated, independently around each point, by a random collection of auxiliary points. If there is any time left, I will review what we know and/or believe about the law of the random shift. This talk is based on joint work with Oren Louidor (UCLA).

Friday, Feb 1st (2:30pm to 3:30pm): Fredrik Viklund - Columbia University. Title: The Virasoro algebra and discrete Gaussian free field Abstract: The Virasoro algebra is an infinite dimensional Lie algebra that plays an important role in the Conformal Field Theory (CFT) methods employed by physicists to describe and study conformally invariant scaling limits of planar critical lattice models from statistical physics. Despite much progress in the last decade, it seems fair to say that from a mathematical perspective many aspects of the connections between the discrete models and the continuum limit CFT remain somewhat mysterious. In the talk I will discuss recent joint work with C. Hongler and K. Kytola concerning the discrete Gaussian free field on a square grid. I will explain how for this model discrete complex analysis can be used to construct explicit (exact) representations of the Virasoro algebra of central charge 1 directly on the discrete level.

Friday, Feb 8th: James Lee - University of Washington. Title: Markov type and the multi-scale geometry of metric spaces Abstract: The behavior of random walks on metric spaces can sometimes be understood by embedding such a walk into a nicer space (e.g. a Hilbert space) where the geometry is more readily approachable. This beautiful theme has seen a number of geometric and probabilistic applications. We offer a new twist on this study by showing that one can employ mappings that are significantly weaker than bi-Lipschitz. This is used to answer questions of Naor, Peres, Schramm, and Sheffield (2004) by proving that planar graph metrics and doubling metrics have Markov type 2. The main new technical idea is that martingales are significantly worse at aiming than one might at first expect. Joint work with Jian Ding and Yuval Peres.

Friday, Feb 15th: Michelle Castellana - Princeton University. Title: The Renormalization Group for Disordered Systems Abstract: We investigate the Renormalization Group (RG) approach in finite-dimensional glassy systems, whose critical features are still not well-established, or simply unknown. We focus on spin and structural-glass models built on hierarchical lattices, which are the simplest non-mean-field systems where the RG framework emerges in a natural way.
The resulting critical properties shed light on the critical behavior of spin and structural glasses beyond mean field, and suggest future directions for understanding the criticality of more realistic glassy systems.

Friday, Feb 22nd: Jack Hanson - Princeton University
Title: Geodesics and Direction in 2d First-Passage Percolation
Abstract: I will discuss geodesics in first-passage percolation, a model for fluid flow in a random medium. There are numerous conjectures about the existence, coalescence, and asymptotic direction of infinite geodesics under the model's random metric. C. Newman and collaborators have proved some of these under strong assumptions. I will explain recent results with Michael Damron which develop a framework for addressing these questions; this framework allows us to prove versions of Newman's results under minimal assumptions.

Friday, Mar 1st: Vadim Gorin - M.I.T.
Title: Gaussian Free Field fluctuations for general-beta random matrix ensembles
Abstract: It is now known that the asymptotic fluctuations of the height function of uniformly random lozenge tilings of planar domains (equivalently, stepped surfaces in 3d space) are governed by the Gaussian Free Field (GFF), which is a 2d analogue of Brownian motion. On the other hand, in certain limit regimes such tilings converge to various random matrix ensembles corresponding to beta=2. This makes one wonder whether the GFF should also somehow arise in general-beta random matrix ensembles. I will explain that this is indeed true and that the asymptotic fluctuations of classical general-beta random matrix ensembles are governed by the GFF. This is joint work with A. Borodin.

Friday, Mar 8th: No seminar.

Friday, Mar 15th: Alice Guionnet - M.I.T.
Title: About heavy tailed random matrices
Abstract: We investigate the behaviour of matrices which do not belong to the universality class of Wigner matrices because their entries have heavy tails.

Friday, Oct 5th: Wei-Kuo Chen - University of Chicago
Title: Chaos problem in mean field spin glasses
Abstract: The main objective in spin glasses from the physical perspective is to understand the strange magnetic properties of certain alloys. Yet the models invented to explain the observed phenomena are also of a rather fundamental nature in mathematics. In this talk we will first introduce the famous Sherrington-Kirkpatrick model as well as some known results about this model such as the Parisi formula and the limiting behavior of the Gibbs measure. Next, we will discuss the problems of chaos in the mixed p-spin models and present mathematically rigorous results including disorder, external field, and temperature chaos.

Friday, Oct 12th: Thirty-fourth Midwest Probability Colloquium

Friday, Oct 19th: Gerard Ben Arous - Courant Institute
Abstract: This seminar was canceled. It will be rescheduled.

Friday, Oct 26th: Allan Sly - UC Berkeley
Title: The 2D SOS Model
Abstract: We present new results on the (2+1)-dimensional Solid-On-Solid model at low temperatures. Bricmont, El-Mellouki and Froelich (1986) showed that in the presence of a floor there is an entropic repulsion phenomenon, lifting the surface to a height which is logarithmic in the side of the box. We refine this and establish that the typical height of the SOS surface is precisely the floor of (1/(4β)) log n, where n is the side-length of the box and β is the inverse temperature. We determine the asymptotic shape of the top plateau and show that its boundary fluctuations are n^{1/3+o(1)}.
Based on joint works with Pietro Caputo, Eyal Lubetzky, Fabio Martinelli and Fabio Toninelli.

Friday, Dec 7th: Brian Rider - University of Colorado Boulder
Title: Spiking the random matrix hard edge
Abstract: The largest eigenvalue of a finite rank perturbation of a random Hermitian matrix is known to exhibit a phase transition (in the infinite dimensional limit). If the perturbation is small one sees the famous Tracy-Widom law, while a large perturbation results in a Gaussian fluctuation. In between, there exists a scaling window about a critical perturbation value leading to a separate family of limit laws. This basic discovery is due to Baik, Ben Arous, and Peche. More recently Bloemendal and Virag have shown this picture persists in the context of the general beta ensembles, giving new formulations of the critical limit laws. Yet another route, explained here, is to go through the random matrix hard edge, perturbing the smallest eigenvalues in the sample covariance set-up. A limiting procedure then recovers all of the distributions alluded to above. (Joint work with Jose Ramirez.)

Friday, Nov 2nd: Gregorio Moreno Flores - University of Wisconsin
Title: Directed polymers and the stochastic heat equation
Abstract: We show how some properties of the solutions of the Stochastic Heat Equation (SHE) can be derived from directed polymers in random environment. In particular, we show:
* A new proof of the positivity of the solutions of the SHE
* Improved bounds on the negative moments of the SHE
* Results on the fluctuations of the log of the SHE in equilibrium, namely, the Cole-Hopf solution of the KPZ equation (if time allows).

Friday, Nov 9th: Milton Jara - IMPA
Title: Second-order Boltzmann-Gibbs principle and applications
Abstract: The celebrated Boltzmann-Gibbs principle introduced by Rost in the 80's roughly says the following. For stochastic systems with one or more conservation laws, fluctuations of the non-conserved quantities are faster than fluctuations of the conserved quantities. Therefore, in the right space-time window, the space-time fluctuations of a given observable are asymptotically equivalent to a linear functional of the conserved quantities. In one dimension, we prove two generalizations of this principle: a non-linear (or second-order) version and a local version of it. This result opens a way to show convergence of fluctuations for non-linear models, like the ones in the fashionable KPZ universality class. As a corollary, we prove new convergence results for various observables of the asymmetric exclusion process, given in terms of solutions of the KPZ equation. Joint work with Patricia Gonçalves.

Friday, Nov 16th: Mohammad Abbas Rezaei - University of Chicago
Title: SLE curves and natural parametrization

Friday, Nov 23rd: Thanksgiving.

Friday, Nov 30th: Joe Neeman - UC Berkeley
Title: Robust Gaussian noise stability
Abstract: Given two Gaussian vectors that are positively correlated, what is the probability that they both land in some fixed set A? Borell proved that this probability is maximized (over sets A with a given volume) when A is a half-space. We will give a new and simple proof of this fact, which also gives some stronger results. In particular, we can show that half-spaces uniquely maximize the probability above, and that sets which almost maximize this probability must be close to half-spaces.
Winter/Spring 2012 Seminars

Friday, Jan 20: Jian Ding - Stanford University
Title: Extreme values for random processes of tree structures
Abstract: The main theme of this talk is that studying implicit tree structures of random processes is of significance in understanding their extreme values. I will illustrate this by several examples including cover times for random walks, maxima for two-dimensional discrete Gaussian free fields, and stochastic distance models. Our main results include (1) An approximation of the cover time on any graph up to a multiplicative constant by the maximum of the Gaussian free field, which yields a deterministic polynomial-time approximation algorithm for the cover time (D.-Lee-Peres 2010); the asymptotics for the cover time on a bounded-degree graph by the maximum of the GFF (D. 2011); a bound on the cover time fluctuations on the 2D lattice (D. 2011). (2) Exponential and doubly exponential tails for the maximum of the 2D GFF (D. 2011); some results on the extreme process of the 2D GFF (D.-Zeitouni, in preparation). (3) Critical and near-critical behavior for the mean-field stochastic distance model (D. 2011).

Friday, Feb 10: Jason Miller - Microsoft Research - Redmond
Title: Imaginary Geometry and the Gaussian Free Field
Abstract: The Schramm-Loewner evolution (SLE) is the canonical model of a non-crossing conformally invariant random curve, introduced by Oded Schramm in 1999 as a candidate for the scaling limit of loop erased random walk and the interfaces in critical percolation. The development of SLE has been one of the most exciting areas in probability theory over the last decade because Schramm's curves have now been shown to arise as the scaling limit of the interfaces of a number of different discrete models from statistical physics. In this talk, I will describe how SLE curves can be realized as the flow lines of a random vector field generated by the Gaussian free field, the two-time-dimensional analog of Brownian motion. I will also explain how this perspective can be used to prove several new results regarding the sample path behavior of SLE, in particular reversibility for kappa in (4,8). Based on joint works with Scott Sheffield.

Friday, Mar 9: Ivan Corwin - Microsoft Research - MIT
Title: Directed random polymers and Macdonald processes
Abstract: The goal of the talk is to survey recent progress in understanding statistics of certain exactly solvable growth models, particle systems, directed polymers in one space dimension, and stochastic PDEs. A remarkable connection to representation theory and integrable systems is at the heart of Macdonald processes, which provide an overarching theory for this solvability. This is based on joint work with Alexei Borodin.

Friday, April 13th: Brent Werness - University of Chicago
Title: Path properties of the Schramm-Loewner Evolution

Friday, May 11: L.P. Arguin - Universite de Montreal
Title: Extrema of branching Brownian motion
Abstract: Branching Brownian motion (BBM) on the real line is a particle system where particles perform Brownian motion and independently split into two independent Brownian particles after an exponential holding time. The statistics of extremal particles of BBM in the limit of large time are of interest for physicists and probabilists since BBM constitutes a borderline case, among Gaussian processes, where correlations affect the statistics.
In this talk, I will start by reviewing results on the law of the maximum of BBM (the rightmost particle), and present new results on the joint distribution of particles close to the maximum. In particular, I will show how the approach can be used to prove ergodicity of the particle system. If time permits, I will explain how the program for BBM lays out a road map to understand extrema of log-correlated Gaussian fields such as the 2D Gaussian free field. This is joint work with A. Bovier and N. Kistler.

Thursday, May 31: S.R. Srinivasa Varadhan - Courant Institute of Mathematical Sciences at New York University
This is a special event: Billingsley Lectures on Probability in honor of Patrick Billingsley.
Title: Large Deviations with Applications to Random Matrices and Random Graphs
Abstract: See it here.

Friday, June 1st: S.R. Srinivasa Varadhan - Courant Institute of Mathematical Sciences at New York University
Title: Large Deviations for an Unusual Sum

Friday, Sep 30: Antonio Auffinger - University of Chicago
Title: Landscape of random functions in many dimensions via Random Matrix Theory
Abstract: How many critical values does a typical Morse function have on a high dimensional manifold? Can we say anything about the topology of its level sets? In this talk I will survey joint work with Gerard Ben Arous and Jiri Cerny that addresses these questions in a particular but fundamental example. We investigate the landscape of a general Gaussian random smooth function on the N-dimensional sphere. These correspond to Hamiltonians of well-known models of statistical physics, i.e., spherical spin glasses. Using the classical Kac-Rice formula, this counting boils down to a problem in Random Matrix Theory. This allows us to show an interesting picture for the complexity of these random Hamiltonians, for the bottom of the energy landscape, and in particular a strong correlation between the index and the critical value. We also propose a new invariant for the possible transition between the so-called 1-step replica symmetry breaking and a full replica symmetry breaking scheme and show how the complexity function is related to the Parisi functional.

Friday, Oct 7: Antti Knowles - Harvard University
Title: Finite-rank deformations of Wigner matrices
Abstract: The spectral statistics of large Wigner matrices are by now well-understood. They exhibit the striking phenomenon of universality: under very general assumptions on the matrix entries, the limiting spectral statistics coincide with those of a Gaussian matrix ensemble. I shall talk about Wigner matrices that have been perturbed by a finite-rank matrix. By Weyl's interlacing inequalities, this perturbation does not affect the large-scale statistics of the spectrum. However, it may affect eigenvalues near the spectral edge, causing them to break free from the bulk spectrum. In a series of seminal papers, Baik, Ben Arous, and Peche (2005) and Peche (2006) established a sharp phase transition in the statistics of the extremal eigenvalues of perturbed Gaussian matrices. At the BBP transition, an eigenvalue detaches itself from the bulk and becomes an outlier. I shall report on recent joint work with Jun Yin. We consider an NxN Wigner matrix H perturbed by an arbitrary deterministic finite-rank matrix A. We allow the eigenvalues of A to depend on N. Under optimal (up to factors of log N) conditions on the eigenvalues of A, we identify the limiting distribution of the outliers.
We also prove that the remaining eigenvalues "stick" to eigenvalues of H, thus establishing the edge universality of H + A. On the other hand, our results show that the distribution of the outliers is not universal, but depends on the distribution of H and on the geometry of the eigenvectors of A. As the outliers approach the bulk spectrum, this dependence is washed out and the distribution of the outliers becomes universal.

Friday, Oct. 14: Midwest Probability Colloquium at Northwestern

Tuesday, Oct 18: Scientific and Statistical Computing Seminar (3:00 in Eckhart 207) Jonathan Mattingly - Duke University
Title: A Menagerie of Stochastic Stabilization
Abstract: A basic problem for a stochastic system is to show that it possesses a unique steady state which dictates the long term statistics of the system. Sometimes the existence of such a measure is the difficult part. One needs control of the excursions away from the system's typical scale. As in deterministic systems, one popular method is the construction of a Lyapunov function. In the stochastic setting there is a lack of systematic methods to construct a Lyapunov function when the interplay between the deterministic dynamics and the stochastic dynamics is important for stabilization. I will give some modest steps in this direction which apply to a number of cases. In particular I will show a system where an explosive deterministic system is stabilized by the addition of noise, and examples of physical systems where it is not clear how the deterministic system absorbs the stochastic excitation without blowing up.

Friday, Oct 21: Vladas Sidoravicius - IMPA
Title: From random interlacements to coordinate and infinite cylinder percolation
Abstract: During the talk I will focus on the connectivity properties of three models with long (infinite) range dependencies: random interlacements, percolation of the vacant set in the infinite rod model, and coordinate percolation. The latter model has polynomial decay in the sub-critical and super-critical regimes in dimension 3. I will explain the nature of this phenomenon and why it is difficult to handle these models technically. In the second half of the talk I will present key ideas of the multi-scale analysis which allows one to reach some conclusions. At the end I will discuss applications and several open problems.

Friday, Nov 4: Jinho Baik - University of Michigan
Title: Complete matchings and random matrix theory
Abstract: Over the last decade or so, it has been found that the distributions that first appeared in random matrix theory describe several objects in probability and combinatorics which do not come from matrices at all. We consider one such example from the so-called maximal crossing and nesting of random complete matchings of integers. We also discuss a related non-intersecting process. This is joint work with Bob Jenkins.

Friday, Nov 11: Michael Damron - Princeton University
Title: A simplified proof of the relation between scaling exponents in first-passage percolation
Abstract: In first passage percolation, we place i.i.d. non-negative weights on the nearest-neighbor edges of Z^d and study the induced random metric. A long-standing conjecture gives a relation between two "scaling exponents": one describes the variance of the distance between two points and the other describes the transversal fluctuations of optimizing paths between the same points. This is sometimes referred to as the "KPZ relation."
In a recent breakthrough work, Sourav Chatterjee proved this conjecture using a strong definition of the exponents. I will discuss work I just completed with Tuca Auffinger, in which we introduce a new and intuitive idea that replaces Chatterjee's main argument and gives an alternative proof of the relation. One advantage of our argument is that it does not require a certain non-trivial technical assumption of Chatterjee on the weight distribution.

Wednesday, Nov 16: CAMP/Nonlinear PDEs Seminar (4pm in Eckhart 202) Ofer Zeitouni - University of Minnesota
Title: Traveling waves, branching random walks, and the Gaussian free field
Abstract: I will discuss several aspects of branching random walks and their relation with the KPP equation on the one hand, and the maximum of certain (two dimensional) Gaussian fields on the other. I will not assume any knowledge about either of these terms.

Friday, Nov 18: Brent Werness - University of Chicago
Title: The parafermionic observable in Schramm-Loewner Evolutions
Abstract: In recent years, work by Stanislav Smirnov and his co-authors has greatly advanced our understanding of discrete stochastic processes, such as self-avoiding walk and the Ising model, via the use of a tool known as the parafermionic observable. Much of that work has been done in order to show convergence of these models to Schramm-Loewner Evolutions (SLE) in the scaling limit, although very little work has been done on what the parafermionic observable is in SLE itself. In this talk I will introduce the parafermionic observable, and then discuss one possible generalization to the continuous setting. I will then briefly introduce SLE and compute its parafermionic observable, ending with a couple of open questions.

Friday, Nov 25: Thanksgiving holiday. No seminar.

Friday, Dec 2: Jonathon Peterson - Purdue University (1:30 pm!!)
Title: The contact process on the complete graph with random, vertex-dependent infection rates
Abstract: The contact process is an interacting particle system that is a very simple model for the spread of an infection or disease on a network. Traditionally, the contact process was studied on homogeneous graphs such as the integer lattice or regular trees. However, due to the non-homogeneous structure of many real-world networks, there is currently interest in studying interacting particle systems in non-homogeneous graphs and environments. In this talk, I consider the contact process on the complete graph, where the vertices are assigned (random) weights and the infection rate between two vertices is proportional to the product of their weights. This set-up allows for some interesting analysis of the process and detailed calculations of phase transitions and critical exponents.

Friday, Dec 9: Paul Bourgade - Harvard University
Title: Universality for beta-ensembles
Abstract: Wigner stated the general hypothesis that the distribution of eigenvalue spacings of large complicated quantum systems is universal in the sense that it depends only on the symmetry class of the physical system but not on other detailed structures. The simplest case for this hypothesis is for ensembles of large but finite dimensional matrices. Spectacular progress was made in the past decade to prove universality of random matrices presenting an orthogonal, unitary or symplectic invariance. These models correspond to log-gases with respective inverse temperature 1, 2 or 4. I will report on a joint work with L. Erdős and H.-T.
Yau, which yields universality for the log-gases at arbitrary temperature. The involved techniques include a multiscale analysis and a local logarithmic Sobolev inequality.

Friday, Oct. 8, Fredrik Johansson Viklund, Columbia U.

Friday, Oct. 29, Tom Alberts, U. of Toronto, Convergence of Loop-Erased Random Walk to SLE(2) in the Natural Time Parameterization
I will discuss work in progress with Michael Kozdron and Robert Masson on the convergence of the two-dimensional loop-erased random walk process to SLE(2), with the time parameterization of the curves taken into account. This is a strengthening of the original Lawler, Schramm, and Werner result which was only for curves modulo a reparameterization. The ultimate goal is to show that the limiting curve is SLE(2) with the very specific natural time parameterization that was recently introduced in Lawler and Sheffield, and further studied in Lawler and Zhou. I will describe several possible choices for the parameterization of the discrete curve that should all give the natural time parameterization in the limit, but with the key difference being that some of these discrete time parameterizations are easier to analyze than the others.

Friday, Dec. 3, Pierre Nolin, Courant Institute, Connection probabilities and RSW-type bounds for the two-dimensional FK Ising model
For two-dimensional independent percolation, Russo-Seymour-Welsh (RSW) bounds on crossing probabilities are an important a-priori indication of scale invariance, and they turned out to be a key tool to describe the phase transition: what happens at and near criticality. In this talk, we prove RSW-type uniform bounds on crossing probabilities for the FK Ising model at criticality, independent of the boundary conditions. A central tool in our proof is Smirnov's fermionic observable for the FK Ising model, that makes some harmonicity appear on the discrete level, providing precise estimates on boundary connection probabilities. We also prove several related results - including some new ones - among which the fact that there is no magnetization at criticality, tightness properties for the interfaces, and the value of the half-plane one-arm exponent. This is joint work with H. Duminil-Copin and C. Hongler.
Stealing PINs via mobile sensors: actual risk versus user perception

Regular Contribution

Maryam Mehrnezhad1, Ehsan Toreini1, Siamak F. Shahandashti1 & Feng Hao1

International Journal of Information Security volume 17, pages 291–313 (2018)

In this paper, we present the actual risks of stealing user PINs by using mobile sensors versus the perceived risks by users. First, we propose PINlogger.js which is a JavaScript-based side channel attack revealing user PINs on an Android mobile phone. In this attack, once the user visits a website controlled by an attacker, the JavaScript code embedded in the web page starts listening to the motion and orientation sensor streams without needing any permission from the user. By analysing these streams, it infers the user's PIN using an artificial neural network. Based on a test set of fifty 4-digit PINs, PINlogger.js is able to correctly identify PINs in the first attempt with a success rate of 74% which increases to 86 and 94% in the second and third attempts, respectively. The high success rates of stealing user PINs on mobile devices via JavaScript indicate a serious threat to user security. With the technical understanding of the information leakage caused by mobile phone sensors, we then study users' perception of the risks associated with these sensors. We design user studies to measure the general familiarity with different sensors and their functionality, and to investigate how concerned users are about their PIN being discovered by an app that has access to all these sensors. Our studies show that there is significant disparity between the actual and perceived levels of threat with regard to the compromise of the user PIN. We confirm our results by interviewing our participants using two different approaches, within-subject and between-subject, and compare the results. We discuss how this observation, along with other factors, renders many academic and industry solutions ineffective in preventing such side channel attacks.

1 Introduction

Smartphones equipped with many different sensors such as GPS, light, orientation, and motion are continuously providing more features to end users in order to interact with their real-world surroundings. Developers can have access to the mobile sensors either by (1) writing native code using mobile OS APIs [1], (2) recompiling HTML5 code into a native app [2], or (3) using standard APIs provided by the W3C which are accessible through JavaScript code within a mobile browser. The last method has the advantage of not needing any app-store approval for releasing the app or issuing future updates. More importantly, the JavaScript code is platform independent, i.e., once the code is developed it can be executed within any modern browser on any mobile OS.

Fig. 1 PINlogger.js potential attack scenarios: a the malicious code is loaded in an iframe and the user is on the same tab, b the attack tab is already open and the user is on a different tab, c the attack content is already open in a minimized browser, and the user is on an installed app, d the attack content is already open in a (minimized) browser, and the screen is locked.
The attacker listens to the side channel motion and orientation measurements of the victim's mobile device through JavaScript code, and uses machine learning methods to discover the user's sensitive information such as activity types and PINs.

In-browser access risks

While sensor-enabled mobile web applications provide users with more functionality, they raise new privacy and security concerns. Both the academic community and the industry have recognized such issues regarding certain sensors such as geolocation [3]. For a website to access the geolocation data, it must ask for explicit user permission. However, to the best of our knowledge, there is little work evaluating the risks of in-browser access to other sensors. Unlike in-app attacks, an in-browser attack, i.e., via JavaScript code embedded in a web page, does not require any app installation. In addition, JavaScript code does not require any user permission to access sensor data such as device motion and orientation. Furthermore, there is no notification while JavaScript is reading the sensor data stream. Hence, such in-browser attacks can be carried out far more covertly than their in-app counterparts. However, an effective in-browser attack still has to overcome the technical challenge that the sampling rates available in browser are much lower than those in app. For example, as we observed in [4], frequency rates of motion and orientation sensor data available in-browser are 3 to 5 times lower than those of accelerometer and gyroscope available in-app.

In-browser attacks

Many popular browsers such as Safari, Chrome, Firefox, Opera, and Dolphin have already implemented access to the above sensor data. As we demonstrated in [5] and [4], all of these mobile browsers allow such access when the code is placed in any part of the active tab including iframes (Fig. 1a). In some cases such as Chrome and Dolphin on iOS, an inactive tab can have access to the sensor measurements as well (Fig. 1b). Even worse, some browsers such as Safari allow inactive tabs to access the sensor data when the browser is minimized (Fig. 1c), or even when the screen is locked (Fig. 1d). Through experiments, we find that mobile operating systems and browsers do not implement consistent access control policies in regard to mobile orientation and motion sensor data. Partly, this is because W3C specifications [6] do not specify any policy and do not discuss any risks associated with this potential vulnerability. Also, because of the low sampling rates available in browser, the community has been neglecting the security risks associated with in-browser access to such sensor data. However, in TouchSignatures [4], we showed that despite the low sampling rates, it is possible to identify user touch actions such as click, scroll, and zoom, and even the numpad's digits. In this paper, we introduce PINlogger.js, an attack on full 4-digit PINs as opposed to only single digits in [4].

Mobile sensors

Today, sensors are everywhere: from your personalized devices such as mobiles, tablets, watches, fitness trackers, and other wearables, to your TV, car, kitchen, home, and to the roads, parking lots, and smart cities. These new technologies are equipped with many different sensors such as NFC, accelerometer, orientation, and motion, and are connected to each other. These sensors are continuously providing more features to end users in order to interact with their real-world surroundings.
While users are benefiting from richer and more personalized apps which use these sensors for different applications such as fitness, gaming, and even security applications such as authentication, the growing number of sensors introduces new security and privacy risks to end users, and makes the task of sensor management more complex.

Research questions

While sensors on mobile platforms are getting more powerful and starting to collect more information about the users and their environment, we want to evaluate the general knowledge about these sensors among mobile users. We are particularly interested to know the level of concern people may have about these sensors being able to threaten their privacy and security.

Contributions

In this work, we contribute to the study of sensors, their actual risks, and their perceived risks by users as follows:

- We introduce PINlogger.js, an attack on full 4-digit PINs as opposed to only single digits in [4].
- We show that unregulated access to these sensors poses more serious security risks to users in comparison with more well-known sensors such as camera, light, and microphone.
- We conduct user studies to investigate users' understanding of these sensors and also their perception of the security risks associated with them. We show that users in fact have fewer security concerns about these sensors compared to more well-known ones.
- We study and challenge currently suggested solutions, and discuss why our studies show they cannot be effective. We argue that a usable and secure solution is not straightforward and requires further research.

Fig. 2 Left Three dimensions (x, y, and z) of acceleration data including gravity (from the motion sensor). The start time, duration, and end time of four phone calls are easily recognizable from these measurements. Right The screenshot of the call history of the phone during the experiment

2 User activities

The potential threats to user security posed by unauthorized access to the motion and orientation sensor data are not immediately clear. Here we demonstrate two simple scenarios which show that sensitive user information such as phone call timing and physical activities can be deduced from device orientation and motion sensor data obtained from JavaScript. Users tend to move their mobile devices in distinctive manners while performing certain tasks on the devices, or by simply carrying them. Examples of the former include answering a call or taking a photograph, while the latter covers their transport mode. In both cases, an identifiable succession of movements is exhibited by the device. As a result, a web-based program which has access to the device orientation and motion data may reveal sensitive facts about users such as the exact timing information of the start and end of phone calls and that of taking photographs. On the other hand, while the user is simply carrying her device, the device movement pattern may reveal information about the user's transport mode, e.g. if the user is stationary at one place, walking, running, on the bus, in a car or on the train. We present the results of two initial experiments that we have performed on a Nexus 5 using Maxthon Browser (as an example of a browser that allows JavaScript to access sensor data even when the screen is locked).

Motion and orientation sensor details

Before presenting the results, we first explain the motion and orientation sensors in detail.
According to W3C specifications [6], motion and orientation sensor data are a series of different measurements as follows:

- device orientation, which provides the physical orientation of the device, expressed as three rotation angles (α, β, γ) in the device's local coordinate frame,
- device acceleration, which provides the physical acceleration of the device, expressed in Cartesian coordinates (x, y, z) in the device's local coordinate frame,
- device acceleration including gravity, which is similar to acceleration except that it includes gravity as well,
- device rotation rate, which provides the rotation rate of the device about the local coordinate frame, expressed as three rotation angles (α, β, γ), and
- interval, which provides the constant sampling rate and is expressed in milliseconds (ms).

The device coordinate frame is defined with respect to the standard position of the mobile screen. When it is in the portrait mode, the x and y axes are in the plane of the screen and are positive towards the screen's right and up, and z is perpendicular to the plane of the screen and is positive outwards from the screen. Moreover, the sensor data discussed above are processed sensor data obtained from multiple physical sensors such as the gyroscope and accelerometer. In the rest of this paper, unless specified otherwise, by sensor data we mean the sensor data accessible through mobile browsers, which include acceleration, acceleration including gravity, rotation rate, and orientation.

Phone call timing

In the first experiment, we opened the website carrying our JavaScript code and then locked the screen. The JavaScript code continued to log orientation and motion data while the Android phone was left on a desk. For this experiment, we used another phone to call the Android phone four times, with a few seconds between the calls. As demonstrated in Fig. 2 (left), the 4 distinct phone calls along with their timing are recognizable from the three dimensions of acceleration (including gravity) which come from the device motion sensor. For a better comparison, Fig. 2 (right) shows the received call history of the phone during the experiment with the start times and durations of the calls. As shown in this figure, the captured sensor data match the call history.

User physical activities

In the second experiment, we again locked the phone and recorded the sensor data during 22 s of sitting, 34 s of walking and 25 s of slow running. We observed that the mentioned activities have visibly distinctive sensor streams. As an example, Fig. 3 shows the acceleration data from the motion sensor. As can be seen, the mentioned activities are recognizable from each other since they are visibly different in the sensor measurements. Our initial evaluations suggest that discovering device movement related information such as call times and the user's mode of transport can be easily implemented. However, as we will explain, distinguishing user PINs is a lot harder as the induced sensor measurements are only subtly different. In the following sections, we will demonstrate that, with advanced machine learning techniques, we are able to remotely infer the entered PINs on a mobile phone with high accuracy.

Fig. 3 Three dimensions (x, y, and z) of acceleration data (from the motion sensor) during 22 s of sitting, 34 s of walking and 25 s of running

3 PINlogger.js

In this section, we describe an advanced attack on user PINs by introducing PINlogger.js.
In the following subsections, we describe the attack approach, our program implementation, data collection, feature extraction, and the neural network.

3.1 Attack approach

We consider an attacker who wants to learn the user's PIN tapped on a soft keyboard of a smartphone via side channel information. We consider (digit-only) PINs since they are popular credentials used by users for many purposes such as unlocking the phone, SIM PIN, NFC payments, bank cards, other banking services, gaming, and other personalized applications such as health care and insurance. Unlike similar works which have to gain access through an installed app [7,8,9,10,11,12,13,14,15,16], our attack does not require any user permission. Instead, we assume that the user has loaded the malicious web content in the form of an iframe, or another tab while working with the mobile browser as shown in Fig. 1. At this point, the attack code has already started listening to the sensor sequences from the user's interaction with the phone. In order to uncover when the user enters his PIN, we need to classify his touch actions such as click, scroll, and zoom. We have already shown in TouchSignatures [4] that with the same sensor data and by applying classification algorithms, it is possible to effectively identify the user's touch actions. Here, we consider a scenario after the touch action classification. In other words, our attacker already knows that the user is entering his PIN. Moreover, unless explicitly noted, we consider a generic attack scenario which is not user dependent. This means that we do not need to train our machine learning algorithm with the same user as the subject of the attack. Instead, we have a one-round training phase with data from multiple voluntary users and use the obtained trained algorithm to output other users' PINs later. This approach has the benefit of not needing to trick individual users in order to collect their data for training.

3.2 Web program implementation

We implemented a web page with embedded JavaScript code in order to collect the data from voluntary users. Our code registers two listeners on the window object to have access to orientation and motion data separately. The event handlers defined for these purposes are named DeviceOrientationEvent and DeviceMotionEvent, respectively. On the client side, we developed a GUI in HTML5 which shows random 4-digit PINs to the users and activates a numpad for them to enter the PINs, as shown in Fig. 4. All sensor sequences are sent to the database along with their associated labels, which are the digits of the entered PINs. We implemented our server program using Node.js (nodejs.org). Our code sends the orientation and motion sensor data of the mobile device to our NoSQL database using MongoLab (mongolab.com, a web-based service for MongoDB). When the event listener fires, it establishes a socket by using Socket.IO (socket.io) between the client and the server and constantly transmits the sensor data to the database. Both Node.js and MongoDB (as a document-oriented database) are known for being capable of supporting data-intensive applications in real time. In the proof-of-concept implementation of the attack, we focus on working with active web pages, which allows us to easily identify the start of a touch action through the JavaScript access to the onkeydown event. A similar approach is adopted in other works, e.g. TouchLogger [8] and TapLogger [16].
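To make this setup concrete, a minimal client-side sketch in the spirit of the implementation above is shown below. It is a sketch only: the server address and the 'reading' event name are illustrative placeholders rather than the identifiers used in our actual code, and it assumes the Socket.IO client library has already been loaded by the page.

```javascript
// Minimal sketch of the client side described in Sect. 3.2 (placeholder names).
var socket = io('https://collector.example.org');  // Socket.IO connection to the server
var currentDigit = null;                            // label: last digit pressed on the numpad

// In the proof of concept, the onkeydown event marks the start of a touch action.
window.addEventListener('keydown', function (e) {
  currentDigit = e.key;
});

// Orientation stream: three rotation angles (alpha, beta, gamma).
window.addEventListener('deviceorientation', function (e) {
  socket.emit('reading', {
    type: 'orientation',
    alpha: e.alpha, beta: e.beta, gamma: e.gamma,
    digit: currentDigit, time: Date.now()
  });
});

// Motion stream: acceleration, acceleration including gravity, rotation rate, interval.
window.addEventListener('devicemotion', function (e) {
  var acc  = e.acceleration || {};                  // may be unavailable on some devices
  var accG = e.accelerationIncludingGravity || {};
  var rotR = e.rotationRate || {};
  socket.emit('reading', {
    type: 'motion',
    acc:  { x: acc.x,  y: acc.y,  z: acc.z  },
    accG: { x: accG.x, y: accG.y, z: accG.z },
    rotR: { alpha: rotR.alpha, beta: rotR.beta, gamma: rotR.gamma },
    interval: e.interval,
    digit: currentDigit, time: Date.now()
  });
});
```

On the server side, a Node.js process listening for the same event could simply append each record, together with its label, to the MongoDB collection.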
In an extended attack scenario, a more complex segmentation process would be needed to identify the start and end of a touch action. This could be achieved by measuring the peak amplitudes of a signal, as done in [12].

Fig. 4 Different input methods used by the users for PIN entrance

3.3 Data collection

Following the approach of Aviv et al. [7] and Spreitzer [15], we consider a set of 50 fixed PINs with uniformly distributed digits. We created these PINs in a way that all digits are repeated about the same number of times (around 20 times). The data collection code is publicly available via GitHub. Technical details of the data collection process and the collected data are publicly available too. We conducted our user studies using Chrome on an Android device (Nexus 5). The experiments and results are based on the collected data from 10 users, each entering all the 50 4-digit PINs 5 times. Our voluntary participants were university students and staff and performed the experiments at university offices. We simply explained to them that all they needed was to enter a few PINs shown in a web page. In relation to the environmental setting for the data collection, we asked the users to remain seated in a chair while working with the phone. We did not require our users to hold the phone in any particular mode (portrait or landscape) or work with it by using any specific input method (using one or two hands). We let them choose their most comfortable posture for holding the phone and working with it in their usual manner. While watching the users during the experiments, we noticed that all of our users used the phone in the portrait mode by default. Users were either leaning their hands on the desk or freely keeping them in the air. We also observed the following input methods used by the users:

- Holding the phone in one hand and entering the PIN with the thumb of the same hand (Fig. 4 left).
- Holding the phone in one hand and entering the PIN with the fingers of the other hand (Fig. 4 centre).
- Holding the phone with two hands and entering the PIN with the thumbs or fingers of both hands (Fig. 4 right).

In the first two cases, users interchangeably used either their right or left hands in order to hold the phone. In order to simulate a real-world data collection environment, we took the phone to each user's workspace, briefly explained the experiment to them, and let them complete the experiment without our supervision. All users found this way of data collection very easy and could finish the experiments without any difficulties. Our participants were each given an Amazon voucher (worth £10) at the end for their participation.

3.4 Feature extraction

In order to build the feature vector as the input to our classifier algorithm, we consider both time-domain and frequency-domain features. We improve the feature vectors suggested in [4] by adding some more complex features such as the correlation between the measurements. This addition improves the results, as we will discuss in Sect. 4. As discussed before, the 12 different sequences obtained from the collected data include orientation (ori), acceleration (acc), acceleration including gravity (accG), and rotation rate (rotR), with three sequences (either x, y and z, or α, β and γ) for each sensor measurement. As a pre-processing step, and in order to remove the effect of the initial position and orientation of the device, we subtract the initial value in each sequence from subsequent values in the sequence.
We use these pre-processed sequences for feature extraction in the time domain directly. In the frequency domain, we apply the fast Fourier transform (FFT) to the pre-processed sequences and use the transformed sequences for feature extraction. In order to build our feature vector, first we obtain the maximum, minimum, and average values of each pre-processed and FFT sequence. These statistical measurements give us 3 × 12 = 36 features in the time domain and the same number of features in the frequency domain. We also consider the total energy of each sequence in both time and frequency domains, calculated as the sum of the squared sequence values, i.e., E = ∑ v_i^2, which gives us 24 new features. The next set of features is in the time domain and is based on the correlation between each pair of sequences in different axes. We have 4 different sequences: ori, acc, accG, and rotR, each represented by 3 measurements. Hence, we can calculate 6 different correlation values between the possible pairs: (ori, acc), (ori, accG), (ori, rotR), (acc, accG), (acc, rotR), and (accG, rotR), each presented in a vector with 3 elements. We use the correlation coefficient in order to calculate the similarity between the mentioned sequences. The correlation coefficient method is commonly used to compare the similarity of the shapes of two signals (e.g. [17]). Given two sequences A and B, with Cov(A, B) denoting the covariance between A and B, the correlation coefficient is computed as below:

$$R_{AB} = \frac{\mathrm{Cov}(A,B)}{\sqrt{\mathrm{Cov}(A,A)\cdot \mathrm{Cov}(B,B)}}$$

The correlation coefficient of two vectors measures their linear dependence by using covariance. By adding these new 18 features, our feature vector consists of a total of 114 features.

3.5 Neural network

We apply a supervised machine learning algorithm by using an artificial neural network (ANN) to solve this classification problem. The input of an ANN system could be either raw data, or pre-processed data from the samples. In our case, we have pre-processed our samples by building a feature vector as described before. Therefore, as input, our ANN receives a set of 114 features for each sample. As explained before, we collected 5 samples for each 4-digit PIN from 10 users. While reading the records, we realized that some of the PINs had been entered incorrectly by some users. This was expected since each user was required to enter 250 PINs. Since we recorded both expected and entered PINs in our data collection, we could easily identify these PINs and exclude them from our analysis. Overall, out of 2500 records collected from 10 users, 12 of the PINs were entered incorrectly. Hence we ended up with 2488 samples for our ANN. The feature vectors are mapped to specific labels from a finite set, i.e., the 50 fixed random 4-digit PINs. We train and validate our algorithm with two different subsets of our collected data, and test the neural network against a separate subset of the data. We train the network with 70% of our data, validate it with 15% of the records, and test it with the remaining 15% of our data set. We use a pattern recognition/classifying network in MATLAB with one hidden layer and 1000 nodes. Pattern recognition/classifying networks normally use a scaled conjugate gradient (SCG) back-propagation algorithm for updating weight and bias values in training. Scaled conjugate gradient is a fast supervised learning algorithm [18].
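Before turning to the results, the sketch below illustrates the feature construction of Sect. 3.4 in code: it computes the per-sequence statistics, the energy, and the correlation coefficient for plain numeric arrays. It is a simplified sketch only: our analysis was carried out in MATLAB, the FFT-domain features are omitted, and the helper names are ours rather than names from the original code.

```javascript
// A "sequence" is an array of numbers, e.g. the x-axis of acceleration for one PIN entry.

// Pre-processing: remove the effect of the initial device position/orientation.
function preprocess(seq) {
  return seq.map(v => v - seq[0]);
}

// Per-sequence statistics used in the feature vector: maximum, minimum, average,
// and total energy (sum of squared values).
function stats(seq) {
  const max = Math.max(...seq);
  const min = Math.min(...seq);
  const avg = seq.reduce((s, v) => s + v, 0) / seq.length;
  const energy = seq.reduce((s, v) => s + v * v, 0);
  return { max, min, avg, energy };
}

// Correlation coefficient R_AB = Cov(A,B) / sqrt(Cov(A,A) * Cov(B,B)).
function cov(a, b) {
  const meanA = a.reduce((s, v) => s + v, 0) / a.length;
  const meanB = b.reduce((s, v) => s + v, 0) / b.length;
  let c = 0;
  for (let i = 0; i < a.length; i++) c += (a[i] - meanA) * (b[i] - meanB);
  return c / (a.length - 1);
}
function corrCoef(a, b) {
  return cov(a, b) / Math.sqrt(cov(a, a) * cov(b, b));
}

// Example: features for one (toy) axis of the orientation stream.
const oriX = preprocess([0.10, 0.32, 0.21, 0.55]);
console.log(stats(oriX));            // { max, min, avg, energy }
console.log(corrCoef(oriX, oriX));   // 1, by definition of the correlation coefficient
```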
In this section, we present the results of our attack on 4-digit PINs in two different forms: the multiple-users mode and the same-user mode. We also train separate ANN systems to learn individual digits of PINs and compare these results with other works.

4.1 Multiple-users mode

The second column of Table 1 shows the accuracy of our ANN trained with the data from all users. In this mode, the results are based on training, validating, and testing our ANN using the collected data from all of our 10 participants. As the table shows, in the first attempt PINlogger.js is able to infer the user's 4-digit PIN correctly with an accuracy of 74.43%, and as expected it gets better in further attempts. By comparison, a random attack can guess a PIN from a set of 50 PINs with a probability of 2% in the first attempt and 6% in three attempts.

Table 1 PINlogger.js's PIN identification rates in different attempts

4.2 Same-user mode

In order to study the impact of individual training, we trained, validated, and tested the network with the data collected from one user. We refer to this mode of analysis as the same-user mode. We asked our user to enter 50 random PINs, each five times, and repeated the experiment 10 times (rounds). The reason we repeated the experiments is that the classifier needs to receive enough samples to be able to train the system. Interestingly, our user used all three different input methods shown in Fig. 4 during the PIN entrance. As expected, our classifier performs better when it is personalized: the accuracy reaches 79.23% in the first attempt and increases to 93.52 and 97.71% in two and three attempts, respectively. In the same-user mode, convincing the users to provide the attacker with sufficient data for training customized classifiers is not easy, but still possible. Approaches similar to gaming apps such as Math Trainer could be applied. Math-based CAPTCHAs are possible web-based alternatives. Any other web-based game application which segments the GUI similarly to a numerical keypad will do as well. Nonetheless, in this paper we mainly follow the multiple-users approach.

4.3 Identification of PIN digits

One might argue that the attack should be evaluated against the whole 4-digit PIN space. However, we believe that the attack could still be practical when selecting from a limited set of PINs since users do not select their PINs randomly [19]. It has been reported that around 27% of all possible 4-digit PINs belong to a set of 20 PINs, including straightforward ones like "1111", "1234", or "2000". Nevertheless, we present the results of our analysis of the attack against the entire search space for the two experiment modes discussed above. We considered 10 classes of the entered digits (0–9) from the data we collected on 4-digit PINs used in Sect. 4.1. In the multiple-users mode, we trained, validated, and tested our system with data from all 10 users. In the same-user mode, we trained personalized classifiers for each user. Unlike the test condition of Sect. 4.2, we did not have to increase the number of rounds of PIN entry here since we had enough samples for each digit per user. In the same-user mode in this section, we used the average of the results of our 10 users. The average identification rates of different digits for the three different approaches are presented in Table 2.
Table 2 Average digit identification rates in different attempts

The results in our multiple-users mode indicate that we can infer the digits with a success probability of 70.75, 83.27, and 92.06% in the first, second, and third attempts, respectively. This means that for a 4-digit PIN and based on the obtained sensor data, the attacker can guess the PIN from a set of 3^4 = 81 possible PINs with a probability of success of 0.9206^4 = 71.82%. A random attack, however, can only predict the 4-digit PIN with a probability of 0.81% in 81 attempts. By comparison, PINlogger.js achieves a dramatically higher success rate than a random attacker.

Table 3 Comparison of PINlogger.js with related attacks on 4-digit PINs

Using a similar argument, in the same-user mode the success probability of guessing the PIN in 81 attempts is 85.46%. In the same setting, Cai and Chen report a success rate of 65% using accelerometer and gyroscope data [20], and Simon and Anderson's [14] PIN Skimmer only achieves a 12% success rate in 81 attempts using camera and microphone. Our results in digit recognition in this paper are also better than what is achieved in TouchSignatures [4]. In summary, PINlogger.js performs better than all sensor-based digit-identifier attacks in the literature.

4.4 Comparison with related work

Obtaining sensitive information about users such as PINs based on mobile sensors has been actively explored by researchers in the field [21, 22]. In particular, there is a body of research which uses mobile sensors through a malicious app running in the background to extract PINs entered on the soft keyboard of the mobile device. For example, GyroPhone, by Michalevsky et al. [10], shows that gyroscope data are sufficient to identify the speaker and even parse speech to some extent. Other examples include Accessory [13] by Owusu et al. and Tapprints by Miluzzo et al. [11]. They infer passwords on full alphabetical soft keyboards based on accelerometer measurements. TouchLogger [8] is another example by Cai and Chen [20] which shows the possibility of distinguishing a user's input on a mobile numpad by using the accelerometer and gyroscope. The same authors demonstrate a similar attack in [9] on both numerical and full keyboards. The only work which relies on in-browser access to sensors to attack a numpad is our previous work, TouchSignatures [4]. All of these works, however, aim for the individual digits or characters of a keyboard, rather than the entire PIN or password. Another category of works directly targets user PINs. For example, PIN Skimmer by Simon and Anderson [14] is an attack on a user's numpad and PINs using the camera and microphone on the smartphone. Spreitzer suggests another PIN skimming attack [15] which steals a user's PIN based on the measurements from the smartphone's ambient light sensor. Narain et al. introduce another attack [12] on smartphone numerical and alphabetical keyboards and the user's PINs and credit card numbers by using the smartphone microphone. TapLogger by Xu et al. [16] is another attack on the smartphone numpad which outputs the pressed digits and PINs based on accelerometer and orientation sensor data. Similarly, Aviv et al. introduce an accelerometer-based side channel attack on the user's PINs and patterns in [7]. We choose to compare PINlogger.js with the works in this category since they have the same goal of revealing the user's PINs. Table 3 presents the results of our comparison.
As shown in Table 3, PINlogger.js is the only attack on PINs which acquires the sensor data via JavaScript code. In-browser JavaScript-based attacks pose even greater security threats to users since, unlike in-app attacks, they do not require any app installation or user permission to work. Moreover, the attacker does not need to develop different apps for different platforms such as Android, iOS, and Windows. Once the attacker develops the JavaScript code, it can be deployed to attack all mobile devices regardless of the platform. Moreover, PINlogger.js is the only work which presents the results of the attack in the multiple-users mode. By contrast, the results from other works are mainly based on training the classifiers for individual users. In other words, they assume the attacker is able to collect input training data from the victim user before launching the PIN attack. We do not have such an assumption as the training data are obtained from all users in the experiment. In terms of accuracy, with the exception of [12], PINlogger.js generally outperforms other works with an identification rate of 74% in the first attempt. This is a significant success rate (despite the fact that the sampling rate available in-browser is much lower than that available in-app) and confirms that the described attack poses a serious threat to users' security and privacy.

5 Why does this vulnerability exist?

Although reports of side channel attacks based on in-browser access to mobile sensors via JavaScript are relatively recent, similar attacks via in-app access to mobile sensors have been known for years. Yet the problem has not been fixed. Here, we discuss the reasons why such a vulnerability has remained unfixed for a long time.

Table 4 Motion sensors supported by Android and their corresponding W3C definitions

Table 5 Position sensors supported by Android and their corresponding W3C definitions

5.1 Unmanaged sensors

In an attempt to explain multiple sensor-related in-app vulnerabilities, Xu et al. [16] argue that "the fundamental problem is that sensing is unmanaged on existing smartphone platforms". There are multiple in-app side-channel attacks that support this argument, as we discussed in the previous section. Our work shows that the problem of in-app access to "unmanaged sensors" is now spreading to in-browser access. Here we present the "unmanaged" motion and orientation sensor case, which shows how the technical mismanagement of these sensors causes serious user privacy consequences when it comes to unregulated access to such sensors via JavaScript.

W3C vs. Android

According to W3C specifications [6], the motion and orientation sensor streams are not raw sensor data, but rather high-level data which are agnostic to the underlying source of information. Common sources of information include gyroscopes, compasses, and accelerometers. In Tables 4 and 5, we present raw (low-level) and synthesized (high-level) motion sensors supported by Android [1] along with their descriptions and units, as well as their corresponding W3C definitions [6]. As can be seen from the tables, different terminologies have been used for describing the same measurements in-app and in-browser. For example, while in-app access uses the raw sensor terminology, i.e., accelerometer, gyroscope, magnetic field, the in-browser access uses synthesized sensor terminology, i.e., motion and orientation [6]. This creates confusion for users (as we will explain later) and developers (as we experienced it ourselves).
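As a small illustration of this terminology gap, the snippet below pairs a few of the raw Android (in-app) sensor type names with the W3C (in-browser) fields that carry closely related measurements. The pairing is indicative only and follows the correspondences summarized in Tables 4 and 5; it is not an exhaustive or official mapping.

```javascript
// Indicative pairing of Android (in-app) sensor names with the W3C (in-browser)
// event fields reporting closely related measurements (cf. Tables 4 and 5).
const inAppToInBrowser = {
  TYPE_ACCELEROMETER:       'devicemotion.accelerationIncludingGravity (x, y, z)',
  TYPE_LINEAR_ACCELERATION: 'devicemotion.acceleration (x, y, z)',
  TYPE_GYROSCOPE:           'devicemotion.rotationRate (alpha, beta, gamma)',
  TYPE_ROTATION_VECTOR:     'deviceorientation (alpha, beta, gamma)'
};
console.log(inAppToInBrowser);
```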
One of the W3C's specifications on mobile sensors, the "Generic Sensor API" [23], dedicates a few sections to the issue of naming sensors and to low-level and high-level sensors. It discusses how the terminology for in-browser access has so far been high-level. It also mentions that low-level use cases are increasingly popular among developers. As stated in this specification: "The distinction between high-level and low-level sensor types is somewhat arbitrary and the line between the two is often blurred", and "Because the distinction is somewhat blurry, extensions to this specification are encouraged to provide domain-specific definitions of high-level and low-level sensors for the given sensor types they are targeting". We believe that, due to the rapid increase in mobile sensors, it is necessary to come up with a consistent approach.

5.2 Unknown sensors

We believe another contributing factor is that users seem to be less familiar with the relatively newer (and less advertised) sensors such as motion and orientation, as opposed to their immediate familiarity with well-established sensors such as the camera and GPS. For example, a user has asked this question on a mobile forum: "\(\ldots \) What benefits do having a gyroscope, accelerometer, proximity sensor, digital compass, and barometer offer the user? I understand it has to do with the phone orientation but am unclear in their benefits. Any explanation would be great! Thanks!".Footnote 5 We designed and conducted user studies in this work in order to investigate to what extent these sensors and their risks are known to users.

List of mobile sensors We prepared a list of different mobile sensors by inspecting the official websites of the latest iOS and Android products, and the specifications that W3C and Android provide for developers. We also added some extra sensors, as common sensing mobile hardware not covered by these sources.

iPhone 6Footnote 6: Touch ID, Barometer, Three-axis gyro, Accelerometer, Proximity sensor, Ambient light sensor.

Nexus 6PFootnote 7: Fingerprint sensor, Accelerometer, Gyroscope, Barometer, Proximity sensor, Ambient light sensor, Hall sensor, Android Sensor hub.

Android [1]: Accelerometer, Ambient temperature, Gravity (software or hardware), Gyroscope, Light, Linear Acceleration (software or hardware), Magnetic Field, Orientation (software), Pressure, Proximity, Relative humidity, Rotation vector (software or hardware), Temperature.

W3CFootnote 8 [6]: Device orientation (software), Device motion (software), Ambient light, Proximity, Ambient temperature, Humidity, Atmospheric Pressure.

Extra sensors (common sensing hardware): Wireless technologies (WiFi, Bluetooth, NFC), Camera, Microphone, Touch screen, GPS.

Unless specified otherwise, all the listed sensors are hardware sensors. We added the last category of sensors to this list since they do sense the device's surroundings, although in different ways. However, they are counted as sensors neither in mobile product descriptions nor in technical specifications. These sensors are often categorized as OS resources [24], and hence different security policies apply to them.

5.3 User study

In this section, we aimed to observe how much knowledge mobile users have about mobile sensors. We prepared a list of sensors based on what we explained above and asked volunteer participants to rate their level of familiarity with each sensor. All of our experiments and user studies were approved by Newcastle University's ethics committee.
5.3.1 Participants

We recruited 60 participants to take part in this study via different means, including mailing lists, social networking, vocational networks, and distributing flyers in different places such as schools within the university, colleges, local shops, churches, and mosques. A sample of our call for participation and the participants' demographics are available in "Appendix 1". Among our participants, 28 self-identified as male and 32 as female, aged from 18 to 67 years, with a median age of 33.85. None of the participants were studying or working in the field of mobile sensor security. Our university participants were from multiple degree programs and levels, and the remaining participants worked in a wide range of fields. Moreover, our participants owned a wide range of mobile devices and had been using a smartphone/tablet for 5.6 years on average. Our participants were from different countries, and all could speak English. We interviewed our participants at a university office and gave each an Amazon voucher (worth £10) at the end for their participation. Details of the interview template can be found in "Appendix 2".

5.3.2 Study approach

For a list of 25 different sensors, we used a five-point scale self-rated familiarity questionnaire, as used in [25]: "I've never heard of this", "I've heard of this, but I don't know what this is", "I know what this is, but I don't know how this works", "I know generally how this works", and "I know very well how this works". The list of sensors was randomly ordered for each user to minimize bias. In addition, we needed to observe the experiments to make sure users were answering the questions based on their own knowledge, in order to avoid the effect of processed answers. Full descriptions of all studies are provided in "Appendix 2".

Fig. 5 Level of self-declared knowledge about different mobile sensors

5.3.3 Findings

Figure 5 summarizes the results of this study. This figure shows the level of self-declared knowledge about different mobile sensors. The question was: "To what extent do you know each sensor on a mobile device?". Sensors are ordered based on the aggregate percentage of participants declaring that they know generally or very well how each sensor works. This aggregate percentage is shown on the right-hand side. In the case of an equal aggregate percentage, the sensor with a larger share of "know very well how this works" responses is shown first. Our participants were generally surprised to hear about some sensors and impressed by the variety. As one may expect, newer sensors tend to be less known to users in comparison with older ones. In particular, our participants were generally not familiar with ambient sensors. Although some of our participants knew the ambient sensors in other contexts (e.g. thermostats used at home), they could not recognize them in the context of a mobile device. Low-level hardware sensors such as the accelerometer and gyroscope seem to be less known to users in comparison with high-level software ones such as motion, orientation, and rotation. We suspect that this is partly due to the fact that the high-level sensors are named after their functionalities and can be more immediately related to user activities. We also noticed that a few of the participants knew some of the low-level sensors by name but could not link them to their functionality.
For example, one of our participants, who knew almost all of the listed sensors (except the hall sensor and sensor hub), stated: "When I want to buy a mobile [phone], I do a lot of search, that is why I have heard of all of these sensors. But, I know that I do not use them (like accelerometer and gyroscope)". On the other hand, as the functionalities of mobile devices grow, vendors quite naturally turn to promoting the software capabilities of their products instead of introducing the hardware. For example, many mobile devices are recognized by users for their gesture recognition features; however, the same users might not know how these devices provide such a feature. For instance, one of the participants commented on a feature on her smartphone called "Smart Stay"Footnote 9 as follows: "I have another sensor on my phone: Smart Stay. I know how it works, but I don't know which sensors it uses".

6 User studies on risk perception of mobile sensors

In this section, we study the participants' risk perception of mobile sensors. There have been several studies on risk perception addressing different aspects of mobile technology. Some works discuss the risks that users perceive in smartphone authentication methods such as PINs and patterns [26], TouchID and Android face unlock [27], and implicit authentication [28]. Other works focus on the privacy risks of certain sensors such as GPS [29]. Raij et al. [30] show users' concerns (about the disclosure of selected behaviours and contexts) regarding a specific sensor-enabled device called AutoSense.Footnote 10 To the best of our knowledge, the research presented in this paper is the first that studies user risk perception for a comprehensive list of mobile sensors (25 in total). We limit our study to the level of perceived risk users associate with their PINs being discovered by each sensor. The reasons we chose PINs are that, first, finding one's PIN is a clear and intuitive security risk, and second, we can put the perceived risk levels in context with respect to the actual risk levels for a number of sensors, as described in Table 3.

6.1 Methodology

For this study, we divided our 60 participants into two groups and studied the two groups separately using two different approaches: within-subject and between-subject. In the within-subject study, we interviewed 30 participants for all parts of the study. In contrast, in the between-subject study, we interviewed a new group of 30 participants, and we later compared the results with those of the previous group. With these two approaches, we aim to measure differences (after informing users of the sensor descriptions) within a participant and between participants, respectively.

6.1.1 Within-subject study

In this approach, we asked 30 participants to rate the level of risk they perceive for each sensor in regard to revealing their PINs, in two phases. In phase one, we gave them the same sensor list (randomized for each user). We described a specific scenario in which a game app which has access to all these sensors is open in the background while the user is working on their online banking app, entering a PIN. We used a self-rated questionnaire with five-point scale answers, following the same terminology as used in [30]: "Not concerned", "A little concerned", "Moderately concerned", "Concerned", and "Extremely concerned". During this phase, we asked the users to rely on the information that they already had about each sensor (see "Appendix 2" for details).
In the second phase, we first provided the participants with a short description of each sensor and let them know that they could ask further questions until they felt confident that they understood the functionality of all sensors. Participants could use a dictionary on their device to look up words that were less familiar to them. Afterwards, we asked the participants to fill in another copy of the same questionnaire on risk perceptions (details in "Appendix 2"). Participants could keep the sensor description paper during this phase to refer to it in case they forgot the description of certain sensors.

6.1.2 Between-subject study

In this study, we first gave the descriptions of the sensors to our second group of 30 participants and, similar to the previous study, we gave them enough time to familiarize themselves with the sensors and to ask as many questions as they wanted until they felt confident about each sensor. Then, we presented the participants with the questionnaire on risk perceptions (details in "Appendix 2"). Similar to our previous study, participants could keep the sensor description paper while filling in this questionnaire.

Fig. 6 Users' perceived risk for different mobile sensors for the within-subject approach

Fig. 7 Users' perceived risk for different mobile sensors for the between-subject approach

6.2 Intuitive risk perception

The results of our within-subject study are presented in Fig. 6. These results present the users' perceived risk for different mobile sensors for the same group of users before (top bars) and after (bottom bars) being presented with the descriptions of the sensors. The results of our between-subject study are presented in Fig. 7. Note that this figure represents the risk perception of group one of our participants before knowing the sensor descriptions, and group two of our participants after knowing the sensor descriptions. For both figures, the question was: "To what extent are you concerned about each sensor's risk to your PIN?". Sensors are ordered based on the aggregate percentage of participants declaring that they are either concerned or extremely concerned about each sensor before seeing the descriptions. This aggregate percentage is the first value presented on the right-hand side. In the case of an equal aggregate percentage, the sensor with a larger share of "extremely concerned" responses is shown first. We make the following observations from the results of the experiment.

Touch Screen Although our participants rated the touch screen as one of the most risky sensors in relation to a PIN discovery scenario, still about half of our participants were either moderately concerned, a little concerned, or not concerned at all. Through our conversations with the users, we received some interesting comments, e.g. "Why any of these sensors should be dangerous on an app while I have officially installed it from a legal place such as Google Play?", and "As long as the app with these sensors is in the background, I have no concern at all". It seems that a more general risk model in relation to mobile devices is affecting the users' perception of the presented PIN discovery threat. This could be a topic of research on its own and is out of the scope of this paper.

Communicational Sensors One category of sensors which users are relatively more concerned about includes WiFi, Bluetooth, and NFC.
For example, one of the participants commented: "I am not concerned with physical [motion, orientation, accelerometer, etc.]/ environmental [light, pressure, etc.] sensors, but network ones. Hackers might be able to transfer my information and PIN". That these sensors appear more risky to users is understandable, since we asked them to what extent they were concerned about each sensor in regard to PIN discovery.

Identity-Related Sensors Another category which has been rated more risky than others contains those sensors which can capture something related to the user's identity, i.e. fingerprint, TouchID, GPS, camera, and microphone. Despite the fact that we described a PIN-related scenario, our participants were still concerned about these sensors. This was also pointed out by a few participants in their comments. For example, one user stated: "\(\ldots \), however, GPS might reveal the location along with the user input PIN that has a risk to reveal who (and where) that PIN belongs to. Also the fingerprint/TouchID might recognize and record the biometrics with the user's PIN". Some of these sensors, such as GPS, fingerprint, and TouchID, however, cannot cause the disclosure of PINs on their own. Hence, the concern does not entirely match the actual risk. Similar to the discussion on the touch screen, we believe that a more general risk model of mobile technology leads users to perceive risk in specific threats such as the one we presented to them.

Environmental Sensors The level of concern about ambient sensors (humidity, light, pressure, and temperature) is generally low and stays low after the users are provided with the descriptions of the sensors (see Fig. 6). In many cases, our users expressed that they were concerned about these sensors simply because they did not know them: "[now that I know these sensors,] I am quite certain that movement/environmental sensors would not affect the security of personal id/passwords, etc.". In fact, researchers have reported that it is possible to infer the user's PIN using ambient light sensor data [15], although, to our knowledge, exploits of other environmental sensors have not been reported in the literature.

Movement Sensors For the sensors related to the movement and position of the phone (accelerometer, gyroscope, motion, orientation, and rotation), the users display varying levels of risk perception. In some cases, they are slightly more concerned, but in others they are less concerned once they know the functionality. Some of our users stated that since they did not know these sensors, they were not concerned at all, but others were more concerned when they were faced with new sensors. Overall, knowing or not knowing these sensors did not affect the perceived risk level significantly, and they were rated generally low in both cases.

Motion and Orientation Sensors The sensors which we used in our attack, namely orientation, rotation, and motion, have generally not been scored high for their risk in revealing PINs. Users do not seem to be able to relate the risk of these sensors to the disclosure of their PINs, despite the fact that they seem to have an average general understanding of how they work. For hardware sensors such as the accelerometer and gyroscope, the risk perception seems to be even lower. A few comments include: "In my everyday life, I don't even think about these [movement] sensors and their security.
There is nothing on the news about their risk", and "I have never been thinking about these [movement] sensors and I have not heard about their risk". On the other hand, some of the participants expressed more concern about sensors that they were familiar with, as one wrote: "You always hear about privacy stuff for example on Facebook when you put your location or pictures". Similarly, it seems that having a previous risk model is a factor that might explain the correlation between the users' knowledge and their perceived risk.

7.1 General knowledge versus risk perception

Figures 5 and 6 suggest that there may be a correlation between the relative level of knowledge users have about sensors and the relative level of risk they perceive from them. We confirm this observation using Spearman's rank-order correlation measure. As shown in Table 6, we present the Spearman's correlation between the comparative knowledge of and the perceived risk about different sensors for different participant data sets: group one before being presented with the sensor descriptions, group one after the sensor descriptions, group two after the sensor descriptions, and finally groups one and two together after being presented with the sensor descriptions. For each data set, the sensors are separately ranked based on the level of user familiarity with them, similar to Fig. 5. Accordingly, the levels of concern are ranked too. The Spearman's correlation equation has been applied to these ranks for each group separately. For example, the Spearman's correlation between the comparative knowledge (median: "I know what this is, but I don't know how this works", IQRFootnote 11: "I've never heard of this"–"I know very well how this works") and the perceived risk about different sensors for group one (median: "Not concerned", IQR: "Not concerned"–"A little concerned") before knowing the sensor descriptions is \(r = 0.61\) (\(p<0.05\)). As can be seen, these results support the conclusion that the more users know about these sensors, the more concern they express about the risk of the sensors revealing PINs. We acknowledge that other methods of ranking the results, e.g. using the median, produce slightly different final rankings. However, given the high confidence level of the above test, we expect the correlation to be supported if other methods of ranking are used.

Table 6 Spearman's correlation between the comparative knowledge and the perceived risk about different sensors

Assuming that customer demand drives better security designs, the above correlation may explain why sensors that are newer to the market have not been considered as OS resources and consequently have not been subject to similarly strict access control policies.

7.2 Perceived risk versus the actual risk

We are specifically interested in the users' relative risk perception of sensors in revealing their PINs in comparison with the actual relative risk level of these sensors. We list the results reported in the literature in Table 3 for the following sensors: light, camera, microphone, gyroscope, motion, and orientation. Figure 6 shows that users generally expressed more concern about sensors such as the camera and microphone than about the accelerometer, gyroscope, orientation, and motion. This does not match the actual risk levels, since the latter sensors allow PIN recovery with higher accuracy, as we have shown in Sect. 4. When asked after filling in the questionnaire, most participants could not come up with realistic attack scenarios using the camera and microphone.
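For readers unfamiliar with the statistic used in Table 6, the sketch below computes Spearman's rank-order correlation for two paired score vectors by ranking them (using average ranks for ties) and applying Pearson's formula to the ranks. The input values at the end are hypothetical and are not our questionnaire data.

```javascript
// Sketch: Spearman's rank-order correlation of two paired samples.
function ranks(values) {
  const idx = values.map((v, i) => [v, i]).sort((a, b) => a[0] - b[0]);
  const r = new Array(values.length);
  for (let i = 0; i < idx.length; ) {
    let j = i;
    while (j + 1 < idx.length && idx[j + 1][0] === idx[i][0]) j++;
    const avg = (i + j) / 2 + 1;               // average rank for ties (1-based)
    for (let k = i; k <= j; k++) r[idx[k][1]] = avg;
    i = j + 1;
  }
  return r;
}

function pearson(x, y) {
  const n = x.length;
  const mean = (a) => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(x), my = mean(y);
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (x[i] - mx) * (y[i] - my);
    sxx += (x[i] - mx) ** 2;
    syy += (y[i] - my) ** 2;
  }
  return sxy / Math.sqrt(sxx * syy);
}

// Spearman's rho is Pearson's r computed on the ranks.
const spearman = (x, y) => pearson(ranks(x), ranks(y));

// Hypothetical per-sensor scores (1 = lowest, 5 = highest); illustration only.
console.log(spearman([5, 4, 4, 2, 1], [5, 5, 3, 2, 2]));
```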
For the microphone, some users thought they might say the PIN out loud. For the camera, a few of our participants thought face recognition might be used to recover the PIN; hence, they rated the camera's risk to their PINs high. One user thought the camera might capture the reflection of the entered PIN in her glasses. Among our participants, one mentioned the possibility of motion, orientation, accelerometer, and gyroscope being able to record the shakes of the mobile phone while entering a PIN after seeing the sensor descriptions, but expressed doubt: "I feel those positional sensors might be able to reveal something about my activities, for example if I open my banking app or enter my PIN. But it is extremely hard for different users, and when working with different hands and positions". This participant expressed only "a little concern" about them, stating that: "\(\ldots \), and by little concern, I mean extremely little concern". One of our participants was completely familiar with these attacks and in fact had read some related papers. This user was "extremely concerned". Other users who rated these sensors as risky said they were generally concerned about different sensors. One commented: "I can not think of any particular situation in which these sensors can steal my PIN, but the hackers can do everything these days".

7.3 Possible solutions

In this section, we discuss the current academic and industrial countermeasures to mitigate sensor-based attacks.

7.3.1 Academic approach

Different solutions to address the in-app access attacks have been suggested in the literature, e.g. restricting the sensor to one app, reducing the sampling rate, temporarily pausing the sensor during sensitive entries such as keyboard input, rearranging the keyboard for password entry, asking for explicit permission from the user, ranking apps based on their similarities to malware, and obfuscating anomalies in the sensor data [7, 10, 11, 12, 13, 14, 15, 16, 31, 32]. However, after many years of research demonstrating the serious security risks of sensors such as the accelerometer and gyroscope, none of the major mobile platforms have revised their in-app access policy. We believe that the risks of unmanaged sensors on mobile phones, especially through JavaScript code, are not yet well understood. More specifically, many OS-/app-level solutions, such as asking for permissions at installation time or malware detection approaches, would not work in the context of a web attack. In our previous work [4], we suggested applying the same security policies as those for the camera, microphone, and GPS to the motion and orientation sensors. Our suggestion was to set up a multi-layer access control system at the OS and browser levels. However, the usability and effectiveness of this solution are arguable. First, asking the user for too many permissions for different sensors might not be usable. Furthermore, for some basic use cases, such as gesture recognition to clear a web form or adjusting the screen from portrait to landscape, it might not make sense to ask for user permission for every website. Second, with the increase in the number of sensors accessible through mobile browsers, this approach might not be effective due to the classic problem of users sidestepping security procedures when they are too much of a burden [33]. As stated by one of our participants: "I don't mind these sensors being risky anyway. I don't even review the permission list. I have no other choice to be able to use the app". Moreover, as we have shown in Sect.
5, users generally do not understand the implications of these sensors for discovering their PINs, even when they know how the sensors work. Hence, such an approach might not be effective in practice.

7.3.2 Industrial approach

W3C Device Orientation Event Specification There is no Security and Privacy section in the latest official W3C Working Draft Document on Device Orientation Event [6]. However, at the time of writing this paper, a new version of the W3C specification is being drafted, which includes a new section on security and privacy issues related to mobile sensors,Footnote 12 as suggested by us in [4]. The authors working on the revision of the W3C specification point out the problem of fingerprinting mobile devices [31] and touch action recovery [4] through these sensors, and suggest the following mitigations:

"Do not fire events when the page where they were registered on is not visible or has been backgrounded."

"Fire events only on the top-level browsing context or same-origin nested iframes."

"Limit the frequency of events (typically 60 Hz seems to be sufficient)."

We believe that these measures may be too restrictive and block useful functionality. For example, imagine a user consciously running a web program in the browser to monitor his daily physical activities such as walking and running. This program needs to continue to have access to the motion and orientation sensor data when the user is working in another tab or minimizes the browser. One might argue that such a program should be available as an app instead, and hence the use case is not valid. However, it is expected that the boundary between installed apps and embedded JavaScript programs in the browser will gradually diminish [34].

Mobile browsers As we showed in [4], browsers and mobile operating systems behave differently in providing access to sensors. Some allow access only to the active webpage and any embedded iframes (although with different origins); some allow access to other tabs, when the browser is minimized, or even when the phone is locked. Hence, there is no consistent approach across all browsers and mobile platforms. Reducing the sampling frequency has, at the moment, been applied by all well-known browsers [4]. For instance, Chrome reduced the sensor readings from 200 to 60 Hz due to security concerns.Footnote 13 However, our attack shows that security risks are still present even at lower frequencies. iOS and Android limit the maximum frequency rate of some sensors such as the gyroscope to 100 and 200 Hz, respectively. It is expected that these frequencies will increase on mobile OSs in the near future, and in-browser access is no exception. In fact, current mobile gyroscopes support much higher sampling frequencies, e.g. up to 800 Hz by STMicroelectronics (on Apple products) and up to 8000 Hz by InvenSense (on the Google Nexus range) [10]. With higher frequencies available, attacks such as ours will perform better in the future if adequate security countermeasures are not applied.
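As a rough, page-level illustration of the first and third W3C mitigations quoted above (visibility gating and a 60 Hz cap), consider the sketch below. Real enforcement would of course have to happen inside the browser engine rather than in the page, so this is only meant to show the intended behaviour.

```javascript
// Sketch: gate motion events on page visibility and throttle them to ~60 Hz.
const MIN_INTERVAL_MS = 1000 / 60;   // ~16.7 ms between delivered readings
let lastDelivery = 0;

function guardedHandler(e) {
  // Mitigation 1: ignore readings while the page is hidden or backgrounded.
  if (document.hidden) return;

  // Mitigation 3: rate-limit the stream to at most 60 readings per second.
  const now = performance.now();
  if (now - lastDelivery < MIN_INTERVAL_MS) return;
  lastDelivery = now;

  processReading(e);   // application logic, e.g. gesture recognition
}

function processReading(e) {
  /* ... */
}

window.addEventListener('devicemotion', guardedHandler);
```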
Following our report of the issue to Mozilla, starting from version 46 (released in April 2016), Firefox restricts JavaScript access to motion and orientation sensors to only top-level documents and same-origin iframes.Footnote 14 In the latest Apple Security Updates for iOS 9.3 (released in March 2016), Safari took a similar countermeasure by "suspending the availability of this [motion and orientation] data when the web view is hidden".Footnote 15 However, we believe the implemented countermeasures should only serve as a temporary fix rather than the ultimate solution. In particular, we are concerned that they have the drawback of prohibiting potentially useful web applications in the future. For example, a web page running a fitness program has a legitimate reason to access the motion sensors even when the web page view is hidden. However, this is no longer possible in the new versions of Firefox and Safari. Our concern is confirmed by members of the Google Chromium team,Footnote 16 who also believe that the issue remains unresolved.

7.4 Biometric sensors

As we explained in Sect. 5.2, there exist around 25 different sensors on mobile platforms. They include communicational sensors such as WiFi, environmental sensors such as ambient light, movement sensors such as motion and orientation, and biometric sensors such as fingerprint. Here we specifically discuss biometric sensors since they are closely related to an individual's identity. After decades of work on passwords, it seems that people still cannot remember strong passwords. Biometrics have been offered to users as an effective authentication mechanism. Examples include the TouchID and Fingerprint sensors on iOS and Android devices, respectively. But biometric-based authentication is not limited to mobile devices. For example, when paying with an iPhone contactlessly, you need to rest your finger on TouchID and hold your iPhone in close proximity to the contactless reader until the task is finished. Furthermore, since many banks have already moved their services to mobile platforms, they benefit from the biometric sensors available on mobile devices, say for implementing two-factor authentication. As an example, in addition to user names and passwords, HSBC authenticates its customers through TouchIDFootnote 17 and voice ID.Footnote 18 Another example is the Smile to Pay facial recognition app,Footnote 19 where deep learning is applied to overcome the difficulty of face authentication when the face photograph is not in the normal form. Recently, Yahoo has also introduced its ear-based smartphone identification system.Footnote 20 On the other hand, our findings show that mobile users are relatively concerned about identity-related or biometric sensors. However, as we discussed, these sensors are not necessarily the most risky ones to PINs in practice. As we mentioned earlier, we believe that this might be the influence of a more general risk model that users have of mobile technology. We believe that this is an important research topic and requires further study.

7.5 Limitations

We consider this work a pilot study that explores user risk perception of a comprehensive list of mobile sensors. We envisage the following future work to address its limitations and expand this work:

More Participants We performed our user studies on a set of users who were recruited from a wide range of backgrounds. Yet the number of participants is limited. A larger set of participants would improve the confidence in the results.
With a larger and more diverse set of participants, we could also study the effect of demographic factors on perceived risk.

Other Risks We studied the perceived risk to PINs as a serious and immediate risk to users' security. The study can be expanded by studying users' risk perception of other issues, such as attackers discovering phone call timing, physical activities, or shopping habits.

Other Types of Access When interviewing our participants, we presented them with a scenario involving a game app which is installed on their smartphone. This only covers in-app access to sensors. However, people might express different risk levels for other types of access, e.g. in-browser access. This needs further investigation.

Issues with Training Users We decided to provide our participants with a short description of each sensor's functionality (details in "Appendix 2", part 3). Furthermore, the participants were given the chance to ask as many questions as they wanted to fully understand the functionality of each sensor. This might not be the most effective way to inform users about sensors, since some descriptions might seem too technical (and hence not fully understandable) to some users. How to inform users in an effective way is a complex research topic which can be explored in the future.

In this paper, we introduced PINlogger.js, a web-based program which reveals users' PINs by recording the mobile device's orientation and motion sensor data through JavaScript code. Access to mobile sensor data via JavaScript is limited to only a few sensors at the moment. This will probably expand in the future, especially with the rapid development of sensor-enabled devices in the Internet of things (IoT). We also showed that users do not generally perceive a high risk of such sensors being able to steal their PINs. Furthermore, we showed that people are not generally knowledgeable about these sensors on mobile devices. Accordingly, we discussed the complexity of designing a usable and secure solution to prevent the proposed attacks. Hence, designing a general mechanism for secure and usable sensor data management remains a crucial open problem for future research. Many of the suggested academic solutions either have not been applied by industry as practical solutions, or have failed. Given the results of our user studies, designing a practical solution for this problem does not seem to be straightforward. A combination of different approaches might help researchers devise a usable and secure solution. Having control over granting access before opening a website and while working with it, in combination with a smart notification feature in the browser, would probably achieve a balance between security and usability. Users should also have control over reviewing, updating, and deleting these data if they are stored by the website or shared with a third party afterwards. Solutions such as TaintDroid [35], an information-flow tracking system for monitoring sources of sensitive data on a mobile device, which has been applied to GPS data in [29], could be helpful. After all, it seems that an extensive study is required towards designing a permission framework which is usable and secure at the same time. Such research is a very important usable security and privacy topic to be explored further in the future.

http://w3.org/TR/#tr_Javascript_APIs.

http://github.com/maryammjd/Reading-sensor-data-for-fifty-4digit-PINs.

http://play.google.com/store/apps/details?id=com.solirify.mathgame.

http://datagenetics.com/blog/september32012/.
http://forums.androidcentral.com/verizon-galaxy-nexus/171482-barometer-accelerometer-how-they-useful.html.

http://apple.com/uk/iphone-6/specs/.

http://store.google.com/product/nexus_6p.

http://w3.org/2009/dap/.

http://samsung.com/us/support/answer/ANS00035658/234302/SCH-R950TSAUSC.

http://sites.google.com/site/autosenseproject/.

Interquartile range.

http://w3c.github.io/deviceorientation/spec-source-orientation.html.

http://bugs.chromium.org/p/chromium/issues/detail?id=421691.

http://mozilla.org/en-US/security/advisories/mfsa2016-43/.

http://support.apple.com/en-gb/HT206166.

http://us.hsbc.com/1/2/home/personal-banking/pib/mobile/touchid.

http://hsbc.co.uk/1/2/contact-and-support/banking-made-easy/voice-id.

http://brandchannel.com/2015/03/16/alibaba-demos-smile-to-pay-facial-recognition-app/.

http://bbc.co.uk/news/technology-32498222.

Google. Location and sensors APIs. http://developer.android.com/guide/topics/sensors/index.html

Jin, X., Hu, X., Ying, K., Du, W., Yin, H., Peri, G.N.: Code injection attacks on HTML5-based mobile apps: characterization, detection and mitigation. In: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS '14, pp. 66–77. ACM, New York (2014)

Kim, H., Lee, S., Kim, J.: Exploring and mitigating privacy threats of HTML5 geolocation API. In: Annual Computer Security Applications Conference (ACSAC). New Orleans (2014)

Mehrnezhad, M., Toreini, E., Shahandashti, S.F., Hao, F.: Touchsignatures: identification of user touch actions and PINs based on mobile sensor data via javascript. J. Inform. Secur. Appl. 26, 23–38 (2016)

Mehrnezhad, M., Toreini, E., Shahandashti, S.F., Hao, F.: Touchsignatures: identification of user touch actions based on mobile sensors via javascript (extended abstract). In: Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security (ASIACCS 2015). ACM (2015)

W3C Working Draft Document on Device Orientation Event. http://www.w3.org/TR/orientation-event/

Aviv, A.J., Sapp, B., Blaze, M., Smith, J.M.: Practicality of accelerometer side channels on smartphones. In: Proceedings of the 28th Annual Computer Security Applications Conference, pp. 41–50. ACM (2012)

Cai, L., Chen, H.: Touchlogger: Inferring keystrokes on touch screen from smartphone motion. In: HotSec (2011)

Cai, L., Chen, H.: On the practicality of motion based keystroke inference attack. In: Katzenbeisser, S., Weippl, E., Camp, L., Volkamer, M., Reiter, M., Zhang, X. (eds.) Trust and Trustworthy Computing. Lecture Notes in Computer Science, vol. 7344, pp. 273–290. Springer, Berlin (2012)

Michalevsky, Y., Boneh, D., Nakibly, G.: Gyrophone: recognizing speech from gyroscope signals. In: Proceedings of the 23rd USENIX Security Symposium (2014)

Miluzzo, E., Varshavsky, A., Balakrishnan, S., Choudhury, R.R.: Tapprints: your finger taps have fingerprints. In: Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, pp. 323–336. ACM (2012)

Narain, S., Sanatinia, A., Noubir, G.: Single-stroke language-agnostic keylogging using stereo-microphones and domain specific machine learning. In: Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless and Mobile Networks, WiSec '14, pp. 201–212. ACM, New York (2014)

Owusu, E., Han, J., Das, S., Perrig, A., Zhang, J.: Accessory: password inference using accelerometers on smartphones. In: Proceedings of the Twelfth Workshop on Mobile Computing Systems and Applications, p. 9. ACM (2012)
Simon, L., Anderson, R.: Pin skimmer: inferring pins through the camera and microphone. In: Proceedings of the Third ACM Workshop on Security and Privacy in Smartphones and Mobile Devices, SPSM '13, pp. 67–78. ACM, New York (2013)

Spreitzer, R.: Pin skimming: exploiting the ambient-light sensor in mobile devices. In: Proceedings of the 4th ACM Workshop on Security and Privacy in Smartphones and Mobile Devices, SPSM '14, pp. 51–62. ACM, New York (2014)

Xu, Z., Bai, K., Zhu, S.: Taplogger: inferring user inputs on smartphone touchscreens using on-board motion sensors. In: Proceedings of the Fifth ACM Conference on Security and Privacy in Wireless and Mobile Networks, pp. 113–124. ACM (2012)

Bichler, D., Stromberg, G., Huemer, M., Löw, M.: Key generation based on acceleration data of shaking processes. In: Krumm, J., Abowd, G., Seneviratne, A., Strang, T. (eds.) UbiComp 2007: Ubiquitous Computing. Lecture Notes in Computer Science, vol. 4717, pp. 304–317. Springer, Berlin (2007)

Møller, M.F.: A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 6(4), 525–533 (1993)

Bonneau, J., Preibusch, S., Anderson, R.: A birthday present every eleven wallets? The security of customer-chosen banking pins. In: Keromytis, A. (ed.) Financial Cryptography and Data Security. Lecture Notes in Computer Science, vol. 7397, pp. 25–40. Springer, Berlin (2012)

Al-Haiqi, A., Ismail, M., Nordin, R.: On the best sensor for keystrokes inference attack on android. Proced. Technol. 11, 989–995 (2013). (4th International Conference on Electrical Engineering and Informatics, ICEEI 2013)

Li, M., Meng, Y., Liu, J., Zhu, H., Liang, X., Liu, Y., Ruan, N.: When CSI meets public WiFi: inferring your mobile phone password via WiFi signals. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, pp. 1068–1079. ACM, New York (2016)

Wang, H., Lai, T.T.-T., Roy Choudhury, R.: Mole: motion leaks through smartwatch sensors. In: Proceedings of the 21st Annual International Conference on Mobile Computing and Networking, MobiCom '15, pp. 155–166. ACM, New York (2015)

W3C Editor's Draft on Generic Sensor API. http://w3c.github.io/sensors/

Watanabe, T., Akiyama, M., Sakai, T., Mori, T.: Understanding the inconsistencies between text descriptions and the use of privacy-sensitive resources of mobile apps. In: Eleventh Symposium On Usable Privacy and Security (SOUPS 2015), pp. 241–255. USENIX Association, Ottawa (2015)

Kang, R., Dabbish, L., Fruchter, N., Kiesler, S.: "my data just goes everywhere:" user mental models of the internet and implications for privacy and security. In: Eleventh Symposium on Usable Privacy and Security (SOUPS 2015), pp. 39–52. USENIX Association, Ottawa (2015)

Harbach, M., von Zezschwitz, E., Fichtner, A., Luca, A.D., Smith, M.: It's a hard lock life: a field study of smartphone (un)locking behavior and risk perception. In: Symposium On Usable Privacy and Security (SOUPS 2014), pp. 213–230. USENIX Association, Menlo Park (2014)

De Luca, A., Hang, A., von Zezschwitz, E., Hussmann, H.: I feel like i'm taking selfies all day!: towards understanding biometric authentication on smartphones. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, pp. 1411–1414. ACM, New York (2015)

Khan, H., Hengartner, U., Vogel, D.: Usability and security perceptions of implicit authentication: convenient, secure, sometimes annoying. In: Eleventh Symposium on Usable Privacy and Security (SOUPS 2015), pp. 225–239. USENIX Association, Ottawa (2015)
Balebako, R., Jung, J., Lu, W., Cranor, L.F., Nguyen, C.: "Little brothers watching you:" raising awareness of data leaks on smartphones. In: Symposium on Usable Privacy and Security. ACM (2013)

Raij, A., Ghosh, A., Kumar, S., Srivastava, M.: Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '11, pp. 11–20. ACM, New York (2011)

Bojinov, H., Michalevsky, Y., Nakibly, G., Boneh, D.: Mobile device identification via sensor fingerprinting. CoRR, abs/1408.1416 (2014)

Das, A., Borisov, N., Caesar, M.: Exploring ways to mitigate sensor-based smartphone fingerprinting. CoRR, abs/1503.01874 (2015)

Bravo-Lillo, C., Komanduri, S., Cranor, L.F., Reeder, R.W., Sleeper, M., Downs, J., Schechter, S.: Your attention please: Designing security-decision UIs to make genuine risks harder to ignore. In: Proceedings of the Ninth Symposium on Usable Privacy and Security, SOUPS '13, pp. 6:1–6:12. ACM, New York (2013)

Charland, A., Leroux, B.: Mobile application development: Web vs. native. Commun. ACM 54(5), 49–53 (2011)

Enck, W., Gilbert, P., Han, S., Tendulkar, V., Chun, B.-G., Cox, L.P., Jung, J., McDaniel, P., Sheth, A.N.: Taintdroid: an information-flow tracking system for realtime privacy monitoring on smartphones. In: Transactions on Computer Systems (2014)

We would like to thank Professor Angela Sasse for her inspirational speech at the annual Privacy Enhancing Technologies Symposium 2016 (PETS2016), which has influenced some parts of this paper. We would like to thank Dr. Kovila Coopamootoo from Newcastle University for her constructive feedback on designing the user studies of this paper. We also would like to thank the voluntary participants who contributed to our data collection and user studies. The last three authors are supported by ERC Starting Grant No. 306994. This paper is based on: "Stealing PINs via Mobile Sensors: Actual Risk versus User Perception", by Maryam Mehrnezhad, Ehsan Toreini, Siamak F. Shahandashti, Feng Hao, which appeared in the Proceedings of EuroUSEC, 2016.

School of Computing Science, Newcastle University, Newcastle upon Tyne, UK: Maryam Mehrnezhad, Ehsan Toreini, Siamak F. Shahandashti & Feng Hao. Correspondence to Maryam Mehrnezhad.

Appendix 1: Call for participation flyer and participant demographics

In this section, we present the participants' demographics in detail and the flyer that we used for the call for participation for our user studies (Fig. 8; Table 7).

Fig. 8 Sample of the flyer distributed for participant recruitment

Table 7 Participants' self-reported demographics in the two studies; (y) indicates the years of owning a smartphone

Appendix 2: Interview script

Hi. Thanks very much for contributing to our study. In this interview, we will ask you to fill in a few questionnaires about mobile sensors such as GPS, camera, light, motion, and orientation. You are encouraged to think out loud as you go through, and please feel free to provide any comments during the interview. There is no right or wrong answer, and our purpose is to evaluate the mobile sensors, not you. Everything about this interview is anonymous. Please provide some information about yourself in Table 8.

1.1 Part one

A list of multiple mobile sensors is presented below. To what extent do you know each sensor on a mobile device?
Please rate them in the table (Table 9 was used).

Table 8 Demography

Table 9 This form was used for part one

1.2 Part two

Imagine that you own a smartphone which is equipped with all these sensors. Consider this scenario: you have opened a game app which can have access to all mobile sensors. You leave the game app open in the background, and open your banking app which requires you to enter your PIN. Do you think any of these sensors can help the game app discover your entered PIN? To what extent are you concerned about each sensor's risk to your PIN? Please rate them in the table (Table 10 was used). In this section, please only rely on the knowledge you already have about the sensors, and if you do not know some of them, describe your feeling of security about them.

Table 10 This form was used for parts two and three

1.3 Part three

Let us explain each sensor here:

GPS: identifies the real-world geographic location.

Camera, Microphone: capture pictures/videos and voice, respectively.

Fingerprint, TouchID: scans the fingerprint.

Touch Screen: enables the user to interact directly with the display by physically touching it.

WiFi: is a wireless technology that allows the device to connect to a network.

Bluetooth: is a wireless technology for exchanging data over short distances.

Near-Field Communication (NFC): is a wireless technology for exchanging data over shorter distances (less than 10 cm) for purposes such as contactless payment.

Proximity: measures the distance of objects from the touch screen.

Ambient Light: measures the light level in the environment of the device.

Ambient Pressure (Barometer), Ambient Humidity, and Ambient Temperature: measure the air pressure, humidity, and temperature in the environment of the device, respectively.

Device Temperature: measures the temperature of the device.

Gravity: measures the force of gravity.

Magnetic Field: reports the ambient magnetic field intensity around the device.

Hall sensor: produces voltage based on the magnetic field.

Accelerometer: measures the acceleration of the device movement or vibration.

Rotation: reports how much and in what direction the device is rotated.

Gyroscope: estimates the rotation rate of the device.

Motion: measures the acceleration and the rotation of the device.

Orientation: reports the physical angle that the device is held in.

Sensor Hub: is an activity recognition sensor and its purpose is to monitor the device's movement.

Please feel free to ask us about any of these sensors for more information. Now that you have more knowledge about the sensors, let us describe the same scenario here again. Imagine that you own a smartphone which is equipped with all these sensors. You have opened a game app which can have access to all mobile sensors. You leave the game app open in the background, and open your banking app which requires you to enter your PIN. Do you think any of these sensors can help the game app to discover your entered PIN? To what extent are you concerned about each sensor's risk to your PIN? Please rate them in the table (Table 10 was used). In this part, please make sure that you know the functionality of all the sensors. If you are unsure, please have another look at the descriptions, or ask us about them. Thanks very much for taking part in this study. Please leave any extra comment here. An Amazon voucher and a business card are in this envelope. Please contact us if you have any questions about this interview, or are interested in the results of this study.
Mehrnezhad, M., Toreini, E., Shahandashti, S.F. et al.: Stealing PINs via mobile sensors: actual risk versus user perception. Int. J. Inf. Secur. 17, 291–313 (2018). https://doi.org/10.1007/s10207-017-0369-x

Keywords: Mobile sensors, JavaScript attack, Risk perception, User study
Thermophysiological comfort of sonochemically synthesized nano TiO2 coated woven fabrics

Muhammad Tayyab Noman, Michal Petru, Nesrine Amor, Tao Yang & Tariq Mansoor

Subjects: Materials for energy and catalysis; Nanoscale materials; Nanoscience and technology; Structural materials

This work investigates thermophysiological comfort properties of sonochemically synthesized nano TiO2 coated cotton and polyester woven fabrics. The obtained results were analysed on a heat and mass transfer basis. A moisture management tester and Alambeta were utilised for moisture transportation and thermal evaluation. This study precisely investigates the effects of sonication on the surface roughness of nano TiO2 coated and uncoated samples. An ultrasonic acoustic method was applied to imbibe nano TiO2 on the fabric samples. Surface topography, morphology and the existence of nano TiO2 on the investigated samples were analysed by scanning electron microscopy and inductively coupled plasma atomic emission spectroscopy. In addition, standard test methods were applied to estimate physical and thermophysiological comfort properties, i.e. thermal resistance, thermal diffusivity, heat flow, wetting time and accumulative one-way transport index, of the uncoated and nano TiO2 coated samples.

Thermophysiological comfort is one of the most demanding and desirable features of any textile and is analysed in terms of heat and mass transfer. Thermophysiological properties help consumers to choose suitable garments for cold and hot weather. Clothing comfort is generally divided into various categories; however, thermophysiological comfort and sensorial comfort are the most important categories from an experimental point of view. This work is the extension of a previously performed study on thermophysiological comfort evaluation. In the previous study, the thermophysiological properties, i.e. thermal conductivity, thermal absorptivity, relative water vapour permeability, evaporative resistance, air permeability and overall moisture management capacity, of different fabrics (cotton and polyester) were analysed and discussed in detail [1].
In this study, the thermophysiological circle is extended to thermal resistance, thermal diffusivity, maximum heat flow, wetting time and accumulative one-way transport index of nano TiO2 coated cotton and polyester woven fabrics.

In recent years, many researchers have worked on the thermophysiological properties of different textiles and reported interesting results. Dalbasi and Kayseri studied thermophysiological properties of multicellular linen fabrics under various enzymatic treatments and reported that thermal conductivity is affected significantly by enzymatic treatments. In addition, enzyme-treated linen shirts showed the maximum value of thermal resistance [2]. Azeem et al. studied thermophysiological properties of multifilament polyester and reported a significantly higher value of thermal conductivity for nanofilament polyester than for Coolmax and cotton. Moreover, their results showed low thermal absorptivity for Coolmax (warm feeling) and the highest thermal absorptivity value for nanofilament polyester (cool feeling) [3]. Arumugam et al. studied comfort properties of 3D spacer fabrics and reported that fabric thickness is an extremely important variable for thermal conductivity and water vapour permeability. The results indicated that water vapour permeability is a function of porosity and thickness [4]. In another work, Mansoor et al. proposed a novel method to predict the thermal resistance of socks in dry and wet states. They used various fibres in different combinations to develop plain socks and compared their results with a thermal foot model. They reported that thermal conductivity and filling coefficient are significantly dependent on moisture content [5]. There are many other reported works on thermophysiological comfort properties of non-coated fabrics [6,7,8,9,10].

Titanium dioxide (TiO2) is a versatile material commonly used for photocatalytic applications in the textile sector [11,12,13,14,15]. The applications of TiO2 vary from sunscreens to paints and from waste water treatment to self-cleaning performance [16]. Many researchers have reported successful coating of TiO2 on textiles for photocatalytic applications [17,18,19,20,21,22,23]. Sonication (utilization of ultrasonic energy) has shown its potential as a facile, economical and eco-friendly method for the fabrication of nanostructures and their deposition on textiles [24]. Acoustic cavitation is the fundamental mechanism of sonication. Ultrasonic energy induces physicochemical changes in liquids and generates an enormous number of unstable bubbles that violently collapse with one another due to pressure differences, generating an excessive amount of heat with local increases in pressure and temperature of up to 20 MPa and 5000 K respectively, and a cooling rate of 10¹⁰ K s⁻¹. In a previous study, a successful synthesis of nano TiO2 on textiles by sonication has been achieved [25].

The literature cited in the above discussion indicates that very limited information is available regarding the thermophysiological properties of nano TiO2 coated textiles. To the best of our knowledge, no available literature explains the relationship between thermophysiological properties and sonochemically synthesized nano TiO2 coated woven fabrics. Therefore, we propose a schematic study that explicitly describes and elaborates the effects of sonication and nano TiO2 on the thermophysiological properties of cotton and polyester woven fabrics. Furthermore, it is believed that this work is unique and extendable in its scope to other types of materials, i.e. CuO and ZnO, and other textile substrates.
Materials

100% pure cotton and polyester fabrics were used throughout the study. Titanium tetrachloride (TiCl4) and isopropanol ((CH3)2CHOH) were received from Sigma Aldrich and used without any further processing.

Physical testing

The fabrics were first conditioned for 24 h at standard conditions, i.e. temperature 25 ± 2 °C and relative humidity 65 ± 2%, before physical testing in accordance with standard test method ASTM D 1776-16. If a sample contains low or high humidity before testing, this procedure neutralizes the moisture until equilibrium is achieved. Fabric mass, i.e. grams per square metre (GSM), was determined by standard test method ASTM D 3776. The conditioned samples were placed in a holder and the total weight was determined for each sample. GSM was calculated by subtracting the holder weight from the total weight. Fabric thickness was measured by the ASTM D 1777-96 (2019) standard method with an SDL thickness meter at a pressure of 100 Pa. Samples were placed on the base of the thickness gauge and the displacement between the presser foot and the base was taken as the sample thickness. The warp and weft yarns were made up of the same materials, i.e. cotton and polyester. The constructional parameters of the fabric samples used are presented in detail in Table 1.

Table 1 Constructional parameters of used fabrics in detail

Simultaneous synthesis and coating of nano TiO2 on fabrics

The simultaneous synthesis and coating of nano TiO2 on both fabrics were achieved by a method reported in a previous study [12]. In a typical process, fabric samples were immersed in glass beakers containing TiCl4, distilled H2O and isopropanol. The suspension was then sonicated using a Bandelin Sonopuls HD 3200 ultrasonic system with 20 kHz frequency, 200 W input power and 50% efficiency for 1 h to complete the reaction mechanism. A schematic illustration of the proposed process is presented (see the Supporting Information). The deposition of nano TiO2 on the samples, surface topography and morphology were analysed by ultrahigh-resolution scanning electron microscopy (UHR-SEM) from Carl Zeiss. An energy dispersive X-ray (EDX) spectrophotometer was utilised to estimate the elemental percentages on the sample surfaces. X-ray diffractometry (XRD) analysis was performed with an X'Pert PRO X-ray diffractometer under Cu Kα radiation at wavelength λ = 0.15406 nm and scanning angle (2θ) range 10°–70°. The deposited amount of nano TiO2 on the samples was calculated by inductively coupled plasma atomic emission spectroscopy (ICP-AES).

Thermophysiological comfort properties

For thermal resistance (R) [m² K W⁻¹], thermal diffusivity (a) [m² s⁻¹] and transient heat flow (Q) [W m⁻²], the Alambeta instrument (Sensora, Czech Republic) was used. Alambeta measures the thermal properties of a sample in both transient and steady states. The working principle is based on the coefficient of thermal conductivity, i.e. the net amount of heat that passes through a material of 1 m² area within 1 s, over a distance of 1 m, with a temperature difference of 1 K. Thermal resistance (R) is calculated by the following equation.

$$R = \frac{h}{\lambda}$$

In Eq. (1), \(\lambda\) represents the coefficient of thermal conductivity and h is the fabric thickness. Thermal diffusivity (a) is a measure of heat flow through the fabric thickness, perpendicular to the surface area. Thermal diffusion is a transient thermal parameter that is precisely associated with two other important thermal comfort parameters, i.e.
Thermal diffusivity (a) is a measure of heat flow through the fabric thickness, perpendicular to the surface. It is a transient thermal parameter closely associated with two other important thermal comfort parameters, i.e. thermal conductivity and thermal absorptivity. Transient parameters are measured at the moment the fabric makes real contact with the body or skin. Thermal diffusivity is calculated from Eq. 2. $$a = \left(\frac{\lambda}{b}\right)^{2}$$ In Eq. 2, \(\lambda\) and b represent the coefficient of thermal conductivity and the thermal absorptivity, respectively. Another important thermal parameter measured by Alambeta is heat flow. When a body is in contact with the fabric, in the absence of wind flow and body movement, Alambeta senses the flow of heat from the body to the fabric. The moisture transport properties, i.e. wetting time and accumulative one-way transport index, were measured with a moisture management tester (MMT) using standard test method AATCC 195-2009. In the MMT, the wetting time (top and bottom) is the time at which the corresponding surface of the fabric sample begins to wet. The accumulative one-way transport index is the difference between the areas under the moisture content curves of the top and bottom surfaces of a sample with respect to time. Regression analysis and analysis of variance (ANOVA) were applied to test the significance and effectiveness of the selected variables on the thermophysiological properties of the woven fabrics. Results and discussion SEM, EDX and XRD analysis Figure 1 illustrates the surface topography and morphology of the uncoated and nano TiO2 coated samples. SEM images were taken at magnifications of 5.0 k, 10.0 k, 250× and 10.0 k for the nano TiO2 coated cotton and polyester samples, respectively. Figure 1a and d show the very smooth and clean surfaces of the uncoated (untreated) samples. Figure 1c and f were taken at higher magnification to visually evaluate the coated amount (percentage) of nano TiO2 on the samples. A quasi-spherical shape of nano TiO2 and a homogeneous distribution on both substrates were observed (Fig. 1b,c,e,f). The entire surface was covered by nanoparticles owing to the prolonged sonication; the particles were attached to the surface as a thick, smooth and strongly aggregated layer. SEM images of cotton samples (a) S1 (untreated), (b) sample S3, and (c) higher magnification of S3, and SEM photographs of polyester samples (d) S10 (untreated), (e) sample S12, and (f) higher magnification of S12. Moreover, EDX and XRD analyses were carried out to detect the elements, their composition and their weight percentage in the developed samples, and to confirm the purity of the crystalline phase, respectively; the results are illustrated and discussed in the Supporting Information. Inductively coupled plasma atomic emission spectroscopy (ICP-AES) The ICP-AES study confirmed the presence of nano TiO2 on all coated samples, i.e. S2, S3, S5, S6, S8, S9, S11 and S12, and its absence on the uncoated samples. The characteristic emission peak of Ti was used to quantify the coated amount on all samples, reported in Table 1. The coated amount of nano TiO2 was 990 ppm, 965 ppm, 972 ppm and 985 ppm for samples S3, S6, S9, and S12 respectively. Thermophysiological properties Table 2 presents the experimental results for the thermophysiological properties, i.e. thermal resistance, thermal diffusivity, heat flow, wetting time and accumulative one-way transport index, of all samples. The results are discussed one by one in the following sections.
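Before turning to the individual results, a minimal sketch (with hypothetical numbers) shows how two of the tabulated quantities are obtained: thermal diffusivity from Eq. 2 and the accumulative one-way transport index from the MMT moisture curves. The moisture curves and material values below are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch (hypothetical data): thermal diffusivity a = (lambda / b)^2 (Eq. 2)
# and the accumulative one-way transport index as the difference between the areas
# under the bottom- and top-surface moisture content curves over the test time.

def thermal_diffusivity(conductivity: float, absorptivity: float) -> float:
    """a [m^2 s^-1] from thermal conductivity lambda and thermal absorptivity b."""
    return (conductivity / absorptivity) ** 2

def one_way_transport_index(t_s, top_moisture, bottom_moisture) -> float:
    """Difference of areas under the moisture curves, normalised by test time."""
    area_top = np.trapz(top_moisture, t_s)
    area_bottom = np.trapz(bottom_moisture, t_s)
    return (area_bottom - area_top) / (t_s[-1] - t_s[0])

t = np.linspace(0, 120, 121)                      # 120 s MMT test, assumed
top = 20 * (1 - np.exp(-t / 30))                  # assumed top-surface moisture [%]
bottom = 35 * (1 - np.exp(-t / 20))               # assumed bottom-surface moisture [%]
print(thermal_diffusivity(0.045, 110.0))          # lambda and b values are illustrative
print(one_way_transport_index(t, top, bottom))
```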
This section shows that thermophysiological comfort is a function of fabric thickness and of the amount of nano TiO2 coated on the samples. Furthermore, regression analysis was performed to evaluate the direction and strength of these dependencies. A linear regression equation with a coefficient of determination (R2) was derived individually for each property. Table 2 The overall thermophysiological comfort properties of used woven fabrics. In textiles, thermal resistance is considered one of the most important comfort evaluation criteria. Thermal resistance reflects the capability of a fabric to prevent heat flow from one side to the other over a given unit area. A lower value of thermal resistance indicates a significantly higher amount of heat transfer from the body to the fabric, and vice versa. The thermal resistance results for all fabric samples are presented in Fig. 2a. The thermal resistance of the uncoated cotton samples (S1 and S4) and uncoated polyester samples (S7, S10) was significantly higher than that of the nano TiO2 coated ones, i.e. (S2, S3, S5, S6) for cotton and (S8, S9, S11, S12) for polyester. This result is interesting because thermal resistance is normally expected to increase with thickness; thickness, however, is not the only relevant parameter, and the fabric structure also matters. In this study, the impact of sonication on the structure and surface of the woven fabrics was investigated. The results illustrate that the applied treatment reduced the air gaps inside the fabric structure and hence had a positive effect on the structure and surface properties of the textiles. In addition, the nano TiO2 coating covered the void spaces on the surface of the cotton and polyester fabrics, reducing the amount of air trapped inside the fibre volume; therefore, a decrease in the thermal resistance of all coated samples was observed. The cotton samples showed less reduction in thermal resistance than the polyester samples, which means that the applied treatment improves the thermal conductivity of polyester more than that of cotton. Therefore, the overall results indicate a more significant effect of the applied method on polyester than on cotton. The obtained results are supported by a previous investigation of Gunasekaran et al.26. (a) Thermal resistance of all tested samples for cotton (S1 to S6) and polyester (S7 to S12) fabrics. (b) Thermal resistance of used woven fabrics as a function of thickness. Figure 2b shows the thermal resistance results of all samples as a function of thickness. Before this discussion, and for a better understanding of the study, a key point should be noted: fabric thickness is a function of the amount of nano TiO2 coated on the fabrics. This means that the thermophysiological properties are directly related to the coated amount of nano TiO2, and this coated amount is the outcome of the prolonged sonication time. Therefore, only thickness-related results are elaborated and discussed for the thermophysiological properties of the nano TiO2 coated samples. The trendline shows a decreasing tendency for thermal resistance with an augmentation in thickness, as presented in Fig. 2b. The R2 coefficient and regression equation statistically describe the dependence of thermal resistance on fabric thickness. A strong negative linear relationship with a strong dependency trend was observed between thickness and thermal resistance.
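The trendline, regression equation and R2 shown in Fig. 2b can be obtained with a simple least-squares fit. The sketch below uses hypothetical thickness and resistance values purely to illustrate the procedure; none of the numbers are taken from the study.

```python
import numpy as np
from scipy import stats

# Minimal sketch of the trendline analysis behind Fig. 2b (hypothetical data):
# fit thermal resistance against fabric thickness and report the regression
# equation and coefficient of determination R^2.

thickness_mm = np.array([0.30, 0.32, 0.34, 0.36, 0.38, 0.40])        # assumed
resistance = np.array([9.1, 8.7, 8.2, 7.9, 7.4, 7.0]) * 1e-3          # assumed, m^2 K W^-1

fit = stats.linregress(thickness_mm, resistance)
print(f"R = {fit.slope:.4e} * h + {fit.intercept:.4e}")               # regression equation
print(f"R^2 = {fit.rvalue**2:.4f}")                                    # coefficient of determination
```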
Thermal diffusivity is another influential, transient thermal parameter and a subject of great interest, as it is related to thermal conductivity and thermal absorptivity. Thermal diffusivity has an inverse relationship with thermal absorptivity, as described in Eq. 2; therefore, a higher thermal diffusivity value gives a warmer feeling when the skin touches the fabric. The thermal diffusivity results of all samples are shown in Fig. 3a. The experimental values of thermal diffusivity for all nano TiO2 coated samples were lower than those of the uncoated samples of cotton (S1 and S4) and polyester (S7 and S10), respectively. The results confirm that all nano TiO2 coated samples provide a cool feeling on touch. The obtained results are supported by a previous investigation of Varshney et al.27. (a) Thermal diffusivity of all tested samples for cotton (S1 to S6) and polyester (S7 to S12) fabrics. (b) Thermal diffusivity of used woven fabrics as a function of thickness. Figure 3b shows the thermal diffusivity results of all samples as a function of thickness. The trendline illustrates a decreasing tendency for thermal diffusivity with an augmentation in fabric thickness. The R2 coefficient and regression equation describe the dependence of thermal diffusivity on fabric thickness. A negative linear relationship with a dependency trend was found between thermal diffusivity and thickness. The lower value of R2 reflects the scattered distribution of the experimental values and the presence of an outlier. Heat flow The quantity of transient heat flow (Q) is characterized in terms of peak heat flow and stationary heat flow, i.e. qmax and qs respectively. Gradually, the value of peak heat flow decreases and stabilises at the stationary heat flow. Theoretically, an augmentation in heat flow corresponds to an increase in the thermal conductivity of the fabric, and vice versa. The transient heat flow results of all samples are illustrated in Fig. 4a. The transient heat flow for all nano TiO2 coated samples was higher than that of the uncoated samples of cotton (S1 and S4) and polyester (S7 and S10), respectively. The results confirm that the augmentation in fabric thickness due to the applied treatment increased the heat flow of the nano TiO2 coated fabrics. In comparison, the polyester fabric samples showed higher heat flow than the cotton fabric samples. The obtained results are supported by a previous investigation of Dalbasi and Kayseri2. (a) Heat flow result of all tested samples for cotton (S1 to S6) and polyester (S7 to S12) fabrics. (b) Heat flow evaluation of used woven fabrics as a function of thickness. Figure 4b presents the heat flow results as a function of thickness. The trendline displays an increasing tendency of transient heat flow with an augmentation in fabric thickness. The R2 coefficient and regression equation describe the dependence of heat flow on fabric thickness. Wetting time The liquid transport behaviour of the selected fabrics was studied through two crucial parameters, i.e. wetting time and the accumulative one-way transport index. For wetting, the top wetting time was considered a threshold to understand the nature of the fibre used in the fabric construction; in general, hydrophobic fibres take a longer time to wet. The wetting time of all samples was evaluated and is presented in Fig. 5a. The results show that the wetting time of all nano TiO2 coated samples was lower than that of the uncoated samples of cotton (S1 and S4) and polyester (S7 and S10), respectively.
These results show that the coating of nano TiO2 by sonication (the applied treatment) had a positive effect on the hygroscopic nature of the textiles and increased their hydrophilicity to a significant level. Moreover, these results highlight the role of sonication in augmenting the thermophysiological comfort of woven fabrics. These results are supported by a previous investigation of Karthik et al.28. (a) Wetting time of all tested samples for cotton (S1 to S6) and polyester (S7 to S12) fabrics. (b) Wetting time of used woven fabrics as a function of thickness. Figure 5b shows wetting time as a function of thickness; the trendline displays an increasing tendency of wetting time with an augmentation in fabric thickness. The R2 coefficient and regression equation describe the dependence of wetting time on fabric thickness. A random distribution of the experimental values and the presence of an outlier lowered the value of the R2 coefficient. Accumulative one-way transport index Another important indicator that reflects the overall thermophysiological comfort of textiles to a great extent is the accumulative one-way transport index. By definition, the accumulative one-way transport index is the difference between the areas of the moisture content curves of the top and bottom surfaces of a sample with respect to time. The results obtained for the accumulative one-way transport index of all samples are illustrated in Fig. 6a. They show a significant augmentation in the transport index values of all nano TiO2 coated samples compared with the uncoated samples of cotton (S1 and S4) and polyester (S7 and S10), respectively. The results reveal that the coating of nano TiO2 on both fabrics by sonication significantly improved the moisture transport properties. The acceleration of fluid flow during sonication, liquid penetration into the internal fibre structure and fibre swelling due to cavitation result in better moisture management properties12. (a) Accumulative one-way transport index of all tested samples for cotton (S1 to S6) and polyester (S7 to S12) fabrics. (b) Accumulative one-way transport index of used woven fabrics as a function of thickness. Figure 6b shows the accumulative one-way transport index as a function of thickness. The trendline shows a prominent increase in the accumulative one-way transport index with an augmentation in fabric thickness. Moreover, the R2 coefficient and regression equation describe the dependence of the one-way transport index on fabric thickness. A random distribution was observed for the accumulative one-way transport index results. These results are supported by a previous investigation of Angelova et al.6. Collectively, the results suggest that the applied treatment (coating of nano TiO2 on both fabrics by sonication) induced positive physicochemical changes and enhanced the thermophysiological comfort properties of the woven textiles. The benefits of sonication for the synthesis of nanomaterials were explained thoroughly in previous studies23,25. An at-a-glance comparison of the investigated thermophysiological properties of all samples, based on the original experimental values, is presented as a spider plot in Fig. 7. Spider plot for an at-a-glance comparison of the investigated thermophysiological properties of nano TiO2 coated samples.
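A spider (radar) plot such as Fig. 7 can be generated as sketched below; the normalised values are assumed purely for illustration, and only the property names come from the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch of a spider (radar) plot like Fig. 7; the normalised values below
# are assumed for illustration, only the property names come from the study.

labels = ["Thermal resistance", "Thermal diffusivity", "Heat flow",
          "Wetting time", "One-way transport index"]
coated = [0.55, 0.60, 0.90, 0.40, 0.95]      # assumed, normalised to 0-1
uncoated = [0.90, 0.85, 0.55, 0.80, 0.35]    # assumed, normalised to 0-1

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]                          # close the polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for values, name in [(coated, "nano TiO2 coated"), (uncoated, "uncoated")]:
    vals = values + values[:1]
    ax.plot(angles, vals, label=name)
    ax.fill(angles, vals, alpha=0.15)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.legend(loc="upper right")
plt.show()
```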
Analysis of variance (ANOVA) Analysis of variance is an important tool to examine the effectiveness of various parameters, the interaction between variables and observed responses, and the accuracy and repeatability of the experiments. The results were used to judge the goodness of fit of the selected variables with respect to the relevant response. The designed ANOVA model for thermal resistance was statistically significant, with an F-value of 73.899 and a p-value (prob > F) of 0.0000036, as shown in Table 3. The R2 coefficient was used to assess the fit of the model; only 3.49% of the total variation in thermal resistance cannot be explained by the designed model. Table 3 ANOVA results for thermal resistance. The ANOVA results for thermal diffusivity were significant, with an F-value of 5.221 and a p-value (prob > F) of 0.0274, as described in Table 4. The R2 coefficient shows that 33.81% of the total variation in thermal diffusivity cannot be explained by the designed model. Table 4 ANOVA results for thermal diffusivity. Table 5 gives the ANOVA results for heat flow and shows that the designed model was significant, with an F-value of 11.337 and a p-value (prob > F) of 0.0029. The R2 coefficient indicates that 19.05% of the total variation in heat flow could not be explained by the designed model. Table 5 ANOVA results for heat flow. In Table 6, the results show that the designed ANOVA model for wetting time is statistically significant, with an F-value of 5.644 and a p-value (prob > F) of 0.0224. The R2 coefficient shows that 32.09% of the total variation in wetting time cannot be explained by the model. Table 6 ANOVA results for wetting time. The ANOVA test for the accumulative one-way transport index is significant, with an F-value of 24.670 and a p-value (prob > F) of 0.00021, as shown in Table 7. The R2 coefficient shows that only 9.76% of the total variation in the accumulative one-way transport index cannot be explained by the designed model. Table 7 ANOVA results for accumulative one-way transport index.
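As an illustration of how such a single-factor regression ANOVA table is produced, a minimal sketch follows. The thickness and resistance values are hypothetical; only the structure of the output (F-value, p-value, R2 and the unexplained-variation percentage) mirrors Tables 3–7.

```python
import numpy as np
import statsmodels.api as sm

# Minimal sketch (hypothetical data) of the regression ANOVA reported in Tables 3-7:
# with thickness as the single predictor, the model F-statistic, its p-value and R^2
# come from an ordinary least squares fit; (1 - R^2) is the unexplained fraction.

thickness_mm = np.array([0.30, 0.31, 0.33, 0.34, 0.36, 0.37, 0.39, 0.40])
resistance = np.array([9.2, 9.0, 8.5, 8.4, 7.9, 7.8, 7.2, 7.1]) * 1e-3   # assumed

X = sm.add_constant(thickness_mm)
model = sm.OLS(resistance, X).fit()

print(f"F-value  = {model.fvalue:.3f}")
print(f"p-value  = {model.f_pvalue:.2e}")
print(f"R^2      = {model.rsquared:.4f}")
print(f"unexplained variation = {100 * (1 - model.rsquared):.2f} %")
```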
The objective of this work was to examine the impacts of sonication and nano TiO2 coating on the thermophysiological properties of fabrics with variation in thickness. From this comprehensive study of heat and mass transfer, the following conclusions were drawn. Fabric thickness is a notable parameter that affects the thermophysiological properties, particularly thermal resistance, thermal diffusivity, wetting time and the accumulative one-way transport index. Statistically significant ANOVA results were obtained for thermal resistance against the selected variables of all samples, with an R2 value of 0.9651. The results illustrate that sonication and nano TiO2 coating significantly affect the thermal insulation of the fabrics. Moreover, in a parallel comparison, the thermal resistance of the polyester fabric was much lower than that of the cotton fabric. A notable consistency was detected in the thermal diffusivity of nano TiO2 coated and uncoated samples for both fabrics, and fabric thickness played an appreciable role in thermal diffusivity. The R2 value was lower for thermal diffusivity owing to the scattered distribution of the data points. The heat flow increased for both types of fabrics as the nano TiO2 coated amount and fabric thickness increased. Polyester fabric showed much higher heat flow than cotton, which indicates a higher thermal conductivity of the polyester fabric. The heat flow results were significant (R2 0.8095). Fabric structure and surface morphology play a critical role in the evaluation of thermophysiological comfort properties, and moisture transport inside the fabric structure significantly depends on porosity. Wetting time declined for both types of fabrics as the deposited amount and fabric thickness increased; this decrease reflects significantly higher liquid moisture transport. The findings of this work open the way to extend the procedure to other types of textiles and nanomaterials, e.g. polypropylene, polyamide, zinc oxide and copper oxide. Noman, M. T. & Petru, M. Effect of sonication and nano TiO2 on thermophysiological comfort properties of woven fabrics. ACS Omega 5, 11481–11490 (2020). Dalbaşi, E. S. & Özçelik Kayseri, G. A research on the comfort properties of linen fabrics subjected to various finishing treatments. J. Nat. Fib. 1, 1–14 (2019). Azeem, M. et al. Comfort properties of nano-filament polyester fabrics: thermo-physiological evaluation. Ind. Tex. 69, 315–321 (2018). Arumugam, V., Mishra, R., Militky, J., Davies, L. & Slater, S. Thermal and water vapor transmission through porous warp knitted 3D spacer fabrics for car upholstery applications. J. Text. Inst. 109, 345–357 (2018). Mansoor, T., Hes, L., Bajzik, V. & Noman, M. T. Novel method on thermal resistance prediction and thermo-physiological comfort of socks in wet state. Text. Res. J. 90, 1–20. https://doi.org/10.1177/0040517520902540 (2020). Angelova, R. A. et al. Heat and mass transfer through outerwear clothing for protection from cold: influence of geometrical, structural and mass characteristics of the textile layers. Text. Res. J. 87, 1060–1070 (2017). Chen, Q., Tang, K.-P.M., Ma, P., Jiang, G. & Xu, C. Thermophysiological comfort properties of polyester weft-knitted fabrics for sports T-shirt. J. Text. Inst. 108, 1421–1429 (2017). Mishra, R., Veerakumar, A. & Militky, J. Thermo-physiological properties of 3D spacer knitted fabrics. Int. J. Cloth. Sci. Tech. 28, 328–339 (2016). Öner, E. & Okur, A. Thermophysiological comfort properties of selected knitted fabrics and design of T-shirts. J. Text. Inst. 106, 1403–1414 (2015). Shaid, A., Fergusson, M. & Wang, L. Thermophysiological comfort analysis of aerogel nanoparticle incorporated fabric for fire fighter's protective clothing. Chem. Mat. Eng. 2, 37–43 (2014). Noman, M. T., Ashraf, M. A., Jamshaid, H. & Ali, A. A novel green stabilization of TiO2 nanoparticles onto cotton. Fib. Polym. 19, 2268–2277 (2018). Noman, M. T. et al. In-situ development of highly photocatalytic multifunctional nanocomposites by ultrasonic acoustic method. Ultrason. Sonochem. 40, 41–56 (2018). Riaz, S., Ashraf, M., Hussain, T., Hussain, M. T. & Younus, A. Fabrication of robust multifaceted textiles by application of functionalized TiO2 nanoparticles. Colloids Surf. Physicochem. Eng. Aspects 581, 123799 (2019). Xu, S. et al. Colored TiO2 composites embedded on fabrics as photocatalysts: decontamination of formaldehyde and deactivation of bacteria in water and air. Chem. Eng. J. 121949 (2019). Ashraf, M. A., Wiener, J., Farooq, A., Saskova, J. & Noman, M. T. Development of maghemite glass fibre nanocomposite for adsorptive removal of methylene blue. Fib. Polym. 19, 1735–1746 (2018).
Noman, M. T., Ashraf, M. A. & Ali, A. Synthesis and applications of nano-TiO2: a review. Environ. Sci. Pollut. Res. 26, 3262–3291 (2019). Asadnajafi, S., Shahidi, S. & Dorranian, D. In situ synthesis and exhaustion of nano TiO2 on fabric samples using laser ablation method. J. Text. Inst. 1–7 (2019). da Silva, L. S., Gonçalves, M. M. M. & Raddi de Araujo, L. R. Combined photocatalytic and biological process for textile wastewater treatments. Water Environ. Res. (2019). Diaz-Angulo, J. et al. Enhancement of the oxidative removal of diclofenac and of the TiO2 rate of photon absorption in dye-sensitized solar pilot scale CPC photocatalytic reactors. Chem. Eng. J. 381, 122520 (2020). El Nemr, A., Helmy, E. T., Gomaa, E. A., Eldafrawy, S. & Mousa, M. Photocatalytic and biological activities of undoped and doped TiO2 prepared by green method for water treatment. J. Environ. Chem. Eng. 7, 103385 (2019). Peter, A. et al. Fabric impregnated with TiO2 gel with self-cleaning property. Int. J. App. Ceram. Technol. 16, 666–681 (2019). Sirirerkratana, K., Kemacheevakul, P. & Chuangchote, S. Color removal from wastewater by photocatalytic process using titanium dioxide-coated glass, ceramic tile, and stainless steel sheets. J. Clean. Prod. 215, 123–130 (2019). Noman, M. T., Petru, M., Militký, J., Azeem, M. & Ashraf, M. A. One-pot sonochemical synthesis of ZnO nanoparticles for photocatalytic applications, Modelling and Optimization. Material 13, 14. https://doi.org/10.3390/ma13010014 (2020). Noman, M. T. & Petru, M. Functional properties of sonochemically synthesized zinc oxide nanoparticles and cotton composites. Nanomaterials 10, 1661. https://doi.org/10.3390/nano10091661 (2020). Noman, M. T. et al. Sonochemical synthesis of highly crystalline photocatalyst for industrial applications. Ultrasonic 83, 203–213 (2018). Gunasekaran, G., Prakash, C. & Periyasamy, S. Effect of Charcoal Particles on Thermophysiological Comfort Properties of Woven Fabrics. J. Nat. Fib. 1–14 (2019). Varshney, R., Kothari, V. & Dhamija, S. A study on thermophysiological comfort properties of fabrics in relation to constituent fibre fineness and cross-sectional shapes. J. Text. Inst. 101, 495–505 (2010). Karthik, T., Senthilkumar, P. & Murugan, R. Analysis of comfort and moisture management properties of polyester/milkweed blended plated knitted fabrics for active wear applications. J. Ind. Text. 47, 897–920 (2018). This work was funded and supported by the Ministry of Education, Youth and Sports of the Czech Republic and the European Union (European Structural and Investment Funds-Operational Programme Research, Development and Education) in the frames of the project "Modular platform for autonomous chassis of specialized electric vehicles for freight and equipment transportation", Reg. No. CZ.02.1.01/0.0/0.0/16_025/0007293 and also supported by Internal Grant of CXI TUL. 
Department of Machinery Construction, Institute for Nanomaterials, Advanced Technologies and Innovation (CXI), Technical University of Liberec, Studentská 1402/2, 461 17, Liberec 1, Czech Republic Muhammad Tayyab Noman, Michal Petru & Tao Yang Acoustic Signal Analysis and Processing Group, Faculty of Mechatronics, Informatics and Interdisciplinary Studies, Technical University of Liberec, Studentská 1402/2, 461 17, Liberec 1, Czech Republic Nesrine Amor Department of Textile Evaluation, Faculty of Textile Engineering, Technical University of Liberec, Studentská 1402/2, 461 17, Liberec 1, Czech Republic Tariq Mansoor Muhammad Tayyab Noman Michal Petru Tao Yang M.T.N. conceived and designed the study, performed the experiments and wrote the manuscript. M.P. analysed the results, supervised the work and acquired funding. N.A. conceived and analysed the results and performed the statistical analysis. T.Y. performed experiments and analysed the results. T.M. performed the characterization and thermal analysis of the developed samples. All authors participated in the critical analysis and preparation of the manuscript. Correspondence to Muhammad Tayyab Noman. Supplementary information 1. Noman, M.T., Petru, M., Amor, N. et al. Thermophysiological comfort of sonochemically synthesized nano TiO2 coated woven fabrics. Sci Rep 10, 17204 (2020). https://doi.org/10.1038/s41598-020-74357-6
Dr. Takayuki Tomaru, Assistant Professor at High Energy Accelerator Research Organization (KEK). SPIE Involvement: Proceedings Article | 10 July 2018 Electrical characterization and tuning of the integrated POLARBEAR-2a focal plane and readout (Conference Presentation) Darcy Barron, Kam Arnold, Tucker Elleflot, John Groh, Daisuke Kaneko, Nobuhiko Katayama, Adrian Lee, Lindsay Lowry, Haruki Nishino, Aritoki Suzuki, Sayuri Takatori, P. Ade, Y. Akiba, A. Ali, M. Aguilar, A. Anderson, P. Ashton, J. Avva, D. Beck, C. Baccigalupi, S. Beckman, A. Bender, F. Bianchini, D. Boettger, J. Borrill, J. Carron, S. Chapman, Y. Chinone, G. Coppi, K. Crowley, A. Cukierman, T. de Haan, M. Dobbs, R. Dunner, J. Errard, G. Fabbian, S. Feeney, C. Feng, G. Fuller, N. Galitzki, A. Gilbert, N. Goeckner-Wald, T. Hamada, N. Halverson, M. Hasegawa, M. Hazumi, C. Hill, W. Holzapfel, L. Howe, Y. Inoue, J. Ito, G. Jaehnig, O. Jeong, B. Keating, R. Keskitalo, T. Kisner, N. Krachmalnicoff, A. Kusaka, M. Le Jeune, D. Leon, E. Linder, A. Lowitz, A. Madurowicz, D. Mak, F. Matsuda, T. Matsumura, A. May, N. Miller, Y. Minami, J. Montgomery, T. Natoli, M. Navroli, J. Peloton, A. Pham, L. Piccirillo, D. Plambeck, D. Poletti, G. Puglisi, C. Raum, G. Rebeiz, C. Reichardt, P. Richards, H. Roberts, C. Ross, K. Rotermund, Max Silva Feaver, Y. Segawa, B. Sherwin, P. Siritanasak, L. Steinmetz, R. Stompor, O. Tajima, S. Takakura, D. Tanabe, R. Tat, G. Teply, A. Tikhomirov, T. Tomaru, C. Tsai, C. Verges, B. Westbrook, N. Whitehorn, A. Zahn Proc. SPIE. 10708, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy IX KEYWORDS: Bolometers, Telescopes, Capacitors, Polarization, Sensors, Receivers, Aluminum, Microwave radiation, Frequency division multiplexing, Circuit switching POLARBEAR is a cosmic microwave background (CMB) polarization experiment located in the Atacama desert in Chile. The science goals of the POLARBEAR project are to do a deep search for CMB B-mode polarization created by inflationary gravitational waves, as well as characterize the CMB B-mode signal from gravitational lensing. POLARBEAR-1 started observations in 2012, and the POLARBEAR team has published a series of results from its first two seasons of observations, including the first measurement of a non-zero B-mode polarization angular power spectrum, measured at sub-degree scales where the dominant signal is gravitational lensing of the CMB. The Simons Array expands POLARBEAR to include an additional two telescopes with next-generation POLARBEAR-2 multi-chroic receivers, observing at 95, 150, 220, and 270 GHz. The POLARBEAR-2A focal plane has 7,588 transition-edge sensor bolometers, read out with frequency-division multiplexing, with 40 frequency channels within the readout bandwidth of 1.5 to 4.5 MHz. The frequency channels are defined by a low-loss lithographed aluminum spiral inductor and interdigitated capacitor in series with each bolometer, creating a resonant frequency for each channel's unique voltage bias and current readout. Characterization of the readout includes measuring resonant peak locations and heights and fitting to a circuit model both above and below the bolometer superconducting transition temperature. This information is used to determine the optimal detector bias frequencies and characterize stray impedances which may affect bolometer operation and stability.
The detector electrical characterization includes measurements of the transition properties by sweeping in temperature and in voltage bias, measurements of the bolometer saturation power, as well as measuring and removing any biases introduced by the readout circuit. We present results from the characterization, tuning, and operation of the fully integrated focal plane and readout for the first POLARBEAR-2 receiver, POLARBEAR-2A, during its pre-deployment integration run. POLARBEAR-2: a new CMB polarization receiver system for the Simons array (Conference Presentation) Masaya Hasegawa, The POLARBEAR COLLABORATION, Peter Ade, Mario Aguilar, Yoshiki Akiba, Aamir Ali, Kam Arnold, Peter Ashton, Carlo Baccigalupi, Darcy Barron, Dominic Beck, Shawn Beckman, Amy Bender, Federico Bianchini, David Boettger, Julian Borrill, Julien Carron, Scott Chapman, Yuji Chinone, Gabriele Coppi, Kevin Crowley, Ari Cukierman, Matt Dobbs, Rolando Dunner, Tucker Elleflot, Josquin Errard, Giulio Fabbian, Stephen Feeney, Chang Feng, George Fuller, Nicholas Galitzki, Andrew Gilbert, Neil Goeckner-Wald, John Groh, Tijmen Haan, Nils Halverson, Takaho Hamada, Masashi Hazumi, Charles Hill, William Holzapfel, Logan Howe, Yuki Inoue, Jennifer Ito, Greg Jaehnig, Andrew Jaffe, Oliver Jeong, Maude Jeune, Daisuke Kaneko, Nobuhiko Katayama, Brian Keating, Reijo Keskitalo, Theodore Kisner, Nicoletta Krachmalnicoff, Akito Kusaka, Adrian Lee, David Leon, Eric Linder, Lindsay Lowry, Alex Madurowicz, Suet Mak, Frederick Matsuda, Tomotake Matsumura, Andrew May, Nathan Miller, Yuto Minami, Joshua Montgomery, Martin Navaroli, Haruki Nishino, Julien Peloton, Anh Pham, Lucio Piccirillo, Richard Plambeck, Davide Poletti, Giuseppe Puglisi, Christopher Raum, Gabriel Rebeiz, Christian Reichardt, Paul Richards, Hayley Roberts, Colin Ross, Kaja Rotermund, Yuuko Segawa, Blake Sherwin, Maximiliano Silva-Feaver, Praween Siritanasak, Leo Steinmetz, Radek Stompor, Aritoki Suzuki, Osamu Tajima, Satoru Takakura, Sayuri Takatori, Daiki Tanabe, Raymond Tat, Grant Teply, Takayuki Tomaru, Calvin Tsai, Clara Verges, Ben Westbrook, Nathan Whitehorn, Alex Zahn, Junichi Suzuki, Takahiro Okamura KEYWORDS: Staring arrays, Bolometers, Telescopes, Polarization, Sensors, Superconductors, Receivers, Multiplexing, Helium, Microwave radiation POLARBEAR-2 is a new receiver system, which will be deployed on the Simons Array telescope platform, for the measurement of Cosmic Microwave Background (CMB) polarization. The science goals with POLARBEAR-2 are to characterize the B-mode signal both at degree and sub-degree angular scales. The degree-scale polarization data can be used for quantitative studies on inflation, such as the reconstruction of the energy scale of inflation. The sub-degree polarization data is an excellent tracer of large-scale structure in the universe, and will lead to precise constraints on the sum of the neutrino masses. In order to achieve these goals, POLARBEAR-2 employs 7588 polarization-sensitive antenna-coupled transition edge sensor (TES) bolometers on the focal plane cooled to 0.27 K with a three-stage helium sorption refrigerator, an array ~6 times larger than the current receiver system. The large TES bolometer array is read out by an upgraded digital frequency-domain multiplexing system capable of multiplexing 40 bolometers through a single superconducting quantum interference device (SQUID).
The first POLARBEAR-2 receiver, POLARBEAR-2A, has been constructed, and end-to-end testing to evaluate the integrated performance of the detector, readout, and optics system is being conducted in the laboratory with various types of test equipment. POLARBEAR-2A is scheduled to be deployed in 2018 in the Atacama desert in Chile. To further increase measurement sensitivity, two more POLARBEAR-2 type receivers will be deployed soon afterwards (the Simons Array project). The Simons Array will cover four frequency bands at 95 GHz, 150 GHz, 220 GHz and 270 GHz for better control of the foreground signal. The projected constraint on the tensor-to-scalar ratio (the amplitude of the inflationary B-mode signal) is σ(r=0.1) = $6.0 \times 10^{-3}$ after foreground removal ($4.0 \times 10^{-3}$ (stat.)), and the sensitivity to the sum of the neutrino masses when combined with DESI spectroscopic galaxy survey data is 40 meV at 1-sigma after foreground removal (19 meV (stat.)). We will present an overview of the design, assembly and status of the laboratory testing of the POLARBEAR-2A receiver system as well as an overview of the Simons Array project. Proceedings Article | 8 August 2016 POLARBEAR-2: an instrument for CMB polarization measurements Y. Inoue, P. Ade, Y. Akiba, C. Aleman, K. Arnold, C. Baccigalupi, B. Barch, D. Barron, A. Bender, D. Boettger, J. Borrill, S. Chapman, Y. Chinone, A. Cukierman, T. de Haan, M. Dobbs, A. Ducout, R. Dünner, T. Elleflot, J. Errard, G. Fabbian, S. Feeney, C. Feng, G. Fuller, A. Gilbert, N. Goeckner-Wald, J. Groh, G. Hall, N. Halverson, T. Hamada, M. Hasegawa, K. Hattori, M. Hazumi, C. Hill, W. Holzapfel, Y. Hori, L. Howe, F. Irie, G. Jaehnig, A. Jaffe, O. Jeong, N. Katayama, J. Kaufman, K. Kazemzadeh, B. Keating, Z. Kermish, R. Keskitalo, T. Kisner, A. Kusaka, M. Le Jeune, A. Lee, D. Leon, E. Linder, L. Lowry, F. Matsuda, T. Matsumura, N. Miller, K. Mizukami, J. Montgomery, M. Navaroli, H. Nishino, H. Paar, J. Peloton, D. Poletti, G. Puglisi, C. Raum, G. Rebeiz, C. Reichardt, P. Richards, C. Ross, K. Rotermund, Y. Segawa, B. Sherwin, I. Shirley, P. Siritanasak, N. Stebor, R. Stompor, J. Suzuki, A. Suzuki, O. Tajima, S. Takada, S. Takatori, G. Teply, A. Tikhomirov, T. Tomaru, N. Whitehorn, A. Zahn, O. Zahn Proc. SPIE. 9914, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VIII KEYWORDS: Bolometers, Optical filters, Mirrors, Polarization, Lenses, Sensors, Physics, Receivers, Optical filtering, Signal detection POLARBEAR-2 (PB-2) is a cosmic microwave background (CMB) polarization experiment that will be located in the Atacama highland in Chile at an altitude of 5200 m. Its science goals are to measure the CMB polarization signals originating from both primordial gravitational waves and weak lensing. PB-2 is designed to measure the tensor to scalar ratio, r, with precision σ(r) < 0.01, and the sum of neutrino masses, Σmν, with σ(Σmν) < 90 meV. To achieve these goals, PB-2 will employ 7588 transition-edge sensor bolometers at 95 GHz and 150 GHz, which will be operated at the base temperature of 250 mK. Science observations will begin in 2017. LiteBIRD: lite satellite for the study of B-mode polarization and inflation from cosmic microwave background radiation detection H. Ishino, Y. Akiba, K. Arnold, D. Barron, J. Borrill, R. Chendra, Y. Chinone, S. Cho, A. Cukierman, T. de Haan, M. Dobbs, A. Dominjon, T. Dotani, T. Elleflot, J. Errard, T. Fujino, H. Fuke, T. Funaki, N. Goeckner-Wald, N.
Halverson, P. Harvey, T. Hasebe, M. Hasegawa, K. Hattori, M. Hattori, M. Hazumi, N. Hidehira, C. Hill, G. Hilton, W. Holzapfel, Y. Hori, J. Hubmayr, K. Ichiki, H. Imada, J. Inatani, M. Inoue, Y. Inoue, F. Irie, K. Irwin, H. Ishitsuka, O. Jeong, H. Kanai, K. Karatsu, S. Kashima, N. Katayama, I. Kawano, T. Kawasaki, B. Keating, S. Kernasovskiy, R. Keskitalo, A. Kibayashi, Y. Kida, N. Kimura, K. Kimura, T. Kisner, K. Kohri, E. Komatsu, K. Komatsu, C.-L. Kuo, S. Kuromiya, A. Kusaka, A. Lee, D. Li, E. Linder, M. Maki, H. Matsuhara, T. Matsumura, S. Matsuoka, S. Matsuura, S. Mima, Y. Minami, K. Mitsuda, M. Nagai, T. Nagasaki, R. Nagata, M. Nakajima, S. Nakamura, T. Namikawa, M. Naruse, T. Nishibori, K. Nishijo, H. Nishino, A. Noda, T. Noguchi, H. Ogawa, W. Ogburn, S. Oguri, I. Ohta, N. Okada, A. Okamoto, T. Okamura, C. Otani, G. Pisano, G. Rebeiz, P. Richards, S. Sakai, Y. Sakurai, Y. Sato, N. Sato, Y. Segawa, S. Sekiguchi, Y. Sekimoto, M. Sekine, U. Seljak, B. Sherwin, T. Shimizu, K. Shinozaki, S. Shu, R. Stompor, H. Sugai, H. Sugita, J. Suzuki, T. Suzuki, A. Suzuki, O. Tajima, S. Takada, S. Takakura, K. Takano, S. Takatori, Y. Takei, D. Tanabe, T. Tomaru, N. Tomita, P. Turin, S. Uozumi, S. Utsunomiya, Y. Uzawa, T. Wada, H. Watanabe, B. Westbrook, N. Whitehorn, Y. Yamada, R. Yamamoto, N. Yamasaki, T. Yamashita, T. Yoshida, M. Yoshida, K. Yotsumoto Proc. SPIE. 9904, Space Telescopes and Instrumentation 2016: Optical, Infrared, and Millimeter Wave KEYWORDS: Bolometers, Telescopes, Mirrors, Polarization, Sensors, Satellites, Detector arrays, Space telescopes, Space operations, Microwave radiation LiteBIRD is a next generation satellite aiming for the detection of the Cosmic Microwave Background (CMB) B-mode polarization imprinted by the primordial gravitational waves generated in the era of the inflationary universe. The science goal of LiteBIRD is to measure the tensor-to-scalar ratio r with a precision of δr < 10⁻³, offering us a crucial test of the major large-single-field slow-roll inflation models. LiteBIRD is planned to conduct an all sky survey at the sun-earth second Lagrange point (L2) with an angular resolution of about 0.5 degrees to cover the multipole moment range of 2 ≤ ℓ ≤ 200. We use focal plane detector arrays consisting of 2276 superconducting detectors to measure the frequency range from 40 to 400 GHz with a sensitivity of 3.2 μK·arcmin. We present the mission design, including the ongoing studies. The Simons Array CMB polarization experiment N. Stebor, P. Ade, Y. Akiba, C. Aleman, K. Arnold, C. Baccigalupi, B. Barch, D. Barron, S. Beckman, A. Bender, D. Boettger, J. Borrill, S. Chapman, Y. Chinone, A. Cukierman, T. de Haan, M. Dobbs, A. Ducout, R. Dunner, T. Elleflot, J. Errard, G. Fabbian, S. Feeney, C. Feng, T. Fujino, G. Fuller, A. Gilbert, N. Goeckner-Wald, J. Groh, G. Hall, N. Halverson, T. Hamada, M. Hasegawa, K. Hattori, M. Hazumi, C. Hill, W. Holzapfel, Y. Hori, L. Howe, Y. Inoue, F. Irie, G. Jaehnig, A. Jaffe, O. Jeong, N. Katayama, J. Kaufman, K. Kazemzadeh, B. Keating, Z. Kermish, R. Keskitalo, T. Kisner, A. Kusaka, M. Le Jeune, A. Lee, D. Leon, E. Linder, L. Lowry, F. Matsuda, T. Matsumura, N. Miller, J. Montgomery, M. Navaroli, H. Nishino, H. Paar, J. Peloton, D. Poletti, G. Puglisi, C. Raum, G. Rebeiz, C. Reichardt, P. Richards, C. Ross, K. Rotermund, Y. Segawa, B. Sherwin, I. Shirley, P. Siritanasak, L. Steinmetz, R. Stompor, A. Suzuki, O. Tajima, S. Takada, S. Takatori, G. Teply, A. Tikhomirov, T. Tomaru, B. Westbrook, N. Whitehorn, A. Zahn, O.
Zahn KEYWORDS: Staring arrays, Bolometers, Telescopes, Astronomy, Polarization, Sensors, Physics, Receivers, Detector arrays, Cryogenics The Simons Array is a next generation cosmic microwave background (CMB) polarization experiment whose science target is a precision measurement of the B-mode polarization pattern produced both by inflation and by gravitational lensing. As a continuation and extension of the successful POLARBEAR experimental program, the Simons Array will consist of three cryogenic receivers each featuring multichroic bolometer arrays mounted onto separate 3.5m telescopes. The first of these, also called POLARBEAR-2A, will be the first to deploy in late 2016 and has a large diameter focal plane consisting of dual-polarization dichroic pixels sensitive at 95 GHz and 150 GHz. The POLARBEAR-2A focal plane will utilize 7,588 antenna-coupled superconducting transition edge sensor (TES) bolometers read out with SQUID amplifiers using frequency domain multiplexing techniques. The next two receivers that will make up the Simons Array will be nearly identical in overall design but will feature extended frequency capability. The combination of high sensitivity, multichroic frequency coverage and large sky area available from our mid-latitude Chilean observatory will allow Simons Array to produce high quality polarization sky maps over a wide range of angular scales and to separate out the CMB B-modes from other astrophysical sources with high fidelity. After accounting for galactic foreground separation, the Simons Array will detect the primordial gravitational wave B-mode signal to r > 0.01 with a significance of > 5σ and will constrain the sum of neutrino masses to 40 meV (1σ) when cross-correlated with galaxy surveys. We present the current status of this funded experiment, its future, and discuss its projected science return. Proceedings Article | 19 August 2014 The Simons Array: expanding POLARBEAR to three multi-chroic telescopes K. Arnold, N. Stebor, P. A. Ade, Y. Akiba, A. Anthony, M. Atlas, D. Barron, A. Bender, D. Boettger, J. Borrill, S. Chapman, Y. Chinone, A. Cukierman, M. Dobbs, T. Elleflot, J. Errard, G. Fabbian, C. Feng, A. Gilbert, N. Goeckner-Wald, N. Halverson, M. Hasegawa, K. Hattori, M. Hazumi, W. Holzapfel, Y. Hori, Y. Inoue, G. Jaehnig, A. Jaffe, N. Katayama, B. Keating, Z. Kermish, R. Keskitalo, T. Kisner, M. Le Jeune, A. Lee, E. Leitch, E. Linder, F. Matsuda, T. Matsumura, X. Meng, N. Miller, H. Morii, M. Myers, M. Navaroli, H. Nishino, T. Okamura, H. Paar, J. Peloton, D. Poletti, C. Raum, G. Rebeiz, C. Reichardt, P. Richards, C. Ross, K. Rotermund, D. Schenck, B. Sherwin, I. Shirley, M. Sholl, P. Siritanasak, G. Smecher, B. Steinbach, R. Stompor, A. Suzuki, J. Suzuki, S. Takada, S. Takakura, T. Tomaru, B. Wilson, A. Yadav, O. Zahn Proc. SPIE. 9153, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VII KEYWORDS: Telescopes, Astronomy, Aerospace engineering, Polarization, Sensors, Physics, Receivers, Space telescopes, Spatial resolution, Baryon acoustic oscillations The Simons Array is an expansion of the POLARBEAR cosmic microwave background (CMB) polarization experiment currently observing from the Atacama Desert in Northern Chile. This expansion will create an array of three 3.5m telescopes each coupled to a multichroic bolometric receiver.
The Simons Array will have the sensitivity to produce a ≥ 5σ detection of inflationary gravitational waves with a tensor-to-scalar ratio r ≥ 0.01, detect the known minimum 58 meV sum of the neutrino masses with 3σ confidence when combined with a next-generation baryon acoustic oscillation measurement, and make a lensing map of large-scale structure over the 80% of the sky available from its Chilean site. These goals require high sensitivity and the ability to extract the CMB signal from contaminating astrophysical foregrounds; these requirements are met by coupling the three high-throughput telescopes to novel multichroic lenslet-coupled pixels each measuring CMB photons in both linear polarization states over multiple spectral bands. We present the status of this instrument already under construction, and an analysis of its capabilities. Thermal and optical characterization for POLARBEAR-2 optical system Y. Inoue, N. Stebor, P. A. Ade, Y. Akiba, K. Arnold, A. Anthony, M. Atlas, D. Barron, A. Bender, D. Boettger, J. Borrill, S. Chapman, Y. Chinone, A. Cukierman, M. Dobbs, T. Elleflot, J. Errard, G. Fabbian, C. Feng, A. Gilbert, N. Halverson, M. Hasegawa, K. Hattori, M. Hazumi, W. Holzapfel, Y. Hori, G. Jaehnig, A. Jaffe, N. Katayama, B. Keating, Z. Kermish, Reijo Keskitalo, T. Kisner, M. Le Jeune, A. Lee, E. Leitch, E. Linder, F. Matsuda, T. Matsumura, X. Meng, H. Morii, M. Myers, M. Navaroli, H. Nishino, T. Okamura, H. Paar, J. Peloton, D. Poletti, G. Rebeiz, C. Reichardt, P. Richards, C. Ross, D. Schenck, B. Sherwin, P. Siritanasak, G. Smecher, M. Sholl, B. Steinbach, R. Stompor, A. Suzuki, J. Suzuki, S. Takada, S. Takakura, T. Tomaru, B. Wilson, A. Yadav, H. Yamaguchi, O. Zahn KEYWORDS: Bolometers, Thermography, Optical filters, Lenses, Sensors, Physics, Receivers, Optical testing, Optical filtering, Temperature metrology POLARBEAR-2 (PB-2) is a cosmic microwave background (CMB) polarization experiment for B-mode detection. The PB-2 receiver has a large focal plane and aperture that consists of 7588 transition edge sensor (TES) bolometers at 250 mK. The receiver consists of the optical cryostat housing reimaging lenses and infrared filters, and the detector cryostat housing TES bolometers. The large focal plane places substantial requirements on the thermal design of the optical elements at the 4 K, 50 K, and 300 K stages. Infrared filters and lenses inside the optical cryostat are made of alumina for this purpose. We measure basic properties of alumina, such as the index of refraction, loss tangent and thermal conductivity. All results meet our requirements. We also optically characterize filters and lenses made of alumina. Finally, we perform a cooling test of the entire optical cryostat. All measured temperature values satisfy our requirements. In particular, the temperature rise between the center and edge of the alumina infrared filter at 50 K is only 2.0 ± 1.4 K. Based on the measurements, we estimate the incident power to each thermal stage.
Optimization of cold resonant filters for frequency domain multiplexed readout of POLARBEAR-2 Kaori Hattori, Yoshiki Akiba, Kam Arnold, Darcy Barron, Amy Bender, Matthew Adam Dobbs, Tijmen de Haan, Nicholas Harrington, Masaya Hasegawa, Masashi Hazumi, William Laird Holzapfel, Yasuto Hori, Brian Keating, Adrian Lee, Joshua Montgomery, Hideki Morii, Michael Myers, Kaja Rotermund, Ian Shirley, Graeme Smecher, Nathan Stebor, Aritoki Suzuki, Takayuki Tomaru KEYWORDS: Bolometers, Electronics, Capacitors, Polarization, Calibration, Resistance, Superconductors, Multiplexing, Microwave radiation, Inductance For the next generation of Cosmic Microwave Background (CMB) experiments, kilopixel arrays of Transition Edge Sensor (TES) bolometers are necessary to achieve the required sensitivity and their science goals. We are developing read-out electronics for the POLARBEAR-2 CMB experiment, which multiplexes 32 TES bolometers through a single superconducting quantum interference device (SQUID). To increase both the bandwidth of the SQUID electronics and the multiplexing factor, we are modifying the cold wiring and developing LC filters and a low-inductance superconducting cable. Using these components, we will show frequency domain multiplexing up to 3 MHz. LiteBIRD: mission overview and design tradeoffs T. Matsumura, Y. Akiba, J. Borrill, Y. Chinone, M. Dobbs, H. Fuke, M. Hasegawa, K. Hattori, M. Hattori, M. Hazumi, W. Holzapfel, Y. Hori, J. Inatani, M. Inoue, Y. Inoue, K. Ishidoshiro, H. Ishino, H. Ishitsuka, K. Karatsu, S. Kashima, N. Katayama, I. Kawano, A. Kibayashi, Y. Kibe, K. Kimura, N. Kimura, E. Komatsu, M. Kozu, K. Koga, A. Lee, H. Matsuhara, S. Mima, K. Mitsuda, K. Mizukami, H. Morii, T. Morishima, M. Nagai, R. Nagata, S. Nakamura, M. Naruse, T. Namikawa, K. Natsume, T. Nishibori, K. Nishijo, H. Nishino, A. Noda, T. Noguchi, H. Ogawa, S. Oguri, I. Ohta, N. Okada, C. Otani, P. Richards, S. Sakai, N. Sato, Y. Sato, Y. Segawa, Y. Sekimoto, K. Shinozaki, H. Sugita, A. Suzuki, T. Suzuki, O. Tajima, S. Takada, S. Takakura, Y. Takei, T. Tomaru, Y. Uzawa, T. Wada, H. Watanabe, Y. Yamada, H. Yamaguchi, N. Yamasaki, M. Yoshida, T. Yoshida, K. Yotsumoto KEYWORDS: Telescopes, Mirrors, Sun, Polarization, Sensors, Calibration, Satellites, Physics, Antennas, Microwave radiation We present the mission design of LiteBIRD, a next generation satellite for the study of B-mode polarization and inflation from cosmic microwave background radiation (CMB) detection. The science goal of LiteBIRD is to measure the CMB polarization with the sensitivity of δr = 0.001, and this allows testing the major single-field slow-roll inflation models experimentally. The LiteBIRD instrumental design is purely driven to achieve this goal. At the earlier stage of the mission design, several key instrumental specifications, e.g. observing band, optical system, scan strategy, and orbit, need to be defined in order to proceed with the rest of the detailed design. We have gone through the feasibility studies for these items in order to understand the tradeoffs between the requirements from the science goal and the compatibilities with a satellite bus system. We describe the overview of LiteBIRD and discuss the tradeoffs among the choices of scientific instrumental specifications and strategies. The first round of feasibility studies will be completed by the end of year 2014 to be ready for the mission definition review and the target launch date is in the early 2020s. Development and characterization of the readout system for POLARBEAR-2 D.
Barron, P. A. Ade, Y. Akiba, C. Aleman, K. Arnold, M. Atlas, A. Bender, J. Borrill, S. Chapman, Y. Chinone, A. Cukierman, M. Dobbs, T. Elleflot, J. Errard, G. Fabbian, G. Feng, A. Gilbert, N. Halverson, M. Hasegawa, K. Hattori, M. Hazumi, W. Holzapfel, Y. Hori, Y. Inoue, G. Jaehnig, N. Katayama, B. Keating, Z. Kermish, R. Keskitalo, T. Kisner, M. Le Jeune, A. Lee, F. Matsuda, T. Matsumura, H. Morii, M. Myers, M. Navroli, H. Nishino, T. Okamura, J. Peloton, G. Rebeiz, C. Reichardt, P. Richards, C. Ross, M. Sholl, P. Siritanasak, G. Smecher, N. Stebor, B. Steinbach, R. Stompor, A. Suzuki, J. Suzuki, S. Takada, T. Takakura, T. Tomaru, B. Wilson, H. Yamaguchi, O. Zahn KEYWORDS: Bolometers, Telescopes, Capacitors, Polarization, Sensors, Physics, Receivers, Multiplexing, Picosecond phenomena, Inductance POLARBEAR-2 is a next-generation receiver for precision measurements of polarization of the cosmic microwave background, scheduled to deploy in 2015. It will feature a large focal plane, cooled to 250 milliKelvin, with 7,588 polarization-sensitive antenna-coupled transition edge sensor bolometers, read out with frequency domain multiplexing with 32 bolometers on a single SQUID amplifier. We will present results from testing and characterization of new readout components, integrating these components into a scaled-down readout system for validation of the design and technology. Proceedings Article | 5 October 2012 POLARBEAR-2 optical and polarimeter designs Tomotake Matsumura, Peter Ade, Kam Arnold, Darcy Barron, Julian Borrill, Scott Chapman, Yuji Chinone, Matt Dobbs, Josquin Errard, Giulio Fabbian, Adnan Ghribi, William Grainger, Nils Halverson, Masaya Hasegawa, Kaori Hattori, Masashi Hazumi, William Holzapfel, Yuki Inoue, Sou Ishii, Yuta Kaneko, Brian Keating, Zigmund Kermish, Nobuhiro Kimura, Ted Kisner, William Kranz, Adrian Lee, Frederick Matsuda, Hideki Morii, Michael Myers, Haruki Nishino, Takahiro Okamura, Erin Quealy, Christian Reichardt, Paul Richards, Darin Rosen, Colin Ross, Akie Shimizu, Michael Sholl, Praween Siritanasak, Peter Smith, Nathan Stebor, Radek Stompor, Aritoki Suzuki, Jun-ichi Suzuki, Suguru Takada, Ken-ichi Tanaka, Takayuki Tomaru, Oliver Zahn Proc. SPIE. 8452, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy VI KEYWORDS: Thermography, Telescopes, Optical design, Modulation, Polarization, Sensors, Receivers, Polarimetry, Space telescopes, Microwave radiation POLARBEAR-2 is a ground based cosmic microwave background (CMB) radiation experiment observing from Atacama, Chile. The science goals of POLARBEAR-2 are to measure the CMB polarization signals originating from the inflationary gravity-wave background and weak gravitational lensing. In order to achieve these science goals, POLARBEAR-2 employs 7588 polarization sensitive transition edge sensor bolometers at observing frequencies of 95 and 150 GHz with 5.5 and 3.5 arcmin beam width, respectively. The telescope is the off-axis Gregorian, Huan Tran Telescope, on which the POLARBEAR-1 receiver is currently mounted. The polarimetry is based on modulation of the polarized signal using a rotating half-wave plate and the rotation of the sky. We present the developments of the optical and polarimeter designs including the cryogenically cooled refractive optics that achieve the overall 4 degrees field-of-view, the thermal filter design, the broadband anti-reflection coating, and the rotating half-wave plate.
Proceedings Article | 27 September 2012 The bolometric focal plane array of the POLARBEAR CMB experiment K. Arnold, P. A. Ade, A. Anthony, D. Barron, D. Boettger, J. Borrill, S. Chapman, Y. Chinone, M. Dobbs, J. Errard, G. Fabbian, D. Flanigan, G. Fuller, A. Ghribi, W. Grainger, N. Halverson, M. Hasegawa, K. Hattori, M. Hazumi, W. Holzapfel, J. Howard, P. Hyland, A. Jaffe, B. Keating, Z. Kermish, T. Kisner, M. Le Jeune, A. Lee, E. Linder, M. Lungu, F. Matsuda, T. Matsumura, N. Miller, X. Meng, H. Morii, S. Moyerman, M. Myers, H. Nishino, H. Paar, E. Quealy, C. Reichardt, P. Richards, C. Ross, A. Shimizu, C. Shimmin, M. Shimon, M. Sholl, P. Siritanasak, H. Spieler, N. Stebor, B. Steinbach, R. Stompor, A. Suzuki, T. Tomaru, C. Tucker, O. Zahn KEYWORDS: Staring arrays, Bolometers, Telescopes, Sensors, Dielectrics, Silicon, Antennas, Niobium, Reactive ion etching, Semiconducting wafers The POLARBEAR Cosmic Microwave Background (CMB) polarization experiment is currently observing from the Atacama Desert in Northern Chile. It will characterize the expected B-mode polarization due to gravitational lensing of the CMB, and search for the possible B-mode signature of inflationary gravitational waves. Its 250 mK focal plane detector array consists of 1,274 polarization-sensitive antenna-coupled bolometers, each with an associated lithographed band-defining filter. Each detector's planar antenna structure is coupled to the telescope's optical system through a contacting dielectric lenslet, an architecture unique in current CMB experiments. We present the initial characterization of this focal plane. The POLARBEAR experiment Zigmund Kermish, Peter Ade, Aubra Anthony, Kam Arnold, Darcy Barron, David Boettger, Julian Borrill, Scott Chapman, Yuji Chinone, Matt Dobbs, Josquin Errard, Giulio Fabbian, Daniel Flanigan, George Fuller, Adnan Ghribi, Will Grainger, Nils Halverson, Masaya Hasegawa, Kaori Hattori, Masashi Hazumi, William Holzapfel, Jacob Howard, Peter Hyland, Andrew Jaffe, Brian Keating, Theodore Kisner, Adrian Lee, Maude Le Jeune, Eric Linder, Marius Lungu, Frederick Matsuda, Tomotake Matsumura, Xiaofan Meng, Nathan Miller, Hideki Morii, Stephanie Moyerman, Mike Myers, Haruki Nishino, Hans Paar, Erin Quealy, Christian Reichardt, Paul Richards, Colin Ross, Akie Shimizu, Meir Shimon, Chase Shimmin, Mike Sholl, Praween Siritanasak, Helmuth Spieler, Nathan Stebor, Bryan Steinbach, Radek Stompor, Aritoki Suzuki, Takayuki Tomaru, Carole Tucker, Oliver Zahn KEYWORDS: Bolometers, Telescopes, Optical filters, Polarization, Sensors, Calibration, Receivers, Semiconducting wafers, Signal detection, Cryogenics We present the design and characterization of the POLARBEAR experiment. POLARBEAR will measure the polarization of the cosmic microwave background (CMB) on angular scales ranging from the experiment's 3.5' beam size to several degrees. The experiment utilizes a unique focal plane of 1,274 antenna-coupled, polarization sensitive TES bolometers cooled to 250 milliKelvin. Employing this focal plane along with stringent control over systematic errors, POLARBEAR has the sensitivity to detect the expected small scale B-mode signal due to gravitational lensing and search for the large scale B-mode signal from inflationary gravitational waves. POLARBEAR was assembled for an engineering run in the Inyo Mountains of California in 2010 and was deployed in late 2011 to the Atacama Desert in Chile. An overview of the instrument is presented along with characterization results from observations in Chile. 
The POLARBEAR-2 experiment Takayuki Tomaru, Masashi Hazumi, Adrian Lee, Peter Ade, Kam Arnold, Darcy Barron, Julian Borrill, Scott Chapman, Yuji Chinone, Matt Dobbs, Josquin Errard, Giulio Fabbian, Adnan Ghribi, William Grainger, Nils Halverson, Masaya Hasegawa, Kaori Hattori, William Holzapfel, Yuki Inoue, Sou Ishii, Yuta Kaneko, Brian Keating, Zigmund Kermish, Nobuhiro Kimura, Ted Kisner, William Kranz, Frederick Matsuda, Tomotake Matsumura, Hideki Morii, Michael Myers, Haruki Nishino, Takahiro Okamura, Erin Quealy, Christian Reichardt, Paul Richards, Darin Rosen, Colin Ross, Akie Shimizu, Michael Sholl, Praween Siritanasak, Peter Smith, Nathan Stebor, Radek Stompor, Aritoki Suzuki, Jun-ichi Suzuki, Suguru Takada, Ken-ichi Tanaka, Oliver Zahn KEYWORDS: Bolometers, Telescopes, Optical filters, Mirrors, Polarization, Lenses, Sensors, Superconductors, Receivers, Semiconducting wafers POLARBEAR-2 (PB-2) is a cosmic microwave background (CMB) polarization experiment observing at the Atacama plateau in Chile. PB-2 is designed to improve the sensitivity of the CMB B-mode polarization measurement by upgrading the current POLARBEAR-1 receiver that is mounted on the Huan Tran telescope. The improvements in PB-2 include: i) dual band observations at 95 GHz and 150 GHz in each pixel using a sinuous antenna, ii) an increase in the total number of detectors to 7588 Al-Ti bilayer transition-edge sensor (TES) bolometers, and iii) a bolometer bath temperature of 100 mK in the second phase of observation (300 mK in the first phase). With the expected sensitivity of 5.7 μK√s, PB-2 is sensitive to a tensor-to-scalar ratio, r, of 0.01 at 95% confidence level (CL) and constrains the sum of neutrino masses to 90 meV with PB-2 alone and 40 meV by combining PB-2 and Planck at 68% CL. Deployment is scheduled for 2014. LiteBIRD: a small satellite for the study of B-mode polarization and inflation from cosmic background radiation detection M. Hazumi, J. Borrill, Y. Chinone, M. Dobbs, H. Fuke, A. Ghribi, M. Hasegawa, K. Hattori, M. Hattori, W. Holzapfel, Y. Inoue, K. Ishidoshiro, H. Ishino, K. Karatsu, N. Katayama, I. Kawano, A. Kibayashi, Y. Kibe, N. Kimura, K. Koga, E. Komatsu, A. Lee, H. Matsuhara, T. Matsumura, S. Mima, K. Mitsuda, H. Morii, S. Murayama, M. Nagai, R. Nagata, S. Nakamura, K. Natsume, H. Nishino, A. Noda, T. Noguchi, I. Ohta, C. Otani, P. Richards, S. Sakai, N. Sato, Y. Sato, Y. Sekimoto, A. Shimizu, K. Shinozaki, H. Sugita, A. Suzuki, T. Suzuki, O. Tajima, S. Takada, Y. Takagi, Y. Takei, T. Tomaru, Y. Uzawa, H. Watanabe, N. Yamasaki, M. Yoshida, T. Yoshida, K. Yotsumoto KEYWORDS: Bolometers, Mirrors, Polarization, Satellites, X-rays, Superconductors, Physics, Multiplexing, Microwave radiation, Semiconducting wafers LiteBIRD [Lite (Light) satellite for the studies of B-mode polarization and Inflation from cosmic background Radiation Detection] is a small satellite to map the polarization of the cosmic microwave background (CMB) radiation over the full sky at large angular scales with unprecedented precision. Cosmological inflation, which is the leading hypothesis to resolve the problems in the Big Bang theory, predicts that primordial gravitational waves were created during the inflationary era. Measurements of polarization of the CMB radiation are known as the best probe to detect the primordial gravitational waves. The LiteBIRD working group is authorized by the Japanese Steering Committee for Space Science (SCSS) and is supported by JAXA. It has more than 50 members from Japan, USA and Canada.
The scientific objective of LiteBIRD is to test all the representative inflation models that satisfy single-field slow-roll conditions and lie in the large-field regime. To this end, the requirement on the precision of the tensor-to-scalar ratio, r, at LiteBIRD is equal to or less than 0.001. Our baseline design adopts an array of multi-chroic superconducting polarimeters that are read out with high multiplexing factors in the frequency domain for a compact focal plane. The required sensitivity of 1.8 μK·arcmin is achieved with 2000 TES bolometers at 100 mK. The cryogenic system is based on the Stirling/JT technology developed for SPICA, and the continuous ADR system shares the design with future X-ray satellites. The POLARBEAR CMB polarization experiment K. Arnold, P. Ade, A. E. Anthony, F. Aubin, D. Boettger, J. Borrill, C. Cantalupo, M. A. Dobbs, J. Errard, D. Flanigan, A. Ghribi, N. Halverson, M. Hazumi, W. Holzapfel, J. Howard, P. Hyland, A. Jaffe, B. Keating, T. Kisner, Z. Kermish, A. Lee, E. Linder, M. Lungu, T. Matsumura, N. Miller, X. Meng, M. Myers, H. Nishino, R. O'Brient, D. O'Dea, H. Paar, C. Reichardt, I. Schanning, A. Shimizu, C. Shimmin, M. Shimon, H. Spieler, B. Steinbach, R. Stompor, A. Suzuki, T. Tomaru, H. T. Tran, C. Tucker, E. Quealy, P. Richards, O. Zahn Proc. SPIE. 7741, Millimeter, Submillimeter, and Far-Infrared Detectors and Instrumentation for Astronomy V KEYWORDS: Bolometers, Telescopes, Polarization, Lenses, Sensors, Photons, Dielectrics, Receivers, Antennas, Cryogenics POLARBEAR is a Cosmic Microwave Background (CMB) polarization experiment that will search for evidence of inflationary gravitational waves and gravitational lensing in the polarization of the CMB. This proceeding presents an overview of the design of the instrument and the architecture of the focal plane, and shows some of the recent tests of detector performance and early data from the ongoing engineering run.
CommonCrawl
Consider the curve $y= x-x^7$. How do I find the slope of the tangent line to the curve at the point $(1, 0)$? I have no idea how to solve this question. a) Find the slope of the tangent line to the curve at the point $(1, 0)$. The answer is -6. Yet, how did they arrive at this answer using this formula: $$m=\lim_{h\to0}\frac{f(a+h)-f(a)}h$$ Now the steps are shown as follows: (a) Using Definition 1: $m=\lim_{x\to a}\frac{f(x)-f(a)}{x-a}$ with $f(x)=x-x^7$ and $P(1,0)$, $$\begin{align} m&=\lim_{x\to1}\frac{f(x)-0}{x-1}=\lim_{x\to1}\frac{x-x^7}{x-1}=\lim_{x\to1}\frac{x(1-x^6)}{x-1}\\&=\lim_{x\to1}\frac{x(1-x)(1+x+x^2+x^3+x^4+x^5)}{x-1}\\&=\lim_{x\to1}\left[-x\left(1+x+x^2+x^3+x^4+x^5\right)\right]=-1(6)=-6. \end{align}$$ (b) An equation of the tangent line is $$\begin{align} &y-f(a)=f'(a)(x-a)\\\implies&y-f(1)=f'(1)(x-1)\\\implies&y-0=-6(x-1),\text{ or }y=-6x+6 \end{align}$$ …but I don't understand them. Can someone explain these simply? Cetshwayo $\begingroup$ i think the answer should be -6. $\endgroup$ – avz2611 Feb 22 '15 at 16:21 $\begingroup$ Can you be more clear about exactly which of these steps you don't understand? $\endgroup$ – Gregory Grant Feb 22 '15 at 16:21 $\begingroup$ The second line, 4th equation. How was that expanded? $\endgroup$ – Cetshwayo Feb 22 '15 at 16:27 $\begingroup$ $1-x^6$ was factored. You could divide $-x^6+1$ by $-x+1$ to find that other factor if you don't know the formula, that is. $\endgroup$ – randomgirl Feb 22 '15 at 16:31 From the differential coefficient (derivative) viewpoint, the slope is demonstrated to be $-6$. If you see a different result given, such as $6$, you must judge which is the stronger evidence: a simple printer's devil/typo, or the calculation of the derivative from basics? Such self-judgment is important all the time in maths. Narasimham Take the derivative of $f(x)=x-x^7$, which is $f'(x)=1-7x^6$, and evaluate it at $x_0=1$: you'll obtain $f'(1)=1-7=-6$, which is the slope of the tangent line to the curve $y=x-x^7$ at the point $(1,0)$. I think there must be a typo in your book. $\begingroup$ They never explained within the book how to solve a problem such as this. What is the algorithm (strategy) called for solving problems such as this? Is it taking the derivative? $\endgroup$ – Cetshwayo Feb 22 '15 at 17:36 $\begingroup$ In general it's dangerous to speak about an "algorithm" in mathematics. Think of integration: there isn't a unique way to solve integrals, you have to gain experience yourself. However, in this case you have to know some basic matters in calculus. The question you asked was based on the fact that the derivative of a function $f$ at a given point $x_0$ is the slope (angular coefficient) of the tangent line to the graph of the curve defined by your function $f$ at the given point $(x_0,f(x_0))$. $\endgroup$ – Joe Feb 22 '15 at 18:16
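As a quick sanity check of the value $-6$ (an editorial illustration, not part of the original thread), both the limit definition and the derivative can be evaluated with a computer algebra system:

```python
import sympy as sp

x = sp.symbols('x')
f = x - x**7

# Slope via the limit definition m = lim_{x -> 1} (f(x) - f(1)) / (x - 1)
m_limit = sp.limit((f - f.subs(x, 1)) / (x - 1), x, 1)

# Slope via the derivative f'(x) = 1 - 7*x**6 evaluated at x = 1
m_deriv = sp.diff(f, x).subs(x, 1)

print(m_limit, m_deriv)  # both print -6, so the tangent line is y = -6*(x - 1) = -6*x + 6
```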
CommonCrawl
How to find the distance along a sphere from an angle? Imagine there are two points on planet Earth and a light is shone from one to the other by reflecting off an object 500km up (think of this as a mirror oriented parallel to the surface right below it). Let us assume the Earth is a perfect sphere. As the distance between the points increases, the angle that the light is received at increases with a limit of 90 degrees which corresponds to a tangent of the sphere. I would like to know how far apart the two points along the sphere are as a function of this angle. To try to solve it I first notice that the angle of incidence equals the angle of reflection. So we can draw an isosceles triangle with the reflector at the top and the two points as the other two vertices. The height of the triangle is a function of how far apart the two points are. But now I am stuck. The radius of the Earth is 6371km. geometry trigonometry spheres fomin fominfomin $\begingroup$ Not one hundred percent sure I understand the problem.. But a figure may help $\endgroup$ – Abdullah O. Alfaqir Jun 14 '20 at 7:53 $\begingroup$ Can you just think of the sphere as being of radius $6371+r$ to simplify things a little? $\endgroup$ – Anush Jun 14 '20 at 7:55 $\begingroup$ @Anush: That's kilometers, probably; you need to multiply by $1000$, perhaps? $\endgroup$ – Brian Tung Jun 14 '20 at 7:58 $\begingroup$ The reverse problem is easier: given the distance $D$ between the two 'observers', half distance angle is $\alpha=\frac{D}{2R}$. Then half of the angle at the reflector is given by $\tan \delta = \frac{R\sin\alpha}{R(1-\cos\alpha)+r}$. $R$ is Earth radius, while $r$ is the altitude of the reflector. $\endgroup$ – N74 Jun 16 '20 at 19:36 $\begingroup$ This object that's 500km up, is it directly above one of the surface objects? Above the point on the surface halfway between them? Somewhere else? $\endgroup$ – user307169 Jun 17 '20 at 16:21 I interpreted the problem as asking for a function that, for general $R$ and $h$, gives the distance between the two points for any reflection angle (i.e., not only for the case in which the beams are tangent), and allows to calculate the maximal reflection angle and the maximal distance. This function can then be used to calculate the maximal angle and the maximal distance in the specific scenario provided in the OP, with $R=6371$ Km and $h=500$ Km. Let us consider the Earth circumference as represented by the circle $x^2+y^2=R^2$, with the center in the origin and where $R=6371$. We can place our object in $A(0,R+h)$ on the $y$-axis, representing a point that is $h$ Km up the Earth surface. Now let us draw two lines passing through $A$, symmetric with respect to the $y$-axis and intersecting the circumference. For each line, let us consider the intersection point that is nearer to the $y$-axis. Let us call the two new points $B$ (in the first quadrant) and $C$ (in the second quadrant). These represent the two points on Earth surface. Due to the symmetry of the construction, we can continue by analyzing only one of these two points, e.g. $C$. The equation of the line containing $AC$ can be written as $y=sx+R+h$, where $s$ is its positive slope. To determine where this line crosses the circumference, we can set $$sx+R+h=\sqrt{R^2-x^2}$$ whose solutions are $$x=\frac{-sR-sh \pm \sqrt{\Delta}}{s^2 + 1}$$ where $\Delta=s^2 R^2 - 2h R - h^2$. As stated above, we are interested in the less negative solution for $x$, as it is that nearer to the $y$-axis. 
So we get that the $x$-coordinate of $C$ is $$X_C=\frac{-sR-sh + \sqrt{\Delta}}{s^2 + 1}$$ and the $y$-coordinate is $$Y_C=\frac{s(-sR-sh + \sqrt{\Delta})}{s^2 + 1}+R+h$$ As a result, the equation $y=tx$ of the line $OC$ has slope $$t=\frac{Y_C}{X_C}x=s-\frac{(s^2+1)(R+h)}{sh+sR-\sqrt{\Delta}}$$ Now setting $\angle{BAC}=\alpha$ and $\angle{BOC}=\beta$, we have $s=\cot(\alpha/2)$ and $t=-\cot(\beta/2)$. Thus, we get $$ \cot(\beta/2) =\frac{(\cot^2(\alpha/2)+1)(R+h)}{\cot(\alpha/2)(R+h)-\sqrt{\Delta}} - \cot(\alpha/2)$$ $$ \beta =2\cot^{-1}\left[\frac{(\cot^2(\alpha/2)+1)(R+h)}{\cot(\alpha/2)(R+h)-\sqrt{\Delta}} - \cot(\alpha/2)\right] $$ So the length of the arc $D$ corresponding to $ \beta$, which is the distance along the spherical surface asked in the OP, is $$ D =2R\cot^{-1}\left[\frac{(\cot^2(\alpha/2)+1)(R+h)}{\cot(\alpha/2)(R+h)-\sqrt{\Delta}} - \cot(\alpha/2)\right] $$ where $\Delta=\cot^2(\alpha/2)R^2 - 2h R - h^2$. The last equation can be simplified as $$ D =2R\cot^{-1}\left[\frac{(R+h)+\sqrt{\Delta}}{\cot(\alpha/2)(R+h)-\sqrt{\Delta}} \right] $$ For example, for $\alpha=\pi/2$ and $h= (\sqrt{2}-1)R$, as expected we have $\Delta=0$ (this is the situation where $\alpha$ is a right angle and the light beams are tangent to the surface). In this case, $\beta$ is also a right angle and $D=\pi/2\,R$. Accordingly the formula above gives this result, as shown by WA here. For any value of $R$ and $h$, the maximal angle $\alpha_{max}$ and the maximal distance $D_{max}$ (i.e., those obtained with the beams tangent to the surface) can be determined by considering the case in which $\Delta=0$. This case occurs when $\cot^2(\alpha/2)R^2 - 2h R - h^2=0$. Solving for $\alpha$ in the range $0 \leq \alpha \leq \pi$ we get $$\alpha_{max} = 2 \cot^{-1}\left(\frac{\sqrt{h(h+2R)}}{R}\right)$$ Interestingly, when $\Delta=0$, the formula for the distance is considerably simplified, and by few calculations reduces to $$ D_{max} =2R\cot^{-1}\left[\tan(\alpha/2)\right] $$ As shown here, in the specific scenario described in the OP, substituting $R=6371$ and $h=500$, we get $$\alpha= 2 \cot^{-1}\left(\frac{10 \sqrt{66210}}{6371}\right) \approx 2.3739 \,\,\text{radians}$$ which corresponds to about $136$ degrees. Here is the plot of the distance $D$ (in Km) as a function of $\alpha$ (in radians) for $R=6371$ and $h=500$, as obtained by WA. The plot confirms the maximal real value of $\alpha$, concordant with the predicted value of $2.3739$. The blue and red lines indicate the real and imaginary part, respectively. Lastly, from the simplified formula for the maximal distance, taking $R=6371$ and $\alpha=2.3739$, we get $$D_{max}\approx 4891 \, \text{Km}$$ AnatolyAnatoly $\begingroup$ Please note that the angle $\angle$ given in this answer is not the one specified in the question. $\endgroup$ – fomin Jun 24 '20 at 5:14 We could make a triangle with the center of the sphere, one of the observer and the mirror. From this triangle, we know the angle at the observer is a right angle since the radius is perpendicular to the tangent. the side from the center to the observer mesure $6371$ km (radius). the side from the center to the mirror mesure $6871$ km (radius $+$ height of the mirror). We are able to find the angle at the center of the sphere. $$\cos \theta=\frac{6371}{6871}$$ $$\theta=0.383848\ \text{rad}$$ The same triangle could be built with the second observer. 
So the angular distance between the two observers is $$2\theta=0.767696\ \text{rad}$$ We multiply this value by the radius to get the distance between the two observers. $$\text{distance}=2\theta\times6371=4891\ \text{km}$$ More generally, if an observer on a sphere of radius $r$ uses a mirror placed at a height $h$ above the surface, the furthest distance that he could reach is given by $$\text{distance}=2\times r\times\arccos\left(\frac{r}{r+h}\right)$$ Alain Remillard I understood the problem as follows (see the picture). We have $OA=OA'=R$ and $OM=R+H$, where $R=6371$ km is the radius of Earth and $H=500$ km is the height of the reflector over the Earth's surface. The point $P$ on the tangent is chosen to provide $AP\parallel OM$. Given the reflection angle $\angle PMA=\alpha$, find the distance $d$ between the points $A$ and $A'$ along the sphere. But $d=2R\angle AOM=2R\beta$. We have $PM=AM\cos\alpha=OA\sin\beta=R\sin\beta$. By the law of cosines $$AM^2=OA^2+OM^2-2OA\cdot OM\cos\beta=R^2+(R+H)^2-2R(R+H) \cos\beta,$$ so that $$(R^2+(R+H)^2-2R(R+H) \cos\beta) \cos^2\alpha= R^2\sin^2\beta.$$ Putting $h=H/R$, we have $$(1+(1+h)^2-2(1+h) \cos\beta)\cos^2\alpha=\sin^2\beta=1-\cos^2\beta.$$ This is a quadratic equation for $\cos\beta$, whose solution gives $$\cos\beta=(1+h)\cos^2\alpha-\sqrt{(h^2+2h+1)\cos^4\alpha-(h^2+2h+2)\cos^2\alpha+1}.$$ Alex Ravsky Consider the triangle formed by one of the points, the mirror and the centre of the Earth. If $\alpha$ is "the angle the light is received" then $\alpha-\beta$ is the angle at the mirror in the triangle, where $\beta$ is the angle at the centre of the Earth in the same triangle. If $R$ is the radius of the Earth and $d$ is the distance of the mirror from the surface of the Earth $(500~\text{km})$, by the sine law we have: $$ R:\sin(\alpha-\beta)=(R+d):\sin\alpha. $$ We can expand $\sin(\alpha-\beta)$ and solve for $\sin\beta$: $$ \sin\beta={R\sin\alpha\over R+d} \left(-\cos\alpha+\sqrt{{(R+d)^2\over R^2}-\sin^2\alpha}\right). $$ The distance between the two points is then $2R\sin\beta$ (straight line), or $2R\beta$ along the surface of the Earth ($\beta$ measured in radians). Intelligenti pauca Analytical geometry calculation. You want the intersection between (meridians of) the sphere $$ z^2+r^2=R^2 \tag1$$ and a ray (generator of the cone of semi-vertical angle $\theta$) from a celestial mirror at $O$: $$ r \cot \theta-z = R+h.\tag2$$ Eliminating $z$ between (1) and (2), and writing $a=R+h$, $$ r^2( 1+\cot^2 \theta) -2 a r \frac{\cos \theta}{\sin \theta} + (a^2-R^2)=0, \tag3$$ that is, after multiplying through by $\sin^2\theta$, $$ r^2-2 a r \cos \theta \sin \theta+ h (2R+h)\sin^2 \theta =0.$$ The quadratic has two roots $$ \frac{r}{\sin \theta}= a \cos \theta \pm \sqrt{ a^2 \cos^2 \theta -h(2R+h)}. \tag4$$ The $-$ sign gives the required nearby spherical cap, the $+$ sign the intersection if the ray is produced to pierce the far hemisphere. By the geometry of right triangles we can find when the ray is tangential to the sphere: $$ (z_m,r_m)= \left(\dfrac{h(2R+h)}{R+h},\ \dfrac{R\sqrt{h(2R+h)}}{R+h}\right)\tag5$$ is the required relation. If $OM$ is along the polar axis connecting the north/south poles, the latitude and co-latitude of the required horizon circle are $$ \cos^{-1}\frac{r_m}{R}, \quad \sin^{-1}\frac{r_m}{R}.\tag6$$ Narasimham
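As a numerical cross-check of the formulas above (an editorial illustration, not part of the original thread), the maximal reflection angle and the maximal ground distance can be evaluated directly for $R=6371$ km and $h=500$ km:

```python
import math

R, h = 6371.0, 500.0  # Earth radius and mirror altitude, in km

# Maximal ground distance when the rays are tangent to the sphere:
# from the right-triangle answer, cos(theta) = R / (R + h) and distance = 2*R*theta
d_max = 2 * R * math.acos(R / (R + h))

# Maximal reflection angle alpha_max = 2 * arccot( sqrt(h*(h + 2*R)) / R )
alpha_max = 2 * math.atan(R / math.sqrt(h * (h + 2 * R)))

# Same maximal distance via the simplified formula D_max = 2*R*arccot(tan(alpha/2))
d_max_bis = 2 * R * math.atan(1.0 / math.tan(alpha_max / 2))

print(round(d_max), round(d_max_bis), round(alpha_max, 4))
# -> 4891 4891 2.3739, i.e. about 4891 km and a maximal angle of about 136 degrees
```

Both routes agree with the values of roughly 4891 km and 2.3739 rad obtained in the answers above.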
CommonCrawl
Taylor Series beyond 2-3 terms? im looking to understand the tangent taylor series, but im struggling to understand how to use long division to divide one series (sine) into the other (cosine). I also can't find examples of the Tangent series much beyond X^5 (wikipedia and youtube videos both stop at the second or third term), which is not enough for me to see any pattern. (x^3/3 + 2x^5/15 tells me nothing). Wiki says Bernouli Numbers which i plan on studying next, but seriously, i could really use an example of tangent series out to 5-6 just to get a ballpark of what's going on before i start plug and pray. If someone can explain why the long division of the series spits out x^3/3 instead of x^3/3x^2, that would help too, because I took x^3/6 divided by x^2/2 and got 2x^3/6x^2, following the logic that 4/2 divided by 3/5 = 2/0.6 or 20/6. So I multiplied my top and bottom terms for the numerator, and my two middle terms for the denominator (4x5)/(2x3) = correct. But when i do that with terms in the taylor series I'm doing something wrong. does that first x from sine divided by that first 1 from cosine have anything to do with it? Completely lost. sequences-and-series taylor-expansion TristianTristian $\begingroup$ The power series coefficients for the tangent (expanded at the origin) are somewhat mysterious, and yes, the Bernoulli numbers are worth studying if you really want to know what they are and how to compute them. The process of dividing two power series can also produce a continued fraction, with a much more easily recognized pattern. $\endgroup$ – hardmath Jul 5 '18 at 21:03 $\begingroup$ An odd question: When you say 'understand', what particularly are you aiming to understand? If your interest is in calculation then there are much better ways of calculating tangent than its Taylor series; if your interest is in understanding the coefficients themselves, then I don't know that computing them past the first few terms will help particularly much with that understanding (but studying Bernoulli numbers will). What's your ultimate goal? $\endgroup$ – Steven Stadnicki Jul 5 '18 at 22:54 $\begingroup$ not sure this will even be seen, but note on the series long division below: in 5th row of the long division, when i derive (x^3/3 * x^6/720 = x^9/2160)'' i get (9 * 8 * x^7/2160) = x^7/30, NOT x^7/72. How did they get 72? I divided 2160 by 72 and got 30 $\endgroup$ – Tristian Jul 7 '18 at 4:31 $$\tan(x) = x+{\frac{1}{3}}{x}^{3}+{\frac{2}{15}}{x}^{5}+{\frac{17}{315}}{x}^{7}+ {\frac{62}{2835}}{x}^{9}+{\frac{1382}{155925}}{x}^{11}+{\frac{21844}{ 6081075}}{x}^{13}+\ldots$$ EDIT: Long division: $$ \matrix{& & x &+ \frac{x^3}{3} &+ \frac{2 x^5}{15} &+ \frac{17 x^7}{315}&+ \ldots\cr& &---&---&---&---&--- \cr 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720} + \ldots & | & x &- \frac{x^3}{6} &+ \frac{x^5}{120} &- \frac{x^7}{5040} &+ \ldots\cr & & x &- \frac{x^3}{2} &+ \frac{x^5}{24} &- \frac{x^7}{720} &+ \ldots\cr & & ---&---&---&---&---\cr & & &\frac{x^3}{3} &- \frac{x^5}{30} &+ \frac{x^7}{840} &+ \ldots\cr & & & \frac{x^3}{3} & - \frac{x^5}{6} & + \frac{x^7}{72} &+\ldots\cr & & & --- & --- & --- & ---\cr & & & & \frac{2 x^5}{15} & - \frac{4 x^7}{315} & +\ldots\cr & & & & \frac{2 x^5}{15} & - \frac{2 x^7}{30} & +\ldots\cr & & & & --- & --- & ---\cr & & & & & \frac{17 x^7}{315} & + \ldots }$$ Robert IsraelRobert Israel $\begingroup$ wow!! how did you get those? and how do I get those? $\endgroup$ – Tristian Jul 5 '18 at 20:03 $\begingroup$ This does not provide an answer to the question. 
To critique or request clarification from an author, leave a comment below their post. - From Review $\endgroup$ – max_zorn Jul 5 '18 at 20:37 $\begingroup$ @max_zorn: With respect, it actually does answer the Question. OP asks about the Taylor series for "the tangent" and says after looking at Wikipedia's mention of Bernoulli numbers that (s)he "plan(s) on studying next, but seriously, [I] could really use an example of tangent series out to 5-6 [terms]". $\endgroup$ – hardmath Jul 5 '18 at 20:43 $\begingroup$ @hardmath: I interpret this as if the OP wants to see details on the division of the two power series. How does dumping a solution show Tristian anything? How were the coefficients obtained? By Wolfram Alpha or some other tool or by some paper-and-pencil work? $\endgroup$ – max_zorn Jul 5 '18 at 20:50 $\begingroup$ @max_zorn: I like your idea of showing the OP how to form the ratio of two power series, because this matches what they seem to have tried. But the OP seems overwhelmed by the details of what such a computation entails and specifically asks for "a solution" (I assume so that they can know what the answer should look like). Certainly there is an opportunity to write up something further on this Question, if you are so inclined. $\endgroup$ – hardmath Jul 5 '18 at 20:57 You might find it conceptually easier to set up the identity of power series and compare the first few coefficients, and solve. This is algebraically equivalent to long division, though the order of some of the arithmetic operations is somewhat rearranged. Write the desired Taylor series at $x = 0$ as $$\tan x \sim a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots .$$ Since $\tan$ is odd, all of the coefficients of the even terms vanish, i.e., $0 = a_0 = a_2 = a_4 = \cdots$. (This insight isn't necessary---we'd recover this fact soon anyway---but it does make the next computation easier.) Replacing the functions in $$\cos x \tan x = \sin x$$ with their Taylor series gives $$\left(1 - \frac{1}{2!} x^2 + \frac{1}{4!} x^4 - \cdots\right)(a_1 x + a_3 x^3 + a_5 x^5 \cdots) = x - \frac{1}{3!} x^3 + \frac{1}{5!} x^5 - \cdots .$$ Now, comparing the coefficients of the terms $x, x^3, x^5, \ldots$, on both sides respectively gives $$\begin{align*} a_1 &= 1 \\ a_3 - \tfrac{1}{2} a_1 &= -\tfrac{1}{6} \\ a_5 - \tfrac{1}{2} a_3 + \tfrac{1}{24} a_1 &= \tfrac{1}{120} \\ & \,\,\vdots \end{align*}$$ and successively solving and substituting gives $$\tan x \sim x + \tfrac{1}{3} x^3 + \tfrac{2}{15} x^5 + \cdots .$$ Of course, it's straightforward (if eventually tedious) to compute as many terms as you want this way. An efficient proof of the formula you mentioned involving the Bernoulli numbers for the general coefficient is given in this answer. TravisTravis My impression is that it's kind of backwards, in a numerical sense, to think about the coefficients of the $\tan$ series in terms of the Bernoulli numbers because it's simple and numerically stable to calculate the $\tan$ coefficients directly and in fact provides a reasonable method for computing the Bernoulli numbers given the formula in @RobJohn's post. 
Since $y(x)=\tan x$ is an odd function of $x$ analytic at $x=0$, $$y=\sum_{n=0}^{\infty}a_nx^{2n+1}$$ Then $y^{\prime}=\sec^2x=\tan^2x+1=y^2+1$ so $$\sum_{n=0}^{\infty}(2n+1)a_nx^{2n}=1+\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}a_ia_jx^{2i+2j+2}=1+\sum_{n=1}^{\infty}\left(\sum_{i=0}^{n-1}a_ia_{n-i-1}\right)x^{2n}$$ The constant term reads $$a_0=1$$ The terms in $x^{4n}$ are $$a_{2n}=\frac1{4n+1}\sum_{i=0}^{2n-1}a_ia_{2n-i-1}=\frac2{4n+1}\sum_{i=0}^{n-1}a_ia_{2n-i-1}$$ While the terms in $x^{4n+2}$ are $$a_{2n+1}=\frac1{4n+3}\sum_{i=0}^{2n}a_ia_{2n-i}=\frac1{4n+3}\left(a_n^2+2\sum_{i=0}^{n-1}a_ia_{2n-i}\right)$$ The numerical stability arises because all terms in the formulas for $a_n$ have the same sign. An alternative and straightforward method is: $$\begin{align}y&=\tan x \ (=0)\\ y'&=\frac{1}{\cos^2 x}=1+\tan^2x=1+y^2 \ (=1)\\ y''&=2yy'=2y(1+y^2)=2y+2y^3 \ (=0) \\ y'''&=2y'+6y^2y'=2+8y^2+6y^4 \ (=2)\\ y^{(4)}&=16yy'+24y^3y'=16y+40y^3+24y^5 \ (=0)\\ y^{(5)}&=16y'+120y^2y'+120y^4y'=16+136y^2+240y^4+120y^6 \ (=16)\\ y^{(6)}&=272yy'+960y^3y'+720y^5y'=272y+1232y^3+1680y^5+720y^7 \ (=0)\\ y^{(7)}&=272y'+3696y^2y'+8400y^4y'+5040y^6y'=272+3968y^2+\cdots \ (=272)\end{align}$$ Hence: $$\begin{align}\tan x&=0+\frac{1}{1!}x+\frac{0}{2!}x^2+\frac{2}{3!}x^3+\frac{0}{4!}x^4+\frac{16}{5!}x^5+\frac{0}{6!}x^6+\frac{272}{7!}x^7+\cdots\\ &=x+\frac{1}{3}x^3+\frac{2}{15}x^5+\frac{17}{315}x^7+\cdots\end{align}$$ Note: You can continue as far as you want, though the computation gets tedious. WA shows the expansion to many more terms (press on "More terms" button). farruhota Write $\frac{\sin x}{x}=\frac{\tan x}{x}\cos x$ as a power series in $x^2$, with $\frac{\tan x}{x}=t_0+t_1 x^2+t_2 x^4+\cdots$. Equating coefficients of powers of $x^2$ one by one gives $1=t_0,\,-\frac{1}{6}=-\frac{t_0}{2}+t_1,\,\frac{1}{120}=\frac{t_0}{24}-\frac{t_1}{2}+t_2$ etc. Write down as many of those as you like. Thus $t_0=1,\,t_1=\frac{1}{3},\,t_2=\frac{2}{15}$ etc. J.G.
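To make the recurrence above concrete (an editorial illustration, not part of the original thread; the parity-split formulas are combined into the single relation $(2n+1)a_n=\sum_{i=0}^{n-1}a_ia_{n-1-i}$), the coefficients of $\tan x$ can be generated with exact arithmetic:

```python
from fractions import Fraction

def tan_coefficients(n_terms):
    """Coefficients a_n of tan x = sum_n a_n x^(2n+1), from (2n+1) a_n = sum_{i<n} a_i a_{n-1-i}."""
    a = [Fraction(1)]  # a_0 = 1
    for n in range(1, n_terms):
        s = sum(a[i] * a[n - 1 - i] for i in range(n))
        a.append(s / (2 * n + 1))
    return a

for n, c in enumerate(tan_coefficients(7)):
    print(f"a_{n} = {c}")
# a_0 = 1, a_1 = 1/3, a_2 = 2/15, a_3 = 17/315, a_4 = 62/2835,
# a_5 = 1382/155925, a_6 = 21844/6081075 -- matching the series quoted above
```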
CommonCrawl
WASABI: a dynamic iterative framework for gene regulatory network inference Arnaud Bonnaffoux ORCID: orcid.org/0000-0003-0472-07611,2,3, Ulysse Herbach1,2,4, Angélique Richard1, Anissa Guillemin1, Sandrine Gonin-Giraud1, Pierre-Alexis Gros3 & Olivier Gandrillon1,2 Inference of gene regulatory networks from gene expression data has been a long-standing and notoriously difficult task in systems biology. Recently, single-cell transcriptomic data have been massively used for gene regulatory network inference, with both successes and limitations. In the present work we propose an iterative algorithm called WASABI, dedicated to inferring a causal dynamical network from time-stamped single-cell data, which tackles some of the limitations associated with current approaches. We first introduce the concept of waves, which posits that the information provided by an external stimulus will affect genes one-by-one through a cascade, like waves spreading through a network. This concept allows us to infer the network one gene at a time, after genes have been ordered regarding their time of regulation. We then demonstrate the ability of WASABI to correctly infer small networks, which have been simulated in silico using a mechanistic model consisting of coupled piecewise-deterministic Markov processes for the proper description of gene expression at the single-cell level. We finally apply WASABI on in vitro generated data on an avian model of erythroid differentiation. The structure of the resulting gene regulatory network sheds a new light on the molecular mechanisms controlling this process. In particular, we find no evidence for hub genes and a much more distributed network structure than expected. Interestingly, we find that a majority of genes are under the direct control of the differentiation-inducing stimulus. Together, these results demonstrate WASABI versatility and ability to tackle some general gene regulatory networks inference issues. It is our hope that WASABI will prove useful in helping biologists to fully exploit the power of time-stamped single-cell data. It is widely accepted that the process of cell decision making results from the behavior of an underlying dynamic gene regulatory network (GRN) [1]. The GRN maintains a stable state but can also respond to external perturbations to rearrange the gene expression pattern in a new relevant stable state, such as during a differentiation process. Its identification has raised great expectations for practical applications in network medicine [2] like somatic cells [3–5] or cancer cells reprogramming [6, 7]. The inference of such GRNs has, however, been a long-standing and notoriously difficult task in systems biology. GRN inference was first based upon bulk data [8] using transcriptomics acquired through micro array or RNA sequencing (RNAseq) on populations of cells. Different strategies has been used for network inference including dynamic Bayesian networks [9, 10], boolean networks [11–13] and ordinary differential equations (ODE) [14] which can be coupled to Bayesian networks [15]. More recently, single-cell transcriptomic data, especially RNAseq [16], have been massively used for GRN inference (see [17, 18] for recent reviews). The arrival of those single-cell techniques led to question the fundamental limitations in the use of bulk data. Observations at the single-cell level demonstrated that any and every cell population is very heterogeneous [19–21]. 
Two different interpretations of the reasons behind single-cell heterogeneity led to two different research directions: 1. In the first view, this heterogeneity is nothing but a noise that blurs a fundamentally deterministic smooth process. This noise can have different origins, like technical noise ("dropouts") or temporal desynchronization as during a differentiation process. This view led to the re-use of the previous strategies and was at the basis of the reconstruction of a "pseudo-time" trajectory (reviewed in [22]). For example, SingleCellNet [23] and BoolTraineR [24] are based on boolean networks with preprocessing for cell clustering or pseudo-time reconstruction. Such asynchronous Boolean network models have been successfully applied in [25]. Other probabilistic algorithms such as SCOUP [26], SCIMITAR [27] or AR1MA1-VBEM [28] also use pseudo-time reconstruction complemented with correlation analysis. ODE based methods can be exemplified with SCODE [29] and InferenceSnapshot [30] algorithms which also use pseudo-time reconstruction. 2. The other view is based upon a representation of cells as dynamical systems [31, 32]. Within such a frame of mind, "noise" can be seen as the manifestation of the underlying molecular network itself. Therefore cell-to-cell variability is supposed to contain very valuable information regarding the gene expression process [33]. This view was advocated among others by [34], suggesting that heterogeneity is rooted into gene expression stochasticity, and that cell state dynamic is a highly stochastic process due to bursting that jumps discontinuously between micro-states. Dynamic algorithms like SINCERITIES [35] are based upon comparison of gene expression distributions, incorporating (although not explicitly) the bursty nature of gene expression. We have recently described a more explicit network formulation view based upon the coupling of probabilistic two-state models of gene expression [36]. We devised a statistical hidden Markov model with interpretable parameters, which was shown to correctly infer small two-gene networks [36]. Despite their contributions and successes, all existing GRN inference approaches are confronted to some limitations: 1. The inference of interactions through the calculation of correlation between gene expression, whether based upon or linear [27] or non-linear [26] assumptions, is problematic. Such correlations can only reproduce events that have been previously observed. As a consequence, predictions of GRN response to new stimulus or modifications is not possible. Furthermore, correlation should not be mistaken for causality. The absence of causal relationship severely hampers any predictive ability of the inferred GRN. 2. The very possibility of making predictions relies upon our ability to simulate the behavior of candidate networks. This implicitly implies that network topologies are explicitly defined. Nevertheless, several inference algorithms [27–29, 35] propose a set of possible interactions with independent confidence levels, generally represented by an interaction matrix. The number of possible actionable networks deduced from combining such interactions is often too large to be simulated. 3. Regulatory proteins within a GRN are usually restricted to transcription factors (TF), like in [24, 26–30]. Possible indirect interactions are completely ignored. A trivial example is a gene encoding a protein that induces the nuclear translocation of a constitutive TF. 
In this case, the regulator gene will indirectly regulate TF target genes, and its effect will be crucial in understanding the GRN behavior. 4. Most single-cell inference algorithms rely upon the use of a single type of data, namely transcriptomics. By doing so, they implicitly assume protein levels to be positively correlated with RNA amounts, which has been proven to be wrong in the case of post-translational regulation (see [33] for an illustration in the circadian clock). Besides, at the single-cell scale, mRNA and proteins typically have a poor linear correlation [34], even in the absence of post-translational regulation. 5. The choices of biological assumptions are also important for the biological relevance of GRN models. The use of statistical tools can be really powerful to handle large-scale network inference problems with thousands of genes, but the price to pay is a loss of biological representativeness. By definition a model is a simplification of the system, but when simplifying assumptions are induced by mathematical tools, like linear [27–29, 35] or binary (boolean) requirements [23, 24], the model becomes solvable at the expense of its biological relevance. In the present work we address the above limitations and we propose an iterative algorithm called WASABI, dedicated to inferring a causal dynamical network from time-stamped single-cell transcriptomic data, with the capability to integrate protein measurements. In the first part we present the WASABI framework, which is based upon a mechanistic model for gene-gene interactions [36]. In the second part we benchmark our algorithm using in silico GRNs with realistic gene parameter values. Finally, we apply WASABI to our in vitro data [37] and analyze the resulting GRN candidates. Our goal is to infer the causalities involved in GRNs through analysis of dynamic multi-scale/level data with the help of a mechanistic model [36]. We first present an overview of the WASABI principles and framework. We then benchmark its ability to correctly infer in silico-generated toy GRNs. Finally, we apply WASABI to our in vitro data on an avian model of erythroid differentiation [38] to generate biologically relevant GRN candidates. WASABI inference principles and implementation WASABI stands for "WAveS Analysis Based Inference". It is a framework built on a novel inference strategy based on the concept of "waves". We posit that the information provided by an external stimulus will affect genes one-by-one through a cascade, like waves spreading through a network (Fig. 1-a). This wave process harbors an inertia determined by mRNA and protein half-lives, which are set by their degradation rates. WASABI at a glance. a Schematic view of a GRN: the stimulus is represented by a yellow flash, genes by blue circles and interactions by green (activation) or red (inhibition) arrows. The stimulus-induced information propagation is represented by blue arcs corresponding to wave times. Genes and interactions that are not affected by information at a given wave time are shaded. At wave time 5, gene C returns information on genes A and B by feedback interaction, creating a backflow wave. b Promoter wave times: promoter wave times correspond to inflection points of gene promoter activity, defined as the kon/(kon+koff) ratio. c Protein wave times: protein wave times correspond to inflection points of mean protein level. d Inference process. Blue arrows represent interactions selected for calibration.
Based on promoter waves classification genes are iteratively added to sub-GRN previously inferred to get new expanded GRN. Calibration is performed by comparison of marginal RNA distributions between in silico and in vitro data. Inference is initialized with calibration of early genes interaction with stimulus, which gives initial sub-GRN. Latter genes are added one by one to a subset of potential regulators for which a protein wave time is close enough to the added gene promoter wave time. Each resulting sub-GRN is selected regarding its fit distance to in vitro data. If fit distance is too important sub-GRN can be eliminated (red cross). An important benefit of this process is the possibility to parallelize the sub-GRN calibrations over several cores, which results in a linear computational time regarding the number of genes. Note that only a fraction of all tested sub-GRN is shown By definition, causality is the link between cause and consequence, and causes always precede consequences. This temporal property is therefore of paramount importance for causality inference using dynamic data. In our mechanistic and stochastic model of GRN [36] (detailed in "Methods" section Fig. 7), the cause corresponds either to the protein of the regulating gene or a stimulus, which level modulates as a consequence the promoter state switching rates kon (i.e. probability to switch from inactive to active state) and koff (active to inactive) of the target gene. A direct consequence of causality principle for GRNs is that a dynamical change in promoter activity can only be due to a previous perturbation of a regulating protein or stimulus. For example, assuming that the system starts at a steady-state, early activated genes (referred to as early genes) can only be regulated by the stimulus, because it is the only possible cause for their initial evolution. An illustration is given in Fig. 1-a: gene A initial variation can only be due to the stimulus and not by the feedback from gene C, which will occur later. A generalization of these concepts is that for a given time after the stimulus, we can infer the subnetwork composed exclusively by genes affected by the spreading of information up to this time. Therefore we can infer iteratively the network by adding one gene at a time (Fig. 1-d) regarding their promoter wave time order (Fig. 1-b) and comparing with protein wave time of previous added genes (Fig. 1-c). For this, we need to estimate promoter and protein wave times for each gene and then sort them by promoter wave time. We define the promoter activity level by the kon/(kon+koff) ratio, which corresponds to the local mean active duration (Fig. 1-b). Promoter wave time is defined as the inflection time point of promoter activity level where 50% of evolution between minimum and maximum is reached. Since promoter activity is not observable, we estimate the inflection time point of mean RNA level from single-cell transcriptomic kinetic data [37], and retrieve the delay induced by RNA degradation to deduce promoter wave time. Protein wave times correspond to the inflection point of mean protein level, which can be directly observed with our proteomic data [39]. A detailed description of promoter and protein wave time estimation can be found in the "Methods" section. One should note that a gene can have more than one wave time in case of non monotonous variation of promoter activity, due to feedbacks (like gene A in our example) or incoherent feed-forward loop. The WASABI inference process (Fig. 
1-c) takes advantage of the gene wave time sorting by adopting a divide and conquer strategy. We remind that a main assumption of our interaction model is the separation between mRNA and protein timescales [36]. As a consequence, for a given interaction between a regulator gene and a regulated gene, the regulated promoter wave time should be compatible with the regulator protein wave time. At each step, WASABI proposes a list of possible regulators in order to reduce the dimension of the inference problem. This list is limited to regulators with compatible protein wave time within the range of 30 hours before and 20 hours after the promoter wave time of the added regulated gene. This constraint has been set up from in silico study (see next section). For example, in Fig. 1, gene B can be regulated by gene A or D since their protein wave time are close to gene B promoter wave time. Gene C can be regulated by gene B or D, but not A because its protein wave time is too earlier compared to gene C promoter wave time. For new proposed interactions, a typical calibration algorithm can be used to finely tune interaction parameter in order to fit simulated mRNA marginal distribution with experimental marginal distribution from transcriptomic single-cell data. To avoid over-fitting issues, only efficiency interaction parameter θi,j (Fig. 7) is tuned. To estimate fitting quality we define a GRN fit distance based on the Kantorovitch distances between simulated and experimental mRNA marginal distributions (please refer to "Methods" section for a detailed description of interaction function and calibration process). If the resulting fitting is judged unsatisfactory (i.e. GRN fit distance is greater than a threshold), the sub-GRN candidate is pruned. For genes presenting several waves, like gene A, each wave will be separately inferred. For example, gene A initial increase is fitted during initialization step, but only the first experimental time points during promoter activity increase will be used for calibration. Genes B and C regulated after gene A up-regulation will be added to expand sub-GRN candidates. Finally, the wave corresponding to gene A down-regulation is then fitted considering possible interactions with previously added genes (namely gene B and C), which permits the creation of feedback loops or incoherent feed-forward loops. Positive feedback loops cannot be easily detected by wave analysis because they only accelerate, and eventually amplify, gene expression. Yet, their inference is important for the GRN behavior since they create a dynamic memory and, for example, may thus participate to irreversibility of the differentiation process. To this end, we developed an algorithm to detect the effect of positive feedback loops on gene distribution before the iterative inference (see Supporting information). We modeled the effect of positive feedback loops by adding auto-positive interactions. Note that such a loop does not necessarily mean that the protein directly activates its own promoter: it simply means that the gene is influenced by a positive feedback, which can be of different nature. For example, in the GRN presented in Fig. 1-a, genes B and C mutually create a positive feedback loop. If this positive feedback loop is detected we consider that each gene has its own auto-positive interaction as illustrated in Fig. 1-c. 
Positive feedback loops could also arise from the existence of self-reinforcing open chromatin states [40] or be due to the fact that binding of one TF can shape the DNA in a manner that it promotes the binding of the second TF [41]. In silico benchmarking We decided to first calibrate and then assess WASABI performance in a controlled and representative setting. Calibration of inference parameters In the first phase we assessed some critical values to be used in the inference process. We generate realistic GRNs (Fig. 2-a) where 20 genes from in vitro data were randomly selected with associated in vitro estimated parameters (see Supporting information). Interactions were randomly defined in order to create cascade networks with no feedback nor auto-positive feedback as an initial assessment phase. Cascade in silico GRN a Cascade GRN types are generated to study wave dynamics. Genes correspond to in vitro ones with their estimated parameters. S1 corresponds to stimulus. Genes are identified by our list gene ID. b Based on 10 in silico GRN we compare promoter wave time of early genes (blue) with other genes (red). Displayed are promoter waves with a wave time lower than 15h for graph clarity. c For each interactions of 10 in silico GRNs we compute the difference between estimated regulated promoter wave time minus its regulator protein wave time. Distribution of promoter/protein wave time difference is given for all interactions of all in silico GRNs We limited ourselves to 4 network levels (with 5 genes at each level, see Fig. 2-a for an example) because we observed that the information provided by the stimulus is almost completely lost after 4 successive interactions in the absence of positive feedback loops. This is very likely caused by the fact that each gene level adds both some intrinsic noise, due to the bursty nature of gene expression, as well as a filtering attenuation effect due to RNA and protein degradation. We first analyzed the special case of early genes that are directly regulated by the stimulus (Fig. 2-b). Their promoter wave times were lower than all other genes but one. Therefore we can identify early genes with good confidence, based on comparison of their promoter wave time with a threshold. Given these in silico results, we then decided in the WASABI pre-processing step to assume that genes with a promoter wave time below 5h must be early genes, and that genes with a promoter wave time larger than 7h can not be early genes. Interactions between the stimulus and intermediate genes, with promoter wave times between 5h and 7h, have to be tested during the inference iterative process and preserved or not. We then assessed what would be the acceptable bounds for the difference between regulator protein wave time and regulated gene promoter activity. Ten in silico cascade GRNs were generated and simulated for 500 cells to generate population data from which both protein and promoter wave times were estimated for each gene. Based on these data, we computed the difference between estimated regulated promoter wave time minus its regulator protein wave time for all interactions in all networks. The distribution of these wave differences is given in Fig. 2-c. One can notice that some wave differences had negative values. This is due to the shape of the Hill interaction function (see Eq. 3 in "Methods" section) with a moderate transition slope (γ=2). 
If the protein threshold (which corresponds to a typical EC50 value) is too close to the initial protein level, then a slight protein increase will activate target promoter activity. Therefore, promoter activity will be saturated before the regulator protein level, and thus the difference of associated wave times is negative. This shows that one can accelerate or delay information, depending on the protein threshold value. In order to be conservative during the inference process, we set the RNA/protein wave difference bounds to [ − 20h; 30h] in accordance with the distribution in Fig. 2-c. One should note that this range, even if conservative, already removes two thirds of all possible interactions, thereby reducing the inference complexity. We finally observed that for interactions with genes harboring an auto-positive feedback, wave time differences could be larger. In this case, wave difference bounds were estimated to [ − 30h, 50h] (see supporting information). We interpret this enlargement as an under-sampling time resolution problem, since auto-positive feedback results in a sharper transition. As a consequence, the promoter state transition from inactive to active is much faster: if it happens between two experimental time points, we cannot detect precisely its wave time. Inference of in silico GRNs WASABI was then tested for its ability to infer in silico GRNs (complete definition in supporting information) from which we previously simulated experimental data for mRNA and protein levels at single-cell and population scales. We first assessed the simplest scenario with a toy GRN composed of two branches with no feedback (a cascade GRN; Fig. 3-a). The GRN was limited to 6 genes and to 3 levels in order to reduce computational constraints. Nevertheless, even in such a simple case, the inference problem is already a highly complex challenge with more than 10^20 possible directed networks. In silico cascade GRN inference a The cascade GRN. Gene parameters were taken from in vitro estimations to mimic realistic behavior. Experimental data were generated to obtain time courses of transcriptomic data, at single-cell and population scale, and also proteomic data at population scale. b WASABI was run to infer the in silico cascade GRN and generated 88 candidates. A dot represents a network candidate with its associated fit distance and inference quality (percentage of true interactions). The true GRN is inferred (red dot, 100% quality). The acceptable maximum fit distance (green dashed line) corresponds to the variability of the true GRN fit distance. Its computation is detailed in panel c. Three GRN candidates (including the true one) have a fit distance below threshold. c Variability of the true GRN fit distance (green dashed line in panels b and c) is estimated as the threshold below which 95% of true GRN fit distances lie. The fit distance distribution is represented for the true GRN (green) and candidates (blue) for the cascade in silico GRN benchmark. True GRNs are calibrated by WASABI directed inference while candidates are inferred from non-directed inference. Fit distance represents the similarity between candidate-generated data and reference experimental data. Wave times were estimated for each gene from simulated population data for RNA and protein (data available in supporting information). Table 1 provides the estimated wave times for the cascade GRN. It is clear that the gene network level is correctly reproduced by wave times. Table 1 Wave times We then ran WASABI on the generated data and obtained 88 GRN candidates (Fig. 3-b).
The huge reduction in numbers (from 10^20 to 88) illustrates the power of WASABI to reduce complexity by applying our waves-based constraints. We defined two measures for further assessing the relevance of our candidates: 1. Quality quantifies the proportion of real interactions that are conserved in the candidate network (see supporting information for a detailed description). A value of 100% corresponds to the true GRN. 2. A fit distance, defined as the mean of the 3 worst gene fit distances, where a gene fit distance is the mean of the 3 worst Kantorovitch distances [42] among time points (see the "Methods" section). We observed a clear trend that higher quality is associated with a lower fit distance (Fig. 3-b), which we denote as a good specificity. When inferring in vitro GRNs, one does not have access to the quality score, contrary to the fit distance. Hence, having a good specificity makes it possible to confidently estimate the quality of GRN candidates from their fit distance. Thus, this result demonstrates that our fit distance criterion can be used for GRN inference. Nevertheless, even in the case of a purely in silico approach, quality and fit distance cannot be linked by a linear relationship. In other words, the best fit distance cannot be taken to indicate the best quality (see below for other toy GRNs). This is likely to be due to both the stochastic gene expression process and the estimation procedure. We therefore needed to estimate an acceptable maximum fit distance threshold for the true GRN. For this, we ran directed inferences, where WASABI was informed beforehand of the true interactions, but calibration was still run to calibrate interaction parameters. We ran 100 directed inferences and defined the maximum acceptable fit distance (Fig. 3-c) as the distance below which 95% of true GRN fit distances fell. This threshold could also be used as a pruning threshold (green dashed line in Fig. 3-b) in subsequent iterative inferences, thereby progressively reducing the number of acceptable candidates. We then analyzed a situation where we added either an auto-activation loop or a negative feedback (Fig. 4-a and c and supporting information for estimated wave times). In silico GRN with feedbacks a Addition of one positive feedback onto the cascade GRN. b WASABI was run to infer the in silico cascade GRN with a positive feedback and generated 59 candidates, 31 of which have an acceptable fit distance. See legend to Fig. 3-b for details. c Addition of one negative feedback onto the cascade GRN. d WASABI was run to infer the in silico cascade GRN with a negative feedback and generated 476 candidates, all of which have an acceptable fit distance. See legend to Fig. 3-b for details. In both cases, GRN inference specificity was lower than for the cascade network inference. Nevertheless, in both cases the true network was inferred and ranked among the first candidates regarding their fit distance (Fig. 4-b and d), demonstrating that WASABI is able to infer auto-positive and negative feedback patterns. However, there were more candidates below the acceptable maximum fit distance threshold and there was no obvious correlation between high quality and low fit distance. We think it could be due to data under-sampling regarding the network dynamics (see above and the "Discussion" section). In vitro application of WASABI We then applied WASABI to our in vitro data, which consist of time-stamped single-cell transcriptomic [37] and bulk proteomic data [39] acquired during T2EC differentiation [38], to propose relevant GRN candidates.
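To illustrate the wave-time extraction step described above, the following minimal sketch (illustrative code only, not part of the WASABI implementation; function and variable names are assumed) reads a wave time off a mean trajectory as the point where it crosses 50% of its excursion between minimum and maximum:

```python
import numpy as np

def wave_time(times, mean_level):
    """Time at which a mean trajectory crosses 50% of its min-to-max excursion.

    Illustrative reading of the inflection-point definition used in the text;
    it handles a single monotonous wave and ignores the correction for the
    RNA degradation delay that is applied to promoter waves.
    """
    t = np.asarray(times, dtype=float)
    y = np.asarray(mean_level, dtype=float)
    half = y.min() + 0.5 * (y.max() - y.min())
    # first experimental time point at which the half level has been crossed
    idx = int(np.argmax(y >= half)) if y[-1] >= y[0] else int(np.argmax(y <= half))
    if idx == 0:
        return t[0]
    # linear interpolation between the two bracketing time points
    t0, t1, y0, y1 = t[idx - 1], t[idx], y[idx - 1], y[idx]
    return t0 + (half - y0) / (y1 - y0) * (t1 - t0)

# toy example: a mean protein level rising between 10 h and 30 h
print(wave_time([0, 10, 20, 30, 40], [1.0, 1.2, 3.0, 5.0, 5.1]))  # -> 20.25 (hours)
```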
We first estimated the wave times (Fig. 5). Promoter waves ranged from very early genes regulated before 1h to late genes regulated after 60h. Promoter activity appeared bimodal with an important group of genes regulated before 20h and a second group after 30h. Protein wave distribution was more uniform from 10h to 60h, in accordance with a slower dynamics for proteins. Remarkably, 10 genes harbored non-monotonous evolution of their promoter activity with a transient increase. It can be explained by the presence of a negative feedback loop or an incoherent feed-forward interaction. These results demonstrate that real in vitro GRN exhibits distinguishable "waves". Promoter and protein wave time distributions. Distribution of in vitro promoter (a) and protein (b) wave times for all genes estimated from RNA and proteomic data at population scale. Counts represent number of genes. Note: a gene can have several waves for its promoter or protein In order to limit computation time, we decided to further restrict the inference to the most important genes in term of the dynamical behavior of the GRN. We first detected 25 genes that are defined as early with a promoter time lower than 5h. We then defined a second class of genes called "readout" which are influenced by the network state but can not influence in return other genes. Their role for final cell state is certainly crucial, but their influence on the GRN behavior is nevertheless limited. 41 genes were classified as readout so that 24 genes were kept for iterative inference, in addition to the 25 early genes. 9 of these 24 genes have 2 waves due to transient increase, which means that we have 33 waves to iteratively infer. In vitro GRN candidates After running for 16 days using 400 computational cores, WASABI returned a list of 381 GRN candidates. Candidate fit distances showed a very homogeneous distribution (see supporting information) with a mean value around 30, together with outliers at much higher distances. Removing those outliers left us with 364 candidates. Compared to inference of in silico GRN, in vitro fitting is less precise, as we could expect. But it is an appreciable performance and it demonstrates that our GRN model is relevant. We then analyzed the extent of similarities among the GRN candidates regarding their topology by building a consensus interaction matrix (Fig. 6-a). The first observation is that the matrix is very sparse (except for early genes in first raw and auto-positive feedbacks in diagonal) meaning that a sparse network is sufficient for reproducing our in vitro data. We also clearly see that all candidate GRNs share closely related topologies. This is clearly obvious for early genes and auto-positive feedbacks. Columns with interaction rates lower than 100% correspond to latest integrated genes in the iterative inference process with gene index (from earlier to later) 70, 73, 89, 69 and 29. Results from existing algorithms are usually presented in such a form, where the percent of interactions are plotted [27–29, 35]. But one main advantage of our approach is that it actually proposes real GRN candidates, which may be individually examined. Inference from in vitro data aIn vitro interaction consensus matrix. Each square in the matrix represents either the absence of any interaction, in black, or the presence of an interaction, the frequency of which is color-coded, between the considered regulator ID (row) and regulated gene ID (column). First row correspond to stimulus interactions. b Best candidate. 
We therefore took a closer look at the "best" candidate network, i.e. the one with the lowest fit distance to the data (Fig. 6-b). We observed very interesting and somewhat unexpected patterns: 1. Most of the genes (84%) carry an auto-activation loop. As mentioned earlier, this was a consensual finding among the candidate networks. It is striking because typical GRN graphs found in the literature do not show such a predominance of auto-positive feedbacks. 2. A very large number of genes were found to be early genes under the direct control of the stimulus. It is noticeable that most of them were found to be inhibited by the stimulus, and to control no more than one other gene at the next level. 3. We previously described the genes whose products participate in the sterol synthesis pathway as being enriched among early genes [37]. This was confirmed by our network analysis, with only one sterol-related gene not being an early gene. 4. Among the 7 early genes that are positively controlled by the stimulus, 6 are influenced by an incoherent feedforward loop, most likely to reproduce their experimentally observed transient increase [37]. 5. One important general rule is that the network depth is limited to 3 genes. Note that this is not imposed by WASABI, which can create networks of unlimited depth. It is consistent with our analysis of signal propagation properties in in silico GRNs: if the network depth is too large, the signal is too damped and delayed to accurately reproduce the experimental data. 6. There are no network hubs in the classical sense: the genes in the GRNs are connected to at most four neighbors, and the most influential "node" is the stimulus itself. 7. One can also observe that the deeper one progresses within the network, the less consensual the interactions are. Adding the leaves to the inference process might help stabilize these late interactions. Altogether, these results show the power of WASABI to offer a brand-new vision of the dynamical control of differentiation. In the present work we introduced WASABI, a new iterative approach for GRN inference based on single-cell data. We benchmarked it in a representative in silico environment before applying it to in vitro data. WASABI tackles GRN inference limitations Usually, benchmarking is performed to demonstrate that a new inference method outperforms previous ones [43–45]. However, the evaluation of GRN inference methods is a problem per se, owing to the lack of a gold standard against which different algorithms might be benchmarked [46]. For example, typical in silico models like [47] are based on deterministic population behavior (only Gaussian white noise is added) and do not consider post-translational regulation (degradation rates are constant). If we benchmarked WASABI against other inference algorithms on data generated from our own mechanistic GRN model, we would quite obviously outperform them, for example simply because we consider post-translational regulation by integrating both transcriptomic and proteomic data, unlike other methods. Another issue comes from the metrics usually used to compare inference methods, such as the ROC (receiver operating characteristic) curve.
Such a metric focuses on the number of correctly inferred interactions rather than on the overall network topology or the dynamical network behavior. Moreover, in our view it would be meaningless to compare our approach with any approach that does not yield a representative executable model [48, 49], which most approaches do not provide. For example, SINCERITIES [35] analyses single-cell transcriptomic time-course data to reconstruct an interaction matrix, but this matrix is not executable and cannot reproduce time series of transcriptomic data. Other methods, like the Single Cell Network Synthesis toolkit [49], based on a Boolean model, propose to reconstruct executable models from single-cell data. However, to our knowledge, none of these executable methods is able to reproduce the time series of experimental distributions observed at the single-cell level, which fundamentally limits their ability to produce testable predictions. We consider that the only definitive way to evaluate an inference algorithm is to validate its predictions experimentally. This is why we intend to couple WASABI with an iterative process of design of experiments (DOE), as discussed later. Even ahead of such experimental validation, however, we are convinced that WASABI can tackle some general GRN inference issues, based on the assumptions on which it has been designed and on the in silico validation results. 1. WASABI goes beyond mere correlations and infers causality from the analysis of time-stamped data, as demonstrated on the in silico benchmark (Fig. 3), even in the presence of circular causation (Fig. 4), based on the principle that the cause precedes the effect. 2. Contrary to most GRN inference algorithms [27–29, 35], which are based on the inference of individual interactions, WASABI is network-centered and generates several candidates with explicitly defined network topologies (Fig. 6-b), which is required for making predictions and for simulation. Generating a list of interactions and their frequencies from such candidates is a trivial task (Fig. 6-a), whereas the reverse is usually not possible. Moreover, WASABI explicitly integrates the presence of an external stimulus, which, surprisingly, is never modeled in other approaches based on single-cell data analysis. This could be very instrumental, for example, for simulating pulses of stimuli. 3. WASABI is not restricted to TFs: most of the in vitro genes we modeled are not TFs. This is possible thanks to the use of our mechanistic model [36], which integrates the notion of timescale separation. It assumes that biochemical reactions such as metabolic changes, nuclear translocations or post-translational modifications are faster than gene expression dynamics (imposed by mRNA and protein half-lives) and can therefore be abstracted into the interaction between two genes. Our interaction model is thus an approximation of the underlying cascade of biochemical reactions. This should be kept in mind when interpreting an interaction in our GRNs: many intermediate (fast) reactions may be hidden behind it. 4. Optionally, WASABI offers the capability to integrate proteomic data to reproduce translational or post-translational regulation. Our proteomic data [39] demonstrate that nearly half of the detected genes exhibit mRNA/protein uncoupling during differentiation, and they allowed us to estimate the time evolution of protein production and degradation rates. Nevertheless, we are not fully explanatory, since we do not infer the causes of the evolution of these parameters.
This is a possible improvement, discussed later. 5. We deliberately developed WASABI in a "brute-force" computational way to guarantee its biological relevance and versatility. This allowed us to minimize the simplifying assumptions that mathematical formulations might otherwise require. During calibration, we used a simple Euler solver to simulate our networks under model (1). This makes it easy to add any new biological assumption, such as post-translational regulation, without modifying the WASABI framework, making it very versatile. Thanks to the splitting and parallelization allowed by WASABI's original gene-by-gene iterative inference process, the inference problem becomes linear with respect to network size, whereas typical GRN inference algorithms face a combinatorial explosion. This strategy also allowed the use of high parallel computing (HPC), a powerful tool that remains underused for GRN inference [23, 50]. WASABI performance, improvements and next steps WASABI was developed and tested in a controlled in silico environment before being applied to in vitro data. The true topology of each in silico network was successfully inferred. The cascade-type GRN was fully inferred (Fig. 3) with good specificity. Auto-positive and negative feedback networks (Fig. 4) were also inferred, demonstrating WASABI's ability to infer circular causation, but with lower specificity. This might be due to the time sampling of the experimental data being coarser than the time scale of the network dynamics. Auto-positive feedback creates a switch-like response whose dynamics are much faster than simple activation. Thus, to capture auto-positive feedback wave times accurately, the RNA experimental data should be sampled at high frequency during the short activation period of the auto-positive feedback. For negative feedback interactions, WASABI calibrated the initial increase using only the first experimental time points, before the feedback takes effect. Consequently, the precision of the first interaction was decreased and more false-positive sub-GRN candidates were selected. Increasing the frequency of experimental time sampling during the initial phase should overcome this problem. As it stands, our mechanistic model accounts only for transcriptional regulation through proteins. It does not take into account other putative levels of regulation, including translational or post-translational regulation, or regulation of mRNA half-life, although there is ample evidence that such regulation might be relevant [51, 52]. Provided that sufficient data are available, it would be straightforward to integrate such information within the WASABI framework. For example, the estimation of degradation rates at the single-cell level for mRNAs and proteins has recently been described [53]; their distributions could then be used as an input to the WASABI inference scheme. Cooperativity and redundancy are not considered in the current WASABI framework, so a gene can only be regulated by one gene, except for negative feedback or incoherent feedforward interactions. However, many experimentally curated GRNs show evidence of cooperation (two genes are needed to activate a third gene) or redundant interactions (two genes independently activating a third gene) [54]. We intentionally did not consider such multi-gene interactions because our current calibration algorithm relies on the comparison of marginal distributions, which are not sufficiently informative for inferring cooperative effects.
We believe that the use of the joint distributions of two or more genes should enable such inference. Our group previously developed a GRN inference algorithm based on joint-distribution analysis [36], but it does not consider time evolution. We therefore plan to integrate joint-distribution-based analyses within the WASABI framework in order to improve calibration, by upgrading the objective function with a measure based on joint-distribution comparison. The HPC capacity used during iterative inference affects WASABI's accuracy. Indeed, late iterations are expected to be more discriminative than the first ones, because by then false GRN candidates have accumulated too many wrong interactions for calibration to compensate for the errors. However, if the expansion phase is limited by the available computational nodes, the true candidate may be eliminated early, when inference is not yet discriminative enough. Improving computing performance would therefore represent an important refinement, and we have initiated preliminary studies in that direction [50]. As it stands, WASABI is limited to inferring networks of fewer than 100 genes in a reasonable time. However, with all the improvements described above, WASABI could be scaled up to infer networks of more than 1000 genes using recent sc-RNA-seq technologies [55]. This is achievable because WASABI's inference time is linear in the number of genes; as a consequence, increasing the number of genes by one order of magnitude only requires decreasing the per-gene computational time by the same ratio, which is fairly workable. Nevertheless, despite all possible improvements, GRN inference will remain a problem that can only be solved asymptotically, owing to inferability limitations [56], intrinsic biological stochasticity, experimental noise and sampling. This is why we propose a set of GRN candidates with an acceptable confidence level. A natural companion to the WASABI approach would be a design-of-experiments (DOE) phase specifically aimed at selecting the most informative experiments to discriminate among the candidates. Such DOE procedures have already been developed for GRN inference, but none of them takes into account the mechanistic aspects and the stochasticity of gene expression [56, 57]. An extension of the DOE framework to stochastic models is currently being developed in our group. New insights into typical GRN topology The application of WASABI to our in vitro model of differentiation generated several GRN candidates with a very interesting consensus topology (Fig. 6). 1. The stimulus (i.e. the medium change [37]) is a central regulator of our GRN. We are highly confident in this result because the initial RNA kinetics of the early genes can only be explained by fast regulation at the promoter level within minutes after stimulation; protein dynamics are far too slow to account for these early variations. 2. Twenty-two of the 29 inferred early genes are inhibited by the stimulus, whereas inhibitions are present in only 7 of the 28 non-early interactions. Inhibitions are therefore overrepresented among stimulus-early gene interactions. One interpretation is that most genes are auto-activated, and their inhibition requires a signal that is strong and long enough to eliminate the remaining auto-activated proteins. A constant and strong stimulus should be very efficient in this role, as in [32], where a long stimulus duration and high amplitude are required to overcome an auto-activation feedback effect.
In that respect, it would be very interesting to assess how the network would respond to a temporary stimulus, mimicking the commitment experiment described in [37] or [58]. 3. None of our GRN candidates contains so-called "hub genes" affecting many genes in parallel, whereas previously inferred GRNs generally present substantial hubs [26, 28, 29, 35]. A possible interpretation is that hub identification is mostly a by-product of correlation analysis. This interpretation is in line with the sparse nature of our candidate networks, as compared with some previous networks (see e.g. [25] or [59]). It strongly departs from the assumption that small-world networks might represent "universal laws" [60]. 4. To reproduce non-monotonous gene expression variations, WASABI systematically inferred incoherent feedforward patterns instead of "simpler" negative feedbacks. This result is interesting because nothing in WASABI explains this bias, since in silico benchmarking proved that WASABI is able to infer simple negative feedbacks (Fig. 4). Such "paradoxical components" have been proposed to provide robustness, generate temporal pulses and provide fold-change detection [61]. 5. WASABI candidates are limited to a maximum network depth of 3 levels. We did not include readout genes during inference, but adding these genes would only increase candidate depth by one level; realistic GRN candidates are thus limited to a depth of 4 levels. This might be due to the fact that information can only be relayed by a limited number of intermediaries, because of the induced time delay, damping and noise. Indeed, the general mechanism of molecule production/degradation behaves exactly like a low-pass filter, with a cut-off frequency equivalent to the molecule's degradation rate. Furthermore, protein information is transmitted at the target promoter level by modulation of burst size and frequency, which are stochastic parameters, thereby adding noise to the original signal. Such a strong limitation on the information-carrying capacity of a GRN is at stake in long differentiation sequences, say from the hematopoietic stem cell to a fully committed cell, in which tens of genes have to be regulated sequentially. This might be resolved by the addition of auto-positive feedbacks. Such auto-positive feedbacks create a dynamic memory whereby the information is maintained even in the absence of the initial signal. An important implication is the loss of correlation between an auto-activated gene and its regulator gene. Consequently, all algorithms based on stationary single-cell RNA correlations [26, 27] will hardly catch the regulators of auto-activated genes. Considering the benefits of auto-positive feedback for information transfer in GRNs, it is therefore not surprising that more than 80% of our GRN genes present auto-positive feedback signatures in their RNA distributions. Moreover, the experimentally observed influence of auto-positive feedback is stronger in our in vitro model than in our in silico models. Such a strong prevalence of auto-positive feedbacks has also been observed in a network underlying germ cell differentiation [59]. As mentioned earlier, care should be taken in interpreting such positive influences, which very likely rely on indirect effects, such as epigenomic remodeling. Inferring the structure of GRNs is an inverse problem that has occupied the systems biology community for decades.
Over the last few years, with the arrival of single-cell transcriptomic data, many GRN inference algorithms based on the analysis of these data have been developed. Despite their contributions and successes, these approaches face several limitations: a restriction to correlations, which impairs predictive ability; a restriction to transcription-factor-to-target-gene interactions; reliance on a single data type, namely transcriptomics, ignoring protein-level regulation; and over-simplifying biological assumptions imposed by the mathematical tools. Our work aims to provide a significant innovation in GRN inference to tackle these issues. We propose a divide-and-conquer strategy, called WASABI, which splits the potentially intractable global problem into much simpler subproblems. We show that, by adding one gene at a time, we can infer small networks whose behavior has been simulated in silico using a mechanistic model that incorporates the fundamentally probabilistic nature of the gene expression process. When applied to real-life data, our algorithm sheds a new and fascinating light on the molecular control of a differentiation process. The generated GRN candidates share a very interesting common topology which stands apart from the typical literature and which is biologically relevant in several respects, such as the very central role of the stimulus, the absence of "hub genes", the limited network depth and the presence of many auto-activation loops. Together, these results demonstrate WASABI's ability to tackle some general GRN inference issues: inference of causality even in the case of feedbacks; definition of functional interactions underlying indirect regulation, such as post-translational regulation or nuclear translocation; the capability to integrate proteomic data to reproduce translational or post-translational regulation (observed in 50% of our genes); and versatility and computational tractability on HPC facilities, enabled by WASABI's original iterative process. We believe that WASABI should be of great interest to biologists interested in GRN inference and, beyond that, to those aiming at a dynamical network view of their biological processes. We are convinced that this could really advance the field, opening an entirely new way of analyzing single-cell data for GRN inference. Mechanistic GRN model Our approach is based on a mechanistic model that was previously introduced in [36] and is summarized in Fig. 7. Fig. 7 caption: GRN mechanistic and stochastic model. Our GRN model is composed of coupled piecewise-deterministic Markov processes; in this example, 2 genes are coupled. A gene i is represented by its promoter state (dashed box), which switches randomly from OFF to ON at mean rate kon,i and from ON to OFF at mean rate koff,i. When the promoter state is ON, mRNA molecules are continuously produced at rate s0,i. mRNA molecules are constantly degraded at rate d0,i. Proteins are constantly translated from mRNA at rate s1,i and degraded at rate d1,i. The interaction between a regulator gene j and a target gene i is defined by the dependence of kon,i and koff,i on the protein level Pj of gene j and the interaction parameter θi,j. Likewise, a stimulus (yellow flash) can regulate a gene i by modulating its switching rates kon,i and koff,i through the interaction parameter θi,0. In all that follows, we consider a set of G interacting genes potentially influenced by a stimulus level Q. Each gene i is described by its promoter state Ei = 0 (off) or 1 (on), its mRNA level Mi and its protein level Pi.
We recall the model definition in the following equation, together with notations that will be used extensively throughout this article. $$ \left\{ \begin{aligned} {E_{i}(t)} &: 0\ \overset{k_{\text{on}}}{\rightarrow} 1, \;\; 1\ \overset{k_{\text{off}}}{\rightarrow} 0 \\ {{M_{i}^{\prime}(t)}} &= s_{0,i} {E_{i}(t)} - d_{0,i} {M_{i}(t)} \\ {{P_{i}^{\prime}(t)}} &= s_{1,i} {M_{i}(t)} - d_{1,i} {P_{i}(t)} \end{aligned}\right. $$ (1) The first line in model (1) represents a discrete Markov random process, while the other two are ordinary differential equations (ODEs) describing the evolution of the mRNA and protein levels. Interactions between genes and the stimulus are then characterized by the assumption that kon and koff are functions of P=(P1,…,PG) and Q. The form for kon is the following (for koff, replace θi,j by −θi,j): $$ {k_{\text{on}}}(P,Q) = \frac{{k_{\mathrm{on\_min,}{i}}} +{ k_{\mathrm{on\_max,}{i}}} \beta_{i} \Phi_{i}(P,Q)} {1+\beta_{i} \Phi_{i}(P,Q)} $$ (2) $$ \Phi_{i}(P,Q)=\frac{1+e^{\theta_{i,0}}Q}{1+Q} \prod_{j=1}^{G} \frac{1+e^{\theta_{i,j}}\left(\frac{P_{j}}{H_{j}}\right)^{\gamma}}{1+\left(\frac{P_{j}}{H_{j}}\right)^{\gamma}} $$ (3) This interaction function differs slightly from [36], since auto-feedback is treated like any other interaction and the stimulus effect is explicitly defined. The exponent parameter γ is set to a default value of 2. The interaction threshold Hj is associated with protein j. The interaction parameters θi,j are estimated during the iterative inference. The parameter βi corresponds to a constant influence, external to the GRN, that defines the gene's basal expression; it is computed at simulation initialization so as to set kon and koff to their initial values. From now on, we drop the index i to simplify notation when there is no ambiguity. Overview of the WASABI workflow The WASABI framework is divided into 3 main steps, as described in Fig. 8. First, the individual gene parameters defined in model (1) (all except θ and H) are estimated, before network inference, from several experimental data types acquired during T2EC differentiation; these include time-stamped single-cell transcriptomic data [37], bulk transcription-inhibition kinetics [37] and bulk proteomic data [39]. In a second step, genes are sorted according to their wave times (see the "Results" section for a description of the wave concept), estimated from the mean of the single-cell transcriptomic data for promoter waves and from the bulk proteomic data for protein waves. Finally, the iterative network inference step is performed from the single-cell transcriptomic data, the previously inferred gene parameters and the sorted gene list. All methods are detailed in the following sections; an overview of the workflow is given in Fig. 8. Fig. 8 caption: Parameter estimation workflow. Schematic view of the WASABI workflow with 3 main steps: (1) individual gene parameter estimation (red zone), (2) wave sorting (green zone) and (3) iterative network interaction inference (blue zone). The wave concept is introduced in the "Results" section. Model parameters (square boxes) are estimated from experimental data (flasks) with a specific method (grey hexagons). All methods are detailed in the "Methods" section. Estimated wave-related quantities are represented by round boxes. Input arrows represent the data required by each method to compute parameters. There are 3 types of experimental data: (i) bulk transcription-inhibition kinetics (green flask), (ii) single-cell transcriptomics (blue flask) and (iii) proteomics (orange flask). Model parameters are specific to each gene, except for θ, which is specific to a pair of regulator/regulated genes. Notations are consistent with Eq. (1); γauto represents the exponent of the auto-positive feedback interaction. Only d0(t), d1(t) and s1(t) are time dependent. One gene can have several wave times.
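As an illustration of the interaction model of Eqs. (2) and (3) defined above, the sketch below evaluates Φ and kon for one target gene. The vectorised argument layout (the stimulus parameter stored in theta[0]) and the function names are assumptions made for readability, not the published implementation.

```python
import numpy as np

def phi(P, Q, theta, H, gamma=2.0):
    """Interaction function of Eq. (3) for one target gene.
    P, H: protein levels and interaction thresholds of the G regulators;
    theta[0]: stimulus interaction parameter, theta[1:]: gene parameters."""
    theta = np.asarray(theta, float)
    x = (np.asarray(P, float) / np.asarray(H, float)) ** gamma
    stim = (1 + np.exp(theta[0]) * Q) / (1 + Q)
    return stim * np.prod((1 + np.exp(theta[1:]) * x) / (1 + x))

def k_on(P, Q, theta, H, k_min, k_max, beta, gamma=2.0):
    """Promoter activation rate of Eq. (2); as stated above, k_off is
    obtained by calling this function with -theta."""
    f = phi(P, Q, theta, H, gamma)
    return (k_min + k_max * beta * f) / (1 + beta * f)
```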
For the T2EC in vitro application, tables of gene parameters and wave times are provided in the supporting information. For in silico benchmarking, we assume that the gene parameters d0, d1 and s1 are known. Single-cell data and bulk proteomic data are simulated from the in silico GRNs for time points 0, 2, 4, 8, 24, 33, 48, 72 and 100 h. First step - Individual gene parameter estimation Exponential decay fitting for mRNA degradation rate (d0) estimation The degradation rate d0 corresponds to active decay (i.e. destruction of mRNA) plus dilution due to cell division. RNA decay was already estimated in [37] before differentiation (0 h) and at 24 h and 72 h after differentiation induction, from population-based mRNA decay kinetics using actinomycin-D-treated T2EC (https://osf.io/k2q5b). The dilution rate due to cell division is assumed to be constant during the differentiation process, and the cell cycle time has been measured experimentally at 20 h [38]. Maximum estimator for mRNA transcription rate (s0) estimation To infer the transcription rate s0, we used a maximum estimator based on the single-cell expression data generated in [37]. We assume that the highest possible mRNA level is given by s0/d0. Thus, s0 corresponds to the maximum mRNA count observed across all cells and time points, multiplied by \(\max \limits _{t}(d_{0}(t))\). Method of moments and bootstrapping for the estimation of the promoter switching-rate ranges (\(k_{\text {on/off\_min/max}}\)) The dynamic parameters kon and koff are bounded by the constant parameters \([k_{\text {on\_min}}; k_{\text {on\_max}}]\) and \([k_{\text {off\_min}}; k_{\text {off\_max}}]\), respectively (see Eq. (2)), which are estimated as follows from the time-course single-cell transcriptomic data. The parameters s0 and d0(t) are assumed to have been estimated previously for each gene at each time t. The range parameters must comply with the constraints (Eq. (4)) imposed by the transcription dynamic regime observed in vitro: the RNA distributions [37] have many zeros, which is consistent with a bursty regime of transcription, and no RNA saturation is observed in the distributions. Moreover, all GRN parameters should also comply with computational constraints. On the one hand, the time step dt used for simulations must be small enough with respect to the GRN dynamics to avoid aliasing (under-sampling) effects. On the other hand, dt should not be too small, to save computation time. These constraints correspond to $$ k_{\text{on}} < d_{0} < k_{\text{off}} < \frac{1}{dt} $$ (4) and we deduce the inequalities for the ranges: $$ k_{\text{on\_min}} < k_{\text{on\_max}} < d_{0} < k_{\text{off\_min}} < k_{\text{off\_max}} < \frac{1}{dt}. $$ (5) We set \(k_{\text {on\_min}}\) to a default value of 0.001 h−1. The parameter \(k_{\text {on\_max}}\) is estimated from the time-course single-cell transcriptomic data after removing zeros. This truncation mimics a distribution in which the gene is always activated, so that kon is close to its maximum value \(k_{\text {on\_max}}\). With these truncated distributions, for each time point t we estimate kon,t using the moment-based method defined in [62]. We bootstrap 1000 times to obtain a list of values kon,t,n, where the index n corresponds to bootstrap sample n. For each time point we compute the 95th percentile of kon,t,n, and we then take the mean of these percentiles as a first estimate of \(k_{\text {on\_max}}\), as sketched below.
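A minimal sketch of this bootstrap step follows. The moment-based estimator of [62] is passed in as a hypothetical helper (estimate_kon_moment), since its internals are not reproduced here, and the data layout (one array of counts per time point) is an assumption.

```python
import numpy as np

def kon_max_first_estimate(counts_by_time, estimate_kon_moment, n_boot=1000, seed=0):
    """For each time point: drop the zeros (mimicking an always-active gene),
    bootstrap the moment-based estimator of [62] n_boot times, and take the
    95th percentile of the bootstrap values; the first estimate of k_on_max
    is the mean of these per-time-point percentiles."""
    rng = np.random.default_rng(seed)
    percentiles = []
    for counts in counts_by_time:
        x = np.asarray(counts)
        x = x[x > 0]                       # zero truncation
        boots = [estimate_kon_moment(rng.choice(x, size=x.size, replace=True))
                 for _ in range(n_boot)]
        percentiles.append(np.percentile(boots, 95))
    return float(np.mean(percentiles))
```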
This first estimate of \(k_{\text {on\_max}}\) is then bounded below and above by \(k_{\text {on\_max\_lim\_min}}\) and \(k_{\text {on\_max\_lim\_max}}\), respectively, given in Eq. (6), to guarantee that the observed kon can easily be reached during simulations with reasonable protein levels (because of the asymptotic behavior of the interaction function). In other words, \(k_{\text {on\_max}}\) must not be too close to the minimum or maximum observed kon, considering 10% margins. Finally, this bounded \(k_{\text {on\_max}}\) is capped at \(0.5\times \max \limits _{t}(d_{0}(t))\) to guarantee a 50% margin with respect to d0(t). $$ {\begin{aligned} k_{\text{on\_max\_lim\_min}} &= \frac{\max\limits_{t}(\underset{n}{\operatorname{median}}(k_{\text{on},t,n})) - 0.1\times k_{\text{on\_min}}}{0.9}\\ k_{\text{on\_max\_lim\_max}} &= \frac{\max\limits_{t}(\underset{n}{\operatorname{median}}(k_{\text{on},t,n})) - 0.9\times k_{\text{on\_min}}}{0.1}\\ \end{aligned}} $$ (6) The parameter \(k_{\text {off\_min}}\) is set to \(\max \limits _{t}(d_{0}(t))\) to comply with Eq. (5). The parameter \(k_{\text {off\_max}}\) is estimated like \(k_{\text {on\_max}}\) from the time-course single-cell transcriptomic data, but without zero truncation. For each time point t, we estimate koff,t using the moment-based method defined in [62]. We bootstrap 1000 times to obtain a list of values koff,t,n, where the index n corresponds to bootstrap sample n. For each time point we compute the 95th percentile of koff,t,n, and we then take the mean of these percentiles as a first estimate of \(k_{\text {off\_max}}\). This \(k_{\text {off\_max}}\) is then bounded below and above by \(k_{\text {off\_max\_lim\_min}}\) and \(k_{\text {off\_max\_lim\_max}}\), respectively, given in Eq. (7), to guarantee that the observed koff can easily be reached during simulations with reasonable protein levels (because of the asymptotic behavior of the interaction function). In other words, \(k_{\text {off\_max}}\) must not be too close to the minimum or maximum observed koff, considering 10% margins. Finally, this bounded \(k_{\text {off\_max}}\) is capped at 1/dt to guarantee simulation anti-aliasing. $$ {\begin{aligned} k_{\text{off\_max\_lim\_min}} &= \frac{\max\limits_{t}\left(\underset{n}{\operatorname{median}}(k_{\text{off},t,n})\right) - 0.1\times k_{\text{off\_min}}}{0.9}\\ k_{\text{off\_max\_lim\_max}} &= \frac{\max\limits_{t} \left(\underset{n}{\operatorname{median}}(k_{\text{off},t,n})\right) - 0.9\times k_{\text{off\_min}}}{0.1}\\ \end{aligned}} $$ (7) ODE fitting for protein translation and degradation rate (d1, s1) estimation The rates d1(t) and s1(t) are estimated by comparing the population proteomic kinetic data [39] with the mean RNA kinetics computed from the single-cell data [37]. The parameter d1(t) corresponds to the active protein decay rate, while the total protein degradation rate \(d_{1\_tot}(t)\) includes decay plus dilution by cell division; the associated total protein half-life is referred to as \(t_{1\_tot}(t)\). The parameters s1(t) and \(d_{1\_tot}(t)\) are estimated using a calibration algorithm based on a maximum-likelihood estimator (MLE) from the package of [63]. The objective function is the root-mean-squared error (provided by the package) comparing the experimental protein counts with the simulated ones given by the ODE of our model (1), with the RNA level provided by the experimental mean RNA data: $$P^{\prime}(t) = s_{1}(t) M(t) - d_{1}(t) P(t) $$ Fifty-two of our 90 selected genes were detected in the proteomic data (a minimal sketch of this ODE fit, for the constant-rate case, is given below).
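The sketch below illustrates the constant-rate version of this ODE fit. It replaces the spotpy-based MLE calibration [63] with a plain Nelder-Mead minimisation of the RMSE, so it approximates the published procedure rather than reproducing it; function and argument names are chosen here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fit_s1_d1(t, rna_mean, prot_exp, p0, dt=0.5):
    """Fit constant s1 and d1 in P'(t) = s1*M(t) - d1*P(t) by RMSE minimisation.
    t: measurement times; rna_mean: experimental mean mRNA level at those times;
    prot_exp: measured protein level; p0: protein level at t[0]."""
    t, rna_mean, prot_exp = map(np.asarray, (t, rna_mean, prot_exp))

    def simulate(s1, d1):
        grid = np.arange(t[0], t[-1] + dt, dt)
        m = np.interp(grid, t, rna_mean)        # linear interpolation of mean RNA
        p = np.empty_like(grid)
        p[0] = p0
        for i in range(1, grid.size):           # simple Euler integration
            p[i] = p[i - 1] + dt * (s1 * m[i - 1] - d1 * p[i - 1])
        return np.interp(t, grid, p)

    def rmse(log_params):
        s1, d1 = np.exp(log_params)             # optimise on log scale to keep rates > 0
        return float(np.sqrt(np.mean((simulate(s1, d1) - prot_exp) ** 2)))

    res = minimize(rmse, x0=np.log([0.1, 0.1]), method="Nelder-Mead")
    return np.exp(res.x)                        # fitted (s1, d1)
```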
Twenty-three of these genes fit the experimental data correctly with constant d1 and s1 during differentiation. Five genes were estimated with a variable s1(t) and a constant d1, to fit a constant protein level with a decreasing RNA level. For the remaining 24 genes, the protein level decreased while the RNA level was constant, which is modeled with s1 constant and d1(t) variable. For the genes that were not detected in our proteomic data, we turned to the literature [64] and found 13 homologous genes with associated estimates of d1 and s1. For the remaining 25 genes, we estimated the parameters with the following rationale: we consider that non-detection in the proteomic data is due to a low protein copy number, below 100. Moreover, [64] proposed a power-law relation between s1 and the mean protein level, which we confirmed with our data (see supporting information), resulting in the following definition: $$s_{1} = 10^{-1.47}\times P^{0.81}$$ The underlying linear regression (on the log scale) was performed using the scipy.stats.linregress() method from the SciPy package, yielding r2=0.55, slope=0.81, intercept=−1.47 and p=2.97×10−9. Therefore, if we extrapolate this relation to low protein copy numbers, assuming P<100 copies, s1 should be lower than 1 molecule/RNA/hour. Assuming the relation $$\text{Prot} = \text{RNA} \times \frac{s_{1}}{d_{1\_tot}}$$ between mean protein and RNA levels, we deduced a minimum value of d1 from the mean RNA level, given by d1>RNA/100. We set s1 and d1 to their maximum and minimum estimated values, respectively. Bimodal distribution likelihood for auto-positive feedback exponent (γauto) estimation We inferred the presence of auto-positive feedback by fitting an individual model for each gene, based on [36]. The model is characterized by a Hill-type power coefficient, whose value was inferred by maximizing the model likelihood, which is available in explicit form. The key idea is that genes with auto-positive feedback typically show, once viewed on an appropriate scale, a strongly bimodal distribution during their transitory regime. The interested reader may find details in Additional file 1 of [36], especially in sections 3.6 and 5.2. Note that such auto-positive feedback may reflect either a direct auto-activation or a strong but indirect positive loop, potentially involving other genes. The estimated Hill-type power coefficients for the in silico and in vitro networks are provided in the supporting information. Second step - Wave sorting Inflection estimator for wave time estimation The wave times for a gene's promoter, Wprom, and protein, Wprot, are estimated from their respective mean traces \(\overline {E}\) and \(\overline {P}\). The estimation differs depending on whether the mean trace is monotonous. In vitro wave times are provided in the supporting information. 1) If the mean trace is monotonous (checked manually), it is smoothed by a third-order polynomial approximation using the poly1d() method from the Python NumPy package. The wave time is then defined as the inflection time point of the polynomial function, where 50% of the evolution between minimum and maximum is reached. 2) If the mean trace is not monotonous, it is approximated by a piecewise-linear function with 3 breakpoints that minimizes the least-squares error. The linear fits are performed using the polynomial.polyfit() function from the Python NumPy package, and the breakpoints are selected using the optimize.brute() function from the SciPy package. We obtain a series of 4 segments with associated breakpoint coordinates and slopes (a brute-force sketch of this step is given below).
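The sketch below illustrates this piecewise-linear step. It replaces the scipy.optimize.brute grid search with an explicit enumeration of breakpoint indices and uses helper names chosen here, so it is an illustration of the idea rather than the original implementation.

```python
import numpy as np
from itertools import combinations

def piecewise_linear_fit(t, trace, n_breaks=3):
    """Try every choice of n_breaks interior breakpoints, fit each of the
    resulting segments with a straight line (np.polyfit), and keep the
    combination with the smallest summed squared error. Returns the
    breakpoint indices and the (slope, intercept) pair of each segment."""
    t, trace = np.asarray(t, float), np.asarray(trace, float)
    best = None
    for idx in combinations(range(2, len(t) - 1), n_breaks):
        bounds = [0, *idx, len(t)]
        if min(b - a for a, b in zip(bounds[:-1], bounds[1:])) < 2:
            continue                              # every segment needs >= 2 points
        segments, sse = [], 0.0
        for a, b in zip(bounds[:-1], bounds[1:]):
            slope, intercept = np.polyfit(t[a:b], trace[a:b], 1)
            sse += float(np.sum((slope * t[a:b] + intercept - trace[a:b]) ** 2))
            segments.append((slope, intercept))
        if best is None or sse < best[0]:
            best = (sse, list(idx), segments)
    return best[1], best[2]
```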
Slopes are thresholded: if the absolute value is lower than 0.2, the slope is considered null. We then look for inflection break times, i.e. times at which a segment with a non-null slope has a slope of opposite sign to the previous segment, or follows a segment with a null slope. Each inflection break time corresponds to the initial effect of a wave. An associated valid time, at which the wave effect applies, corresponds to the next inflection break time or to the end of differentiation. We thus obtain pairs of inflection break time and valid time, which define the temporal window of the associated wave effect. For each wave window, if the mean trace variation between the inflection break time and the valid time is large enough (i.e., greater than 20% of the gene's maximal variation over the whole differentiation process), a wave time is defined as the time at which half of the mean trace variation within the window is reached. The protein mean trace \(\overline {P}\) is given by the proteomic data when available; otherwise it is computed from simulation traces of 500 cells using the model with the parameters estimated earlier. The promoter mean trace \(\overline {E}\) is computed as follows from the mean RNA trace (from the single-cell transcriptomic data), with a time-delay correction induced by the mRNA degradation rate d0: $$\overline{E}(t) = \frac{k_{\text{on}}(t)}{k_{\text{on}}(t) + k_{\text{off}}(t)} $$ $$\overline{E}\left(t-\frac{1}{d_{0}(t)}\right) = \frac{d_{0}(t)}{s_{0}}\times\overline{M}(t)$$ Gene sorting Genes are sorted according to their promoter wave time Wprom. Genes with multiple waves, in the case of feedback for example, appear several times in the list. Moreover, genes are classified into groups according to their position in the network: genes directly regulated by the stimulus are called early genes; genes that regulate other genes are defined as regulatory genes; genes that do not influence other genes are identified as readout genes. Note that a gene can belong to several groups. The group of each gene can be deduced from its wave time estimates; the corresponding constraints were defined from the in silico benchmarking (see the "Results" section). A gene i is assigned to a group according to the following rules: if Wprom < 5 h, it is an early gene; if Wprom < 7 h, it could be an early gene or another type; if \(\max \limits _{i}(W_{prom,i}) +30\mathrm {h} < W_{prot}\), it is a readout gene; otherwise it could be a regulatory or a readout gene. Third step - Iterative network inference Interaction threshold (H) The interaction threshold H is estimated for each protein. It corresponds to the mean protein level at 25% of the way between the minimum and maximum mean protein levels observed during differentiation in in silico simulations: $$H = P_{\text{min}} + 0.25(P_{\text{max}} - P_{\text{min}})$$ We chose the value of 25% to maximize the amplitude of the variation of kon and koff of the target gene induced by the shift of the regulator protein level from its minimal to its maximal value (see Eq. (2)). Iterative calibration algorithm (θi,j) The following functions give a global overview of the iterative inference process. Generate_EARLY_network(): in a first step, we calibrate the interactions between the early genes and the stimulus (θi,0) to obtain an initial sub-GRN; the calibration algorithm Calibrate() is defined below. List_genes_sorted_by_Wave_time: this list is computed prior to the iterative inference (see the previous subsection).
Get_all_possible_interaction(GRN, Gene, Wave): for each GRN candidate, we enumerate all possible interactions between the new gene and the previously integrated regulatory genes, or the stimulus, according to their respective promoter and protein waves, with the following logic: if the promoter wave is lower than 7 h, an interaction between the stimulus and the new gene is possible; if the difference between the new gene's promoter wave and a regulatory gene's protein wave is between −20 h and +30 h, then an interaction between the new gene and that regulatory gene is possible. Note: if WASABI is run in "directed" mode, only the true interaction is returned. Calibrate(New_GRN): for interaction-parameter calibration, we used a maximum-likelihood estimator (MLE) from the spotpy package [63]. The goal is to fit the simulated single-cell marginal distribution of each gene to the in vitro one by tuning the interaction parameters θi,j. For the in silico study, we defined the GRN fit distance as the mean of the 3 worst gene-wise fit distances; for the in vitro study, we defined the GRN fit distance as the mean of the fit distances of all genes. The gene-wise fit distance is defined as the mean of the 3 highest Kantorovitch distances [42] among time points. For a given time point and a given gene, the Kantorovitch fit distance corresponds to a distance between the marginal distributions of the simulated and experimental expression data. At the end of calibration, the set of interaction parameters θi,j with the associated GRN fit distance is returned. Select_Best_New_GRN(): we fetch all GRN calibration outputs from the remote servers and select the best new GRNs to be expanded in the next iteration, updating the list List_GRN_candidate. The number of new network candidates is limited by the number of available computational cores. GRN simulation We use a basic Euler solver with a fixed time step (dt=0.5 h) to solve the mRNA and protein ODEs [36]. The promoter state evolution between t and t+dt is given by a Bernoulli-distributed random variable $$E(t+dt) = \operatorname{Bernoulli}(p(t))$$ drawn with a probability p(t) depending on the current kon, koff and promoter state: $$p(t)\,=\, E(t) e^{-dt (k_{\text{on}} + k_{\text{off}})} \,+\, \frac{k_{\text{on}}}{k_{\text{on}} + k_{\text{off}}} \left(1 - e^{-dt (k_{\text{on}} + k_{\text{off}})}\right).$$ Time-dependent parameters like d0, d1 and s1 are linearly interpolated between two time points. The stimulus Q is represented by a step function jumping from 0 to 1000 at t=0 h. The simulation starts at t=−60 h to ensure convergence to steady state before the stimulus is applied. The parameters kon and koff are given by Eq. (2). DOE: Design of experiments GRN: Gene regulatory network HPC: High parallel computing TF: Transcription factor WASABI: WAveS analysis based inference MacNeil LT, Walhout AJ. Gene regulatory networks and the role of robustness and stochasticity in the control of gene expression. Genome Res. 2011; 21(5):645–57. Greene JA, Loscalzo J. Putting the patient back together - social medicine, network medicine, and the limits of reductionism. N Engl J Med. 2017; 377(25):2493–9. https://doi.org/10.1056/NEJMms1706744. Sugimura R, Jha DK, Han A, Soria-Valles C, da Rocha EL, Lu Y-F, Goettel JA, Serrao E, Rowe RG, Malleshaiah M, Wong I, Sousa P, Zhu TN, Ditadi A, Keller G, Engelman AN, Snapper SB, Doulatov S, Daley GQ. Haematopoietic stem and progenitor cells from human pluripotent stem cells. Nature; 545:432. https://doi.org/10.1038/nature22370. Lis R, Karrasch CC, Poulos MG, Kunar B, Redmond D, Duran JGB, Badwe CR, Schachterle W, Ginsberg M, Xiang J, Tabrizi AR, Shido K, Rosenwaks Z, Elemento O, Speck NA, Butler JM, Scandura JM, Rafii S.
Conversion of adult endothelium to immunocompetent haematopoietic stem cells. Nature; 545:439. https://doi.org/10.1038/nature22326. Ieda M, Fu J-D, Delgado-Olguin P, Vedantham V, Hayashi Y, Bruneau BG, Srivastava D. Direct reprogramming of fibroblasts into functional cardiomyocytes by defined factors. Cell. 2010; 142(3):375–86. Madhamshettiwar PB, Maetschke SR, Davis MJ, Reverter A, Ragan MA. Gene regulatory network inference: evaluation and application to ovarian cancer allows the prioritization of drug targets. Genome Med. 2012; 4(5):41. https://doi.org/10.1186/gm340. Creixell P, Schoof EM, Erler JT, Linding R. Navigating cancer network attractors for tumor-specific therapy. Nat Biotechnol. 2012; 30(9):842. Chai LE, Loh SK, Low ST, Mohamad MS, Deris S, Zakaria Z. A review on the computational approaches for gene regulatory network construction. Comput Biol Med; 48:55–65. https://doi.org/10.1016/j.compbiomed.2014.02.011. Dojer N, Gambin A, Mizera A, Wilczyński B, Tiuryn J. Applying dynamic bayesian networks to perturbed gene expression data. BMC Bioinformatics. 2006; 7(1):249. https://doi.org/10.1186/1471-2105-7-249. PubMed PubMed Central Article CAS Google Scholar Vinh NX, Chetty M, Coppel R, Wangikar PP. Gene regulatory network modeling via global optimization of high order dynamic bayesian networks. BMC Bioinf. 2012; 27:2765–6. Akutsu T, Miyano S, Kuhara S. Identification of genetic networks from a small number of gene expression pattern under the boolean model. Pac Symp Biocomput. 1999; 4:17–28. Saadatpour A, Albert R. Boolean modeling of biological regulatory networks: A methodology tutorial. Methods. 2013; 62(1):3–12. https://doi.org/10.1016/j.ymeth.2012.10.012. Zhao W, Serpedin E, Dougherty ER. Inferring gene regulatory networks from time series data using the minimum description length principle. Bioinformatics. 2006; 22(17):2129–35. https://doi.org/10.1093/bioinformatics/btl364. Polynikins A, Hogan SJ, Bernardo M. Comparing different ode modelling approaches forgene regulatory networks. J Theor Biol. 2009; 261:511–30. Bansal M, Belcastro V, Ambesi-Impiombato A, Di Bernardo D. How to infer gene networks from protein profiles. Mol Syst Biol. 2007; 3:1–10. Svensson V, Vento-Tormo R, Teichmann S. Exponential scaling of single-cell rnaseq in the last decade. Nat Protoc. 2018; 13:599–604. Fiers M, Minnoye L, Aibar S, Bravo Gonzalez-Blas C, Kalender Atak Z, Aerts S. Mapping gene regulatory networks from single-cell omics data. Brief Funct Genomics. 2018. https://doi.org/10.1093/bfgp/elx046. PubMed PubMed Central CAS Article Google Scholar Babtie A, Chan TE, Stumpf MPH. Learning regulatory models for cell development from single cell transcriptomic data. Curr Opin Syst Biol. 2017; 5:72–81. Yvert G. 'particle genetics': treating every cell as unique. Trends Genet. 2014; 30(2):49–56. https://doi.org/10.1016/j.tig.2013.11.002. Dueck H, Eberwine J, Kim J. Variation is function: Are single cell differences functionally important?: Testing the hypothesis that single cell variation is required for aggregate function. Bioessays. 2016; 38(2):172–80. https://doi.org/10.1002/bies.201500124. Symmons O, Raj A. What's luck got to do with it: Single cells, multiple fates, and biological nondeterminism. Mol Cell. 2016; 62(5):788–802. https://doi.org/10.1016/j.molcel.2016.05.023. Cannoodt R, Saelens W, Saeys Y. Computational methods for trajectory inference from single-cell transcriptomics. Eur J Immunol. 2016; 46(11):2496–506. https://doi.org/10.1002/eji.201646347. 
Chen H, Guo J, Mishra SK, Robson P, Niranjan M, Zheng J. Single-cell transcriptional analysis to uncover regulatory circuits driving cell fate decisions in early mouse development. Bioinformatics. 2015; 31(7):1060–6. https://doi.org/10.1093/bioinformatics/btu777. Lim CY, Wang H, Woodhouse S, Piterman N, Wernisch L, Fisher J, Gottgens B. Btr: training asynchronous boolean models using single-cell expression data. BMC Bioinformatics. 2016; 17(1):355. https://doi.org/10.1186/s12859-016-1235-y. Moignard V, Woodhouse S, Haghverdi L, Lilly AJ, Tanaka Y, Wilkinson AC, Buettner F, Macaulay IC, Jawaid W, Diamanti E, Nishikawa S, Piterman N, Kouskoff V, Theis FJ, Fisher J, Gottgens B. Decoding the regulatory network of early blood development from single-cell gene expression measurements. Nat Biotechnol; 33(3):269–76. https://doi.org/10.1038/nbt.3154. Matsumoto H, Kiryu H. Scoup: a probabilistic model based on the ornstein-uhlenbeck process to analyze single-cell expression data during differentiation. BMC Bioinformatics. 2016; 17(1):232. https://doi.org/10.1186/s12859-016-1109-3. Cordero P, Stuart JM. Tracing co-regulatory network dynamics in noisy, single-cell transcriptome trajectories: World scientific; 2016, pp. 576–87. https://doi.org/10.1142/9789813207813-0053. Sanchez-Castillo M, Blanco D, Tienda-Luna IM, Carrion MC, Huang Y. A bayesian framework for the inference of gene regulatory networks from time and pseudo-time series data. Bioinformatics. 2017. https://doi.org/10.1093/bioinformatics/btx605. Matsumoto H, Kiryu H, Furusawa C, Ko MSH, Ko SBH, Gouda N, Hayashi T, Nikaido I. Scode: an efficient regulatory network inference algorithm from single-cell rna-seq during differentiation. Bioinformatics. 2017; 33(15):2314–21. https://doi.org/10.1093/bioinformatics/btx194. Ocone A, Haghverdi L, Mueller NS, Theis FJ. Reconstructing gene regulatory dynamics from high-dimensional single-cell snapshot data. Bioinformatics. 2015; 31(12):89–96. https://doi.org/10.1093/bioinformatics/btv257. Huang S. Non-genetic heterogeneity of cells in development: more than just noise. Development. 2009; 136(23):3853–62. https://doi.org/10.1242/dev.035139. Sokolik C, Liu Y, Bauer D, McPherson J, Broeker M, Heimberg G, Qi LS, Sivak DA, Thomson M. Transcription factor competition allows embryonic stem cells to distinguish authentic signals from noise. Cell Syst. 2015; 1(2):117–29. https://doi.org/10.1016/j.cels.2015.08.001. Munsky B, Trinh B, Khammash M. Listening to the noise: random fluctuations reveal gene network parameters. Mol Syst Biol. 2009; 5:318. https://doi.org/10.1038/msb.2009.75. Moris N, Pina C, Arias AM. Transition states and cell fate decisions in epigenetic landscapes. Nat Rev Genet. 2016; 17(11):693–703. https://doi.org/10.1038/nrg.2016.98. Papili Gao N, Ud-Dean MSM, Gandrillon O, Gunawan R. Sincerities: Inferring gene regulatory networks from time-stamped single cell transcriptional expression profiles. 2016. https://doi.org/10.1101/089110. Herbach U, Bonnaffoux A, Espinasse T, Gandrillon O. Inferring gene regulatory networks from single-cell data: a mechanistic approach. BMC Syst Biol. 2017; 11:105. https://doi.org/10.1186/s12918-017-0487-0. Richard A, Boullu L, Herbach U, Bonnaffoux A, Morin V, Vallin E, Guillemin A, Papili Gao N, Gunawan R, Cosette J, Arnaud O, Kupiec JJ, Espinasse T, Gonin-Giraud S, Gandrillon O. Single-cell-based analysis highlights a surge in cell-to-cell molecular variability preceding irreversible commitment in a differentiation process. PLoS Biol. 2016; 14(12):1002585. 
https://doi.org/10.1371/journal.pbio.1002585. Gandrillon O, Schmidt U, Beug H, Samarut J. Tgf-beta cooperates with tgf-alpha to induce the self-renewal of normal erythrocytic progenitors: evidence for an autocrine mechanism. Embo J. 1999; 18(10):2764–81. Leduc M, Gautier E-F, Guillemin A, Broussard C, Salnot V, Lacombe C, Gandrillon O, Guillonneau F, Mayeux P. Deep proteomic analysis of chicken erythropoiesis. bioRxiv. 2018. https://doi.org/10.1101/289728. https://www.biorxiv.org/content/early/2018/03/27/289728.full.pdf. Liu Z, Tjian R. Visualizing transcription factor dynamics in living cells. J Cell Biol. 2018. https://doi.org/10.1083/jcb.201710038. Lambert SA, Jolma A, Campitelli LF, Das PK, Yin Y, Albu M, Chen X, Taipale J, Hughes TR, Weirauch MT. The human transcription factors. Cell. 2018; 172:650–65. Baba A, Komatsuzaki T. Construction of effective free energy landscape from single-molecule time series. Proc Natl Acad Sci U S A. 2007; 104(49):19297–302. https://doi.org/10.1073/pnas.0704167104. Chai LE, Loh SK, Low ST, Mohamad MS, Deris S, Zakaria Z. A review on the computational approaches for gene regulatory network construction. Comput Biol Med. 2014; 48:55–65. https://doi.org/10.1016/j.compbiomed.2014.02.011. Hecker M, Lambeck S, Toepfer S, Van Someren E, Guthke R. Gene regulatory network inference: data integration in dynamic models—a review. Biosystems. 2009; 96(1):86–103. Chen S, Mar JC. Evaluating methods of inferring gene regulatory networks highlights their lack of performance for single cell gene expression data. BMC Bioinformatics. 2018; 19(1):232. https://doi.org/10.1186/s12859-018-2217-z. Stolovitzky G, Monroe D, Califano A. Dialogue on reverse engineering assessment and methods. Ann N Y Acad Sci. 2007; 1115(1):1–22. Schaffter T, Marbach D, Floreano D. Genenetweaver: in silico benchmark generation and performance profiling of network inference methods. Bioinformatics. 2011; 27(16):2263–70. Fisher J, Henzinger TA. Executable cell biology. Nat Biotechnol. 2007; 25(11):1239–49. https://doi.org/10.1038/nbt1356. Woodhouse S, Piterman N, Wintersteiger CM, Gottgens B, Fisher J. Scns: a graphical tool for reconstructing executable regulatory networks from single-cell genomic data. BMC Syst Biol. 2018; 12(1):59. https://doi.org/10.1186/s12918-018-0581-y. Bonnaffoux A, Caron E, Croubois H, Gandrillon O. A cloud-aware autonomous workflow engine and its application to gene regulatory networks inference. Presented at CLOSER 2018-8th International conference on Cloud computing and Service Science. Funchal: 2018. p. 1–8. Olsen JV, Mann M. Status of large-scale analysis of post-translational modifications by mass spectrometry. Mol Cell Proteomics. 2013; 12(12):3444–52. https://doi.org/10.1074/mcp.O113.034181. Manning KS, Cooper TA. The roles of rna processing in translating genotype to phenotype. Nat Rev Mol Cell Biol. 2017; 18(2):102–14. https://doi.org/10.1038/nrm.2016.139. Mandic A, Strebinger D, Regali C, Phillips NE, Suter DM. A novel method for quantitative measurements of gene expression in single living cells. Methods. 2017; 120:65–75. https://doi.org/10.1016/j.ymeth.2017.04.008. Lin YT, Hufton PG, Lee EJ, Potoyan DA. A stochastic and dynamical view of pluripotency in mouse embryonic stem cells. PLoS Comput Biol. 2018; 14(2):1006000. https://doi.org/10.1371/journal.pcbi.1006000. 
Zheng GXY, Terry JM, Belgrader P, Ryvkin P, Bent ZW, Wilson R, Ziraldo SB, Wheeler TD, McDermott GP, Zhu J, Gregory MT, Shuga J, Montesclaros L, Underwood JG, Masquelier DA, Nishimura SY, Schnall-Levin M, Wyatt PW, Hindson CM, Bharadwaj R, Wong A, Ness KD, Beppu LW, Deeg HJ, McFarland C, Loeb KR, Valente WJ, Ericson NG, Stevens EA, Radich JP, Mikkelsen TS, Hindson BJ, Bielas JH. Massively parallel digital transcriptional profiling of single cells. Nat Commun. 2017; 8:14049. Ud-Dean SM, Gunawan R. Optimal design of gene knockout experiments for gene regulatory network inference. Bioinformatics. 2016; 32(6):875–83. https://doi.org/10.1093/bioinformatics/btv672. Kreutz C, Timmer J. Systems biology: experimental design. FEBS J. 2009; 276(4):923–42. https://doi.org/10.1111/j.1742-4658.2008.06843.x. Semrau S, Goldmann J, Soumillon M, Mikkelsen TS, Jaenisch R, van Oudenaarden A. Lineage commitment revealed by single-cell transcriptomics of differentiating embryonic stem cells. 2016. https://doi.org/10.1101/068288. Jang S, Choubey S, Furchtgott L, Zou LN, Doyle A, Menon V, Loew EB, Krostag AR, Martinez RA, Madisen L, Levi BP, Ramanathan S. Dynamics of embryonic stem cell differentiation inferred from single-cell transcriptomics show a series of transitions through discrete cell states. Elife. 2017; 6. https://doi.org/10.7554/eLife.20487. Barabasi AL, Oltvai ZN. Network biology: understanding the cell's functional organization. Nat Rev Genet. 2004; 5(2):101–13. Hart Y, Alon U. The utility of paradoxical components in biological circuits. Mol Cell. 2013; 49(2):213–21. https://doi.org/10.1016/j.molcel.2013.01.004. Peccoud J, Ycart B. Markovian modelling of gene product synthesis. Theor Popul Biol. 1995; 48:222–34. Houska T, Kraft P, Chamorro-Chavez A, Breuer L. Spotting model parameters using a ready-made python package. PLoS ONE. 2015; 10(12):0145180. https://doi.org/10.1371/journal.pone.0145180. Schwanhausser B, Busse D, Li N, Dittmar G, Schuchhardt J, Wolf J, Chen W, Selbach M. Corrigendum: Global quantification of mammalian gene expression control. Nature. 2013; 495(7439):126–7. PubMed Article CAS Google Scholar We thank the computational center of IN2P3 (Villeurbanne/France), specially Pascal Calvat, for access to HPC facilities; Eddy Caron (Avalon, ENS Lyon/INRIA) for his support on parallel computing implementation; Patrick Mayeux for proteomic data; and Rudiyanto Gunawan (ETH, Zürich) for critical reading of the manuscript. We would like to thank all members of the SBDM team, Dracula team, and Camilo La Rota (Cosmotech) for enlightening discussions, We also thank the BioSyL Federation and the Ecofect Labex (ANR-11-LABX-0048) of the University of Lyon for inspiring scientific events. This work was supported by funding from the French agency ANR (ICEBERG; ANR-IABI-3096 and SinCity; ANR-17-CE12-0031) and the Association Nationale de la Recherche Technique (ANRT, CIFRE 2015/0436). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Single-cell transcriptomic data are available from [37]. Proteomic data are available from [39]. In silico generated data are available at https://osf.io/gkedt/. 
University Lyon, ENS de Lyon, University Claude Bernard, CNRS UMR 5239, INSERM U1210, Laboratory of Biology and Modelling of the Cell, Lyon, France Arnaud Bonnaffoux, Ulysse Herbach, Angélique Richard, Anissa Guillemin, Sandrine Gonin-Giraud & Olivier Gandrillon Inria Team Dracula, Inria Center Grenoble Rhône-Alpes, Lyon, France Arnaud Bonnaffoux, Ulysse Herbach & Olivier Gandrillon Cosmotech, Lyon, France Arnaud Bonnaffoux & Pierre-Alexis Gros Univ Lyon, Université Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan, Villeurbanne, France Ulysse Herbach Arnaud Bonnaffoux Angélique Richard Anissa Guillemin Sandrine Gonin-Giraud Pierre-Alexis Gros Olivier Gandrillon AB, UH, PAG and OG designed the study. AB performed the theoretical derivations, implemented the algorithms and conceived/analyzed the in silico study. UH implemented the algorithm for auto-positive feedback exponent estimation. AR, SG and AG participated in data generation. OG secured the funding. AB drafted the paper. UH, AR, AG, SG, PAG and OG revised the paper. All authors read and approved the final manuscript. Correspondence to Arnaud Bonnaffoux. The results of this work will be exploited within the frame of a new company VIDIUM for which AB will serve as CSO. Additional file with several tables and figures related to in vitro genes parameters and waves estimation, in silico benchmarking and in vitro GRN candidates fit distance distribution. (PDF 312 kb) Bonnaffoux, A., Herbach, U., Richard, A. et al. WASABI: a dynamic iterative framework for gene regulatory network inference. BMC Bioinformatics 20, 220 (2019). https://doi.org/10.1186/s12859-019-2798-1 Single-cell transcriptomics Gene network inference Multiscale modelling Proteomic T2EC Erythropoiesis Novel computational methods for the analysis of biological systems
Tensorial blind source separation for improved analysis of multi-omic data
Andrew E. Teschendorff1,2,3, Han Jing1,4, Dirk S. Paul5, Joni Virta6 & Klaus Nordhausen7
Genome Biology volume 19, Article number: 76 (2018)
There is an increased need for integrative analyses of multi-omic data. We present and benchmark a novel tensorial independent component analysis (tICA) algorithm against current state-of-the-art methods. We find that tICA outperforms competing methods in identifying biological sources of data variation at a reduced computational cost. On epigenetic data, tICA can identify methylation quantitative trait loci at high sensitivity. In the cancer context, tICA identifies gene modules whose expression variation across tumours is driven by copy-number or DNA methylation changes, but whose deregulation relative to normal tissue is independent of such alterations, a result we validate by direct analysis of individual data types.
Omic data is now most often generated in a multi-dimensional context. For instance, for the same individual and tissue type, one may measure different data modalities (e.g. genotype, mutations, DNA methylation or gene expression), which may help pinpoint disease-driver genes [1]. Alternatively, for the same individual, the same data type may be measured across different tissues or cell types [2, 3], which may help identify the most relevant cell types or tissues for understanding disease aetiology. We refer to all of these types of multi-dimensional data generally as multi-way or multi-omic data, and when samples and molecular features are matched, the data can be brought into the form of a multi-dimensional array, formally known as a tensor [4]. While several statistical algorithms for the analysis of multi-way or tensorial data are available [4–7], their application to real data has been challenging. There are mainly three reasons for this. First, the associated multi-way datasets are often very large and how well the algorithms perform on such large sets is currently still unclear. Second, the algorithms can be computationally demanding, compromising their benefit-to-cost ratio [4]. Third, interpreting the output of these algorithms requires an in-depth understanding of the underlying methods. Exacerbating this problem, most available software packages are not user-friendly, requiring the user to have such an in-depth understanding to extract the relevant biological information. Beyond these technical challenges, there is also a lack of comparative studies, making it difficult to choose the appropriate algorithm for the task in question. To help address some of these outstanding challenges, we here consider and evaluate a novel data tensor decomposition algorithm [8, 9], which is based on blind source separation (BSS), and specifically independent component analysis (ICA) [10]. Although common BSS techniques such as non-negative matrix factorisation and ICA have been successfully applied to a wide range of single omic data types, including e.g. gene expression [11–16], DNA methylation [17] and mutational data [18], their application to multi-way data is largely unexplored [19].
For single-omic datasets, the improved performance of ICA over non-BSS techniques like principal component analysis (PCA) is due primarily to the non-Gaussian and often sparse nature of biological sources of variation, which means that statistical deconvolution of biological samples benefits from non-linear decorrelation measures such as statistical independence (as used in ICA) [13]. It is, therefore, natural to consider analogous ICA algorithms for multi-way data, as we do here, since these may also lead to improved inference. To assess this, we here benchmark our novel tensorial BSS algorithm against some of the most popular and powerful algorithms for inferring sources of variation from multi-omic data, including JIVE (joint and individual variation explained) [5], PARAFAC (parallel factor analysis) [4, 6], iCluster [7] and canonical correlation analysis (CCA) [20–22]. Each of these algorithms has particular strengths and weaknesses, which render comparisons between them highly non-trivial. For instance, a limitation of CCA is that it can infer only common sources of variation between data types or tissues, in contrast to JIVE or PARAFAC, which can infer both joint as well as individual sources of variation. On the other hand, JIVE and CCA can be run on multiple data matrices with different numbers of molecular features, while PARAFAC and iCluster require matched sets of features (and samples) for each data type. Model complexity also differs substantially between methods, with PARAFAC exhibiting a much lower model complexity than an algorithm such as iCluster. Thus, a comparison of all of these methods is of paramount interest, and here we do so in a tensorial context, i.e. one where the multi-way data is defined over a matched set of molecular features (e.g. genes or CpGs) and samples across all data types, allowing the data to be brought into the form of a tensor. Specifically, we shall here consider order-3 data tensors, i.e. data which can be brought into the form of an array with three dimensions (often called modes). In our evaluation and comparison of all multi-way algorithms, we consider both simulated data as well as data from real epigenome-wide association studies (EWAS). We further illustrate potential uses of our tensorial BSS algorithm (i) to detect cell-type-independent and cell-type-specific methylation quantitative trait loci (mQTLs) in multi-cell-type or multi-tissue EWAS and (ii) to detect cancer gene modules deregulated by copy-number and DNA methylation changes. Tensorial ICA outperforms JIVE, PARAFAC, iCluster and CCA on simulated data Tensorial ICA (tICA) aims to infer from a data tensor statistically independent sources of data variation, which should correspond better to underlying biological factors ('Methods'). Indeed, since biological sources of data variation are generally non-Gaussian and often sparse, the statistical independence assumption implicit in the ICA formalism can help improve the deconvolution of complex mixtures and thus, better identify the true sources of data variation (Fig. 1). As with ordinary ICA itself, there are different ways of implementing tICA, and we here consider two different flavours: tensorial fourth-order blind identification (tFOBI) and tensorial joint approximate diagonalisation of high-order eigenmatrices (tJADE) ('Methods'). 
Specifically, we consider two modified versions of these, whereby tensorial PCA is applied as a noise reduction step (also called whitening) prior to implementing tICA, resulting in two algorithms we call tWFOBI and tWJADE ('Methods'). Decomposing data tensors using independent component analysis. Tensorial ICA (tICA) works by decomposing a data tensor, here depicted as an order-3 tensor with three dimensions representing features (CpGs/genes), samples and tissue or data type, into a source tensor S and two mixing matrices defined over tissue/data type and samples, respectively. The key property of tICA is that the independent components in S are as statistically independent from each other as possible. Statistical independence is a stronger criterion than linear decorrelation and allows improved inference of sparse sources of data variation. Positive kurtosis can be used to rank independent components to select the most sparse factors. The largest absolute weights within each independent component can be used for feature selection, while the corresponding component in the mixing matrices informs about the pattern of variation of this component across tissue/data types and samples, respectively. In the latter case, the weights can be correlated to sample phenotypes, such as normal/cancer status or genotype. For the first mixing matrix, the weights inform us about the relation between data types (e.g. if the copy-number change is positively correlated with gene expression), or for a multi-cell EWAS, whether mQTLs are cell type independent or not. + ve positive, −ve negative, CNV copy-number variation, DNAm DNA methylation, EWAS epigenome-wide association study, mQTL methylation quantitative trait locus, mRNA messenger RNA First, we tested the two tICA algorithms, as well as tensorial PCA (tPCA), on simulated multi-way data consisting of two different data matrices defined over the same 1000 features (genes) and 100 samples ('Methods'). The data for the two matrices was generated with a total of four sources of variation, two for each matrix, and with one source in each data matrix describing joint variation, driven by a total of 100 genes. A total of nine different noise levels were simulated, ranging from a high signal-to-noise ratio (SNR) regime (SNR = 3 and noise level = 1) to a low SNR regime (SNR = 0.6 and noise level = 5). For each noise level, a total of 1000 Monte Carlo runs were performed. In each run, we compared the multi-way algorithms in terms of their sensitivity (SE) and specificity (SP) to detect the 50 genes driving the joint variation. We did not consider the corresponding performance measures for the individual variation (i.e. the variation specific to one data type), because not all algorithms infer sources of individual variation (e.g. CCA), thus precluding direct comparison between them, and because identifying sources of joint variation is always the main purpose of multi-way algorithms. The number of components chosen for each method and the number of genes selected within components to compute SE and SP is explained in detail in 'Methods'. SE and SP values for joint variation of each algorithm and noise level were averaged over the 1000 runs ('Methods'). Benchmarking tICA and tPCA against PARAFAC, CCA, JIVE and iCluster, we observed that for low noise levels, all algorithms performed similarly, except PARAFAC, which exhibited significantly worse SE and SP values (Fig. 2a,c). 
For larger noise levels, we observed worse performance for JIVE, CCA and iCluster compared to the two different tICA methods (tWFOBI and tWJADE) (Fig. 2, 'Methods'). Differences in SE and SP between the tICA methods and JIVE, CCA, iCluster and PARAFAC were statistically significant (Fig. 2b,d). On this data, and since tICA uses tPCA as a preliminary step, we did not observe a substantial difference between tPCA and tICA (Fig. 2). We note that in this evaluation on the simulated data, we did not consider sparse CCA (SCCA), since the sparsity itself does not optimise sensitivity and thus SCCA would perform substantially worse than CCA (data not shown). Results were unchanged if we replaced Gaussian distributions (as the sources of variation) with super-Gaussian Laplace distributions, indicating that our results are not dependent on the type of data distribution (Additional file 1: Figure S1). Comparison of multi-way algorithms on simulated data. a Sensitivity (SE) versus noise level (x-axis) for seven different methods as indicated, as evaluated on simulated data (data points are averages over 1000 Monte Carlo runs). In each case, the data tensor was of size 2×100×1000, i.e. two data types, 100 samples and 1000 genes. b Left panel: Box plots of SE values for the same seven methods for the largest noise level (5). Each box contains the SE values over the 1000 Monte Carlo runs. Right panel: Corresponding heat map of P values of significance for each pairwise comparison of methods. P values were computed from a one-tailed Wilcoxon rank sum test. For each entry specified by a given row and column, the alternative hypothesis is that the method specified in the row has a higher SE than the method specified in the column. c,d As a,b, but for the specificity (SP). SE sensitivity, SP specificity Tensorial PCA/ICA reduces running time compared to JIVE, PARAFAC and iCluster Using the same simulated data, we further compared the algorithms in terms of their running times. A detailed comparison is cumbersome because the parameters specifying the number of components to search for are not directly comparable and differ substantially between methods. Nevertheless, using reasonable parameter choices for the simulated model above, we found that tPCA and tICA substantially speed up inference over methods such as JIVE or iCluster (Table 1). In fact, even when specifying a larger number of components for tPCA/tICA, compared to PARAFAC, JIVE or iCluster, the latter were substantially slower (Table 1), whilst also exhibiting marginally worse SE and SP values (Fig. 2). In general, we observed tICA methods to be at least 50 times faster than PARAFAC, and at least 100 times faster than JIVE and iCluster (Table 1). For much larger datasets, we found the application of iCluster to be computationally demanding and not practical. Thus, in subsequent analyses on real datasets, we decided to benchmark tPCA/tICA against PARAFAC, CCA, SCCA and JIVE. Table 1 Comparison of running times of multi-way algorithms tICA exhibits improved power in a real multi-tissue smoking EWAS Next, we asked if tPCA/tICA also leads to improved power on real data. An objective evaluation on real data is challenging due to the difficulty of defining a gold-standard set of true positive associations. Fortunately, however, a meta-analysis of several smoking EWAS in blood has demonstrated that smoking-associated differentially methylated CpGs (smkDMCs) are highly reproducible, defining a gold-standard set of 62 smkDMCs ('Methods') [23]. 
Recently, we also showed that effectively all 62 smkDMCs are associated with smoking exposure if DNA methylation (DNAm) is measured in buccal samples [2]. Thus, one way to compare algorithms objectively is in terms of their sensitivity to identify these 62 smkDMCs in a matched blood-buccal EWAS consisting of Illumina 450k DNAm profiles for a total of 152 women ('Methods', [2]). Because there are two distinct samples (one blood plus one buccal) per individual, most of the variation is genetic. Hence, to reduce this background genetic variation, we first computed the SE values on a reduced data matrix obtained by combining the 62 smkDMCs with 1000 randomly selected non-smoking associated CpGs (a total of 100 Monte Carlo randomisations). We considered both the maximum SE value attained by a component, as well as the overall SE obtained by combining selected CpGs from components significantly enriched for smkDMCs ('Methods'). This revealed that JIVE, CCA/SCCA and PARAFAC were all superseded by tPCA and tICA (Fig. 3a,b). Differences between tPCA and tICA were generally not significant (Fig. 3a), although tWFOBI attained higher combined SE values than tPCA and tWJADE (Fig. 3b). Comparison of multi-way algorithms on a multi-tissue smoking EWAS. a Left panel: Box plot of sensitivity (SE) values for each of the seven methods as applied to the data tensors of dimension 2×152×1062 (two tissues, 152 samples and 1000 randomly selected non-smkDMCs plus 62 smkDMCs) and for 100 different selections of non-smkDMCs. SE(Max) is the maximum sensitivity to capture 62 smkDMCs among all inferred components. Right panel: Heat map of the corresponding one-tailed paired Wilcoxon rank sum test, benchmarking the SE values of each method (y-axis) against each other method (x-axis). b As a, but now for the combined sensitivity (SE(All)) obtained from all enriched components. c,d As a,b, but now for data tensors of dimension 2×152×10 062 and for 100 randomly selected 10 000 non-smkDMCs. EWAS epigenome-wide association study, SE sensitivity, smkDMC smoking-associated differentially methylated CpG, SP specificity Next, we scaled up the data matrices by combining the 62 smkDMCs with a larger set of 10 000 non-smkDMCs, recomputing the SEs (again for 100 different Monte Carlo selections of 10 000 non-smkDMCs). As expected, with an increase in the number of CpGs, the SE of all algorithms dropped, likely driven by increased confounding due to genetic variation (Fig. 3c,d). With the increase in probe number, tICA (tWFOBI and tWJADE) outperformed not only JIVE, PARAFAC and CCA/SCCA, but also tPCA (Fig. 3c,d), in line with the increased sparsity of the smoking-associated source of variation. To illustrate how the output produced by tICA can be used for valuable inference, we focus on a particular Monte Carlo run and a specific component (estimated using tWJADE), which obtained a high sensitivity for smkDMCs (component 12, Fig. 4a). We note that the two independent components (ICs) S1,12,i and S2,12,i exhibited a less correlative structure than the corresponding components projected onto the blood and buccal dimensions, demonstrating that tWJADE does indeed identify components that are less statistically dependent (Fig. 4a). Confirming the high sensitivity of these ICs, the 62 smkDMCs were highly enriched among CpGs with the largest absolute weights in any one of the two ICs (Fig. 4a, Fisher test P<1×10−36 and SE=41/62∼0.66). 
We further verified that the 41 enriched smkDMCs exhibited strong Pearson correlations between their DNAm profiles in blood and buccal, as required since smoking exposure is associated with similar DNAm patterns in these two tissue types (Fig. 4b) [2]. Further confirming that component 12 is associated with smoking exposure, we correlated the weights of the corresponding column of the estimated mixing matrix with two different measures of smoking exposure, demonstrating in both cases a strong association (Fig. 4c). Thus, application of tICA on DNAm data results in components that are readily interpretable in terms of their associations with known smoking exposure across features and samples. Validation of tensorial ICA on multi-tissue smoking EWAS. a Left panel: Scatterplot of the weights of estimated independent components S1,12,i and S2,12,i from the data tensor of dimension 2×152×1062, with mode 1 representing tissue type, mode 2 the different women and mode 3 the CpGs. Red denotes the smkDMCs. Middle panel: As left panel, but now for the rotated tensor, projecting the data onto the whole blood (WB) and buccal (BUC) dimensions, demonstrating the strong correlation between the DNAm variation in whole blood and buccal tissue. Right panel: As left panel, but now for the absolute weights. The green dashed lines represent the cutoff point selecting the 62 CpGs with the largest absolute weights. There are in total 41 smkDMCs among the three larger quadrants, corresponding to a sensitivity of 41/62=0.66, with the enrichment P value given above the plot. b Pearson correlation heat map of the 41 smkDMCs between whole blood (WB) and buccal (BUC) tissue, with correlations computed over the 152 samples. c Plots of the 12th independent component of the mixing matrix in the sample space (y-axis) against smoking exposure for the 152 samples. Left panel: Smoking-pack-years. Right panel: Smoking status (never smokers, ex-smokers and smokers at sample draw). P values are from linear regressions. BUC buccal, DNAm DNA methylation, EWAS epigenome-wide association study, ExSmk ex-smoker, smkDMC smoking-associated differentially methylated CpG, WB whole blood tICA identifies mQTLs in a multi-cell-type EWAS Having established the better performance of tICA over other state-of-the-art methods, we next considered the application of tICA (specifically tWFOBI) in an EWAS of 47 healthy individuals, for which three purified cell types (B cells, T cells and monocytes) have been profiled with Illumina 450k DNAm bead arrays [3] ('Methods'). We chose tWFOBI over tWJADE because of its computational efficiency (Table 1). Given that three cell types were measured for each individual, the expectation is that a significant amount of inter-individual variation in DNAm would correlate with genetic variants (i.e. mQTLs) [24]. Thus, it is important to evaluate the ability of tICA to detect mQTLs and to determine whether these are blood-cell-subtype specific or not. Applying tWFOBI to the data tensor for the 3 cell types × 47 samples × 388 618 probes, we inferred a total of 11 ICs in the sample-mode space (yielding 33 ICs across sample and cell-type modes combined). For each of these 11 ICs in each cell type, we ranked probes according to their absolute weights and tested the enrichment of the top-500 probes against a high-quality list of 22 245 mQTLs as derived in [25] ('Methods'). 
This high-confidence list of mQTLs all passed a very stringent unadjusted P value threshold of P=1×10−14 in each of five different human cohorts, encompassing five different age groups [25]. We observed strong statistical enrichment for mQTLs in many ICs (Fig. 5a). We also tested separately for enrichment of chromosomes. This revealed enrichment, notably of chromosomes 6 and 21, but also of 1, 4, 7 and 8 (Fig. 5b). For instance, IC-9 was enriched for mQTLs and chromosome 1 in all three cell types (Fig. 5a,b). Supporting this, we found a clear example of a cell-type-independent mQTL mapping to the 1q32 locus of the PM20D1 gene (Fig. 5c), a major genome-wide association study (GWAS) locus associated with Parkinson's disease [26]. Focusing on chromosome 6, another cell-type-independent mQTL mapped to MDGA1 (Additional file 1: Figure S2), a major susceptibility locus for schizophrenia [27]. Other mQTLs driving ICs were cell type specific, e.g. mQTLs mapping to ATXN1 and SYNJ2 were dominant in the ICs projected along B cells, but not among T cells or monocytes (Additional file 1: Figure S3). Although assessing whether mQTLs are truly cell type independent or cell type specific is not possible without genotype information, we nevertheless estimated, based on the IC weight distribution of the mQTLs across cell types, that approximately 75% of the mQTLs enriched in ICs were cell type independent (Additional file 1: Figure S4). This estimate of the non-specificity of blood-cell-subtype mQTLs is similar to that obtained by a previous study (≥79%) using neutrophils, monocytes and T cells [28]. Tensorial ICA identifies components enriched for mQTLs in an EWAS of purified cell types. a Left panel: Bar plot of the odds ratio (OR) of enrichment of the top-ranked 500 CpGs for mQTLs in each of the 11 ICs and cell types, as indicated. Right panel: Corresponding heat map indicating the P values of enrichment as estimated using a one-tailed Fisher's exact test. b Heat maps of enrichment P values of the top-ranked 500 CpGs from each IC for chromosomes. The significance of P values is indicated in different colours using the same scheme as in a. c An example of a cell-type-independent mQTL mapping to chromosome 1. Plots show the weights of the corresponding components for B cells, T cells and monocytes, with the selected CpGs mapping to the mQTL indicated in red. d Validation of the mQTL in c in an independent blood-buccal EWAS. e Venn diagram showing the overlap of mQTLs derived from the ICs in the purified cell-type EWAS with those derived from the blood-buccal EWAS. The odds ratio (OR) and one-tailed Fisher test P value of the overlap are given. Chr chromosome, EWAS epigenome-wide association study, IC independent component, mQTL methylation quantitative trait locus, OR odds ratio Next, we validated the mQTLs found using an independent dataset. Thus, we applied tWFOBI to the blood-buccal EWAS considered earlier. We inferred a source tensor of dimension 2×26×447 259, i.e. a total of 52 ICs, defined over two tissue types and 26 components in sample-mode space. As before, we observed very strong enrichment, notably for the same chromosomes 6 and 21 (Additional file 1: Figure S5). The previously found mQTL at the PM20D1 locus was also prominent in one of the inferred ICs in this blood-buccal EWAS, confirming its validity and further supporting that this mQTL is cell type independent (Fig. 5d).
Overall, from the pure blood-cell-subtype EWAS, we detected a total of 1763 mQTLs, of which 547 were also observed in the blood-buccal EWAS (odds ratio = 12.8, Fisher test P<1×10−50, Fig. 5e). Thus, we can conclude that tWFOBI is able to identify components of variation across cell types and samples that capture a significant number of mQTLs, without matched genotype information.
tICA outperforms JIVE and PARAFAC in their sensitivity to detect mQTLs
Given the ability of tICA to detect mQTLs, we next benchmarked the performance of all algorithms in terms of their sensitivity to detect mQTLs in the EWAS of the three purified blood cell subtypes considered earlier. Because of the presence of three cell types, for this analysis we excluded CCA and sCCA since these methods are designed for only two data matrices. As before, we computed two sensitivity measures to detect the 22 245 mQTLs from the Aries database [25]: one designed to assess the overall sensitivity across all inferred components, and another designed to assess the maximum sensitivity attained by any single component. Varying the number of top-ranked selected CpGs in components from 500 up to 22 245, we observed that over the whole range, tFOBI and tJADE were optimal, clearly outperforming both PARAFAC and JIVE (Fig. 6a). The maximum sensitivity attained by any individual component was also best for the tICA methods (Fig. 6b). To better evaluate the enrichment of these components for mQTLs, we also considered the ratio of the sensitivity to the maximum possible sensitivity, recording the maximum value attained by any component. This demonstrated that when selecting the top-500 CpGs, the components inferred using tICA could capture over 60% of the maximum possible number of mQTLs, i.e. over 60% of the 500 CpGs mapped to mQTLs (Fig. 6c). In contrast, JIVE components contained only just over 40% of mQTLs (Fig. 6c). We note that although the performance of JIVE could be significantly improved by also including the components of individual variation, approximately 80% of mQTLs have been estimated to be independent of blood cell subtype [28], supporting the view that JIVE is less sensitive in capturing cell-type-independent mQTLs. All these results were stable to repeated runs of the algorithms; only PARAFAC exhibited variation between runs. However, this variation was relatively small (Additional file 1: Figure S6). tICA outperforms JIVE and PARAFAC in detecting mQTLs. a Plot of the overall sensitivity (SE(ALL), y-axis) against the number of top-ranked CpGs selected in a component (x-axis) for five different algorithms. b As a, but now for the maximum sensitivity attained by any single component (SE(MAX), y-axis). c Bar plot of the maximum sensitivity attained by any single component expressed as a fraction of the maximum possible value given the number of selected top-ranked CpGs per component. mQTL methylation quantitative trait locus, SE sensitivity Next, we repeated the same sensitivity analysis to detect mQTLs in our buccal-blood EWAS, now also including CCA and sCCA (as there are only two tissue/cell types). Confirming the previous analysis, tICA methods outperformed JIVE and PARAFAC by over 20% in terms of the overall sensitivity, whilst also attaining a better sensitivity at the individual component level (Additional file 1: Figure S7). Of note, the sensitivity of both CCA and sCCA was substantially worse, since mainly only the top canonical vector was significant.
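To make these two sensitivity summaries concrete, the short R sketch below computes SE(ALL) and SE(MAX) from a matrix of component weights; the function, its inputs and the simulated toy data are illustrative placeholders rather than the code used for the analysis.

```r
# Hedged sketch of the two sensitivity summaries: SE(ALL) pools the top-k CpGs
# selected from every component, SE(MAX) is the best single component.
# 'W' is a CpGs x components matrix of component weights (placeholder names).
sensitivity_summaries <- function(W, cpg_ids, true_ids, k = 500) {
  sel <- apply(W, 2, function(w) cpg_ids[order(abs(w), decreasing = TRUE)][seq_len(k)])
  se_per_comp <- apply(sel, 2, function(s) mean(true_ids %in% s))
  c(SE_ALL = mean(true_ids %in% unique(as.vector(sel))),
    SE_MAX = max(se_per_comp))
}

# toy usage with simulated weights and a simulated set of true mQTL CpGs
set.seed(3)
cpgs <- paste0("cg", seq_len(5000))
W    <- matrix(rnorm(5000 * 11), nrow = 5000, ncol = 11)
sensitivity_summaries(W, cpgs, true_ids = sample(cpgs, 200), k = 500)
```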
Application of tICA to multi-omic cancer data reveals dosage-independent effects of differentially expressed genes
To demonstrate further the ability of tICA to retrieve interesting patterns of variation in a multi-omic context, we applied it to the colon cancer dataset from The Cancer Genome Atlas (TCGA) [1], comprising a matched subset of copy-number variation (CNV), DNAm and RNA-seq data over 13 971 genes and 272 samples (19 normals plus 253 cancers) [29]. We applied tWFOBI to the resulting 3×272×13 971 data tensor, inferring a total of 3×37 ICs, which were ranked in order of decreasing kurtosis ('Methods'). Of the 37 ICs, 20 correlated with normal/cancer status (P<0.05/37∼0.001), with four of these capturing correlations between CNV and gene expression (Additional file 1: Table S1). All four ICs were strongly enriched for specific chromosomal bands (Additional file 1: Table S1), in line with those reported in the literature [1, 30], and one of these (IC-35) also exhibited concomitant correlation between DNAm and gene expression (Additional file 1: Table S1). Plotting the weights of IC-35 along the CNV, DNAm and mRNA axes confirmed the ability of tWFOBI to identify patterns of mRNA expression variation, which are driven by local CNV and which also associate with local variation in DNAm (Fig. 7a). The corresponding weights along the sample mode confirmed the association with normal/cancer status (Fig. 7b). Scatterplots of the z-score normalised CNV and DNAm patterns against gene expression for one of the main driver genes (STX6) confirmed the strong associations between CNV/DNAm and mRNA expression (Fig. 7c). Strikingly, we observed that while variations in copy number and DNAm of STX6 modulate expression differences between colon cancers, the deregulation of STX6 expression between normal and cancer is clearly independent of copy-number and DNAm state (Fig. 7c). Validation of tICA on a multi-omic cancer set. a Manhattan-like plots of IC-35 in gene space, as inferred using tWFOBI on the colon TCGA set, projected along the CNV, DNAm and mRNA axes. Red points highlight genes that had large weights in both CNV and mRNA dimensions (CNV), in both DNAm and mRNA dimensions (DNAm), and the union of these (mRNA). Chromosomes are arranged in increasing order and displayed in alternating colours. b Box plots of the corresponding weights of IC-35 in the sample space, discriminating normal colon (N) from colon cancer (C). P value is from a Wilcoxon rank sum test. c Scatterplots of a driver gene (STX6) between z-score normalised segment level (CNV) and mRNA expression (top panel) and between z-score normalised DNAm level and mRNA expression (lower panel). Colours indicate normal (green) and cancer (red). The regression line, Pearson correlation coefficient and P value are shown. C cancerous, CNV copy-number variation, DNAm DNA methylation, IC independent component, mRNA messenger RNA, N normal, PCC Pearson correlation coefficient, Pos. position To validate this important finding and determine the extent of this phenomenon, we analysed five additional TCGA datasets (see 'Methods'), but now using a more direct approach. For each TCGA set, we first identified the subset of differentially expressed genes (DEGs) between normal and cancer (adjusted P value threshold of 0.05) that also exhibit a positive correlation between expression and copy number as assessed over cancers only, i.e. we selected those DEGs with a CNV-dosage effect across cancers.
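As an illustration of this selection step, the following R sketch implements the same logic under simple assumptions (a per-gene t-test with Benjamini-Hochberg adjustment and a Pearson correlation computed over cancers only); the function name, inputs and toy data are placeholders, and the exact statistics used in the analysis may differ.

```r
# Hedged sketch: select DEGs (cancer vs normal) that also show a positive
# expression-CNV correlation across cancers only. 'expr' and 'cnv' are
# genes x samples matrices with matched rows/columns; names are placeholders.
select_dosage_DEGs <- function(expr, cnv, is_cancer, padj_thresh = 0.05) {
  p_de  <- apply(expr, 1, function(x) t.test(x[is_cancer], x[!is_cancer])$p.value)
  p_adj <- p.adjust(p_de, method = "BH")
  r_cnv <- sapply(seq_len(nrow(expr)), function(g)
    cor(expr[g, is_cancer], cnv[g, is_cancer]))
  which(p_adj < padj_thresh & r_cnv > 0)   # DEGs with a positive CNV-dosage effect
}

# toy usage on simulated matrices (10 normals, 50 cancers)
set.seed(7)
expr <- matrix(rnorm(200 * 60), nrow = 200)
cnv  <- matrix(rnorm(200 * 60), nrow = 200)
is_cancer <- rep(c(FALSE, TRUE), times = c(10, 50))
head(select_dosage_DEGs(expr, cnv, is_cancer))
```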
For those overexpressed in cancer, we then asked if individual tumours exhibiting a neutral CNV state (the CNV state of the normal samples) or a CNV loss still exhibited overexpression relative to the normal samples. Remarkably, we observed that a very high fraction of these DEGs remained overexpressed when restricting to the subset of cancer samples with low or neutral CNV, thus indicating that their overexpression in cancer is not dependent on CNV state, despite their expression across individual cancer samples being modulated by CNV state (Fig. 8a). This pattern of differential expression being independent of CNV state was also seen for DEGs with a CNV-dosage effect across tumours and which were underexpressed in cancer. Indeed, restricting to cancers with neutral or copy-number gain (Fig. 8a), these genes were generally still underexpressed in these cancer samples compared to normal tissue. Similar patterns were observed when DEGs were selected for DNAm-expression dosage effects across tumours (Fig. 8a). Specific examples for lung squamous cell carcinoma (LSCC) confirmed that DEGs in LSCC that exhibit a CNV or DNAm dosage effect across tumours exhibit differential expression that is not dependent on CNV or DNAm state (Fig. 8b,c). Thus, these data support the finding obtained using tICA, demonstrating the value and power of tICA to extract biologically important and novel patterns of data variation in a multi-omic context. Multi-dimensional patterns of differential expression in cancer. a Box plots of the fraction of differentially expressed genes in cancer, which remain differentially expressed when specific cancer subsets are compared to normal-adjacent samples, for six different TCGA cancer types (LSCC, LUAD, KIRC, KIRP, BLCA and COAD), and for four different scenarios: genes overexpressed in cancer and considering cancers with neutral or copy-number loss of that gene (first panel), genes underexpressed in cancer and considering cancers with neutral or copy-number gain (second panel), genes overexpressed in cancer and considering cancers with the highest levels of gene promoter DNAm (third panel), and finally genes underexpressed in cancer and considering cancers with the lowest levels of gene promoter DNAm (fourth panel). In each panel, blue denotes the fraction of over/underexpressed genes that are differentially expressed when only the specific cancer subset is compared to the normal samples. Magenta denotes the fraction that are overexpressed and green denotes the fraction that are underexpressed. b Scatterplots of mRNA expression against either copy-number variation level (CNV) or DNAm level for selected genes in LSCC. The selected genes represent examples of genes from a. For instance, BIRC5 in LSCC is overexpressed in cancer compared to normal, and this overexpression relative to normal is independent of the CNV of the cancer. c As b, but the 3D scatterplots also display the CNV or DNAm level. These plots illustrate that the difference in expression between cancer and normal is also independent of the other variable (e.g. CNV or DNAm). For instance, the underexpression of GPX3 in LSCC is neither driven by promoter DNAm nor by CNV losses. BLCA bladder adenocarcinoma, C cancerous, CN copy number, CNV copy-number variation, DEG differentially expressed gene, COAD colon adenocarcinoma, DNAm DNA methylation, KIRC kidney renal cell carcinoma, KIRP kidney papillary carcinoma, LSCC lung squamous cell carcinoma, LUAD lung adenocarcinoma, mRNA messenger RNA, N normal, Overexpr. 
overexpression, Underexpr. underexpression
Here we have assessed and benchmarked a novel suite of tensorial decomposition algorithms (tPCA, tWFOBI, and tWJADE) against a number of state-of-the-art alternatives. Specifically, while popular multi-way algorithms such as JIVE, iCluster or CCA/SCCA are in principle applicable to non-tensorial multi-way data (e.g. if the features across data types are distinct or not matched), when assessed in a tensorial context (i.e. when all dimensions are matched), these established methods are outperformed by the tensorial PCA and ICA methods considered here. This was demonstrated not only on simulated data, but also in the context of two real EWAS, where tICA methods were significantly more powerful in detecting differentially methylated CpGs associated with an epidemiological factor (smoking) and single-nucleotide polymorphisms (SNPs; mQTLs). For a real EWAS, tICA also outperformed tPCA, in line with the fact that biological sources of data variation are non-Gaussian and sparse, and therefore, more readily identified using statistical independence as a (non-linear) deconvolution criterion (as opposed to the linear decorrelation criterion used in tPCA). Thus, this extends the improvements seen for ICA over PCA on ordinary omic data matrices [13, 16] to the tensorial context. In addition, tPCA and tICA offer substantial (50–100-fold) speed advantages over methods like iCluster, JIVE and PARAFAC, which can become computationally demanding or even prohibitive. Further application of tICA to a multi-cell-type EWAS (B cells, T cells and monocytes) revealed its ability to identify loci enriched for cis-mQTLs (as cis-mQTLs make up over 90% of validated mQTLs in the Aries database [25]). Indeed, tICA achieved relatively high sensitivity values with top-ranked CpGs in components containing over 60% mQTLs. Given that we did not have access to matched genotype information here, our results demonstrate the potential of tICA to detect mQTLs in the absence of such genotype information. For instance, it identified many cell-type-independent mQTLs, of which a substantial proportion have been validated in an independent blood-buccal EWAS study, and with several mapping to key GWAS loci for important diseases like Parkinson's and schizophrenia. Although most of the identified mQTLs were blood cell type independent, tICA estimated that approximately 25% of mQTLs may be blood cell type specific, in line with the estimate of 20% obtained by Blueprint using a slightly different combination of blood cell subtypes (neutrophils, monocytes and T cells) [28]. We note that application of tICA to any multi-cell-type or multi-tissue EWAS is likely to have components strongly enriched for mQTLs, since for the same individuals, DNAm is being measured in at least two different tissues or cell types, and therefore, genetic effects that do not depend on cell type are bound to explain most of the inter-individual variation [24, 31]. Thus, we conclude that tICA could be an extremely versatile tool for identifying novel candidate mQTLs in multi-cell EWAS for which matched genotype information may not be available. tICA may also help to identify groups of widely separated mQTLs that are regulated by the same SNP and bound e.g. by a common transcription factor [32]. More generally, tICA can be applied to any multi-way data tensor to identify complex patterns of variation correlating with phenotypes of interest and the underlying features driving these variation patterns.
This is accomplished by first correlating inferred ICs of variation in the sample-mode space with sample phenotype information (e.g. age, smoking, normal/cancer status and genotype) and subsequently selecting the features with the largest weights in these correlated components. As an illustrative example, the application of tICA to a multi-omic TCGA dataset revealed a deep novel insight: namely, that most DEGs in cancer that exhibit a CNV or DNAm dosage-dependent effect on expression across individual tumours exhibit differential expression relative to the normal tissue in a manner that does not, in fact, depend on CNV or promoter DNAm state. In other words, although CNV and DNAm variation strongly modulates expression variation of these DEGs across individual tumours, for most of the genes exhibiting this CNV or DNAm dosage-dependent expression pattern, their deregulation relative to normal cells appears to be independent of the underlying CNV or promoter DNAm state. Although it is clear that differential expression in cancer can be the result of many mechanisms other than CNV or DNAm, our observation is significant, because we did not just select cancer DEGs, but the subset of these that exhibit a CNV or DNAm dosage-dependent effect on expression across tumours. The implications of our observation are important, given that many cancer classifications have been derived from unsupervised (clustering) analyses that were performed using only tumours, thus ignoring their patterns of variation relative to the normal reference state. Other large cancer studies, such as METABRIC [33], which did not profile normal tissue samples, identified novel candidate oncogenes and tumour suppressors solely based on CNV-dosage effects on gene expression across cancers, yet our results indicate that this could identify many false positives in the sense that their overexpression or underexpression in cancer is not dependent on the underlying CNV state. We point out that although this finding could have been obtained without application of a multi-way algorithm, this would have required substantial prior insight. Therefore, this subtle pattern of variation across multiple data types was only discovered thanks to applying an agnostic method like tICA. Although we have shown the value of tICA in identifying mQTLs and interesting patterns of variation across different data types in cancer-genome data, it is also important to discuss some of the limitations, which, however, also apply to all the other multi-way algorithms considered here. In particular, identifying sources of DNAm variation associated with epidemiological factors in a multi-tissue EWAS setting can be difficult due to confounding genetic variation. Indeed, in our application to a buccal-blood Illumina 450k EWAS, we found that the sensitivity of all algorithms dropped very significantly if they were applied to all ∼480 000 CpGs. Thus, it is important to devise improvements to these tensorial methods. For instance, one solution may be to first perform dimensional reduction using supervised feature selection on separate data types, and subsequently apply the tensorial methods on a reduced feature space. Alternatively, supervised tensorial methods, such as tensorial sliced inverse regression [34], may help to identify sources of variation specifically associated with epidemiological variables.
In summary, the combined tPCA and tICA methods presented here will be an extremely valuable tool for analysis and interpretation of complex multi-way data, including multi-omic cancer data, as well as for the detection and clustering of mQTLs in multi-cell-type EWAS where genotype information may not be available.
Below we briefly describe the main tensorial BSS algorithms [8, 9, 35] as implemented here. For more technical details, see [8, 9, 35]. We also provide brief details of our implementation of JIVE, PARAFAC, iCluster, CCA and SCCA. All these implementations are available as R functions within Additional file 2.
Tensorial PCA
We assume that we have \(i=1,\ldots,p\) independent and identically distributed realisations of a matrix \(X_{i}\in \mathbf{R}^{p_{1}\times p_{2}}\), which can be structured as an order-3 data tensor X of dimension \(p_{1}\times p_{2}\times p\). Then, tPCA decomposes X as follows:
$$ X=S\odot_{m=1}^{2}\Omega_{m}, $$
where S is also a 3-tensor of dimension \(p_{1}\times p_{2}\times p\) and \(\Omega_{m}\) (m=1,2) are orthogonal \(p_{m}\times p_{m}\) matrices, i.e. \(\Omega_{m}^{T}\Omega_{m}=I_{p_{m}}\). Here, ⊙ denotes the tensor contraction operator. For instance, for Z an r-tensor of dimension \(p_{1}\times \cdots \times p_{r}\) and A a matrix of dimension \(p_{m}\times p_{m}\), \(Z\odot_{m}A\) describes the r-tensor with entries \((Z\odot_{m}A)_{i_{1}\ldots i_{m}\ldots i_{r}}=Z_{i_{1}\ldots j_{m}\ldots i_{r}}A_{i_{m}j_{m}}\), where the Einstein summation convention is assumed (i.e. indices appearing twice are summed over, e.g. \(M_{ik}M_{in}=\sum_{i}{M_{ik}M_{in}}=\left(M^{T}M\right)_{kn}\)). Thus, \(S\odot_{m=1}^{2}\Omega_{m}\) is a 3-tensor with entries
$$ \left(S\odot_{m=1}^{2}\Omega_{m}\right)_{i_{1}i_{2}i}=S_{k_{1}k_{2}i}(\Omega_{1})_{i_{1}k_{1}}(\Omega_{2})_{i_{2}k_{2}}. $$
In the above tPCA decomposition, the entries \(S_{k_{1}k_{2}}\) are assumed to be linearly uncorrelated. Introducing the operator \(\odot_{-m}\), which for general r is defined in entry form by
$$ (X\odot_{-m}X)_{uv}=X_{i_{1}\ldots i_{m-1}ui_{m+1}\ldots i_{r}i}X_{i_{1}\ldots i_{m-1}vi_{m+1}\ldots i_{r}i}, $$
the assumption of uncorrelated components means that the covariance matrix \(S\odot_{-m}S=\Lambda_{m}\) is diagonal of dimension \(p_{m}\times p_{m}\). Its entries are the ranked eigenvalues of the m-mode covariance matrix \((X\odot_{-m}X)\), which can be expressed as
$$ (X\odot_{-m}X)=\Omega_{m}\Lambda_{m}\Omega_{m}^{T}. $$
These ranked eigenvalues are useful for performing dimensional reduction, i.e. projecting the data onto subspaces carrying significant variation. For instance, one could use random matrix theory (RMT) [17, 36] on each of the m-mode covariance matrices above to estimate the appropriate dimensionalities \(d_{1},\ldots,d_{r}\). This would lead to a tPCA decomposition of the form \(X=S\odot_{m=1}^{2}\Omega^{(R)}_{m}\), with S a \(d_{1}\times d_{2}\times p\) tensor and each \(\Omega^{(R)}_{m}\) a reduced matrix obtained from \(\Omega_{m}\) by selecting the first \(d_{m}\) columns. We note that for any of the original dimensions \(p_{1},\ldots,p_{r}\) that are small, such dimensional reduction is not necessary. In the applications considered here, our data tensor X is typically of dimension \(n_{t}\times n_{s}\times n_{G}\), where \(n_{t}\) denotes the number of data or tissue types, \(n_{s}\) the number of samples and \(n_{G}\) the number of features (e.g. genes or CpGs). We note that the tPCA decomposition is performed on the first two dimensions (typically data type and samples), so there are two relevant covariance matrices.
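To make the m-mode covariance construction concrete, the following base-R sketch builds the two covariance matrices \((X\odot_{-m}X)\) of a small order-3 tensor and eigendecomposes them; it is purely illustrative, with arbitrary dimensions, and is not the tensorBSS implementation used here.

```r
# From-scratch illustration of the two m-mode covariance matrices
# (X o_{-m} X) for an order-3 tensor X of dimension p1 x p2 x p, with the
# third mode indexing the p i.i.d. realisations (e.g. CpGs). In practice the
# tPCA() function of the tensorBSS package is used; this is illustrative only.
set.seed(1)
p1 <- 2; p2 <- 10; p <- 1000
X  <- array(rnorm(p1 * p2 * p), dim = c(p1, p2, p))

# centre over the realisation mode
X <- X - array(apply(X, c(1, 2), mean), dim = c(p1, p2, p))

mode_cov <- function(X, m) {
  pm <- dim(X)[m]
  S  <- matrix(0, pm, pm)
  for (i in seq_len(dim(X)[3])) {
    Z <- X[, , i]                               # p1 x p2 slice of realisation i
    S <- S + if (m == 1) Z %*% t(Z) else t(Z) %*% Z
  }
  S                                             # (X o_{-m} X), a pm x pm matrix
}

# eigendecompositions give the rotations Omega_m and ranked eigenvalues Lambda_m;
# an RMT criterion on the eigenvalues can then fix the reduced dimensions d_m
eig1 <- eigen(mode_cov(X, 1))
eig2 <- eigen(mode_cov(X, 2))
```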
In the special case of a data matrix (a 2-tensor), standard PCA involves the diagonalisation of one data covariance matrix. Hence, for a 3-tensor, there are two data covariance matrices, and for an (r+1)-tensor, there are r. Here we use tPCA as implemented in the tensorBSS R package [37].
tICA: the tWFOBI and tWJADE algorithms
For a data tensor \(X\in \mathbf{R}^{p_{1}\times \dots \times p_{r}\times p}\), the tICA model is
$$ X = S\odot_{m=1}^{r}\Omega_{m}, $$
but now with the \(p_{1}\cdots p_{r}\) random variables \(S_{k_{1}\ldots k_{r}}\in \mathbf{R}^{p}\) (\(S\in \mathbf{R}^{p_{1}\times \ldots \times p_{r}\times p}\)) mutually statistically independent and satisfying \(\operatorname{E}[S_{k_{1}\ldots k_{r}}]=0\) and \(\operatorname{Var}[S_{k_{1}\ldots k_{r}}]=I\). We note that X could be a suitably dimensionally reduced version \(X^{(R)}\) of X, such as that obtained using tPCA. For instance, in our applications, \(X^{(R)}\) would typically be a 3-tensor of dimension \(n_{t}\times d_{S}\times n_{G}\) where \(d_{S}<n_{S}\). This dimensional reduction, and optionally the scaling of variances, is known as whitening (W). As with ordinary ICA, there are different algorithms for inferring mutually statistically independent components \(S_{k_{1}\ldots k_{r}}\). One algorithm is based on the concept of simultaneously maximising the fourth-order moments (kurtosis) of the ICs (since by the central limit theorem, linear mixtures of these are more Gaussian and therefore, have smaller kurtosis values). This approach is known as fourth-order blind identification (FOBI) [38]. Alternatively, one may attempt a joint approximate diagonalisation of higher order eigenmatrices (JADE) [35, 39]. We note that although we use the tFOBI and tJADE functions in tensorBSS, these do not implement tPCA beforehand. Hence, in this work we implement modified versions of tFOBI and tJADE, which include a prior whitening transformation with tPCA. We call these modified versions tWFOBI and tWJADE.
Benchmarking of tPCA and tICA against other tensor decomposition algorithms
JIVE (joint and individual variation explained) [5] is a powerful decomposition algorithm that identifies both joint and individual sources of data variation, i.e. sources of variation that are common and specific to each data type. For two data types (i.e. two tissue types or two types of molecular features), three key parameters need to be specified or estimated to run JIVE. These are the number of components of joint variation (dJ) and the number of components of variation that are specific to each data type (dI1 and dI2). On simulated data, these parameters are chosen to be equal to the true (known) values, i.e. for our simulation model, dJ=1, dI1=1 and dI2=1. In our real-data applications, dJ is estimated using RMT on the concatenated matrix obtained by merging the two data type matrices together (after z-score normalising features to make them comparable), whilst dI1 and dI2 are estimated using RMT [17]. We note that these are likely upper bounds on the true number of individual sources of variation that are not also joint. We implemented JIVE using the r.jive R package available from http://www.r-project.org. PARAFAC (parallel factor analysis) [4, 6] is a tensor decomposition algorithm in which a data tensor is decomposed into the sum of R terms. Each term is a factorised outer product of rank-1 tensors (i.e. vectors) over each mode. Thus, the one key parameter is R, which is the number of terms or components in the decomposition. In our simulation model, we chose R=4.
Although one of the two sources of variation in each data type is common to both (hence, there are three independent sources), we nevertheless ran PARAFAC with one additional component to assess its ability to infer components of joint variation more fairly. In the real-data applications, we estimated R as \(\sum _{i=1}^{n_{t}}{dI_{i}}-dJ\) (with n t the number of tissue or cell types), since this should approximately equal the total number of independent sources of variation. We implemented PARAFAC using the multiway R package available from http://www.r-project.org. iCluster [7] is a joint clustering algorithm for multi-way data. It models joint and individual sources of variation as latent Gaussian factors. The key parameter is K, which is the total number of clusters to infer. Although for the simulated data there were only three independent sources of variation, we chose K=4 to assess more fairly the ability of the algorithm to infer the joint variation (choosing K=3 would force the algorithm to find the source of joint variation). We implemented iCluster using the iCluster R package available from http://www.r-project.org. CCA [20] and its sparse version, sparse CCA (sCCA/SCCA) [21, 22], are methods to identify joint sources of variation (called canonical vectors) between two data matrices, where at least one of the dimensions is matched across data types. Here we implement the version of CCA and sCCA in the R package PMA available from http://www.r-project.org. One key parameter is K, the maximum number of canonical vectors to search for. Another parameter is the number of permutations used to estimate the significance of the covariance of each of the K canonical vectors. In each permutation, one of the data matrices is randomised (say by permuting the features around) and CCA/sCCA is reapplied. Since the data matrices are typically large, the distribution of covariances for the permuted cases is very tight. Thus, even 25 permutations are sufficient to estimate the number of significant canonical vectors reasonably well. The number of significant canonical vectors was defined as the number of components that exhibit observed covariances larger than the maximum value obtained over all 25 permutations, and is, thus, bounded above by K. In the non-sparse case, the two penalty parameters were chosen to be equal to 1, which means no penalty term is used. For sCCA, we estimated the best penalty parameters using an optimisation procedure, as described in [21, 22], with the number of permutations set to 25 and the number of iterations equal to 15. On the simulated data, we ran CCA with K=3, as K needs to specify only the maximum number of components to search for (the actual number of significant canonical vectors is one in our instance, as we have one source of joint variation). In the real-data applications, we chose K to be equal to dJ, as estimated using the procedure for JIVE, and used a larger number of iterations (50) per run. Evaluation on simulated data Here we describe the simulation model. The model first generates two data matrices of dimension 1000×100, representing two data types (e.g. DNA methylation and gene expression) where rows represent features and columns samples. We assume that the column and row labels (i.e. samples and genes) of the two matrices are identical and ordered in the same way. We assume one source of individual variation (IV) for each data matrix, each driven by 50 genes and 10 samples with the 50 genes and 10 samples unique to each data matrix. 
We also assume one source of common variation driven by a common set of 20 samples. The genes driving this common source of variation, however, are assumed distinct for each data matrix. In total, there are 100 genes (50 for each data matrix) associated with this joint variation (JV). For the 50 genes driving the JV in one data type and the 20 samples associated with this JV, we draw the values from a Gaussian distribution \(\mathcal {N}(e,\sigma)\), whereas for the other 50 genes in the other data type, we draw them from \(\mathcal {N}(-e,\sigma)\), all with e=3 and σ representing the noise level. Likewise for the IV, we use Gaussians \(\mathcal {N}(e,\sigma)\). The rest of the data is modelled as noise \(\mathcal {N}(0,\sigma)\). We consider a range of nine noise levels, with σ ranging from 1 to 5 in steps of 0.5. Thus, at σ=3, the SNR=e/σ=1. For each noise level, we perform 1000 Monte Carlo runs, and for each run and algorithm, we estimate SE and SP for correctly identifying the 100 genes driving the JV. For tPCA, tWJADE and tWFOBI, SE and SP were calculated as follows. We inferred a total of 12 components over the combined data type and sample modes (2 in data type mode × 6 in sample space). We then projected the inferred components onto the original data-type dimensions, using the inferred 2×2 mixing matrix. For each data type and each of the six components, we then selected the top-ranked 50 genes by absolute weight in the component. This allowed us to compute an SE and SP value for each data type and component. For each component, we then averaged the SE and SP values over the two data types. In the last step, we select the component with the largest SE and SP value and record these values. We note that the resulting SE and SP values are not dependent on choosing 12 components. As long as the number of estimated components is larger than the total number of components of variation in the data (which for the simulated data is four), the results are invariant to the number of inferred components. For CCA, which can only infer sources of joint variation, we ran it to infer a number of components (K=3) larger than the true number (there is only one source of JV). Pairs of canonical vectors were then selected according to whether their joint variance is larger than expected, as assessed using permutations. From here on, the procedure to compute SE and SP proceeds as for the other algorithms, by selecting the component with the best SE and SP value. As with the other methods, the results do not depend on how we choose K as long as K is larger than or equal to 1. For PARAFAC, we ran it to infer R=4 components. Since for PARAFAC there is only one inferred projection across features per component, for each component we rank the features according to their absolute weight, select the top-ranked 50, and then compute two separate SE (or SP) values, one for each of the two sets of 50 true positive genes driving JV. We then select, for each set of JV driver genes, the component achieving the best SE (or SP). Finally, we average the SE and SP values for the two sets of true positives. As with the other algorithms, the results do not depend on the choice of R, as long as R is larger than or equal to 4 (since there are four sources of variation, two of IV and two of JV, which counts as two in the PARAFAC setting). For JIVE, we ran it to infer one source of JV and two sources of IV.
Because JIVE stacks the data matrices corresponding to the two data types together, we then select the 100 top-ranked genes, ranked by absolute weight in the inferred JV matrix. SE and SP are then computed. Once again, the results are stable to choosing a larger number of inferred sources of JV, because for the simulated data there is only one source of JV. Further details for all methods can be found in Additional file 2. Finally, for each algorithm and noise level, SE and SP are averaged over all the 1000 Monte Carlo runs. The statistical significance of the SE and SP values between algorithms was assessed using paired non-parametric Wilcoxon rank sum tests. The whole analysis above was repeated for sources of variation drawn from a Laplace distribution (with the same mean and standard deviation as the Gaussians above), to capture the super-Gaussian nature of real biological data better.
Illumina 450k DNA methylation and multi-way TCGA datasets
We analysed Illumina 450k datasets from three main sources. One dataset is a multi-blood-cell subtype EWAS derived from 47 healthy individuals and three cell types (B cells, T cells and monocytes) [3]. Specifically, we used the same normalised data as used in [3], with the resulting data tensor being of dimension 3×47×388 618, after removing poor quality probes and probes with SNPs [40]. Another dataset was generated in [2]. It consists of two tissue types (whole blood and buccal), 152 women and 447 259 probes, resulting in a data tensor of dimension 2×152×447 259. After quality control, and after removing probes on the X and Y chromosomes, polymorphic CpGs, probes with SNPs at the single-base extension site and probes containing SNPs in their body as determined by Chen et al. [40], we were left with 447 259 probes. Finally, we also analysed six datasets from TCGA. Specifically, we processed the RNA-seq, Illumina 450k DNAm and copy-number data for six different cancer types: colon adenocarcinoma (COAD), lung adenocarcinoma (LUAD), lung squamous cell carcinoma (LSCC), kidney renal cell carcinoma (KIRC), kidney papillary carcinoma (KIRP) and bladder adenocarcinoma (BLCA). All of these contained a reasonable number of normal-adjacent samples. The processing was carried out following the same procedure described by us in [29], which resulted in data tensors over three data types (mRNA, DNAm and copy number), 14 593 common genes and the following sample numbers: 273 cancers and 8 normals for LSCC, 390 cancers and 20 normals for LUAD, 292 cancers and 24 normals for KIRC, 195 cancers and 21 normals for KIRP, 194 cancers and 13 normals for BLCA, and 253 cancers and 19 normals for COAD. We note that although these numbers of normal samples are small, these are the normal samples with data for all three data types.
Identifying smoking-associated CpGs in the multi-tissue (whole blood + buccal) EWAS
To test the algorithms on real data, we considered the matched multi-tissue (whole blood and buccal) Illumina 450k DNAm dataset for 152 women [2]. Smoking has been shown to be reproducibly associated with DNAm changes at a number of different loci [23]. We, therefore, used as a true positive set a gold-standard list of 62 smkDMCs, which have been shown to be correlated with smoking exposure in at least three independent whole blood EWAS [23]. The 62 smkCpGs were combined with 1000 randomly selected CpGs (non-smoking-associated), resulting in a data tensor of dimension 2×152×1062.
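For orientation, the R sketch below shows how such a data tensor can be assembled for a single Monte Carlo run from two matched beta-value matrices; all object names (the beta matrices, CpG identifiers and the smkDMC list) are hypothetical placeholders rather than the actual analysis objects.

```r
# Hedged sketch of how one Monte Carlo data tensor is assembled: two matched
# beta-value matrices (whole blood and buccal, CpGs x 152 women) are restricted
# to the 62 smkDMCs plus 1000 randomly drawn non-smkDMC CpGs and stacked into a
# 2 x 152 x 1062 array. 'beta.wb', 'beta.buc' and 'smkDMCs' are placeholders.
build_run_tensor <- function(beta.wb, beta.buc, smkDMCs, n_random = 1000) {
  non_smk <- setdiff(rownames(beta.wb), smkDMCs)
  cpgs    <- c(smkDMCs, sample(non_smk, n_random))
  X <- array(NA_real_, dim = c(2, ncol(beta.wb), length(cpgs)),
             dimnames = list(c("WB", "BUC"), colnames(beta.wb), cpgs))
  X[1, , ] <- t(beta.wb[cpgs, ])    # tissue x sample x CpG
  X[2, , ] <- t(beta.buc[cpgs, ])
  X
}

# toy usage with simulated beta values
set.seed(11)
cg  <- paste0("cg", 1:5000); smk <- cg[1:62]
wb  <- matrix(runif(5000 * 152), 5000, 152, dimnames = list(cg, paste0("S", 1:152)))
buc <- matrix(runif(5000 * 152), 5000, 152, dimnames = list(cg, paste0("S", 1:152)))
dim(build_run_tensor(wb, buc, smk))   # 2 152 1062
```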
Robustness was assessed by performing 1000 different Monte Carlo runs, each run with a different random selection of 1000 non-smoking-associated CpGs. The whole analysis was then repeated for 10 000 randomly selected CpGs (data tensor of dimension 2×152×10 062) and for a total of 1000 different Monte Carlo runs. For the tPCA/tICA algorithms, the dimensionality parameters were chosen based on RMT as applied on the two separate matrices. Specifically, estimated unmixing matrices were of dimension 2×2 (for the tissue-type mode) and d×d (for the sample mode), with d the maximum of the two RMT estimates obtained from each tissue-type matrix. SE to capture the 62 smkCpGs was calculated in two different ways. In one approach, we used the maximum SE attained by any IC, denoted SE(max), whilst in the other approach, we allowed for the possibility that different enriched ICs could capture different subsets of smkCpGs. Thus, in the second approach, the SE was estimated using the union of the selected CpGs over all enriched ICs. We note that enrichment of ICs for the smkCpGs was assessed using a simple binomial test, selecting those with a P value less than the Bonferroni-corrected threshold (i.e. less than 0.05 divided by the number of ICs). In both approaches, the CpGs selected per component were the 62 with the largest absolute weights in the component, i.e. the number of selected CpGs was matched to the number of true positives. For JIVE, the number of components of joint variation was determined by applying RMT to the data matrix obtained by concatenating the features of the blood and buccal sets together, with features standardised to unit variance to ensure comparability between data types. For the number of components of individual variation, we used the RMT estimates of each individual dataset, as this provides a safe upper bound. For PARAFAC, the number of components was determined by the sum of the RMT estimates for the blood and buccal sets separately minus the value estimated for the concatenated matrix, as we reasoned that this would best approximate the total number of components of variation across the two data types (joint or individual). For CCA and sCCA, the maximum number of canonical vectors to search for was set equal to the RMT estimate of the concatenated matrix, i.e. equal to the dimension of joint variation used in JIVE. For all methods, we selected the top-ranked 62 CpGs with the largest absolute weights in each component, and estimated SE using the same two approaches described above for tPCA/tICA.
mQTL and chromosome enrichment analysis
We applied tWFOBI to the data tensor of a multi-cell-type EWAS (Illumina 450k) for 47 healthy individuals and three cell types (B cells, T cells and monocytes) [3]. Specifically, we used the same normalised data as used in [3], i.e. a data tensor of dimension 3×47×388 618, after removing poor quality probes and probes with SNPs [40]. Using RMT [17], we estimated a total of 11 components in the sample-mode space, and so we inferred a source tensor of dimension 3×11×388 618, and mixing matrices of dimension 3×3 and 11×11. We also applied tWFOBI to the previous blood plus buccal DNAm dataset, but for all 447 259 probes that passed quality control. Applying RMT, we estimated 26 significant components in the sample space. Hence, we applied tWFOBI on the 2×152×447 259 data tensor to infer a source tensor of dimension 2×26×447 259 and mixing matrices of dimension 2×2 and 26×26.
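The binomial enrichment test with Bonferroni correction used above to flag ICs enriched for smkCpGs can be sketched as follows (a simplified Python illustration with made-up counts, and an assumed parameterisation of the test, not the authors' code); each component contributes the number of smkCpGs found among its top-ranked 62 CpGs:

```python
from scipy.stats import binomtest

def enriched_components(hits_per_component, n_selected, n_true, n_total, alpha=0.05):
    """Indices of components whose top-ranked CpGs are enriched for true positives."""
    p_background = n_true / n_total                  # e.g. 62 / 1062 background proportion
    threshold = alpha / len(hits_per_component)      # Bonferroni: 0.05 per number of ICs
    enriched = []
    for i, k in enumerate(hits_per_component):
        p = binomtest(k, n=n_selected, p=p_background, alternative="greater").pvalue
        if p < threshold:
            enriched.append(i)
    return enriched

# Toy usage: 6 ICs, 62 CpGs selected per IC, 62 true smkCpGs among 1062 CpGs.
print(enriched_components([30, 2, 1, 0, 4, 1], n_selected=62, n_true=62, n_total=1062))
```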
For both datasets, and for each inferred IC, we selected the 500 probes with the largest absolute weights and tested enrichment of mQTLs against a high-confidence mQTL list from [25] (22 245 mQTLs). This list was generated as the overlap of mQTLs (passing a stringent P value threshold of 1×10−14) in blood derived from five different cohorts representing five different age groups. Odds ratios and P values of enrichment were estimated using Fisher's exact test. For chromosome enrichment, we obtained P values using a binomial test. In selecting the top-500 probes from each component, we note that this threshold is conservative, as all inferred ICs exhibited positive kurtosis with kurtosis values that remained significantly positive after removing the top-500 ranked probes. To obtain estimates of cell-type-independent and cell-type-specific mQTLs, we used the following approach. The first mode/dimension of the estimated source tensor was rotated back to the original cell types, using the estimated mixing matrix (of dimension 3×3, since there were three cell types). For each of the previously enriched mQTLs, we compared its weights in all three components, each component being associated with a given cell type. For instance, if St,cp,∗ denotes the component cp for cell type t, thus defining a vector of weights over all CpGs, we asked if the absolute weight of the given mQTL CpG is large for all cell types or not. If it was sufficiently large (i.e. if within the top 10% quantile of the weight distribution) for all cell types, it was declared to be cell type independent. If the mQTL weight for one or two cell types fell within the lower 50% quantile of weights, we declared it a cell-type-specific mQTL. We also performed a comparative analysis of all multi-way algorithms in terms of their sensitivity to detect mQTLs, as given by the high-confidence list of 22 245 mQTLs from the Aries database [25]. To assess the stability of the conclusions, we computed SE as described earlier, but considered a range of top selected CpGs per component, ranging from 500 up to 22 245 in units of 500. As before, we estimated the overall SE by taking into account the union of all selected CpGs from each component, as well as the maximum SE attained by any single component. Since the SE attained by any single component is bounded by the number of selected CpGs, we also considered the SE normalised for the number of selected CpGs. Application of tICA to multi-omic cancer data We used the same normalised integrated copy-number state (segment values), Illumina 450k DNAm and RNA-seq datasets of six cancer types from TCGA [1], as used in our previous work [29]. For the cancer types considered, see above. We initially applied tWFOBI to the colon adenocarcinoma TCGA dataset, estimating unmixing matrices of dimension 3×3 (for data type) and K×K (for sample mode) where K was the maximum RMT estimate over each of the three data-type matrices. Features driving each IC in each data-type dimension were selected using an iterative approach in which genes were ranked by absolute weight, and recursively removed until the kurtosis of the IC was less than 1, or the number of removed genes was larger than 500. Genes selected in common between the CNV and mRNA modes, or between the DNAm and mRNA modes, were declared driver genes between the respective data types. 
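One plausible reading of the recursive selection rule just described is sketched below in Python (the exact stopping logic is our assumption, not the authors' implementation): genes are peeled off a component in order of decreasing absolute weight until the (excess) kurtosis of the remaining weights falls below 1 or 500 genes have been removed.

```python
import numpy as np
from scipy.stats import kurtosis

def select_driver_genes(weights, kurt_threshold=1.0, max_removed=500):
    """Indices of putative driver genes for one component, by recursive removal."""
    weights = np.asarray(weights)
    remaining = list(np.argsort(-np.abs(weights)))   # genes ordered by decreasing |weight|
    removed = []
    while remaining and len(removed) < max_removed:
        if kurtosis(weights[remaining]) < kurt_threshold:   # Fisher (excess) kurtosis
            break
        removed.append(remaining.pop(0))             # strip the current top-ranked gene
    return removed

# Toy usage: a mostly Gaussian component with a handful of strong drivers.
rng = np.random.default_rng(1)
w = rng.normal(size=5000)
w[:20] = rng.normal(scale=15.0, size=20)
print(len(select_driver_genes(w)))
```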
To identify components correlating with normal/cancer status, we obtained the mixing matrix of the samples and then correlated each component with normal/cancer status using Wilcoxon's rank sum test.
References
1. TCGA. Comprehensive molecular characterization of human colon and rectal cancer. Nature. 2012; 487(7407):330–7.
2. Teschendorff AE, Yang Z, Wong A, Pipinikas CP, Jiao Y, Jones A, et al. Correlation of smoking-associated DNA methylation changes in buccal cells with DNA methylation changes in epithelial cancer. JAMA Oncol. 2015; 1(4):476–85.
3. Paul DS, Teschendorff AE, Dang MA, Lowe R, Hawa MI, Ecker S, et al. Increased DNA methylation variability in type 1 diabetes across three immune effector cell types. Nat Commun. 2016; 7:13555.
4. Hore V, Vinuela A, Buil A, Knight J, McCarthy MI, Small K, et al. Tensor decomposition for multiple-tissue gene expression experiments. Nat Genet. 2016; 48(9):1094–100.
5. Lock EF, Hoadley KA, Marron JS, Nobel AB. Joint and individual variation explained (JIVE) for integrated analysis of multiple data types. Ann Appl Stat. 2013; 7(1):523–42.
6. Bro R. PARAFAC. Tutorial and applications. Chem Intel Lab Syst. 1997; 38:149–71.
7. Shen R, Olshen AB, Ladanyi M. Integrative clustering of multiple genomic data types using a joint latent variable model with application to breast and lung cancer subtype analysis. Bioinformatics. 2009; 25(22):2906–12.
8. Virta J, Taskinen S, Nordhausen K. Applying fully tensorial ICA to fMRI data. In: Signal Processing in Medicine and Biology Symposium (SPMB). Philadelphia: IEEE; 2016. p. 1–6.
9. Virta J, Li B, Nordhausen K, Oja H. Independent component analysis for tensor-valued data. J Multivar Anal. 2017; 162:172–92.
10. Comon P. Independent component analysis, a new concept? Signal Process. 1994; 36(3):287–314.
11. Liebermeister W. Linear modes of gene expression determined by independent component analysis. Bioinformatics. 2002; 18(1):51–60.
12. Martoglio AM, Miskin JW, Smith SK, MacKay DJ. A decomposition model to track gene expression signatures: preview on observer-independent classification of ovarian cancer. Bioinformatics. 2002; 18(12):1617–24.
13. Teschendorff AE, Journée M, Absil PA, Sepulchre R, Caldas C. Elucidating the altered transcriptional programs in breast cancer using independent component analysis. PLoS Comput Biol. 2007; 3(8):161.
14. Kowarsch A, Blochl F, Bohl S, Saile M, Gretz N, Klingmuller U, et al. Knowledge-based matrix factorization temporally resolves the cellular responses to IL-6 stimulation. BMC Bioinform. 2010; 11:585.
15. Illner K, Fuchs C, Theis FJ. Bayesian blind source separation for data with network structure. J Comput Biol. 2014; 21(11):855–65.
16. Biton A, Bernard-Pierrot I, Lou Y, Krucker C, Chapeaublanc E, Rubio-Perez C, et al. Independent component analysis uncovers the landscape of the bladder tumor transcriptome and reveals insights into luminal and basal subtypes. Cell Rep. 2014; 9(4):1235–45.
17. Teschendorff AE, Zhuang J, Widschwendter M. Independent surrogate variable analysis to deconvolve confounding factors in large-scale microarray profiling studies. Bioinformatics. 2011; 27(11):1496–505.
18. Alexandrov LB, Nik-Zainal S, Wedge DC, Campbell PJ, Stratton MR. Deciphering signatures of mutational processes operative in human cancer. Cell Rep. 2013; 3(1):246–59.
19. Zhang S, Liu CC, Li W, Shen H, Laird PW, Zhou XJ. Discovery of multi-dimensional modules by integrative analysis of cancer genomic data. Nucleic Acids Res. 2012; 40(19):9379–91.
20. Hotelling H. Relations between two sets of variates. Biometrika. 1936; 28(3-4):321–77.
21. Witten DM, Tibshirani RJ. Extensions of sparse canonical correlation analysis with applications to genomic data. Stat Appl Genet Mol Biol. 2009; 8:28.
22. Witten DM, Tibshirani R, Hastie T. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics. 2009; 10(3):515–34.
23. Gao X, Jia M, Zhang Y, Breitling LP, Brenner H. DNA methylation changes of whole blood cells in response to active smoking exposure in adults: a systematic review of DNA methylation studies. Clin Epigenetics. 2015; 7:113.
24. van Dongen J, Nivard MG, Willemsen G, Hottenga JJ, Helmer Q, Dolan CV, et al. Genetic and environmental influences interact with age and sex in shaping the human methylome. Nat Commun. 2016; 7:11115.
25. Gaunt TR, Shihab HA, Hemani G, Min JL, Woodward G, Lyttleton O, et al. Systematic identification of genetic influences on methylation across the human life course. Genome Biol. 2016; 17:61.
26. Satake W, Nakabayashi Y, Mizuta I, Hirota Y, Ito C, Kubo M, et al. Genome-wide association study identifies common variants at four loci as genetic risk factors for Parkinson's disease. Nat Genet. 2009; 41(12):1303–7.
27. Kahler AK, Djurovic S, Kulle B, Jonsson EG, Agartz I, Hall H, et al. Association analysis of schizophrenia on 18 genes involved in neuronal migration: MDGA1 as a new susceptibility gene. Am J Med Genet B Neuropsychiatr Genet. 2008; 7:1089–100.
28. Chen L, Ge B, Casale FP, Vasquez L, Kwan T, Garrido-Martin D, et al. Genetic drivers of epigenetic and transcriptional variation in human immune cells. Cell. 2016; 167(5):1398–414.
29. Teschendorff AE, Zheng SC, Feber A, Yang Z, Beck S, Widschwendter M. The multi-omic landscape of transcription factor inactivation in cancer. Genome Med. 2016; 8(1):89.
30. Mayrhofer M, Kultima HG, Birgisson H, Sundstrom M, Mathot L, Edlund K, et al. 1p36 deletion is a marker for tumour dissemination in microsatellite stable stage II–III colon cancer. BMC Cancer. 2014; 14:872.
31. Teschendorff AE, Relton CL. Statistical and integrative system-level analysis of DNA methylation data. Nat Rev Genet. 2018; 19(3):129–47.
32. Bonder MJ, Luijk R, Zhernakova DV, Moed M, Deelen P, Vermaat M, et al. Disease variants alter transcription factor levels and methylation of their binding sites. Nat Genet. 2017; 49(1):131–8.
33. Curtis C, Shah SP, Chin SF, Turashvili G, Rueda OM, Dunning MJ, et al. The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. Nature. 2012; 486(7403):346–52.
34. Ding S, Cook RD. Tensor sliced inverse regression. J Multivar Anal. 2015; 133:216–31.
35. Virta J, Li B, Nordhausen K, Oja H. JADE for tensor-valued observations. J Comput Graph Stat (in press). https://doi.org/10.1080/10618600.2017.1407324; preprint arXiv:1603.05406.
36. Plerou V, Gopikrishnan P, Rosenow B, Amaral LA, Guhr T, Stanley HE. Random matrix approach to cross correlations in financial data. Phys Rev E. 65(6):066126.
37. Virta J, Li B, Nordhausen K, Oja H. tensorBSS: blind source separation methods for tensor-valued observations. 2017. R package version 0.3. https://CRAN.R-project.org/package=tensorBSS.
38. Cardoso JF. Source separation using higher order moments. In: International Conference on Acoustics, Speech, and Signal Processing, ICASSP-89. Glasgow: IEEE; 1989. p. 2109–12.
39. Cardoso JF, Souloumiac A. Blind beamforming for non Gaussian signals. IEEE Proc F. 1993; 140:362–70.
40. Chen YA, Lemire M, Choufani S, Butcher DT, Grafodatskaya D, Zanke BW, et al. Discovery of cross-reactive probes and polymorphic CpGs in the Illumina Infinium HumanMethylation450 microarray. Epigenetics. 2013; 8(2):203–9.
41. Jing H, Teschendorff AE. R-scripts for implementing tensor decomposition methods. 2018. https://doi.org/10.5281/zenodo.1208040.
Funding
AET is supported by the Eve Appeal, the Chinese Academy of Sciences, Shanghai Institute of Biological Sciences, a Royal Society Newton Advanced Fellowship (award 164914) and the National Science Foundation of China (31571359). The Cardiovascular Epidemiology Unit is supported by the UK Medical Research Council (MR/L003120/1), British Heart Foundation (RG/13/13/30194) and the Cambridge Biomedical Research Centre of the National Institute for Health Research.
Availability of data and materials
All data analysed here have already appeared in previous publications or are publicly available. The Illumina 450k DNAm buccal and whole blood dataset from the National Survey of Health and Development, as published in [2], is available by submitting data requests to https://mrclha.swiftinfucl.ac.uk; see the full policy at http://www.nshd.mrc.ac.uk/data.aspx. Managed access is in place for this 69-year-old study to ensure that any use of the data is within the bounds of consent given previously by participants and to safeguard any potential threat to anonymity, since the participants were all born in the same week [2]. The multi-blood-cell subtype Illumina 450k EWAS is available from the European Genome-phenome Archive with the accession code EGAS00001001598 (https://www.ebi.ac.uk/ega/studies/EGAS00001001598) [3]. All TCGA data analysed here were downloaded and are publicly available from the TCGA data portal website (https://portal.gdc.cancer.gov/). The Aries mQTL database is available from http://www.mqtldb.org [25]. All software is available from the corresponding R packages as described in 'Methods', and their specific implementations as used in this manuscript are available in Additional file 2 as well as from GitHub (https://github.com/jinghan1018/tensor_decomp) and Zenodo (https://zenodo.org/record/1208040 or https://doi.org/10.5281/zenodo.1208040) [41]. The tICA algorithms, as implemented here, are available under a GNU General Public License version 3.
Author information
CAS-MPG Partner Institute for Computational Biology, CAS Key Lab of Computational Biology, Shanghai Institute for Biological Sciences, Chinese Academy of Sciences, 320 Yue Yang Road, Shanghai, 200031, China: Andrew E. Teschendorff & Han Jing.
Department of Women's Cancer, UCL Elizabeth Garrett Anderson Institute for Women's Health, University College London, 74 Huntley Street, London, WC1E 6BT, UK.
UCL Cancer Institute, University College London, 72 Huntley Street, London, WC1E 6BT, UK.
University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, 100049, China: Han Jing.
Cardiovascular Epidemiology Unit, Department of Public Health and Primary Care, University of Cambridge, Strangeways Research Laboratory, Cambridge, CB1 8RN, UK: Dirk S. Paul.
University of Turku, Turku, 20014, Finland: Joni Virta.
Vienna University of Technology, Wiedner Hauptstr. 7, Vienna, A-1040, Austria: Klaus Nordhausen.
Authors' contributions
AET devised the study, performed the analyses and wrote the manuscript. HJ performed the statistical analyses. JV and KN contributed to the methodology and to the writing of the manuscript. DSP contributed and prepared the EWAS datasets for statistical analyses. Correspondence to Andrew E. Teschendorff.
No ethics approval was necessary for this publication, as all data analysed are already freely available in the public domain. All authors have read and approved the manuscript. The authors declare that they have no competing interests.
Additional file 1: Contains all supplementary figures and supplementary tables. (DOCX 14322 kb)
Additional file 2: A file containing R scripts for the tPCA, tWFOBI, tWJADE, CCA, sCCA, PARAFAC, JIVE and iCLUSTER algorithms as implemented in this work. (R 17 kb)
Teschendorff, A.E., Jing, H., Paul, D.S. et al. Tensorial blind source separation for improved analysis of multi-omic data. Genome Biol 19, 76 (2018). doi:10.1186/s13059-018-1455-8
Keywords: Multi-omic, Dimensional reduction, Independent component analysis, mQTL, Epigenome-wide association study
Research on the wood processing method of helium-assisted laser process
Chunmei Yang1, Xinchi Tian1, Bo Xue1, Qingwei Liu1, Jiawei Zhang1, Jiuqing Liu1 & Wenji Yu (ORCID: orcid.org/0000-0002-0215-0200)1,2
Journal of Wood Science volume 68, Article number: 50 (2022)
To promote environmental protection and raise the utilization rate of green energy, an innovative helium-assisted laser processing method is proposed, which optimizes the kerf-cutting process under the same energy consumption and offers a new approach for the wood processing industry. The contribution of this paper is a mathematical model describing the diffusion of the injected helium and the heat transfer of the laser beam on the processed surface. The results show that the oxygen concentration decreases when helium is injected onto the processed surface: the helium disrupts the combustion-supporting conditions and shrinks the combustion zone of the processed kerf. The carbonized area of the processed surface is therefore reduced, which effectively improves the kerf quality. Notably, the injected helium also sweeps across the processed surface, removing part of the carbonized particles and residues and further improving the processing quality. Compared with conventional laser processing, gas-assisted laser processing achieves higher quality, with a narrower kerf, lower surface roughness and a smoother surface. Finally, a mathematical model based on the response surface method, with kerf width, kerf depth and surface roughness as evaluation criteria, is established to analyze the interaction of laser power, cutting speed and inert gas pressure on these responses. By comparing the errors between predicted and measured values, optimized process parameters are obtained. The helium-assisted laser processing method proposed here not only yields better processing quality but also provides guidance for developing a greener industry.
With the change of the world climate and growing awareness of environmental protection, reducing pollution and improving air quality and forest coverage have become major issues worldwide. As the primary component of forests, wood plays an indispensable role in the national economy. Compared with other materials such as metal, wood offers high specific strength, impact resistance, ease of processing and renewability, and it is widely used in housing construction, the paper industry, home decoration, road construction and energy applications [1]. Additionally, wood products are to a certain extent irreplaceable. At present, the comprehensive utilization rate of wood in China is about 63%, whereas in Australia and other countries with developed forestry it exceeds 80% [2]. Traditional wood processing primarily involves milling, sawing and polishing. The main problems of these methods are the wide saw kerf, large machining allowance, inability to process at arbitrary positions and difficulty in producing complex wood products. Consequently, the utilization rate of wood cannot be fully improved [3].
Affected by the mechanical properties and geometry of the processed wood, the accuracy of the machining tool and the machining method, processed products exhibit defects such as raised grain, burrs, wavy knife marks and chip indentation, which degrade the machining accuracy and appearance quality of components [4]. Therefore, adopting new processing technologies and methods that improve the utilization efficiency of wood and the quality of processed parts at equal resource consumption has important practical significance for wood resource conservation and international environmental protection. Laser processing of wood is an advanced technology derived from the field of metal processing. Owing to the high intensity, good monochromaticity and good directivity of the laser, it is widely used in the wood manufacturing industry, enabling technologies such as rapid processing, tomography and nondestructive testing [5, 6]. Combined with computer control systems and automated equipment, profiling laser cutting is employed to achieve the forming of parts [7, 8], which effectively compensates for the tool wear, size limitations and complex part contours encountered in traditional cutting methods, and offers better flexibility and adaptability with respect to constraints such as the self-stability of the workpiece during machining, especially for thin plates and small parts [9, 10]. However, in traditional laser processing the laser beam temperature is generally between 1200 ℃ and 1500 ℃, delivering very high temperature and energy. Under the thermal coupling between the laser beam and the material, the beam produces severe ablation on the wood surface and forms a wide carbonization layer on the processed surface, which seriously degrades the processing accuracy and quality. Therefore, a method is sought to reduce the ablation effects in laser processing and narrow the carbonization layer, so as to enhance the machining accuracy and quality of the workpiece. Our team has previously put forward a water jet-assisted laser process for wood and conducted a large amount of related research and experiments. The synergistic and cooling effects of the water jet markedly improved the kerf width and surface roughness of the wood [11,12,13], but refraction occurs as the laser penetrates the water jet, resulting in laser energy loss. Meanwhile, the water jet sprayed onto the machined surface strongly affects the moisture content of the wood and increases the instability of the processed material. Focusing on the technical problems of laser machining, namely the large ablation range, low machining accuracy and poor machining quality, this paper proposes an advanced process of inert gas-assisted laser machining that is better suited to laser processing of wood. The laser and inert gas are sprayed from the nozzle at the same time to cut the wood at a height of 1–2 mm above the processing plane. The inert gas effectively reduces the oxygen concentration at the processed surface and destroys the combustion-supporting conditions for ablation, which reduces the ablated area of the processed surface. This method effectively reduces the extent of ablation and carbonization of the processed surface and thereby enhances the processing quality, providing a reference for the popularization and application of laser processing technology for wood.
The feasibility of the process is assessed objectively by means of theoretical modeling and experiments. Notably, the method is of great significance for improving the process level and production efficiency of the wood processing industry, and it also has practical significance for the economical utilization of wood resources and the protection of the global ecological environment. When the laser cooperates with the inert gas emitted from the coaxial nozzle, the initial velocity of the gas is about 10 m/s. The emitted gas absorbs energy in the surrounding space and becomes turbulent. Meanwhile, the gas diffuses within the flow and its concentration decreases along the direction perpendicular to the axis. Since the cutting quality of the machined part is governed by high-temperature ablation, it is vital to employ a mathematical model to solve for the ablation area on the machined surface. The gas jet flow during laser cutting is shown in Fig. 1.
Fig. 1 Gas-assisted jet partition in the cutting process
Establishment of model
The three elements of combustion are combustibles, combustion-supporting gas and the ignition point. When laser processing takes place within the helium envelope, the scattering of the laser further affects the generation and transfer of heat. In the meantime, the helium diffuses with a certain initial velocity after being sprayed from the nozzle, so the gas concentration is also reduced in the plane perpendicular to the nozzle. In a mixture of helium and oxygen, wood can burn when the oxygen content is higher than 12.5% and is not combustible when it is lower than 12.5%. Therefore, it is crucial to calculate the proportion of oxygen at any point. The oxygen content is in turn affected by the diffusion of helium, so it is also necessary to calculate the helium concentration at each point. Meanwhile, whether the processed surface burns also depends on the amount of heat transmitted through the wood. The ignition point of wood is generally between 200 and 300 ℃; the ignition point of the wood studied in this paper is approximately 280 ℃. In summary, whether the machined surface burns is determined jointly by the oxygen concentration and the surface temperature.
Basic assumptions
In order to simplify the mathematical model and its solution, the following assumptions are made:
1. The material is anisotropic in the heat diffusion direction; that is, the heat diffusion rate at each point remains constant but is randomly distributed between points.
2. The mass of the inert gas is conserved in the diffusion process.
3. The gas concentration during diffusion obeys a normal distribution in the y and z directions, and the flow rate does not exceed 68 m/s, so the gas is regarded as an incompressible fluid.
4. During laser irradiation, each point on the laser beam is assumed to be isothermal, ignoring the temperature drop caused by the emission distance.
5. The indoor gas flow is stable and the helium concentration does not change with time.
6. As the laser source is only 1 mm from the processing plane, the blocking effect of the vapour plasma produced from the processed wood on the laser is ignored.
7. The combustion of small sawdust particles produced by processing is ignored.
Gas diffusion model
Taking the injection point as the origin and the injection direction as the z-axis, a spatial rectangular coordinate system is established.
The effective point source is located at the coordinate origin o, and the average velocity direction is parallel to and in the same direction as the z-axis. The Reynolds number of helium jet flow is higher than the critical value and flows as turbulent flow. When the jet stream ejects the inert gas from the nozzle at the velocity v0, the surrounding fluid is continuously pumped in the flow, the width of the jet is expanded, and the velocity of the main body of the gas is constantly reduced. The jet diffusion angle is θ. According to the momentum theorem, the momentum along the jet velocity direction remains unchanged at any interface of the jet, so it can be obtained: $$\smallint \rho v^{2} dA = \rho_{0} v_{0}^{2} A_{0} = \rho_{0} \pi R_{0}^{2} v_{0}^{2} = 2\pi \smallint_{0}^{R} \rho v^{2} rdr.$$ In the above formula, A0 is the cross-sectional area at the nozzle, R0 is the nozzle radius and the helium density at the nozzle is \(\rho_{0}\). Since the jet density is the same as that of the surrounding air, that is \(\rho = \rho_{0}\), formula (1) can be rewritten as: $$2\int\limits_{0}^{{\frac{R}{{R_{0} }}}} {\left( {\frac{v}{{v_{0} }}} \right)^{2} \frac{r}{{R_{0} }}d\left( {\frac{r}{{R_{0} }}} \right)} = 1. $$ Also \(\frac{r}{{R_{0} }} = \frac{r}{R} \cdot \frac{R}{{R_{0} }}\), \(\frac{R}{{R_{0} }}\) determined by the distance from any cross section to the jet pole. Therefore, \(\frac{v}{{v_{0} }} = \frac{v}{{v_{m} }} \cdot \frac{{v_{m} }}{{v_{0} }}\). So Eq. (2) can be exhibited as: $$2\left( {\frac{{v_{m} }}{{v_{0} }}} \right)^{2} \left( {\frac{R}{{R_{0} }}} \right)^{2} \int\limits_{0}^{1} {\left( {\frac{v}{{v_{m} }}} \right)^{2} \frac{r}{R}d\left( \frac{r}{R} \right)} = 1.$$ Herein, vm is the velocity on the axis of inert gas injection. In terms of the above formula, the inert gas flow on any section is: $$q = 2\pi \int\limits_{0}^{R} {vrdr} = 2\pi R_{0}^{2} v_{0} \left( {\frac{R}{{R_{0} }}} \right)^{2} \frac{{v_{m} }}{{v_{0} }}\int\limits_{0}^{1} {\frac{v}{{v_{m} }}\frac{r}{R}d\left( \frac{r}{R} \right)} = 2q_{0} \left( {\frac{R}{{R_{0} }}} \right)^{2} \frac{{v_{m} }}{{v_{0} }}\int\limits_{0}^{1} {\left[ {1 - \left( \frac{r}{R} \right)^{\frac{3}{2}} } \right]^{2} \frac{r}{R}d\left( \frac{r}{R} \right)}. $$ Among them, q0 the inert gas flow initially ejected from the nozzle can be calculated at any point according to Eq. (4). The concentration change of inert gas in the atmosphere is jointly determined by jet and diffusion. As for the gas diffusion, we adopt the steady Gaussian diffusion model. The point source is located at the coordinate origin, so its diffusion can be regarded as gas diffusion at height 0. The diffusion in air is a two-dimensional normal distribution with two coordinate directions of y and z. When the random variables in the two coordinate directions are independent, the distribution density is the product of the one-dimensional normal distribution density function in each coordinate direction. 
According to the assumption of Gaussian gas diffusion model, taking \(\mu = 0\),the gas concentration distribution function at any points in the velocity direction of point source can be acquired as follows: $$C\left( {x,y,z} \right) = A(x)\exp \left[ { - \frac{1}{2}\left( {\frac{{y^{2} }}{{\sigma_{y}^{2} }} + \frac{{z^{2} }}{{\sigma_{z}^{2} }}} \right)} \right].$$ In the above formula, C—concentration of pollutants at space point (x, y, z); A(x)—undetermined function; \(\sigma_{y} ,\sigma_{z}\)—standard deviations in the horizontal and vertical directions, i.e., the diffusion parameters in the y and z directions, mm. According to the law of conservation of mass and the continuity theorem, on any cross section of air flow perpendicular to the z-axis: $$q = \int\limits_{ - \infty }^{ + \infty } {\int\limits_{ - \infty }^{ + \infty } {Cud{\text{y}}dz} } .$$ Equation (1) is substituted into Eq. (2) by the velocity stability condition, a is independent of x and y, and \(\int_{ - \infty }^{ + \infty } {\exp \left( {{{ - t^{2} } \mathord{\left/ {\vphantom {{ - t^{2} } 2}} \right. \kern-\nulldelimiterspace} 2}} \right)} {\kern 1pt} \,dt = \sqrt {2\pi }\). Undetermined coefficients can be obtained by integration: $$A(x) = \frac{q}{{2\pi \sigma_{y} \sigma_{z} }}.$$ Substituting (3) into (1): $$C(x,y,z) = \frac{q}{{2\pi \sigma_{{\text{y}}} \sigma_{z} }}\exp \left[ { - \frac{1}{2}\left( {\frac{{y^{2} }}{{\sigma_{y}^{2} }} + \frac{{z^{2} }}{{\sigma_{z}^{2} }}} \right)} \right].$$ The diffusion coefficients \(\sigma_{y}\) and \(\sigma_{z}\) are related to the air stability and the vertical distance z, which increases with the increase of z. Therefore, the concentration of inert gas at each point in the coordinate range can be derived by Eq. (3). Since 1 m3 air mass is about V = 1286 g [14], the proportion of helium at any points can be obtained by substituting \(\frac{C}{V} \times 100{\text{\% }}\), and then utilizing the oxygen content in the air, the proportion of oxygen at any points is \(A{\text{\% }} = \left( {1 - \frac{C}{V}} \right) \times 20{\text{\% }}\), so the oxygen concentration and proportion at any points can be calculated to explore whether it could satisfy the combustion-supporting conditions. Laser heat transfer model Laser is an electromagnetic radiation wave. The laser used for cutting is generally output in the mode of fundamental Gaussian mode. The cross-section spot is circular and the light intensity distribution basically satisfies the Gauss distribution. According to Professor Siegman's theory [15], heat flux density of laser heat source can be expressed as: $$q(r) = \frac{{3q_{\max } }}{{\pi w({\text{z}})^{2} }}e^{{( - \frac{{3r^{2} }}{w(z)})}} .$$ In this formula, q(r)—heat flux at radius R, W/mm2; \(q_{\max }\)—the maximum value of heat flux density of laser beam, viz. the value at the center of beam waist radius, can be replaced by laser power value; W(z)—effective radius of beam at z coordinate, mm. The beam passing through the lens will have a certain scattering, that is Rayleigh scattering. The divergence of the beam will cause a great impact on the heat transmission. The expression of the effective radius of the beam at the z coordinate is: $$w(z) = w_{0} \sqrt {1 + (\frac{{z - z_{0} }}{{z_{R} }})^{2} } $$ where w(z) is the Gaussian beam radius at the z coordinate, \(w_{0}\) is the radius at the beam waist, z0 is the ordinate at the beam waist, and zR is the Rayleigh constant. Its value is determined by the scattering degree of the light source. 
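As a quick numerical illustration of the relations above, the following sketch (Python, with assumed parameter values rather than the authors' MATLAB settings) evaluates the plume concentration, the resulting local oxygen proportion against the 12.5% threshold, and the Gaussian beam radius w(z):

```python
import numpy as np

AIR_MASS_PER_M3 = 1286.0          # g of air per m^3, as assumed in the text
O2_FRACTION_AIR = 0.20            # oxygen share of ambient air

def helium_concentration(q, y, z, sigma_y, sigma_z):
    """Plume concentration C(x, y, z) for a point source on the jet axis."""
    return q / (2.0 * np.pi * sigma_y * sigma_z) * np.exp(
        -0.5 * (y ** 2 / sigma_y ** 2 + z ** 2 / sigma_z ** 2))

def oxygen_proportion(c_helium):
    """A% = (1 - C/V) * 20%: oxygen share left after dilution by helium."""
    return (1.0 - c_helium / AIR_MASS_PER_M3) * O2_FRACTION_AIR

def beam_radius(z, w0, z0, z_rayleigh):
    """Gaussian beam radius w(z) = w0 * sqrt(1 + ((z - z0)/zR)^2)."""
    return w0 * np.sqrt(1.0 + ((z - z0) / z_rayleigh) ** 2)

# Toy usage with assumed flow and diffusion parameters: combustion support is
# possible only where the local oxygen proportion exceeds 12.5%.
c = helium_concentration(q=300.0, y=0.5, z=1.0, sigma_y=0.8, sigma_z=0.8)
print(round(oxygen_proportion(c), 3), oxygen_proportion(c) > 0.125)
print(round(beam_radius(z=5.0, w0=0.1, z0=0.0, z_rayleigh=20.0), 4))
```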
As there are no combustibles in the air, combustion would not occur even at high temperature, but the surface of processed wood is ablated by laser at high temperature. When the surface temperature is higher than the ignition point of wood and satisfied combustion-supporting conditions, combustion would occur on the surface of wood. Therefore, the boundary between combustion zone and non-combustion zone should be located on the processing surface of wood and perpendicular to the laser centerline. The thickness of wood processed by laser is only 2 mm, so the temperature drop caused by processing is ignored. The heat conduction rate between wood micro elements is roughly the same. Therefore, the heat transmitted at any two points perpendicular to the laser processing axis and equal to the vertical distance of the laser beam in the wood is the same. Meanwhile, the laser is a beam with concentrated energy and small scattering. Therefore, in the wood processed by laser, the heat conduction mode is primarily heat conduction, with small heat convection and ignoring the role of heat radiation. Together, along the direction perpendicular to the laser axis, the heat conduction can be described by the thermal conduction differential equation with constant physical properties, two-dimensional, unsteady state and internal heat source: $$\frac{\partial T}{{\partial t}} = a\left( {\frac{{\partial^{2} T}}{{\partial x^{2} }} + \frac{{\partial^{2} T}}{{\partial y^{2} }}} \right) + \frac{{\dot{\phi }}}{\rho c}$$ In this formula, T—temperature, ℃; t—time, s; a—thermal diffusivity, \(a = \frac{\lambda }{\rho c}\), m2/s, taken as 1.7 × 10–7; \(\lambda\)—thermal conductivity, W/(m·K), represented as 0.2; \(\rho\)—material density, kg/m3, Fraxinus mandshurica signified as 686 kg/m3; \(\dot{\varphi }\)—generation heat of heat source per unit volume, expressed as 6.67 × 10–10 J; c—specific heat of material, Fraxinus mandshurica indicated as 1.72 × 103 J/(kg·℃). The transmission rate of laser heat in any direction of XOY plane is consistent. Therefore, a cylindrical coordinate system can be established in the heat diffusion direction. The two-dimensional heat transfer becomes a one-dimensional heat conduction problem along the radius direction. If the thermal conductivity of wood is a fixed value, the thermal conductivity differential equation can be expressed as: $$\frac{d}{dr}\left( {r\frac{dT}{{dr}}} \right) = 0$$ The indoor initial temperature is 18 ℃. Since the laser heating temperature is 1200 ℃ and the ignition point of wood is 280 ℃, when the temperature of mixed gas containing more than 12.5%, viz. above 280 ℃, wood would be ignited. Therefore, the initial conditions of this model are: $$t = 0,\forall {\kern 1pt} \,r \ne 0,\;t = {\text{const = 18}}\ ^\circ {\text{C,}}$$ $$t = 0,\;r = 0,\;T = 1200 \;^\circ {\text{C}}{.}$$ When the oxygen concentration is higher than 12.5%, the mixed gas would burn. The oxygen content ratio of the mixed gas can be described by a certain boundary. In the XOY plane, its function expression is \(y = f_{1} (x)\). When the temperature is lower than 280 ℃, the gas would not burn on the premise of sufficient oxygen content, and its boundary function expression is \(y = f_{2} (x)\). Therefore, the boundary conditions of this model are exhibited as follows: $$A\% \, = \,{12}.{5}\% ,\;y\, = \,f_{{1}} (x),$$ $$T\, = \,{356}\;^\circ {\text{C}},\;y = f_{2} (x),$$ $$y = f_{1} (x) \wedge f_{2} (x),$$ $$y = y_{\infty } ,T = const,$$ $$h_{0} (T_{\infty } - T(r,t)) = \left. 
{ - \lambda \frac{\partial T(r,t)}{{\partial r}}} \right|_{{r = r_{0} }} ,$$ where h0 is the convective heat transfer coefficient of the laser beam surface, and r0 is the laser heating heat flow radius. For the heat transfer process of laser processing, the comprehensive model is established as follows: $$\left\{ {\begin{array}{*{20}c} {{\text{Jet equation: }}q = 2q_{0} \left( {\frac{R}{{R_{0} }}} \right)^{2} \frac{{v_{m} }}{{v_{0} }}\int_{0}^{1} {\left[ {1 - \left( {\frac{r}{R}} \right)^{{\frac{3}{2}}} } \right]^{2} \frac{r}{R}d\left( {\frac{r}{R}} \right)} } \\ {{\text{Gas diffusion equation: }}C(x,y,z) = \frac{q}{{2\pi \sigma _{y} \sigma _{z} }}\exp \left[ { - \frac{1}{2}\left( {\frac{{y^{2} }}{{\sigma _{y}^{2} }} + \frac{{z^{2} }}{{\sigma _{z}^{2} }}} \right)} \right];A\% = \left( {1 - \frac{C}{V}} \right) \times 20\% ,} \\ {{\text{Heat source equation: }}q(r) = \frac{{3q_{{\max }} }}{{\pi w(z)^{2} }}e^{{\left( { - \frac{{3r^{2} }}{{w(z)}}} \right)}} ,} \\ {{\text{Governing equation: }}\frac{{\partial T}}{{\partial t}} = a\left( {\frac{{\partial ^{2} T}}{{\partial x^{2} }} + \frac{{\partial ^{2} T}}{{\partial y^{2} }}} \right) + \frac{{\dot{\phi }}}{{\rho c}},} \\ {{\text{Heat transfer equation: }}\frac{d}{{dr}}\left( {r\frac{{dT}}{{dr}}} \right) = 0,} \\ {{\text{Boundary condition: }}A\% = 12.5\% ,y = f_{1} (x);T = 356,} \\ {\;\quad \quad y = f_{1} (x) \wedge f_{2} (x).} \\ \end{array} } \right.$$ Solution of model Finite difference method The finite difference method is to express the differential equation as a difference equation defined on discrete lattice points (i.e., the numerical relationship between lattice points reflected by the differential equation), and iteratively calculate the numerical value on the unknown boundary through the difference relationship between similar lattice points according to the given boundary conditions. The main difference between explicit and implicit difference methods is that the discretization approximate representation of differential terms in the difference equation instead of calculating differential equations is different, so that the explicit difference method can directly deduce the values on the target boundary from the known boundary conditions, while the implicit difference method constantly needs to solve the equations for recursive calculation. However, it elucidates that the results calculated by the difference method are not necessarily stable, and the small error may be amplified, resulting in the error of the calculation results. To solve the partial differential equation problem by the finite difference method, the continuous problem must be discretized, namely, the solution area is meshed, the continuous solution area is replaced by a finite number of discrete grid nodes, and then the differential quotient of the partial differential equation is substituted by the difference quotient to derive the discretized difference equation system. Finally, the difference equations are solved by direct method or iterative method to achieve the numerical approximate solution of the differential equation. When established the difference equations of heat conduction and gas diffusion, the solution area needs to be meshed by the internal node method. As shown in Fig. 2, the node temperature and gas concentration can approximately represent the temperature and gas concentration of the whole small area, which is convenient to deal with the uneven problem. Meanwhile, the data of boundary conditions are employed in the boundary points. 
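To make the explicit scheme and its stability constraint concrete, the sketch below (Python) advances a simplified one-dimensional heat-conduction problem with the explicit update and checks the Fourier grid-number limit; it is an illustrative reduction of the model, not the authors' MATLAB implementation.

```python
import numpy as np

def explicit_heat_step(T, alpha, dx, dt):
    """One explicit time step of dT/dt = alpha * d2T/dx2 on the interior nodes."""
    fo = alpha * dt / dx ** 2                   # Fourier grid number
    if fo > 0.5:
        raise ValueError(f"Unstable step: Fo = {fo:.3f} > 0.5")
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + fo * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T_new

# Toy usage with the wood properties quoted above (alpha ~ 1.7e-7 m^2/s):
alpha, dx, dt = 1.7e-7, 1e-3, 1.0               # 1 mm spatial step, 1 s time step
T = np.full(50, 18.0)                           # ambient 18 deg C
T[0] = 1200.0                                   # laser-heated boundary node
for _ in range(100):
    T = explicit_heat_step(T, alpha, dx, dt)
print(T[:5])
```

With the internal-node meshing described above, the same update generalizes to two dimensions by adding the corresponding difference term in the y direction.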
Model meshing Assuming that in the grid, the time step \(\tau\), the space step along the x-axis direction h, the space step along the y-axis direction k, and the step along the radius direction n, the explicit difference scheme is used for discretization in this model to solve the temperature and gas concentration in the jth grid. The discretization scheme in the model is: $$\left\{ {\begin{array}{*{20}l} {{\text{Jet equation: }}q_{j} = 2q_{0} \left( {\frac{{R_{j} }}{{R_{0} }}} \right)^{2} \frac{{v_{mj} }}{{v_{0} }}\int_{{r_{j} - \frac{n}{2}}}^{{r_{j} + \frac{n}{2}}} {\left[ {1 - \left( {\frac{{r_{j} }}{R}} \right)^{\frac{3}{2}} } \right]\frac{{r_{j} }}{R}d\left( \frac{r}{R} \right),} } \hfill \\ {{\text{Diffusion equation: }}C_{j}^{n} = \frac{{q_{j}^{n} }}{{2\pi \sigma_{yj}^{n} \sigma_{zj}^{n} }}\exp \left[ { - \frac{1}{2}\left( {\frac{{(y_{j}^{n} )^{2} }}{{(\sigma_{yj}^{n} )^{2} }} + \frac{{(z_{j}^{n} )^{2} }}{{(\sigma_{zj}^{n} )^{2} }}} \right)} \right],} \hfill \\ {{\text{Governing equation: }}\frac{{T_{j}^{n + 1} - T_{j}^{n} }}{\tau } = a_{j} \left( {\frac{{T_{j + 1}^{n} - 2T_{j}^{n} + T_{j - 1}^{n} }}{{h^{2} }} + \frac{{T_{j + 1}^{n} - 2T_{j}^{n} + T_{j - 1}^{n} }}{{k^{2} }}} \right) + \frac{{\dot{\phi }_{j} }}{{\rho_{j} c_{j} }},} \hfill \\ {{\text{Heat transfer equation: }}\frac{{T_{j}^{n + 1} - T_{j}^{n} }}{n} + r_{j}^{n} \frac{{T_{j + 1}^{n} - 2T_{j}^{n} + T_{j - 1}^{n} }}{{n^{2} }} = 0.} \hfill \\ \end{array} } \right.$$ For the explicit difference scheme, the discrete solution of unsteady heat transfer process needs to be considered in the stability condition. The above explicit difference formula displays that the temperature of the time node n + 1 on the space node I is affected by the adjacent points on the left and right sides, and the stability limit condition (Fourier grid number limit) needs to be satisfied, otherwise there would be unreasonable oscillatory solutions[16]. $$\left\{ {\begin{array}{*{20}l} {F_{O\Delta } = \frac{\lambda \Delta t}{{\rho c(\Delta x)^{2} }}({\text{grid number}}),} \hfill \\ {F_{O\Delta } \le \frac{1}{2}({\text{inner node constraints}}),} \hfill \\ {F_{O\Delta } \le \frac{1}{{2\left( {1 + \frac{h\Delta x}{\lambda }} \right)}}({\text{boundary constraints}}).} \hfill \\ \end{array} } \right.$$ After discretizing the unsteady process, the gas diffusion and heat transfer equations can be solved by the control equations and boundary conditions. Solution of gas jet diffusion model Under the idea of finite difference method, MathWorks Inc's MATLAB2015a software is employed to set the flow rate at the nozzle to 10 m/s and the height to 1 mm. The gas will expand due to energy absorption during injection. Meanwhile, the density of helium is less than that of air. After leaving the nozzle, helium will be subjected to the air buoyancy, resulting in an upward trend, which would further lead to the expansion of the outer diameter of the gas. The distribution of gas trajectory after helium is injected out of the nozzle is shown in Fig. 3. Gas injection motion diagram After the gas is injected through the nozzle, the instability of the gas itself will diffuse and the concentration of helium will decrease along the X and Y directions. The greater the helium concentration is, the smaller the air concentration is. Therefore, along a fixed direction, the gas will diffuse with the flow, and the concentration will gradually decrease during diffusion, resulting in the gradual increase of oxygen content. 
Therefore, the proportion of oxygen at any point can be obtained from the helium concentration. The distribution of the helium diffusion concentration is shown in Fig. 4a.
Fig. 4 a Gas diffusion concentration distribution. b Isoconcentration diagram of gas diffusion in the XOY plane
The proportion of oxygen in air is about 20%, and combustion can be supported when the oxygen content is higher than 12.5%. When the oxygen content is 12.5%, the proportion of air is 62.5% and the helium concentration is 37.5%. Therefore, when the helium concentration is below 37.5%, the remaining gas mixture can still support combustion. The concentration distribution of the diffusing gas is shown in Fig. 4b, in which the red dotted line is the isoconcentration line for a helium concentration of 37.5%; outside this line the oxygen concentration is higher than 12.5% and a combustion-supporting effect can be realized.
Solution of laser heat transfer model
According to the assumptions, the laser heating temperature does not decrease with the processing depth, and wood is a poor conductor with low thermal conductivity. Moreover, wood is an anisotropic material with significant differences in porosity, moisture content and other physical properties. Combustion occurs when the wood surface temperature exceeds its ignition point of 280 ℃, but owing to the anisotropy of the material the heat conduction efficiency differs between directions: where the moisture content is high, the heat transfer efficiency of the wood is also higher. Setting the material properties to those of the original wood and taking a processing time of t = 2 s, the wood surface temperature distribution is obtained, as shown in Fig. 5a.
Fig. 5 a Wood surface temperature distribution. b Wood surface isotherms
As the thermal conductivity of wood is relatively poor, the temperature decreases rapidly during heat transfer. In addition, Fraxinus mandshurica has a fibrous structure, and its transverse and longitudinal thermal conductivities differ considerably. Owing to the fibre structure, the carbon content in the along-grain cutting mode is significantly lower than in the cross-grain cutting mode, while the thermal conductivities of the longitudinal fibres and transverse wood cells are both very small. Therefore, the temperature on the wood surface drops rapidly with distance from the heat source. Taking the laser source as the origin, a plane rectangular coordinate system is established on the processing surface and isotherms are plotted for various temperatures. The factors affecting heat conduction, such as porosity and moisture content, are randomly distributed over the wood surface; the resulting isotherms are shown in Fig. 5b. The feasible regions of gas concentration and temperature are shown in Fig. 6. The 37.5% level is selected to mark the critical line of combustion-supporting gas in the concentration diagram; in the area outside this critical line (marked as area 1), the oxygen content can support combustion. The 280 ℃ isotherm is marked on the wood surface; the wood may burn within the 280 ℃ contour (marked as area 2), and combustion occurs in the intersection of areas 1 and 2, forming the combustion zone.
Fig. 6 Machining plane combustion zone
It can be seen from the above figures that injecting helium reduces the oxygen content of the machining surface and thereby destroys the combustion conditions.
Consequently, when inert gas-assisted laser processing is used, the carbide generated by combustion is reduced, the slit width declines correspondingly, and the machining accuracy and cutting quality are further improved. In inert gas-assisted laser processing of wood, the slit quality is the standard by which the forming process is evaluated. Because of the absorption of laser energy, heat conduction and mass flow under the pressure of the auxiliary gas, gas-assisted laser processing is quite complex, and the selection of the various parameters has a vital impact on the cutting quality. According to the experimental requirements, the selected cherry wood veneer was cut to size when purchased, with a specification of 120 mm × 80 mm × 5 mm. The moisture content was 15%, and the surface of the board was flat and smooth, without cracks, knots, decay or other defects. To ensure the smooth progress of the experiment and the feasibility of the subsequent tests, the wood surface was sanded with 240# sandpaper before the experiment. Since wood is a typical anisotropic porous material, when cutting along the fibre the laser beam passes through an aggregate of countless fibre bundles, whereas when cutting across the grain the beam passes through thin-walled cavity tissue composed of wood rays. Because of these structural differences, the slit sizes in different cutting directions are inconsistent, so both the along-grain and cross-grain directions should be considered during processing. Accordingly, the influence of the process parameters on wood cutting quality is studied. Taking the slit width and surface roughness as evaluation indexes, the behaviour of traditional laser cutting and gas-assisted laser cutting in the different grain directions is compared and analyzed to determine the optimal processing method. In this study, helium is proposed as the inert gas for assisting laser processing. The equipment used in the forming test of inert gas-assisted laser processing of wood is the laser cutting machine produced by Shandong Baomei, as shown in Fig. 7.
Fig. 7 Gas-assisted laser processing equipment
The wavelength of the CO2 laser is 10.6 µm, the rated power of the laser is 80 W, and the focal length of the lens is 63.5 mm. Industrial helium with purity ≥ 99.999% is selected as the inert gas. After pressure regulation and stabilization through the pressure reducing valve, it enters the nozzle from the air duct and is sprayed coaxially with the laser beam onto the surface of the wood. The gas pressure at the nozzle can reach at most 2.5 MPa. Table 1 lists the specific process parameters selected.
Table 1 Cherry wood processing parameters by inert gas-assisted laser
After laser ablation of the cherry wood, a Quorum Q150t PLUS automatic high-vacuum sputter coater from Nanjing Qinsi Technology Co., Ltd was used to coat the samples with gold. The uniformity of the gold coating determines the image clarity during SEM observation, so it is necessary to ensure a uniform coating of reasonable thickness. An Apreo scanning electron microscope produced by Thermo Scientific (USA) is used to examine and analyze the wood cutting surfaces processed by the inert gas-assisted laser.
The micromorphology of the slit surface is examined, and the surface composition of the slit is analyzed by energy spectrum analysis. The slit width and depth are measured with a BA400 microscope from MOTIC Co., Ltd; the distance from the top to the bottom of the slit is taken as the slit depth. To ensure the accuracy of the measurement results, each incision is measured three times and the average value is reported.
Cutting quality analysis
In cutting, it is generally considered that, under the same processing conditions, the smaller the slit width and the lower the roughness at the slit, the higher the cutting quality. Therefore, this paper analyzes and evaluates the cutting quality with the slit width and slit surface roughness as the evaluation criteria. Figure 8 shows the influence of the laser process parameters on the slit width for the two modes of traditional laser processing and inert gas-assisted laser processing. When the cutting speed is constant, with or without helium, the trend of slit width with laser power is broadly similar, and the slit width increases with increasing laser power. When the laser power is low, it follows from Eq. (8) that the heat flux generated by the laser decreases and less heat is transmitted through the poorly conducting wood, so a narrow slit is formed. Conversely, when the power is relatively high, the heat flux increases and the ablated area on the machined surface also increases, which widens the slit.
Fig. 8 Influence of laser parameters on the width of kerf during grain cutting
Compared with traditional laser processing, the slit width of the board cut with the assistance of inert gas is significantly smaller than without it. As the auxiliary gas, helium effectively disrupts the combustion-supporting conditions at the machined surface, making the combustion area smaller and reducing the ablated and carbonized portion at the slit. The experimental results reveal that when the laser power is 30 W and the cutting speed is 25 mm/s, the longitudinal slit of traditional laser processing is 0.42 mm, whereas the minimum longitudinal slit width of gas-assisted laser processing is 0.29 mm. Gas-assisted laser processing therefore has the advantages of a narrower slit and higher cutting quality. In addition, the roughness of the cutting seam is another important indicator of cutting quality. Therefore, in accordance with the relevant standards [17], we use Ra to characterize the surface roughness of the machined surface, measured at the slit with a roughness tester; the lower the surface roughness, the higher the cutting quality. For traditional laser processing and inert gas-assisted laser processing, the influence of the laser parameters on the surface roughness in the two modes of along-grain cutting and cross-grain cutting is shown in Fig. 9. When the cutting speed is constant, the trend of surface roughness with laser power is broadly similar, and the surface roughness increases with increasing laser power. This is because at low laser power the laser energy coupled into the processed surface for heat transfer is small: the ablation of the wood at the slit is weakened, the flatness of the slit surface is good, and the surface roughness is small.
When the laser power increases, the heat transferred by the laser in the direction perpendicular to the laser axis increases, and the corresponding area of carbonization and ablation also increases. When helium assists the laser processing, it hinders the ablation of the surface by the laser. The heat in the ablation zone within the slit and the residue generated by incomplete gasification are carried away by the gas purge, so this heat cannot be transmitted further into the interior of the workpiece; this lowers the accumulated temperature, effectively reduces the heat-affected zone, and decreases the amount of wood ablated. Therefore, the slit width is relatively small and the slit parallelism is relatively good. The experiments show that when the laser power is 30 W and the cutting speed is 25 mm/s, the minimum along-grain surface roughness of wood processed by gas-assisted laser is Ra = 2.89 μm, and the minimum cross-grain surface roughness is Ra = 3.69 μm. Thus, helium-assisted laser processing can effectively improve the processing quality and markedly reduce the surface roughness.
Fig. 9 Influence of laser parameters on surface roughness during transverse cutting
Response surface methodology
Response surface methodology (RSM) is a method for optimizing experimental conditions. A suitable experimental design is employed to obtain the data; by studying the interactions between the factors in the response, a multiple quadratic regression equation is fitted to describe the functional relationship between the factors and the response value, and the response value at each factor level is obtained. The optimal predicted response and the corresponding experimental conditions are then identified. The response variables of laser cutting of wood are the yield and the processing quality: the former is expressed by the maximum slit depth reached in the cutting process, and the latter is related to the slit width and surface roughness. Within the response surface method, the Box–Behnken design is an experimental design that can evaluate the nonlinear relationship between indicators and factors, and it is a typical rotatable design. Based on the single-factor test results above, the response surface experiment is carried out with laser power (x1), cutting speed (x2) and air flow pressure (x3) as the parameters. The factors and levels of the inert gas-assisted laser wood processing test are shown in Table 2.
Table 2 Factors and levels of inert gas-assisted laser processing of wood
In the response surface problem, the mathematical model of each response is obtained through multiple linear regression analysis, and an approximating function between the response variable y and the control variables x is established. The significance level of the variables in the model is determined through analysis of variance, and the coefficients of the multiple regression model are estimated by the least squares method. The second-order model is:
$$y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j}\sum \beta_{ij} x_i x_j + \varepsilon$$
In the above formula: \(y\)—predicted response value; \(\beta_{0}\)—constant term; \(\beta_{i}\)—main effect coefficient; \(\beta_{ii}\)—quadratic effect coefficient; \(\beta_{ij}\)—interaction effect coefficient; \(\varepsilon\)—random error.
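As an illustration of how such a second-order model can be fitted by least squares, the sketch below (Python) builds the quadratic design matrix for three factors and estimates the coefficients; the Box–Behnken design points and response values are hypothetical stand-ins, not the measurements of Table 3.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, x1, x2, x3, x1*x2, x1*x3, x2*x3, x1^2, x2^2, x3^2."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

def fit_rsm(X, y):
    """Ordinary least-squares estimates of the beta coefficients."""
    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    return beta

# Hypothetical 3-factor Box-Behnken runs (power W, speed mm/s, pressure MPa)
# with illustrative kerf widths (mm); replace with the real Table 3 data.
X = np.array([[30, 20, 0.2], [50, 20, 0.2], [30, 30, 0.2], [50, 30, 0.2],
              [30, 25, 0.1], [50, 25, 0.1], [30, 25, 0.3], [50, 25, 0.3],
              [40, 20, 0.1], [40, 30, 0.1], [40, 20, 0.3], [40, 30, 0.3],
              [40, 25, 0.2], [40, 25, 0.2], [40, 25, 0.2]])
y = np.array([0.31, 0.44, 0.29, 0.38, 0.30, 0.42, 0.33, 0.45,
              0.32, 0.30, 0.36, 0.34, 0.35, 0.34, 0.36])
print(fit_rsm(X, y).round(4))
```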
Response surface regression model

Table 3 lists the experimental results of the response surface experiments on inert gas-assisted laser processing of wood. Taking kerf width, kerf depth and surface roughness as the evaluation indexes, mathematical models of the three responses are established in terms of the process parameters laser power, cutting speed and gas flow pressure.

Table 3 Box–Behnken matrix and response values

Quadratic polynomial functions are used to describe the predicted responses of kerf width, kerf depth and surface roughness in terms of the process parameters:

$$\begin{gathered} y_{{{\text{width}}}} = - 0.96825 + 0.05085x_{1} + 0.013875x_{2} - 1.18x_{3} - 0.0015x_{1} x_{2} - 0.01x_{1} x_{3} \hfill \\ - 0.025x_{2} x_{3} - 0.00004x_{1}^{2} + 0.001x_{2}^{2} + 6.6x_{3}^{2} , \hfill \\ \end{gathered}$$

$$\begin{gathered} y_{{{\text{depth}}}} = 20.3855 - 0.2689x_{1} - 1.08175x_{2} - 4.58x_{3} - 0.01325x_{1} x_{2} - 0.43x_{1} x_{3} \hfill \\ + 0.775x_{2} x_{3} + 0.00831x_{1}^{2} + 0.030688x_{2}^{2} + 33.1x_{3}^{2} , \hfill \\ \end{gathered}$$

$$\begin{gathered} y_{{{\text{Ra}}}} = - 67.00325 + 1.53685x_{1} + 2.61512x_{2} + 36.72x_{3} - 0.00825x_{1} x_{2} + 0.28x_{1} x_{3} \hfill \\ - 1.1x_{2} x_{3} - 0.001399x_{1}^{2} - 0.046187x_{2}^{2} - 64.9x_{3}^{2} . \hfill \\ \end{gathered}$$

In these formulas, \(y\) is the response value, \(x_{1}\) the laser power, \(x_{2}\) the cutting speed and \(x_{3}\) the gas pressure.

Influence of the interaction of process parameters on kerf width

The variance analysis and significance test of the kerf width results for inert gas-assisted laser cutting of wood are given in Table 4 to assess the reliability of the results; P indicates the significance of each term in the model. Among the main effects, laser power has the most significant effect on kerf width, followed by cutting speed and gas flow pressure, and the interactions between the factors are also significant.

Table 4 Variance analysis of kerf width regression model

Figure 10 shows the response surfaces of kerf width for different parameter combinations in inert gas-assisted laser cutting of wood. Figure 10a indicates that the interaction of the process parameters has a significant impact on the kerf width; most of the experimental values lie slightly above the predicted values, and the predictions of the fitted model agree well with the experimental values. Figure 10b shows the influence of the interaction of laser power and cutting speed on kerf width. As the laser energy is transmitted downward into the machined surface, at low cutting speed the excess laser energy heats the flue gas and combustion products in the cut and produces a wide kerf; as the laser power increases, more laser energy accumulates on the wood surface and its influence on the kerf width grows markedly. Figure 10c shows the influence of the interaction of laser power and gas flow pressure on kerf width: higher laser power combined with lower gas pressure widens the kerf, and the effect of laser power on kerf width is more significant than that of gas pressure. Figure 10d shows the effect of the interaction between cutting speed and gas flow pressure on kerf width.
The interaction between gas flow pressure and cutting speed has no obvious effect on the kerf width. When the cutting speed is lower, the interaction time between the gas jet and the wood surface is prolonged, so a higher assist-gas pressure enhances the cooling effect in the laser action zone; in addition, the uneven pressure and temperature in the gas flow change the density of the gas flow field and thereby alter the energy delivered by the laser beam and its distribution along the kerf. According to the experimental results for the various influencing factors, the optimal conditions for obtaining the minimum kerf width by inert gas-assisted laser processing of wood are a laser power of 40 W, a cutting speed of 25.15 mm/s and a gas pressure of 0.17 MPa, giving a minimum kerf width of 0.29 mm.

Fig. 10 Response surface diagram of cutting width of inert gas-assisted laser cutting wood. a Comparison between predicted and actual. b Interaction of laser power and cutting speed. c Interaction of laser power and gas pressure. d Interaction of cutting speed and gas pressure
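As a quick check, the fitted kerf-width polynomial above can be evaluated directly; doing so at the reported setting of 40 W, 25.15 mm/s and 0.17 MPa returns roughly 0.29 mm. The short Python sketch below also runs a bounded minimisation of the model; note that the factor bounds are assumed (Table 2 is not reproduced here), and the constrained minimiser of the width model alone need not coincide with the compromise settings reported later in the paper:

import numpy as np
from scipy.optimize import minimize

def kerf_width(p):
    # Fitted quadratic model y_width(x1, x2, x3) from the regression above, in mm
    x1, x2, x3 = p
    return (-0.96825 + 0.05085*x1 + 0.013875*x2 - 1.18*x3
            - 0.0015*x1*x2 - 0.01*x1*x3 - 0.025*x2*x3
            - 0.00004*x1**2 + 0.001*x2**2 + 6.6*x3**2)

print(round(kerf_width([40.0, 25.15, 0.17]), 3))       # about 0.29 mm at the reported optimum

bounds = [(30, 50), (22, 26), (0.15, 0.25)]            # assumed factor ranges (W, mm/s, MPa)
res = minimize(kerf_width, x0=[40.0, 24.0, 0.20], bounds=bounds, method="L-BFGS-B")
print(res.x, round(res.fun, 3))                        # width-only minimiser within the assumed bounds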
Influence of the interaction of process parameters on kerf depth

The variance analysis and significance test of the experimental kerf depth results for inert gas-assisted laser cutting of wood are given in Table 5. Among the main effects, laser power has the most significant effect on kerf depth, followed by gas flow pressure and cutting speed. The interaction between laser power and cutting speed and that between laser power and gas flow pressure also have a significant impact on the kerf depth. This is because the kerf is formed primarily by combustion and heat accumulation: when the laser power is large the kerf depth increases, while when the cutting speed increases the kerf depth decreases because the action time is short and less heat accumulates on the wood surface.

Table 5 Variance analysis of kerf depth regression model

Figure 11 shows the response surfaces of kerf depth for the parameter combinations in inert gas-assisted laser cutting of wood. Figure 11b shows the influence of the interaction between laser power and cutting speed on the cutting depth. Since laser cutting of wood depends mainly on the thermal input delivered by the laser beam, at a low cutting speed the transmitted energy grows as the laser power gradually increases, and the stronger coupling between beam and material produces a deeper cut. Figure 11c shows the influence of the interaction of laser power and gas flow pressure on the kerf depth. At a higher laser power the laser energy is distributed evenly along the thickness of the workpiece; as the gas jet pressure increases, the scouring effect becomes stronger, which helps remove smoke and combustion particles from the kerf and reduces the residues inside it, so the laser beam propagates more freely in the kerf and the kerf depth increases. Figure 11d shows the influence of the interaction between cutting speed and gas pressure on the cutting depth. At a low cutting speed, because of the longer time over which the gas jet impacts the wood, the jet carries higher kinetic energy, exerts a greater shear stress on the kerf wall, and thus increases the cutting depth. Based on the experimental results, the optimal conditions for the maximum kerf depth of wood processed by the inert gas-assisted laser are a laser power of 50 W, a cutting speed of 22 mm/s and a gas pressure of 0.15 MPa, giving a maximum kerf depth of 3.58 mm.

Fig. 11 Response surface diagram of cutting depth of inert gas-assisted laser cutting wood. a Comparison between predicted and actual. b Interaction of laser power and cutting speed. c Interaction of laser power and gas pressure. d Interaction of cutting speed and gas pressure

Influence of the interaction of process parameters on surface roughness

The variance analysis and significance test of the surface roughness results for inert gas-assisted laser cutting of wood are listed in Table 6. Among the main effects, laser power has the most significant effect on surface roughness, followed by cutting speed and gas flow pressure; the interaction between laser power and cutting speed also has a significant influence.

Table 6 Variance analysis of surface roughness regression model

Figure 12 shows the response surfaces of surface roughness for the various parameter combinations in inert gas-assisted laser cutting of wood. Figure 12b shows the effect of the interaction of laser power and cutting speed on surface roughness. Within the selected parameter range, when the cutting speed is low the kerf surface roughness increases with laser power at a high rate, which suggests that the interaction between cutting speed and laser power has a strong influence on the roughness of the cut surface: as the laser power increases, more heat accumulates on the cut surface of the wood and the kerf surface roughness rises. Figure 12c shows the effect of the interaction between laser power and gas flow pressure on surface roughness. Lower laser power combined with low gas pressure gives a smaller surface roughness, mainly because the laser energy is well distributed along the cutting depth and the shear stress of the jet is low; the combination of high laser power and high gas flow pressure gives a higher surface roughness, because the gas jet exerts a larger shear force that roughens the machined surface. Figure 12d shows the effect of the interaction between cutting speed and gas flow pressure on surface roughness. The roughness increases with gas flow pressure: as the jet pressure rises, more material is removed from the kerf by the resulting shear force, and the combustion products adhere poorly to the carbonized wood surface, which has a significant impact on the roughness. According to the experimental results for the various influencing factors, the optimal conditions for the surface roughness of wood processed by the inert gas-assisted laser are a laser power of 40 W, a cutting speed of 25.99 mm/s and a gas pressure of 0.15 MPa, giving a minimum surface roughness of 1.72 μm.

Fig. 12 Response surface diagram of surface roughness of inert gas-assisted laser cutting wood. a Comparison between predicted and actual. b Interaction of laser power and cutting speed. c Interaction of laser power and gas pressure. d Interaction of cutting speed and gas pressure
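The following subsection selects a single compromise setting across the three responses. One common way to express such a trade-off numerically is a weighted objective built from the fitted models above; the sketch below only illustrates this idea, and the weights, scale factors and bounds are assumptions, not the importance values of Table 7 or the procedure used in the paper:

import numpy as np
from scipy.optimize import minimize

def responses(p):
    # Fitted models from above: kerf width (mm), kerf depth (mm), roughness Ra (um)
    x1, x2, x3 = p
    width = (-0.96825 + 0.05085*x1 + 0.013875*x2 - 1.18*x3 - 0.0015*x1*x2
             - 0.01*x1*x3 - 0.025*x2*x3 - 0.00004*x1**2 + 0.001*x2**2 + 6.6*x3**2)
    depth = (20.3855 - 0.2689*x1 - 1.08175*x2 - 4.58*x3 - 0.01325*x1*x2
             - 0.43*x1*x3 + 0.775*x2*x3 + 0.00831*x1**2 + 0.030688*x2**2 + 33.1*x3**2)
    ra = (-67.00325 + 1.53685*x1 + 2.61512*x2 + 36.72*x3 - 0.00825*x1*x2
          + 0.28*x1*x3 - 1.1*x2*x3 - 0.001399*x1**2 - 0.046187*x2**2 - 64.9*x3**2)
    return width, depth, ra

weights = (1.0, 1.0, 1.0)          # illustrative weights for width, depth and Ra
scales = (0.4, 3.5, 4.0)           # rough magnitudes used only to put the terms on a common scale

def objective(p):
    # Small width and Ra are good (positive sign), large depth is good (negative sign)
    w, d, r = responses(p)
    return weights[0]*w/scales[0] - weights[1]*d/scales[1] + weights[2]*r/scales[2]

bounds = [(30, 50), (22, 26), (0.15, 0.25)]   # assumed factor ranges
res = minimize(objective, x0=[40.0, 24.0, 0.20], bounds=bounds, method="L-BFGS-B")
print(res.x, responses(res.x))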
Process parameter optimization

The objective in selecting the process parameters for inert gas-assisted laser cutting is to obtain the minimum kerf width, the maximum kerf depth and the minimum surface roughness under reasonable process conditions. The optimized process parameters should maximize the usable process range and the quality of the cut section, but the levels of the individual parameters are independent of one another, and when selecting process parameters it is difficult to reach the minimum surface roughness and the maximum kerf depth at the same time. It is therefore necessary to optimize and verify the parameters according to the importance of each response. Table 7 gives the variation range and the importance index of the process parameters for inert gas-assisted laser processing.

Table 7 Variation range and importance of inert gas-assisted laser wood processing parameters

Table 7 lists the weight assigned to each response and its importance on a scale from 1 to 5, with the importance increasing from small to large. The weighting of the responses reflects the priority placed on the prediction of yield and quality, and the weight of each parameter is determined by how frequently it appears as a significant term or participates in interactions affecting the analyzed response. The optimal combination of process parameters for inert gas-assisted laser processing of wood is given in Table 8. These process parameters achieve the target requirements for each response and maximize the process quality of laser cutting. The predicted values of the model are essentially consistent with the experimental values and the errors are small: under the three evaluation criteria, the error in kerf width is 0.35%, and the errors in kerf depth and surface roughness are 0.14% and 0.12%, respectively. The experimental results confirm that the model prediction is best for the surface roughness.

Table 8 Selection and verification of model process parameters

Figure 13 shows the cutting quality of inert gas-assisted laser processing of wood at a laser power of 40 W, a cutting speed of 26 mm/s and a gas pressure of 0.15 MPa. Figure 13a shows that the kerf quality obtained by traditional laser cutting of wood is poor: a large amount of residue accumulates at the bottom of the kerf and both sides of the kerf are rough. This is because the molten material produced by combustion and vaporization of the wood in the laser action zone adheres irregularly to the kerf walls or accumulates at the kerf bottom, so the processing quality is poor. Figure 13b shows that with inert gas-assisted laser cutting the quality of the kerf surface is significantly improved: the residue and carbide adhesion caused by incomplete combustion are reduced under the protection of the gas jet, and the impinging, cooling flow discharges material that cannot burn effectively from the bottom of the kerf, which reduces the solidified layer formed by residue on the kerf wall. The kerf is therefore relatively smooth and flat, and it runs straight in the vertical direction.
As seen in Fig. 13c, under the optimal test parameters the kerf surface of the cherry wood is relatively smooth and there is almost no residue in the pores, because the combination of parameters for inert gas-assisted laser processing is more reasonable and its benefits are realized. Under the synergistic action of the laser and the inert gas, the gas jet protects the cut surface, reduces the damage done by the exothermic combustion reaction to the inner cell walls through the flame-retardant effect of the flow, and uses the cooling and scouring action of the flow to carry away the particle residue left after gasification. The quality of the kerf surface is therefore improved.

Gas-assisted laser cut wood surface optimization. a Traditional laser machining. b Gas-assisted laser processing. c Micromorphology of kerf surface

Helium-assisted laser cutting of wood can produce high-quality kerfs at the same energy consumption, save raw material to a certain extent, and thereby contribute to maintaining forest coverage and protecting the environment. After theoretical modeling and a series of experimental verifications, the conclusions are as follows:

1. With helium-assisted laser processing, the helium ejected from the nozzle suppresses the combustion caused by the heat of the high-temperature laser on the wood, which reduces the carbonized layer of the machined kerf and yields better processing quality.

2. In helium-assisted laser processing, the flow velocity of the helium ejected from the injection port is 10 m/s. The helium not only lowers the concentration of combustion-supporting gas but also has a purging effect that removes carbonized particles and residues from the machined kerf, improving the kerf quality.

3. Comparing traditional laser processing and inert gas-assisted laser processing of wood under the same process parameters, the gas-assisted process gives a smaller kerf width, a lower surface roughness and a smoother surface. The kerf width and surface roughness in the along-grain cutting mode are smaller than those in the cross-grain cutting mode.

4. The response surface method is employed to establish mathematical models with kerf width, kerf depth and surface roughness as the evaluation criteria and to analyze the interaction of laser power, cutting speed and inert gas pressure on these responses. By comparing the errors between the predicted values of the quadratic models and the experimental measurements, an optimized combination of process parameters is obtained: at a laser power of 40 W, a cutting speed of 25.99 mm/s and a gas pressure of 0.15 MPa, the minimum surface roughness is 1.71 μm.

5. With helium-assisted laser cutting of wood, the kerf quality and micromorphology are markedly improved. Observation of the kerf micromorphology by scanning electron microscopy shows that the kerf surface and the regions around the pores of the inert gas-assisted laser-processed wood are relatively clean, with almost no residues, leaving the inner walls intact and the surface smooth.

Most data analyzed during this study are included in this published article. The supplementary information is available from the corresponding author on reasonable request.

RSM: Response surface methodology

Xiang SL, Li CS (2010) Progress in wood processing and application technology.
Science Press, Beijing Wang BY (1994) Application of laser cutting in wood processing. China forestry science and technology 01:25–27 Zhang SH (2005) Research on laser cutting technology. D Sc Tech dissertation, Xi'an University of Technology. Olakanmi EO, Cochrane RF, Dalgarno KW (2015) A review on selective laser sintering/melting (SLS/SLM) of aluminium alloy powders: Processing, microstructure, and properties. Prog Mater Sci 74:401–477 Naderi N, Legacey S, Chin SL (1999) Preliminary investigations of ultrafast intense laser wood processing. For Prod J 49(6):72–76 Kalyanasundaram D, Shehata G, Neumannc C, Shrotriya P, Molian P (2008) Design and validation of a hybrid laser/water-jet machining system for brittle materials. J Laser Appl 20(2):127 Lee M (2015) Recent Trends of the Material Processing Technology with Laser-ICALEO 2014 Review. J Weld Join 33(4):7–16 Lu Y, Ding Y, Wang ML (2021) An environmentally friendly laser cleaning method to remove oceanic micro-biofoulings from AH36 steel substrate and corrosion protection. J Clean Prod 314:127961 Xu SL (2018) Research and application of laser cutting equipment for complex curved surface. D Sc Tech dissertation, Shenyang University of Technology. Lu Y, Yang LJ, Wang ML, Wang Y (2020) Improved thermal stress model and its application in ultraviolet nanosecond laser cleaning of paint. Appl Opt 59(25):7652–7659 Yang CM, Jiang T, Liu JQ, Ma Y, Miao Q (2020) Water admittance nanosecond laser ablation mechanism and processing experiment of Korean pine. Scientia Silvae Sinicae 56(08):204–211 Jiang XB, Hu H, Liu JQ, Zhu XL, Ma Y, Yang CM (2018) Discussion on the Processing of Wood by Nanosecond Water Guide Laser. Scientia Silvae Sinicae 54(01):121–127 Yang CM, Jiang T, Ma Y, Liu JQ (2019) Design and experimental study on water-jet assisted nanosecond laser equipment for wood. China forest products industry, 46(05):12–16+53. American Meteorological Society (1935) Bulletin of The American Meteorological Society. 16(2): 31–33. Siegman AE (1993) Defining, measuring, and optimizing laser beam quality. SPIE 1224:2–13 Li XC, Wang ZW (2016) Study on transient heat transfer process of one-dimensional thermoelectric module. Solar Newspaper 37(07):1826–1831 ISO 21920–2 (2021) Geometrical product specifications (GPS)—Surface texture: profile — Part 2: terms, definitions and surface texture parameters. International Organization for Standardization, Geneva The authors thank many teachers of the school of mechanical and electrical engineering of Northeast Forestry University for their guidance and the laboratory of the school of mechanical and electrical engineering for their support. The work of this paper is jointly funded by the natural science foundation of Heilongjiang Province (ZD2021E001) and the special fund for basic scientific research business expenses of Central Universities of the Ministry of education of China (2572019CP18). Northeast Forestry University, 26 Hexing Rd, Harbin, 150040, China Chunmei Yang, Xinchi Tian, Bo Xue, Qingwei Liu, Jiawei Zhang, Jiuqing Liu & Wenji Yu Chinese Academy of Forestry, 1 DXF of Xiangshan Rd, Beijing, 100091, China Wenji Yu Chunmei Yang Xinchi Tian Bo Xue Qingwei Liu Jiuqing Liu CY is the project leader and responsible for the manuscript review. XT established the model, analyzed the data and drafted the manuscript. BX designed the experiment and put forward many modification suggestions for the manuscript. QL carried out specific experimental operation and data acquisition. 
JZ participated in the electron microscope analysis and in collecting samples of the processed wood. JL is one of the project leaders. WY is responsible for manuscript review and is the corresponding author. All authors read and approved the final manuscript. Correspondence to Wenji Yu.

Yang, C., Tian, X., Xue, B. et al. Research on the wood processing method of helium-assisted laser process. J Wood Sci 68, 50 (2022). https://doi.org/10.1186/s10086-022-02051-4

Keywords: Helium assisted; Heat transfer model; Response surface method
CommonCrawl
Understanding the definition of mean/autocorrelation

I was studying the definitions of mean, expected value and autocorrelation. I wanted to verify my understanding of how the mean, the expected value and the autocorrelation are evaluated, and at the same time to verify mathematically the definition of autocorrelation as the convolution of the signal with a delayed copy of itself, which I was not able to do. Here is my analysis:

Basically, I see the mean as an average of all values in a population. For example, consider an experiment where we measure the outcome of a random variable that can take any value in $[1,100]$. By performing the above experiment $50$ times, we get $50$ outcomes $a_1, a_2, \ldots, a_{50}$ with values ranging from $1$ to $100$. The mean of the above random variable is given by: $$ \mu = \frac{a_1 + a_2 +\ldots + a_{50}}{50} $$ The expected value is given by: $$ E(x)=(1*\textrm{probability of getting 1})+(2*\textrm{probability of getting 2})+\ldots+(100*\textrm{probability of getting 100}) $$ which can also be represented as: $$ E(x)=\sum_{k=1}^{100} kP_k $$ where $P_k$ is the probability of getting $k$. Now taking the actual definition of an expected value, which is: $$ E(X)=\int_{-\infty}^{\infty} xF_x(X)dx $$ The above seems somewhat satisfying, as we can relate it to the example by taking $x$ as one of the outcomes and $F_x(X)$ as the probabilities (actually it's a probability density function, to be accurate). For a random process, the above expands to (at time $t_1$): $$ E(X_{t_1})=\int_{-\infty}^{\infty} x_{t_1}F_{X_{t_1}}(x_{t_1})dx $$ Similarly, the expected value can be computed at several time instants, since the random variable changes at every time instant. Now, let's take a strict-sense stationary random process, where the expected value (mean) is constant and the autocorrelation depends only on the time difference. The definition of autocorrelation, as given on Wikipedia, is: $$ R_x(\tau)=\frac{E\left[\left(X(t_1) - \mu\right)\left(X(t_1+\tau) - \mu\right)\right]}{\sigma_{t_1}\sigma_{t_2}} $$ The above is with normalization. Taking out the normalization, we have: $$ R_x(\tau)=E\left[X(t_1)X(t_1+\tau)\right] $$ Till now it's pretty clear. Now applying the definition of an expected value in the above expression: $$ R_x(\tau)=E\left[X(t_1)X(t_1+\tau)\right]=\int_{-\infty}^{\infty} x_{t_1}F_{x_{t_1}}(X_{t_1})x_{t_2}F_{x_{t_2}}(X_{t_2})dx\quad\text{where}\quad t_2 = t_1+\tau $$ But in Wikipedia, it's given as: $$ R_x(\tau)=E\left[X(t_1)X(t_1+\tau)\right]=\int_{-\infty}^{\infty} x(t)x^*(t-\tau)dt $$ This is where I am stuck: I am not able to match the last two expressions. I am well aware that the autocorrelation of a random process is a convolution of the signal with itself at a time lag of $\tau$, but I am not able to relate the expressions mathematically. I have been stuck here for several days.

continuous-signals autocorrelation random-process stochastic

sundar

Your definition of auto-correlation isn't correct; you have to use the joint probability, which in general isn't the product of the marginal probabilities. – Mohammad M

@MohammadMohammadi So, you mean this definition?: latex.codecogs.com/… – sundar

Yes, you have to use the joint probability. But this correction still won't relate those two definitions. – Mohammad M

@MohammadMohammadi Yeah, that's what I am wondering...
But if we take the joint probability and even divide this definition latex.codecogs.com/… by twice the total time period T, then it would make some sense (like taking the values and averaging over a time period T - since there are two random variables - 2 times T).

Still it has some problems: it has to be the joint probability, and the integral would be a double integral over the two random variables. But the more important problem is that you are trying to show that these two equations are equal, and they aren't. The auto-correlation over time is an estimator of the ensemble auto-correlation for ergodic processes, which converges to the auto-correlation in the mean-square sense; in other words, the expectation of the squared difference between your estimator and the ensemble auto-correlation goes to zero. – Mohammad M

The definition of the autocorrelation function $R_x(\tau)$ depends on the nature of your $x$. If $x$ is a deterministic signal with finite energy then: $$R_x(\tau)=\int_{-\infty}^{+\infty}x(t)x^*(t-\tau)dt$$ If $x$ is a deterministic signal with finite average power$^{(1)}$ then: $$R_x(\tau)=\lim_{T\to+\infty}\frac{1}{T}\int_{-T/2}^{+T/2}x(t)x^*(t-\tau)dt$$ If $X$ is a (real-valued) random process then: $$R_X(t_1,t_2)=E[X(t_1)X(t_2)] = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}u_1u_2p_X(u_1,u_2)du_1du_2,$$ where $X(t_1)$ and $X(t_2)$ are two random variables (the realizations of the random process $X$ at times $t_1$ and $t_2$, respectively), $u_1$, $u_2$ are their respective values and $p_X(u_1,u_2)$ is the joint probability density function of these two random variables. This statistical autocorrelation depends only on $\tau = t_1-t_2$ if the random process is stationary, and we can then write: $$R_X(\tau)=E[X(t)X(t-\tau)].$$ If, furthermore, the random process is ergodic, then the statistical autocorrelation equals the temporal autocorrelation, i.e.: $$E[X(t)X(t-\tau)]= \lim_{T\to+\infty}\frac{1}{T}\int_{-T/2}^{+T/2}x(t)x(t-\tau)dt^{(2)}$$ $^{(1)}$: Signals with finite energy are also known as energy signals, whereas signals with finite average power are also known as power signals. Moreover, the average power of an energy signal is zero and the energy of a power signal is infinite. $^{(2)}$: A random signal is by convention treated as a power signal. – Learn_and_Share

You define the autocorrelation in general for stochastic signals as an expectation (ensemble average), and say that this equals the temporal average in the special case of ergodic processes. Can you tell me why, for deterministic signals, you then go straight to the temporal average for your definition, and not the ensemble average? – teeeeee

@teeeeee An ensemble average is taken, at some time instant say $t_1$, by averaging the outcomes of different experiments (or realizations) of a random process; see page 175 here. By definition, a deterministic signal is nonrandom, so you only get a single deterministic outcome, and you can only take an average of $x(t)$ over $t$, i.e., what you called the temporal average. – Learn_and_Share

I understand that, but since you have taken an average over time, does that mean that the expression is only valid for an ergodic signal? Or is it correct to say that all deterministic signals are ergodic anyway? – teeeeee

@teeeeee, It doesn't actually make sense to talk about ergodicity for deterministic signals.
Ergodicity basically says that, assuming you have a random process ($X(t,\omega)$ where $t$ is time and $\omega$ the outcome), taking a temporal average of a single outcome $\omega_k$ over the variable $t$ (i.e., the average of $X(t, \omega_k)$ over $t$) is equivalent to taking an ensemble average over the outcome variable $\omega$ of this random process considered at some time instant $t_k$ (i.e., the average of $X(t_k, \omega)$ over $\omega$). How do you average outcomes when you don't have them? – Learn_and_Share

In addition to stationarity, your process must be ergodic to relate these two definitions. Ergodicity tells us that the joint probability of your signal taking a given pair of values at two instants of time (which, by stationarity, depends only on their time difference) is equal to the fraction of the total time over which that combination of values, with the same delay, appears in your signal. – Mohammad M
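To make the ergodicity point concrete, here is a short numerical sketch added as an illustration; the AR(1) model and every parameter value are assumptions, not taken from the posts above. For an ergodic stationary process, the time average of $x[n]x[n+k]$ over one long realization and the ensemble average of the same product over many independent realizations both converge to $R_X(k)$:

import numpy as np

rng = np.random.default_rng(0)
a, sigma_w, lag = 0.8, 1.0, 3
theory = sigma_w**2 / (1 - a**2) * a**lag       # R_X(lag) of a zero-mean stationary AR(1)

def ar1(n, burn=300):
    # One realization of x[k] = a*x[k-1] + w[k], with the start-up transient discarded
    w = rng.normal(0.0, sigma_w, n + burn)
    x = np.zeros(n + burn)
    for k in range(1, n + burn):
        x[k] = a * x[k - 1] + w[k]
    return x[burn:]

# Temporal autocorrelation: average x[k]*x[k+lag] over time in ONE long realization
x = ar1(200_000)
time_avg = np.mean(x[:-lag] * x[lag:])

# Ensemble autocorrelation: average x[n0]*x[n0+lag] over MANY realizations at a fixed time n0
n0, runs = 50, 10_000
samples = np.empty(runs)
for i in range(runs):
    r = ar1(n0 + lag + 1)
    samples[i] = r[n0] * r[n0 + lag]
ens_avg = samples.mean()

print(theory, time_avg, ens_avg)   # the three numbers agree closely for this ergodic process

For a non-ergodic process (for example, a random constant $X(t)=A$ with $A$ drawn once at random), the two averages would not agree, which is exactly why ergodicity is needed to identify the temporal definition with the statistical one.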
CommonCrawl
For simplicity, let $\Pi $ be a plane.

Crack growth along the plane $\Pi $ and the mapping $x\mapsto \Phi _h(x)$

Let $h$ be a $C^1$-function defined on the edge $\partial \Sigma $ of the crack $\Sigma $ given in Section 2.2.2. Using the geodesic coordinate, we define $\mathcal{E}(v;h)=\tilde{\mathcal{E}}(v;f,g,\Phi _h)$, $\Phi _h(x)=x+h(\gamma (\mathcal{P}(x)))e_{\partial \Sigma ,1}(\gamma (\mathcal{P}(x)))\beta (x)$, where $\beta $ is a cut-off function such that $\beta (x)=1$ for $x$ in a neighborhood of $\partial \Sigma $ included in $U(\partial \Sigma )$ given in Section 2.2.2. Applying Theorem 2.10 to the fracture problem with $\mathcal{V}=\{v\in H^1(\Omega ;\mathbb{R}^d):\,v=0~~ \textrm{on }\Gamma _D\}$, $\mathcal{O}=C^1(\partial \Sigma )$, $\varphi =th$ and $\varphi _0=0$, we have \begin{eqnarray*} \frac{d}{dt}\mathcal{E}(u(t);f,g,\Omega (t))|_{t=0}&=& -R_{\Omega }(u,\mu _h),\qquad \Omega (t)=D\setminus \Phi _{th}(\Sigma ),\\ \mu _h&=&h(\gamma (\mathcal{P}(x)))e_{\partial \Sigma ,1}(\gamma (\mathcal{P}(x)))\beta (x). \end{eqnarray*} At a point $x$ near $\partial \Sigma $, $\mu _h(x)=h(\gamma (\mathcal{P}(x)))e_{\partial \Sigma ,1}(\gamma (\mathcal{P}(x)))$ is the parallel extension of the vector field $h(\gamma )e_{\partial \Sigma ,1}(\gamma ),\ \gamma \in \partial \Sigma$, with the speed $h$. If $\| h\|_{C^1(\partial \Sigma )}$ is large, replace $h$ by $\epsilon h$ with sufficiently small $\epsilon \gt 0$. Let $\omega _0$ be a neighborhood of $\Sigma $; then $J_{\Omega \setminus \overline{\omega _0}}(u,\mu _h)=0$, since $\Phi _{th}(x)=x$ on $\partial D$ and $u$ has the interior regularity, so that \begin{eqnarray*} R_{\Omega }(u,\mu _h)&=&R_{\Omega \setminus \overline{\omega _0}}(u,\mu _h) +R_{\omega _0}(u,\mu _h)\\ &=&-P_{\Omega \setminus \overline{\omega _0}}(u,\mu _h) +R_{\omega _0}(u,\mu _h)=J_{\omega _0}(u,\mu _h |\partial \omega _0). \end{eqnarray*} Let $\omega $ be a neighborhood of $\partial \Sigma $.

Neighborhoods $\omega _0, \omega $ of the crack front $\partial \Sigma $ and the vector field $\mu _h$

Since $u |_{\omega _0\setminus \overline{\omega }}$ is regular except in the direction crossing $\Sigma $, $\widehat{T}^±(u)=0$ and $\mu _h\cdot n=0$ on $\Sigma $, we arrive at \begin{equation} \mathcal{G}(f,g,\Sigma _h(\cdot ))=J_{\omega }(u,\mu _h |\partial \omega )\left (\int _{\partial \Sigma }h(\gamma )d\gamma \right )^{-1},\qquad \Sigma _h(\cdot ):=\{\Phi _{th}(\Sigma )\} _{t\in [0,\epsilon ]}, \tag{2.44} \end{equation} which gives another proof of Theorem 2.7.

Surface not flat

Similar results are obtained if we notice that on a plane the shortest line connecting two points is a straight line, whereas on a general surface the shortest line is a geodesic. Using the geodesic coordinate, a point $x$ near the crack front $\partial \Sigma $ is expressed as $x=\mathcal{P}(x)+\lambda _3(x)e_{\partial \Sigma ,3}(\mathcal{P}(x))$, $\mathcal{P}(x)=g_{\Sigma }(e_{\partial \Sigma ,1}(\gamma (x)),\lambda _1(x))$. Here $[\lambda \mapsto g_{\Sigma }(e_{\partial \Sigma ,1}(\gamma (x)),\lambda )],\ 0\le \lambda \le \lambda _1(x)$, is the geodesic passing through $\mathcal{P}(x)$ and crossing $\partial \Sigma $ in the direction $e_{\partial \Sigma ,1}(\gamma (x))$ at $\gamma (x)$, as shown below.
Now, we define \begin{equation*} \Phi _h(x):=\mathcal{P}_{h\beta }(x)+\lambda _3(x)e_{\partial \Sigma ,3}(\mathcal{P}_{h\beta }(x)) \end{equation*} where $\mathcal{P}_{h\beta }(x):=g_{\Sigma }(e_{\partial \Sigma ,1}(\gamma (x)),\lambda _1(x)+h(\gamma (x))\beta (x))$ with a cut-off function $\beta $ near $\partial \Sigma $. If $x\notin \textrm{supp}_{\Pi }\beta $, then $$ \mathcal{P}_{h\beta }(x)=g_{\Sigma }(e_{\partial \Sigma ,1}(\gamma (x)),\lambda _1(x))=\mathcal{P}(x),$$ that is, $\Phi _h(x)=\mathcal{P}(x)+\lambda _3(x)e_{\partial \Sigma ,3}(\mathcal{P}(x))=x$, which means that $\Phi _h$ is defined for all $x\in \mathbb{R}^d$. Here $\mu _h(x)$ is the parallel transport (e.g., [Def. 7.4.8, Pr10]) from $\gamma (x)$ to $\mathcal{P}(x)$ along the geodesic, and is parallel to its normal, when $x$ is near $\partial \Sigma $. In [Oh81], the product neighborhood $(U,p)$ [Lemma 4.2, Oh81] was used, but for clarity of the geometric picture the geodesic coordinate system is used here. In [Oh81] it is also shown that the velocity vector of the energy release rate is independent of $(U,p)$ [Lemma 5.3, Oh81].

Product neighborhood $(U,p)$ subordinate to the crack front $\partial \Sigma $

J-integral revisited

Consider the elasticity $\sigma _{ij}(\xi ,u)=C_{ijkl}(\xi )\varepsilon _{kl}(u)$ with $C_{ijkl}(\xi )$ depending on $\xi \in \Omega \subset \mathbb{R}^2$, and let $\Sigma :=\{(x_1,0): -a\le x_1\le 0\}$, $\Omega :=D\setminus \Sigma $, $\widehat{W}(\xi ,\varepsilon (u)):=\sigma (\xi ,u):\varepsilon (u)/2$. For given $f\in L^2(\Omega ;\mathbb{R}^2)$ and $g\in L^2(\Gamma _N;\mathbb{R}^2)$, find $u\in V(\Omega )$ such that \begin{eqnarray*} \mathcal{E}(u;f,g,\Omega )&=&\min _{v\in V(\Omega )} \mathcal{E}(v;f,g,\Omega ),\notag \\ \mathcal{E}(v;f,g,\Omega )&:=& \int _{\Omega }\{\widehat{W}(x,\varepsilon (v))-f\cdot v\} dx -\int _{\Gamma _N}g\cdot v~ds,\\ V(\Omega )&:=& \{v\in H^1(\Omega ;\mathbb{R}^2):v=0~\textrm{on }\Gamma _D\}. \end{eqnarray*} Consider the crack growth $\Sigma (t)=\{(x_1,0):-a\le x_1\le \ell (t)\}$ at time $t$. Since the bilinear forms $a_{\Sigma (t)}(v,w):=\int _{\Omega (t)}\sigma (x,v):\varepsilon (w)dx$, $\Omega (t)=D\setminus \Sigma (t)$, are coercive on $V(\Omega (t))$ and the maps $(f,g)\mapsto u(t)$ are bounded operators from $L^2(\Omega ;\mathbb{R}^2)\times L^2(\Gamma _N;\mathbb{R}^2)$ to $V(\Omega (t))$, we obtain, by an argument similar to [Oh81] with $\varphi _t(x)=x+e_1\ell (t)\beta $, \begin{equation} -\left .\frac{d}{dt}\mathcal{E}(u(t);f,g,\Omega (t))\right |_{t=0} =R_{\Omega }(u,\ell '(0) e_1\beta ).\tag{2.A1} \end{equation} Therefore, the energy release rate $$ \mathcal{G}(f,g,\Sigma (\cdot )):=\lim _{t\to +0}\ell (t)^{-1}[\mathcal{E}(u;f,g,\Omega )-\mathcal{E}(u(t);f,g,\Omega (t))]$$ becomes, by (2.A1), \begin{eqnarray} \mathcal{G}(f,g,\Sigma (\cdot ))&=&-\left .\frac{d}{dt}\mathcal{E}(u(t);f,g,\Omega (t))\right |_{t=0}(\ell '(0))^{-1}\notag \\ &=&R_{\Omega }(u,\ell '(0) e_1\beta )(\ell '(0))^{-1}=R_{\Omega }(u,e_1\beta ).\tag{2.A2} \end{eqnarray} In Bui [Bu04], (2.A2) is called the G-theta integral, and it is noted there that (2.A2) was given independently by Ohtsuka (1981) [Oh81], deLorenzi (1982) [Lo82], and Destuynder and Djaoua (1981) [D-D81]. In [Kn06, Kh-So99], (2.A2) is called the Griffith formula; see [Kn-Mi08] for finite elasticity.

Curve crack growth

(a) Crack tips $\gamma $ and $\phi _t(\gamma )$ (b) Parametrization $\rho $ of $\Pi $ by arc length

Consider the curve crack growth based on the idea of the 3D problem [Oh81].
Let $[\lambda _1\mapsto \rho (\lambda _1)]$ be the parametrization of $\Pi $ by arc length such that $\gamma =\rho (0)$, $\rho (\lambda _1)\in \Sigma $ if $\lambda _1\le 0$ and $\rho (\lambda _1)\notin \Sigma $ if $\lambda _1 \gt 0$, and put $\phi _t(\gamma )=\rho (\ell (t))$. Take a neighborhood $U(\gamma )$ of $\gamma $ such that the foot of the perpendicular to $\Pi $ from $x\in U(\gamma )$ is uniquely $\rho (\lambda _1(x))\in \Pi $, that is, $x=\rho (\lambda _1(x))+\lambda _3(x)\nu (\rho (\lambda _1(x)))$. Define $\varphi _t(x)=\rho (\lambda _1(x)+\ell (t)\beta (x))+\lambda _3(x)\nu (\rho (\lambda _1(x)+\ell (t)\beta (x)))$ with a cut-off function $\beta $ near $\gamma $, $\textrm{supp}\,\beta \subset U(\gamma )$.

(a) Neighborhood $U(\gamma )$ and the map $[x\mapsto \varphi _t(x)]$ (b) Vector field $\mu _C$ obtained by parallel extension from the crack growth

\begin{eqnarray} \mathcal{G}(f,g,\Sigma (\cdot ))&=&-\left .\frac{d}{dt}\mathcal{E}(u(t);f,g,\Omega (t))\right |_{t=0}(\ell '(0))^{-1}\notag \\ &=&R_{\Omega }(u,\mu _C\beta )(\ell '(0))^{-1}.\tag{2.A5} \end{eqnarray}

Unfortunately, (2.A5) does not hold if $\Sigma $ grows at an angular bend (see Crack Path).

[Kh-So99] A.M. Khludnev and J. Sokolowski, The Griffith formula and the Rice-Cherepanov integral for crack problems with unilateral conditions in nonsmooth domains. Eur. J. Appl. Math. 10 (1999), 379--394.
[K-W06] M. Kimura and I. Wakano, New mathematical approach to the energy release rate in crack extension. Trans. Japan Soc. Indust. Appl. Math. 16 (2006), 345--358. (in Japanese)
[K-W11] M. Kimura and I. Wakano, Shape derivative of potential energy and energy release rate in fracture mechanics. J. Math-for-Industry 3A (2011), 21--31.
[Kn06] D. Knees, Griffith-formula and J-integral for a crack in a power-law hardening material. Math. Models Meth. Appl. Sci. 16 (2006), 1723--1749.
[Kn-Mi08] D. Knees and A. Mielke, Energy release rate for cracks in finite-strain elasticity. Math. Meth. Appl. Sci. 31 (2008), 501--528.
[Lo82] H.G. deLorenzi, On the energy release rate and the J-integral for 3-D crack configurations. Int. J. Fracture 19 (1982), 183--193.
CommonCrawl
Solomon Wiznitzer
The Complete Boeing Project
Northwestern University: MSR Final Project
3D CAD with Onshape · PCB Design & Soldering · Crimping & Wiring Design · Programming the Tiva C Series microcontroller · ROS Kinetic · Mobile Robot Kinematics · SLAM & Sensor Fusion · Mathematica, Python & C/C++ · Setting up the Intel NUC Kit NUC7i7BNH

As mentioned on the Project page, this quarter brought forth some major changes, even some updates to the design proposed last quarter! For that reason, this post will attempt to describe the new build process of the Omni robot from the ground up, possibly overlapping from the last post but making up for it by explaining things in a sequential, easy to understand manner. That said, the goal of the project remained the same - to build and program three mobile platforms such that they are able to move as one rigid body. This is imperative as each platform will be carrying an inverted robotic delta arm (designed by Matthew Elwin and William Hunt) which in turn will be supporting part of a large rigid object. Without the platforms moving as one entity (i.e. in a formation), it will not be possible for the delta arms to maintain their hold on the rigid object. This is also why it is vital that the odometry calculated by each robot is as accurate as possible - to minimize drift that could ruin the formation. To accomplish this, there were five main tasks involved:

Wiring & PCB Design
Programming the Tiva microcontrollers
Physical Design & Sensor Placement
Control of a Single Robot including Odometry Calculation & Verification along with Sensor Fusion
Formation Control of all Three Robots

Figure 1: IG52-04 24VDC 285 RPM motor
Figure 2: Sabertooth 2x25A V2

Wiring & PCB Design

This section provides a mid-level overview of how all the electronics were connected, starting from the motor and going all the way up to the sensors. Before starting, it should be noted that the motors, motor drivers, chassis, wheels, switches, batteries, gears and chains were all sourced from the Programmable Mecanum Wheel Vectoring Robot - IG52 DB product on SuperDroid Robots' website. As shown in Figure 1, a brushed 24VDC motor, capable of going 285 RPM and complete with a dual channel/quadrature magnetic encoder, was used for this project. Four of these were installed in the aluminum chassis with enough combined torque to move the robot while carrying a 200lb payload! In regards to connections, the Sabertooth motor driver (Figure 2) can provide 25A to two motors (one pair of red/black leads wired to M1A/M1B, and the other pair to M2A/M2B). As the motors have a rated current of up to 2.85A, this driver is more than able to do the job. The middle two terminals of the Sabertooth (B+/B-) were then wired to a 24V busbar (powered by two 12V 18Ah Interstate SLA batteries hooked up in series) via a 25A 4PST switch. For safety, a 6.3A slow fuse was hooked up between B+ and the 24V (2.85A*2 < 6.3A). Since there are four motors, two Sabertooths were installed - one on either side of the robot.

Figure 3: 'Wheel' PCB to control the motors

With a good understanding now of how the motors are powered, the wiring needed to control them can be discussed. In the previous paragraph, I mentioned that each motor has a quadrature encoder which can be used to measure its speed. Four wires come out from the encoder. Two of them represent Channel A and B and the other two provide 5V and Ground. Unfortunately, the Tiva microcontroller that is being used only has two quadrature encoder peripherals (aka QEI pins).
As a result, only two motors' encoder wires can be hooked up to any one microcontroller. Thus, two PCBs were made to aid in the motor control process. Known as the 'Wheel Control Board' (shown in Figure 3), each board contains two four-pin JST housings for two sets of encoders (by the yellow heat-shrink in Figure 3). Additionally, it holds a three-pin JST housing (by the black heat-shrink) for three wires that connect to the 0V, 5V, and S1 pins on the Sabertooth (Figure 2). The Tiva microcontroller (which plugs into the header pins on the left half of the board) uses these three pins to send packetized serial commands to the Sabertooth. The 0V pin provides a ground reference for the signal pin (S1), and due to how the packetized serial mode works on the Sabertooth, the Tiva can send commands via the S1 pin to both of the motors hooked up to the Sabertooth. Finally, in order to isolate the 'high voltage' motor driver from the 'low voltage' Tiva board, the ISO3086DWR 16-pin isolated RS485 chip was placed between them. The 'high' 5V from the Sabertooth was then used to power the 'high' side of the chip. Just as a note, the way to determine where the motor or encoder leads would be attached was based on a 'low' to 'high' number system. Looking in the same direction as the front of the robot, the leads associated with the right side of the robot had a 'lower' number than the left side. For example, the motor leads that powered the right wheel would connect to M1A/M1B on the Sabertooth while the leads that powered the left wheel would connect to M2A/M2B. Similarly, the encoder wires for the right wheel would plug into the QEI0 peripheral while the ones for the left wheel would plug into the QEI1 peripheral. Figure 4: 'Omni' PCB to control the 'Wheel' microcontrollers Technically, this setup would have been enough to control all four wheels of the robot. An FTDI cable could be hooked up from each Tiva microcontroller (TX, RX, and ground pins) to the NUC (a computer with a 4x4 inch footprint capable of running Ubuntu Linux) and the NUC could send commands through it to both microcontrollers. However, this design idea was discarded for a few reasons: 1) it would require twice the number of USB ports on the NUC 2) there was no guarantee that commands would be sent to each 'Wheel' Tiva at exactly the same time 3) since the Delta robot had a 'master' microcontroller to control the Tivas that directly interfaced with its motors, it would be consistent if the Omni robot had one as well. This is where the Ethernet cables come in handy. As shown in the right part of Figure 4, two Ethernet cables leave the 'Omni' PCB from the two silver rectangular ports. One cable connects to a port on one 'Wheel' PCB (top right in Figure 3) and the other connects to the second 'Wheel PCB'. In order to preserve Serial data, 8-pin RS485 chips (one for each 'Wheel' PCB and two on the 'Omni' PCB) were placed in between the TX and RX pins of the Tiva and the Ethernet port. This accounts for four of the wires inside the 8-wire bundle that make up the Ethernet cable (TX+, TX-, RX+, RX-). Another two wires provide 5V and GND to power the 'Wheel' Tivas. The last two wires were used for Emergency Stop purposes - one for a 'Wheel' Tiva to request an E-STOP (from the 'master' Tiva) and one to get back the response. The 'requests' then become two inputs to an AND gate on the 'Omni' PCB. 
Another two inputs to the AND gate come from the 'Omni' Tiva itself as well as an external Emergency Stop button (which plugs into the two pin JST in the top of Figure 4). If the Delta robot is being run at the same time as the Omni robot, the output of the AND gate travels up (via the 4-pin JST right next to the two-pin E-STOP JST) to the Delta and becomes an input to another AND gate located on its 'master' PCB. The output from the Delta AND gate (i.e. the 'response') travels back down to the 'Omni' PCB then to the 'Wheel' PCBs. However, if the Delta robot is not connected, the output of the AND gate located on the 'Omni' PCB becomes the 'response' signal (thus, the reason why the 4-pin JST shows a wire looping back on itself - note that the other two pins of that JST do not connect to anything). Finally, the black FTDI cable shown in Figure 4 connects to a USB port in the NUC. Besides for the 'Omni' FTDI cable, three other micro-USB cables plug into the NUC. One is used to power the 'Omni' Tiva (which then feeds the power to the 'Wheel' Tivas). The other two plug into this laser scanner and this IMU respectively. To power the NUC though is a bigger problem since it only accepts 12-19VDC and because it should be isolated from the 24V motor power supply. To fix this issue, this DC/DC converter was placed between the NUC and the 24V motor power supply (can partially be seen under the 'Omni' PCB in Figure 4). In any event, that about sums up the wiring of the Omni robot! Programming the Tiva Microcontroller With an ARM Cortex-M4 core capable of floating point, CPU speeds up to 80 MHz, 256KB Flash memory, 2KB EEPROM, 8 UART modules, 2 Quadrature Encoder Inputs, and a pricetag of only $13.50, the TM4C123G LaunchPad Evaluation Board had everything needed to make this project work. Although in the design from last quarter, the plan was to have each Sabertooth connected to a Kangaroo motion controller which in turn would be wired to a single Arduino Mega, this was rejected for a couple of reasons. For one, the Kangaroo (which would have been responsible for doing PID control to maintain the desired wheel speeds) could not accurately measure the velocity of a given motor. For example, if the Kangaroo was commanded to rotate a motor at 60 rpm, an external tachometer would measure a speed of 55 rpm. A possible explanation for this could be due to the nature of the magnetic encoders on the back of each motor. Normally, Channel A and B should have a 90 degree phase shift between them as shown in Figure 5. Figure 5: Typical Encoder Trace However, when measured with an oscilloscope, the traces from the magnetic encoders showed variable phase shifts (ex. if the Channel B trace in Figure 5 was shifted a bit more to the left). Considering the fact that the Kangaroo did give an accurate velocity reading when taking in square wave pulses that were 90 degrees apart (via a function generator), this seems to be the most likely culprit. It is not obvious though why this is the case. Possibly, the Kangaroo calculates speed by measuring time between each quadrature transition. Thus, variable phases between transitions could lead to inaccurate velocity estimates. On the other hand, the Tiva calculates speed by counting the number of quadrature transitions that occur in a specified time frame (20ms in the project). As a result, the variable phase shifts don't matter. 
Even if the Tiva missed a transition that it would have registered had the channels been 90 degrees apart, it would not make a noticeable difference in the velocity estimate due to the large number of samples already accumulated in that time frame (ex. if the Tiva counted 99 transitions when it should have been 100). For that reason, it was decided to replace the Kangaroos with the Tiva LaunchPads (also because the Delta was using them). Furthermore, it didn't make sense to stick with the Arduino Mega but rather to use another Tiva as the 'master' of the two 'Wheel' Tivas to keep the architecture consistent (and cheaper!). To actually program the Tiva microcontrollers, the Atom text editor was used in conjunction with the GNU Arm Embedded Toolchain so that programs could be flashed to them. In addition, my advisor (Matthew Elwin) provided libraries containing custom Serial communication protocols and basic peripheral initialization functions that he wrote (originally for the Delta robot) for my partner (Aamir Hussain) and me to use while developing our code. For the first couple weeks of the quarter, we both familiarized ourselves with his code as well as his style of programming so that we could program the microcontrollers for the Omni robot in a similar fashion. Additionally, in order to keep communication between the NUC, microcontrollers, and motors straightforward, it was decided to only allow 'upstream' systems to initiate contact. Specifically, the NUC could send commands down to the 'Omni' Tiva which in turn could send commands down to the 'Wheel' Tivas which in turn could send commands to the motors. However, a 'Wheel' Tiva would not be allowed to send data to the 'Omni' Tiva without first being requested to do so by the 'Omni' Tiva. Likewise, the 'Omni' Tiva would not be able to send data to the NUC without the NUC first requesting it. In any event, after becoming familiar with the microcontrollers, I focused on programming the 'Wheel' Tivas - specifically setting up the QEI peripherals mentioned above, creating a library of functions to interface with the Sabertooths (via packetized serial mode described here), and implementing a PID control loop to set wheel speeds. On the other hand, Aamir focused on programming the 'Omni' Tiva - specifically figuring out how to send Serial commands to both 'Wheel' Tivas at the same time as well as implementing functions to convert wheel velocities into twists and vice versa. This will be discussed more in the next couple paragraphs. In the process of determining the number of encoder ticks per wheel revolution, the following points were observed. There are 19 pulses per encoder wheel revolution. Although Superdroid specifies that the gear reduction ratio of the motors is 12:1, it is in fact 12.25:1. Thus, it takes 12.25 encoder wheel revolutions to rotate the motor output shaft just once. The gear reduction ratio from the sprocket on the output motor shaft to the sprocket on the wheel axle is 21:15. Thus, for every 21 rotations of the output motor shaft, the actual wheel of the robot rotates 15 times. The microcontroller uses x4 encoding so each pulse of the encoder is counted as 4 individual ticks (rising edge of Channel A, rising edge of Channel B, falling edge of Channel A, falling edge of Channel B) which increases resolution by a factor of 4.
Therefore, the number of ticks counted by the QEI peripheral on the Tiva that make up one wheel revolution is: $$ \frac{19\,pulses}{1\,encoder\,wheel\,rev} * \frac{12.25\,encoder\,wheel\,rev}{1\,motor\,rev} * \frac{21\,motor\,rev}{15\,wheel\,rev} * \frac{4\,ticks}{1\,pulse}= \frac{1303.4\,ticks}{1\,wheel\,rev} $$ To convert from ticks/rev to the angular velocity units of rad/sec, the following points were considered. The system clock frequency for the Tiva was set to 80 MHz. The QEI peripheral has an associated API (found in the TivaWare Peripheral Driver Library here) that allows the user to count the number of encoder ticks per a given period of time (measured in system clock ticks). Currently, this value is set to 1.6 million. This then leads to the conversion equation below: $$ \frac{x\,ticks}{1.6\,million\,system\,ticks} * \frac{80\,million\,system\,ticks}{1\,sec} * \frac{1\,rev}{1303.4\,ticks} * \frac{2\pi}{1\,rev} = \frac{y\,rad}{sec} $$ Using this equation and PID control, the 'Wheel' Tiva was able to complete its purpose - to get a pair of desired wheel velocities (in rad/sec) from the 'Omni' Tiva, command both motors to achieve those speeds, and send back to the 'Omni' Tiva the current wheel speeds for odometry purposes. Besides for this, the 'Wheel' Tiva was also programmed with functions to change PID values and to save/load them to and from EEPROM.

Figure 6: Mecanum Wheel Kinematics (pg. 519 in Modern Robotics)

Once all four wheel velocities are sent to the 'Omni' Tiva, the next step is to calculate the planar twist of the robot. This can be done using Equation 13.33 from Modern Robotics: $$ V_b = Fu \qquad => \qquad \begin{bmatrix} w_{bz} \\ v_{bx} \\ v_{by} \end{bmatrix} = \frac{r}{4} \begin{bmatrix} \frac{-1}{(l+w)} & \frac{1}{(l+w)} & \frac{1}{(l+w)} & \frac{-1}{(l+w)} \\ 1 & 1 & 1 & 1 \\ -1 & 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix} $$ where \(u\) is the vector of wheel velocities (the numbers in the subscript correspond to the ones in Figure 6), \(F\) represents the transformation matrix (\(l\), \(w\), and \(r\) correspond to the measurements shown in Figure 6 and the radius of the wheels), and \(V_b\) denotes the body twist of the robot. With a little rearranging, the equation above can also convert body twists to wheel velocities as shown in Equation 13.10. $$ u = H(0) V_b \qquad => \qquad \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \\ \end{bmatrix} = \frac{1}{r} \begin{bmatrix} -l-w & 1 & -1 \\l+w & 1 & \;\;\;1 \\l+w & 1 & -1 \\-l-w & 1 & \;\;\;1 \\ \end{bmatrix} \begin{bmatrix} w_{bz} \\ v_{bx} \\ v_{by} \\ \end{bmatrix} $$ where \(H^\dagger(0) = F\). Armed with this equation, the 'Omni' Tiva contains functions allowing the NUC to set a desired twist or get a current twist. By sending the commanded wheel velocities to each 'Wheel' Tiva in a piecemeal manner (switching off sending a few bytes to one, then the other), the 'Omni' Tiva also ensured that all four wheels would be set to their new velocities at the exact same time. Furthermore, it should be noted that the 'Wheel' Tiva's control loop operated at 400 Hz while the 'Omni' Tiva's control loop ran at 100 Hz. As such, for those 3 - 4 cycles of the control loop that the 'Wheel' Tiva did not hear from the 'Omni' Tiva, it continued to do PID control based on the last wheel velocities that were sent. If for some reason, the 'Wheel' Tiva did not hear from the 'Omni' Tiva for more than 100 of its control loop cycles, it would automatically stop the wheels.
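For reference, the two kinematic mappings above are easy to mirror in a few lines of Python; the geometry values below are placeholders rather than the robot's measured l, w and r:

import numpy as np

# Chassis geometry (placeholders): half-length l, half-width w, wheel radius r, in metres
l, w, r = 0.30, 0.25, 0.10

# F maps the wheel speeds u (rad/s, ordered as in Figure 6) to the body twist [wz, vx, vy] (Eq. 13.33)
F = (r / 4.0) * np.array([
    [-1.0/(l + w), 1.0/(l + w), 1.0/(l + w), -1.0/(l + w)],
    [1.0, 1.0, 1.0, 1.0],
    [-1.0, 1.0, -1.0, 1.0],
])

# H0 maps a body twist to the four wheel speeds (Eq. 13.10); F is the pseudoinverse of H0
H0 = (1.0 / r) * np.array([
    [-l - w, 1.0, -1.0],
    [ l + w, 1.0,  1.0],
    [ l + w, 1.0, -1.0],
    [-l - w, 1.0,  1.0],
])

def wheels_to_twist(u):
    return F @ u

def twist_to_wheels(Vb):
    return H0 @ Vb

Vb = np.array([0.0, 0.3, 0.1])       # [wz (rad/s), vx (m/s), vy (m/s)]
u = twist_to_wheels(Vb)
print(u, wheels_to_twist(u))         # the commanded twist round-trips exactly through the wheel speeds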
Figure 7: Inside of the Omni Robot

Just like before, I will explain the construction of the Omni robot from the bottom up. After unpackaging, the aluminum chassis already had holes for the ball bearings (2 per axle), upper deck supports (2 per corner), cover (3 evenly spaced per rib), motor output shaft (1 per motor), motor spacer plate (4 per motor), and oval shaped screwdriver holes (2 per motor). In addition to that, 9 additional holes were drilled towards the front and back of the robot to screw down the Sabertooth driver (4 holes), the 'Wheel' PCB (4 holes) and the 6.3A slow fuse (1 hole). To make it easier to drill these holes in the correct spots, a wooden stencil was laser cut with the correct hole placement. Two more holes were drilled on either side of the robot (in the middle) where ESD drag chains were installed to dissipate static (without them, the robot would accumulate static while driving and shock anyone who touched it). Finally, 5 holes were drilled in the front of the robot to accommodate the Hokuyo laser scanner (3 in the middle to hold a 90 degree aluminum bracket and 1 on either side to hold the U-bolt scanner guard). Regarding hardware, only Imperial hex drive steel socket cap screws were used, although metric ones were installed on the motor spacer plates (since the holes were made specifically for M5 screws). This decision was made so that the builder would only have to use one type of driver (in this case, a set of Imperial Allen wrenches) and because the Delta robot had the same type of screws. Additionally, it would have been difficult to adjust the motor spacer plate with a conventional (ex. Phillips) screw as the drive chain would have been in the way of the screwdriver. The Allen wrench, however, could be maneuvered around the drive chain to tighten or loosen a socket screw easily.

Figure 8: Keyed shaft in wheel axle

To prevent the sprocket on the motor output shaft from slipping, both the shaft and the sprocket were 'D' shaped. Furthermore, all sprockets were installed with set screws that could be tightened to prevent lateral slippage. The sprockets and axle by the wheels, however, were keyed as shown in Figure 8. To cut the drive chain to the proper length, a Dremel with a grinding attachment was used, and the procedure outlined in the video at the bottom of this page was followed. In addition, to connect the B+/B- screw terminals on each Sabertooth to 24V, 18 AWG 2 conductor stranded wire was installed with 15A Anderson connectors at the ends. These connectors were then stuck through a small hole in the back of the cover (between the two switches in Figure 9) to click into their counterparts (the red wire leading to the switch and the black wire leading to the GND rail of the busbar). As can be seen in both Figure 9 and 10, another pair of 15A Anderson connectors linked the input of the isolated DC/DC converter (underneath the 'Omni' Tiva) to 24V as well. From a design perspective, the Anderson connectors were chosen for their ease of use, their ability to withstand tremendous current, and to be consistent with the Delta robot where they were also installed. Adding these connectors also helped make the robot more modular. This was especially helpful in the case of the batteries, which could be detached from the rest of the robot for charging purposes (30A Anderson connectors were used for this purpose).
Figure 9: Back of the Omni Robot Figure 10: Top of the Omni Robot Besides the 6.3A slow fuse by each Sabertooth, there was also a 35A 'system wide' fuse placed between the batteries and the busbar (left part of Figure 9). Additionally, a second switch was placed on the back panel of the robot to power the Delta arm when it will eventually sit on the Omni's upper deck (the first switch obviously powers the Omni robot itself). Lastly, the big red button on the back panel is the Emergency Stop and the object immediately to its right is the Razor IMU. Originally, the IMU was going to be placed at the front of the robot behind the laser scanner, but the magnetic field created by the NUC distorted its magnetometer readings. Fortunately, there was no disturbance observed at the IMU's current location on the back panel. Moving to the front of the robot, the black square object in Figure 10 represents the Intel NUC computer. Right next to it is a circular hole that allows for the two Ethernet cables originating from the 'Wheel' PCBs to connect to the 'Omni' PCB. Finally, next to that is the DC/DC converter with the 'Omni' PCB on top, held in place by standoffs. It should be noted that the NUC, DC/DC converter, and 'Omni' PCB are screwed down to a transparent 1/4 inch thick piece of acrylic. This modular 'front deck' was laser cut with all the holes in the correct places to hold the items mentioned above. In turn, the acrylic piece was fastened to the aluminum cover in four places, two of which went through the front rib shown in Figure 7 (hence the two holes shown there). Finally, a bracket was placed on top of the batteries to prevent them from shifting and the upper deck was laid snugly on the four corner supports. Single Robot Control Now that the 'Omni' Tiva is capable of sending and receiving individual twists, it was time to create a ROS node to automate the process (initially, this meant commanding a hard coded twist at 100 Hz). Specifically, the node would be responsible for sending the desired twist to the 'Omni' Tiva, receiving the observed twist from the 'Omni' Tiva, and calculating the odometry of the robot. To do that, Equations 13.35, 13.36 and a modified version of 13.33 from Modern Robotics were used. It is important to note, though, that the equations assume that the wheel speeds remain constant between one observed twist and the next (approximately 10ms in this case). In Equation 13.33, the only change is to multiply the right hand side by \(\Delta t\) so that the body twist \(V_b\) is integrated for the right amount of time giving \(V_b = Fu \Delta t\). Next, the difference in coordinates (\(\Delta q_b\)) of the robot's new pose relative to its initial pose (before integration) can be found with Equation 13.35. If \(w_{bz} = 0\), the change in pose is straightforward. $$ if \; w_{bz}=0, \qquad \Delta q_b = \begin{bmatrix} \Delta \phi _b \\ \Delta x_b \\ \Delta y_b \\ \end{bmatrix} = \begin{bmatrix} 0 \\ v_{bx} \\ v_{by} \\ \end{bmatrix} $$ However, if \(w_{bz}\) is nonzero, then the integration must also take into account the robot's rotation as it calculates \(\Delta x_b\) and \(\Delta y_b\).
$$ if \; w_{bz} \neq 0, \qquad \Delta q_b = \begin{bmatrix} \Delta \phi _b \\ \Delta x_b \\ \Delta y_b \\ \end{bmatrix} = \begin{bmatrix} w_{bz} \\ (v_{bx}\sin w_{bz} + v_{by}(\cos w_{bz} - 1))/w_{bz} \\ (v_{by}\sin w_{bz} + v_{bx}(1-\cos w_{bz}))/w_{bz} \\ \end{bmatrix} $$ At this point, we have calculated the change in pose relative to the robot's initial position before integration. However, even if \(w_{bz}=0\), this change in configuration must be transformed into the world frame {\(s\)} so that it can be added to the current odometry estimate. This can be done as follows. $$ \Delta q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos \phi _k & - \sin \phi _k \\ 0 & \sin \phi _k & \cos \phi _k \\ \end{bmatrix} \Delta q_b $$ where \(\phi _k\) is the previous chassis angle relative to the world frame {\(s\)}. Finally, the new odometry estimate can be computed. $$ q_{k+1} = q_k + \Delta q $$ Figure 11: PS3 Controller With that out of the way, the next step was to send variable twists to the robot. To do that, a Bluetooth enabled PS3 controller (Figure 11) was used. Specifically, the left stick manipulated sideways motion, the right stick manipulated the forward/backward motion and the R2/L2 buttons manipulated clockwise/counterclockwise motion. Besides being a logical user interface, another reason these controls were used was that they are all analog (values ranging from -1 to 1). For this to work with the robot, a new node was created to subscribe to the PS3 messages, stick the correct analog values into a twist message, and send the message to the main node (called 'omni_node') where it would then be commanded to the robot's wheels. While this worked, it sometimes made the robot move in a jerky manner, especially if it was commanded to go from a total standstill to 0.5 m/s in one command (or vice versa). To compensate for this, a 'twist filter' was implemented that would allow the robot to slowly build up or decelerate to the desired speed. It would do this by first normalizing the desired twist if one of its values exceeded a velocity limit. Then, it would calculate and send out a new twist based on a user defined acceleration limit. To incorporate this step, the PS3 node would publish a raw twist to the 'raw' filter topic, receive the updated twist from the 'smooth' filter topic, and then publish the twist (using a custom message as will be explained later) to the 'omni_node'. Figure 12: Rviz Demo Until now, I have been a bit vague as to which nodes are run on which computers so as not to confuse the reader. However, this section will clear that up. First, all three robot NUCs are wirelessly connected to a LAN called 'Boeing' (without WAN). A 'master' computer is connected to this network as well, although this can be done via Ethernet so that the 'master' retains access to the Internet. The responsibilities of the 'master' are to launch the PS3 node (including the twist filter), the Rviz visualization tool (demo shown in Figure 12), and the 'omni client' node (which provides a menu to the user for robot control) on itself. It is also responsible for launching the 'omni node' and sensor related nodes on each of the robot computers (it does this over the network using the 'ssh' tool and preset static IP addresses). Thus, when a user presses a button on the PS3 controller, it is transmitted over Bluetooth to the 'master' computer. After processing the twist via the twist filter, the PS3 node publishes a custom message containing the twist.
This is then received by each robot's 'omni node' over the network. The custom message specifies an 'id' which tells the 'omni node' which robot should act upon the twist. In this manner, all three robots (or two, or just one) could be controlled with the same twist message. To switch between robot 'ids', the geometrically shaped buttons on the right side of the controller can be toggled. Regarding odometry verification, there were two methods that were implemented. One method relied on using the 'overhead_mobile_tracker' package developed by Jarvis Schultz. In this case, a camera was placed by the ceiling of the lab and an AR tag was taped onto one of the robots. As the robot moved, the tracker node would publish the tag's odometry, which could then be compared with the odometry calculated from the wheel encoders. A Python script was developed by Aamir to do just that, and an analysis of the results can be seen on his post. In the second approach, I compared the odometry from the encoders against that produced by the laser scanner and IMU. This was done by driving the robot an exact distance and comparing the observed odometry values to that distance. A complete write-up of these results including my sensor fusion approach (using the 'robot_localization' package) can be found in the OMNI_SENSOR_SETUP.md file within the 'Boeing' repository. Build and runtime instructions for the Omni robots can be found there as well. Figure 13: Rigid body motion Formation Control The last part of this project was to do formation control of the three robots. Essentially, this means that the user should be able to command a twist to a 'pivot' point, and the robots should move as if they are rigidly attached to that point. To accomplish this, consider the diagram in Figure 13. Ignore the drawn body frame {\(b\)} and assume that \(x_b\) points up and \(y_b\) points left. Also, assume the body frame is at the centroid of the robot as would be the case for a four-wheel mecanum robot. Finally, note that \(r\) represents the 'pivot' point. For formation control to work, the following must be given: (1) The position of \(r\) in the {\(s\)} frame \(\Rightarrow\) \(r_\theta\), \(r_x\), and \(r_y\). This could be an arbitrary point provided by the user from the 'omni client' menu or the centroid calculated from the robot poses. To be consistent with the picture, we will initialize \(r\) to \([0,2,-1]\) (format of \(\theta\), \(x\), \(y\)). (2) The commanded twist \(V_r\) with respect to the 'pivot' frame. This is generated from manipulating the buttons on the PS3 controller. For the sake of this discussion, \(V_r = [\pi/6,0.5,0.25]\). (3) The pose of all robots with respect to the {\(s\)} frame. This comes from the odometry calculated for each robot. To fit with the diagram, let's make \(b\) initially \([\pi/2,4,-0.4]\) (format of \(\theta\), \(x\), \(y\)). In \(V_r\) above, \(w_r = \pi/6\), which means that the pivot is rotating counterclockwise at a rate of \(\pi/6\) rad/s. From this, we can infer that since \(r\) is rotating at \(\pi/6\) rad/s, any other point on this virtual rigid body is rotating at this angular velocity as well. Thus, \(w_{bz} = \pi/6\), so if this value is not seen in the final body twist, there must be an issue.
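To make the math in this section concrete, here is a minimal Python sketch of three pieces of the logic discussed above: the odometry update, the twist filter, and the pivot-to-robot twist mapping that the next paragraphs derive by hand. The function and variable names are mine and are only meant for illustration; the real nodes are structured differently.

```python
import numpy as np

# --- Odometry update (Modern Robotics 13.33-13.36), as described above ---

def update_odometry(q, Vb, dt):
    """One odometry step.

    q  : current pose [phi, x, y] in the world frame {s}
    Vb : observed body twist [w_bz, v_bx, v_by], assumed constant over dt (~10 ms here)
    """
    q = np.asarray(q, dtype=float)
    wbz, vbx, vby = np.asarray(Vb, dtype=float) * dt      # integrate V_b for dt seconds
    if abs(wbz) < 1e-9:                                   # no rotation: straight-line motion
        dqb = np.array([0.0, vbx, vby])
    else:                                                  # account for rotation while translating
        dqb = np.array([
            wbz,
            (vbx * np.sin(wbz) + vby * (np.cos(wbz) - 1.0)) / wbz,
            (vby * np.sin(wbz) + vbx * (1.0 - np.cos(wbz))) / wbz,
        ])
    phi = q[0]                                             # previous chassis angle in {s}
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(phi), -np.sin(phi)],
                    [0.0, np.sin(phi),  np.cos(phi)]])
    return q + rot @ dqb                                   # q_{k+1} = q_k + dq

# --- 'Twist filter': cap the commanded twist, then ramp toward it ---

def filter_twist(current, desired, v_max, a_max, dt):
    current = np.asarray(current, dtype=float)
    desired = np.asarray(desired, dtype=float)
    peak = np.max(np.abs(desired))
    if peak > v_max:                                       # normalize if any component is too fast
        desired = desired * (v_max / peak)
    step = np.clip(desired - current, -a_max * dt, a_max * dt)
    return current + step                                  # limited by the acceleration bound

# --- Formation control: map a pivot twist V_r to a robot body twist V_b ---
# (this is the adjoint-map calculation worked through in the next few paragraphs)

def planar_pose_to_T(theta, x, y):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def adjoint(T):
    R, p = T[:3, :3], T[:3, 3]
    p_hat = np.array([[0.0, -p[2], p[1]],
                      [p[2], 0.0, -p[0]],
                      [-p[1], p[0], 0.0]])
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, 3:] = R
    Ad[3:, :3] = p_hat @ R
    return Ad

def pivot_twist_to_robot_twist(pivot_pose, robot_pose, Vr_planar):
    """pivot_pose, robot_pose: (theta, x, y) in {s}; Vr_planar: [w_r, v_rx, v_ry]."""
    T_br = np.linalg.inv(planar_pose_to_T(*robot_pose)) @ planar_pose_to_T(*pivot_pose)
    Vr = np.array([0.0, 0.0, Vr_planar[0], Vr_planar[1], Vr_planar[2], 0.0])  # pad to 6-vector
    Vb = adjoint(T_br) @ Vr
    return np.array([Vb[2], Vb[3], Vb[4]])                 # back to [w_bz, v_bx, v_by]

print(pivot_twist_to_robot_twist((0.0, 2.0, -1.0), (np.pi / 2, 4.0, -0.4), [np.pi / 6, 0.5, 0.25]))
```

The final print reproduces the example worked through below: with the pivot at \([0, 2, -1]\), the robot at \([\pi/2, 4, -0.4]\), and \(V_r = [\pi/6, 0.5, 0.25]\), the robot's body twist comes out to approximately \([\pi/6, 1.2972, -0.1858]\).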
Moving onwards, we can represent the transformation matrices of the pivot (\(T_{sr}\)) and robot (\(T_{sb}\)) with respect to the {\(s\)} frame as follows: $$ T_{sr} = \begin{bmatrix} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \qquad \qquad T_{sb} = \begin{bmatrix} 0 & -1 & 0 & 4 \\ 1 & 0 & 0 & -0.4 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} $$ Next, we need to find \(T_{br}\), the transformation matrix that represents the pivot frame with respect to the robot. The reason for this will be explained in the next step, but the calculation can be seen below. $$ T_{br} = T^{-1}_{sb} T_{sr} = T_{bs} T_{sr} = \begin{bmatrix} R_{bs}R_{sr} & R_{bs}p_{sr} + p_{bs} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & -0.6 \\ -1 & 0 & 0 & 2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \qquad and \qquad T^{-1} = \begin{bmatrix} R^T & -R^Tp \\ 0 & 1 \\ \end{bmatrix} $$ where \(p\) represents the 3-vector point and \(R\) symbolizes the rotation matrix. With \(T_{br}\), we can then calculate \(V_b\), the twist in the robot frame, using the Adjoint map (note that \(V_r\) must be padded with zeros to make it a 6-vector). $$ V_b = [Ad_{T_{br}}]V_r \qquad where \qquad Ad_T = \begin{bmatrix} R & 0 \\ [p]R & R \\ \end{bmatrix} \qquad and \qquad [p] = \begin{bmatrix} 0 & -p_3 & p_2 \\ p_3 & 0 & -p_1 \\ -p_2 & p_1 & 0 \\ \end{bmatrix} $$ In the last equation above, \([p]\) represents the skew-symmetric form of the point \(p\). When doing the actual calculation for the example above, the result is: $$ V_b = \begin{bmatrix} 0 \\ 0 \\ \pi/6 \\ 1.2972 \\ -0.1858 \\ 0 \\ \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 1 & 0 \\ 0 & 0 & 0.6 & -1 & 0 & 0 \\ 0.6 & -2 & 0 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ \pi/6 \\ 0.5 \\ 0.25 \\ 0 \\ \end{bmatrix} $$ Figure 14: Mathematica Simulation of Formation Control As was expected, \(w_{bz} = \pi/6\). We also see that the robot is expected to travel 1.2972 m/s along its \(x\)-axis and -0.1858 m/s along its \(y\)-axis. A simulation of this example can be seen in Figure 14. The black arrow represents the space {\(s\)} frame while the green arrow signifies the pivot {\(r\)} frame. Note also that the \(x\)-axis is along the tips of each arrow and the robot markers. Additionally, the red marker marks the location of the robot that we just calculated the twist for. When the GIF resets, one can easily see how the relative location of the arrows and the red marker matches the layout described in Figure 13. The other two markers are there just for show and represent two more robots that have different initial conditions. Most of all, it is readily apparent that all entities move as one rigid body. The reader should note, though, that the simulation is run at 2x speed. This project was the joint effort of four people. My focus was on the following: interfacing with the motor encoders & Sabertooths, designing the 'Wheel' PCB, programming the 'Wheel' Tiva, setting up the sensors, and performing sensor fusion. My partner, Aamir Hussain, focused on designing the 'Omni' PCB, programming the 'Omni' Tiva, creating the twist filter, and implementing odometry. Together, we built the physical robots (including wiring, soldering, crimping, drilling, and laser cutting), discussed the theory behind the twist, odometry, and rigid-body-motion calculations, and coded formation control.
William Hunt, a staff engineer at Northwestern, took our PCB schematics and created board files using KiCad. Finally, our advisor, Matthew Elwin, helped with all kinds of debugging and provided us with clear goals to follow. An honorable mention also goes to Jarvis Schultz, the Assistant Program Director, who made time each week to discuss updates on the project and offer suggestions.
What weapon of mass destruction could theoretically vaporize a whole solar system? I'm trying to engineer a scenario where a nation wipes out another's home solar system by vaporizing it into a cloud of gas nearly instantly. This sets off a huge controversy over how it happened and causes extreme chaos within the galaxy over the war crime. What realistic(ish) theoretical weapon could achieve this effect? EDIT: By vaporizing a solar system I meant turning the whole thing into a cloud of gas or plasma... science-fiction weapon-mass-destruction Efialtes $\begingroup$ Gamma ray burst? $\endgroup$ – Canyon Runner Mar 21 '18 at 21:44 $\begingroup$ Starkiller Base $\endgroup$ – Alexander Mar 22 '18 at 0:28 $\begingroup$ How about simply a Nova Bomb. I don't know if it qualifies as realistic, but it got Gene Roddenberry's stamp of approval, and that's pretty darn good =) $\endgroup$ – Cort Ammon Mar 22 '18 at 5:26 $\begingroup$ What do you mean by "nearly instantly"? In particular, would something that takes 2 hours to vaporize the star and all major planets be sufficiently fast to qualify? (2 light-hours being the approximate distance between our sun and Uranus) $\endgroup$ – Ethan Kaminski Mar 22 '18 at 8:25 $\begingroup$ Whatever's in the next Star Wars movie, assuming they keep on the "death star but bigger" trajectory. $\endgroup$ – AJFaraday Mar 22 '18 at 9:39 A Jupiter-sized mass of antimatter Based on some quick Googling: If 1 kg of antimatter came into contact with 1 kg of ordinary matter, they would annihilate with 1.8 × 10^17 joules of energy. Jupiter has a mass of around 1.9 × 10^27 kg. A supernova can release as much as 10^44 joules of energy. Based on these numbers, if you sent a lump of antimatter with the mass of Jupiter on a collision course with the sun, it would annihilate with 3.4 × 10^44 joules of energy, or more energy than three supernovae. This should be more than enough to vaporize the entire solar system, and probably a few neighboring systems as well. plasticinsect $\begingroup$ Or a weapon that converts matter to antimatter with some technobabble ("reversing the quantum polarity"). As matter and antimatter are "identical" except spin, one can come up with nice technobabble why this works with little energy. Explode that somewhere close to the core of the star and voila, achieved. $\endgroup$ – TomTom Mar 22 '18 at 15:58 $\begingroup$ I would be remiss if I didn't mention that the supernova idea was used in the now Legends universe of Star Wars. The Sun Crusher would launch "energy resonance torpedoes" that would cause a star to go supernova, even low mass ones. Bonus points that it could also survive it. starwars.wikia.com/wiki/Sun_Crusher $\endgroup$ – bubbajake00 Mar 22 '18 at 17:34 $\begingroup$ @TomTom You should make that an answer. Such a weapon could work on many different scales - convert a jupiter-sized portion of the sun to destroy a whole solar system, or convert a few thousand atoms to kill a mosquito. (or I suppose you could use it as an unlimited supply of free, clean energy, but that's boring) $\endgroup$ – plasticinsect Mar 22 '18 at 21:12 $\begingroup$ "more energy than three supernovae" – congratulations, you've managed to defeat the rule of thumb for estimating supernova-related numbers! Factor 3 is an incredibly close match, at that – normally when juggling such vast numbers, different scenarios will give energies in nowhere near the same ballparks.
$\endgroup$ – leftaroundabout Mar 22 '18 at 22:41 $\begingroup$ @TomTom 'As matter and antimatter are "identical" except spin' - Absolutely not. You must invert all of the quantum numbers. In particular electric charge, color charge, weak isospin (I think). OTOH, spin does not need to be balanced (an electron can have spin +1/2 or -1/2 and can annihilate with an antielectron of spin +1/2 or -1/2; the resulting photon pair will carry the spin sum). $\endgroup$ – David Tonhofer Mar 23 '18 at 22:12 Solar systems are big and hard to vaporize. Fortunately, when we're trying to do so, we can basically ignore all the planets and moons. On a relative scale, planets are easy to vaporize when compared to vaporizing the star at the center of the system. What's difficult about this is that a star has a lot of gravity. The gravitational binding energy of the Sun is $2.276\cdot10^{41} J$. That's a lot of energy, that that's what you'd need to convert a nice tidy ball of fusion supplies and byproducts into something more resembling an expanding cloud of gas. For that, we turn to one of my favorite Wikipedia pages: Orders of Magnitude (energy) How much energy? Canyon Runner mentions in a comment that you could accomplish it with a Gamma Ray Burst, which has $5\cdot10^{43} J$. If about 1% of the energy of the burst went into your star, that would be enough to add the kinetic energy needed to break the binding energy of the star. How realistic? Well, let's just say that wikipedia includes the phrasing "... gamma-ray bursts (GRBs), [which are] now recognised as the most violent events in the universe." These are not small events. Or you could just set off a supernova next to it. That outputs $10^{44} J$ (aka 1 foe). 1/10th of 1% of a supernova would be enough to do the trick. However, we do have to question whether the process of hauling a large star nearby and then somehow causing it to go nova qualifies as instantaneous. In all senses, you will find whatever weapon that gets used will not be realistic(ish). The weapon has to emit that much energy, and its simply hard to make devices that have energies on those scales. But if you like, you can use that wikipedia page to see what sorts of events might be applicable. One of them might be "realistic(ish)" within the scope of your story. As for the rest of the galaxy, war crimes may be difficult to level against a foe so dangerous that they wield this kind of power on a whim. Cort AmmonCort Ammon $\begingroup$ I did not know a foe was a unit. It's too bad its basis is an imperial unit. $\endgroup$ – Samuel Mar 22 '18 at 3:38 $\begingroup$ One thing to note: instantaneous is a very relative term (quite literally). The destruction from a supernova would propagate at the speed of light, which is to all intents and purposes as instantaneous as you can get. If you can disguise the setup and aiming of your weapon as something benign (of course we'll fit star-lifting tech to your nearest suitable neighbor for you!) then I'd say supernovae fits the bill. $\endgroup$ – Joe Bloggs Mar 22 '18 at 8:13 $\begingroup$ You don't really need that "[sic]" on your wikipedia quote. "Recognise" is just as correct as "armour". Which is to say, they're common UK-English spellings, as opposed to the US-English "recognize" and "armor". $\endgroup$ – Dave Sherohman Mar 22 '18 at 8:54 $\begingroup$ "hauling a large star nearby and then somehow causing it to go nova" - why would you have to haul another star nearby? 
If you've got a way of making stars go nova, just apply it to the one already in the solar system you want to destroy. $\endgroup$ – Tom Carpenter Mar 22 '18 at 10:11 $\begingroup$ @DaveSherohman Ahh, thanks. I didn't know. My spelling is laughable at best, so when my browser put the red squigglies under the word, I just did the best I could. I've edited it accordingly. $\endgroup$ – Cort Ammon Mar 22 '18 at 15:03 In terms of technological weapons with a sort of realistic hand wave, you could build a Dyson swarm around a star and use it to power a Nicoll-Dyson beam. Nicoll-Dyson beam in operation The energy of the beam would be sufficient to vapourize the planets and small bodies orbiting the star, and if focused on the star for a sufficient length of time, could provide enough energy to essentially "evaporate" the star as well. It is also possible to accelerate the rate of fusion reactions by dumping energy into the star, and essentially creating a flare star or perhaps even a false Nova, energized from without. The downside of this is the beam propagates at the speed of light. A Nicoll-Dyson beam launched from Tabby's star today would vapourize the Earth in @1700 years from now. The energy could also be used indirectly to propel a swarm of RKKV's at the Solar system, with each one capable of adjusting its flight path to impact a target, or if no target was available to strike the star. From a political POV, it s difficult to imagine what sorts of conflicts would last literally tens of thousands of years (know where the enemy is, locate the target, fire the beam, wait for the effects to be known i.e. another 1700 years to see the results on the Solar System, then relay, realm and fire again.....). And of course, since space is 3 dimensional, you are shooting at one planet while their allies off the plane of the ecliptic are shooting at you from a totally different direction... ThucydidesThucydides $\begingroup$ The beam will not be very effective at evaporating the star. Basically you are at best doubling the luminosity, but the star will just reach a new hydrostatic equilibrium that is a bit fluffier. To actually affect the fusion you need to reach the core, and that is going to be hard given the opacity of plasma. Nicoll-Dyson beams are great against planets and other condensed matter, but not stars or moveable objects. $\endgroup$ – Anders Sandberg Mar 21 '18 at 23:15 $\begingroup$ Interesting what-if XKCD article..what-if.xkcd.com/141 $\endgroup$ – AerusDar Mar 22 '18 at 5:17 $\begingroup$ It would be especially sad if FTL is later discovered, because there would be attacks incoming thousands of years after a peace treaty was made after a war long forgotten. $\endgroup$ – vsz Mar 22 '18 at 7:32 $\begingroup$ Youtube -> pause video at start of relevant section -> Share -> "start at (time here)" -> link that cues you to relevant section. TRY IT. Really. $\endgroup$ – Harper Mar 22 '18 at 17:35 $\begingroup$ This would not work from a system light years away. Focusing the beam within the Dyson sphere would mean that the uncertainty of the location of the photons would have an upper bound of the diameter of the sphere, but to get the beam several light years away would require the uncertainty in the direction of the beam be minuscule. This violates the Uncertainty Principle. Also note that this would increase the temperature of the target to at most the surface temperature of the star, which would kill any humans but not necessarily "vaporize", certainly not instantaneously. 
$\endgroup$ – Acccumulation Mar 23 '18 at 16:41 In Children of The Lens by E.E. Smith (1950), The Galactic Patrol fired a planet from another universe at the target star. In the other universe, the vector of the planet was greater than our speed of light. This sheerly fabulous handwave allows the attack to contain as much energy as you wish, for it to be a complete surprise, and for the speed of light in our universe to be effectively irrelevant for the attack. Here's what happened: "What happened? Even after the fact none of the observers knew; nor did any except the L3's ever find out. The fuses of all the recorder and analyzer circuits blew at once. Needles jumped instantly to maximum and wrapped themselves around their stops. Charts and ultra-photographic films showed only straight or curved lines running from the origin to and through the limits in zero time." "Ploor's sun became a super-nova. How deeply the intruding thing penetrated, how much of the sun's mass exploded, never was and perhaps never will be determined. The violence of the explosion was such, however, that Klovian astronomers reported--a few years later--that it was radiating energy at the rate of some five hundred and fifty million suns." Of course, this attack was also the penultimate battle of the war - no defense against such an attack was conceivable, so the Galactic Patrol had to finish the war within days after first use - before their enemy discerned the principles used, engineered the mechanisms, and began using this game-changer themselves. $\begingroup$ Thanks for the complete reference and quote! When I saw the question, I immediately thought Lensman, but wouldn't have been able to include this much detail had I answered. $\endgroup$ – Dave Sherohman Mar 22 '18 at 8:57 $\begingroup$ That's brilliant. $\endgroup$ – user1876058 Mar 25 '18 at 15:38 $\begingroup$ +1, because that is still one of the ultimate weapons of mass destructions in scifi and mentioning it deserves recognition. Also anything more powerful than this would probably cause "collateral damage". $\endgroup$ – Ville Niemi Mar 26 '18 at 8:09 $\begingroup$ There's a trope named for this series: A Lensman arms race. I love the series to bits, but in the space of 6 books (the 7th is a side story) they went from lasers and conventional explosives to flinging around faster than light antimatter planets and psychic attacks on a galactic scale. His science isn't actually terrible, it didn't fully anticipate relativity or computer science amongst other things. $\endgroup$ – Kaithar Mar 27 '18 at 18:41 $\begingroup$ Actually, the faster-than-light stuff is interesting in theory. The idea is that the inertialess drive "suspends" momentum and restores it the instant you turn it off, absolute conservation of momentum. And the other theory is something along the lines that the laws of physics are local to a given universe and in the other universe the base frame of reference is speed of light different from home. Combine those two rules, and pretend you don't believe in relativity, and you get planet sized objects exceeding c. It's hard scifi, just using outdated science. $\endgroup$ – Kaithar Mar 27 '18 at 19:02 You will need a portal gun: ... Or any other thing capable of building stable wormholes on demand. One mouth ot the hole goes on a planet in the solar system you wish to cook. The other gets sent towards TON 618. TON 618 is a black hole with a mass estimated at 66 billion times that of the sun. 
At that tonnage, the notion of margin of error becomes ridiculous. Anyway, one of Tony's most striking features is its accretion disc. Its temperature is on the order of dozens of millions of K. Yes, millions. Compare with the Sun's surface temperature of ~5,770 K. To give you an idea of how hot that is... I'll just quote the wiki: The surrounding galaxy is not visible from Earth, because the quasar itself outshines it. With an absolute magnitude of −30.7, it shines with a luminosity of 4×10^40 watts, or as brilliantly as 140 trillion Suns, making it one of the brightest objects in the Universe. So there you have it. Once the mouths of the portals/wormhole connect the target solar system to TON 618's accretion disc, you get an expletive amount of energy coming towards the target, as well as enough gravity to shred the star into pieces. Renan $\begingroup$ What happens if you use the portal gun so the wormhole connects "north and south poles" of TON 618? $\endgroup$ – Felipe Pereira Mar 24 '18 at 1:29 $\begingroup$ @FelipePereira you waste a couple portal mouths. $\endgroup$ – Renan Mar 25 '18 at 19:38 $\begingroup$ +1: There are a couple of creative uses of Portal guns that would destroy a planet, I came here to post one. I like that this also nukes the star, I wouldn't have thought of that TBH :) $\endgroup$ – Binary Worrier Mar 26 '18 at 12:00 $\begingroup$ It would take very long unless the portal is extremely large, though, and evacuation or other countermeasures could be applied before: astrorhysy.blogspot.fr/p/q-a.html#isstargateadocumentary $\endgroup$ – Eth Mar 27 '18 at 18:07 A controlled False Vacuum over the area This is not supposed to be possible, but let's suppose that your engineers invented a weapon capable of producing a false vacuum and controlling its expansion. Basically, a way to erase from existence a part of the universe cleanly and in the most horrible way possible. This would cause controversy. Why? For the same reason that, in today's world, people fear that an uncontrolled black hole would suddenly pop out of a scientific experiment and delete Earth. Imagine that your weapon of mass destruction fails, and the false vacuum continues its way through the designated area. That would be really bad for the WHOLE universe. But also for another reason: because the area where the weapon has been used is empty afterward. Which means it has some impact on the other galaxies around! Imagine all the orbits changing, or the trajectory of comets taking different destinations. The calculations of all your galactic empire become obsolete, and they have to be done again. Here is a cool video that explains it further: https://www.youtube.com/watch?v=ijFm6DxNVyI Sasugasm $\begingroup$ Interesting, and I up-voted it, but doesn't satisfy "by vaporizing it into a cloud of gas". $\endgroup$ – RonJohn Mar 22 '18 at 3:06 $\begingroup$ @RonJohn I think it could be extended into vaporizing into a cloud of gas for two reasons. One, we aren't entirely sure what happens when a false vacuum falls to a lower potential. The other is that, if you drop the base energy level like that, you may be able to rob that region of spacetime to provide the energy needed to vaporize the star akin to my answer. I think that qualifies as utterly insidious, because you leave behind a region of lower potential that sucks up energy until it equalizes. You're literally vaporizing a star at the expense of your children's future energy.
$\endgroup$ – Cort Ammon Mar 22 '18 at 4:39 $\begingroup$ This is backward. The fear of vacuum bubbles is that our universe is the false vacuum, and a bubble of true vacuum would unravel it. $\endgroup$ – Kevin Krumwiede Mar 22 '18 at 5:08 $\begingroup$ This has been handled already in one of qntm's stories: qntm.org/kinetic where an alien civilization activates a rogue false vacuum bubble. $\endgroup$ – vsz Mar 22 '18 at 7:30 $\begingroup$ "That would be really bad for the WHOLE universe". Not really. Only that part that can be reached by a light ray from "ground zero" in some future time. This is already right now less than the whole universe. $\endgroup$ – David Tonhofer Mar 23 '18 at 22:18 Your rogue space nation might be wise to consider nano-disassemblers. These could be manufactured relatively cheaply and could be smuggled (or launched) into the target system relatively easily. Though the effect wouldn't be instant, it would accelerate as the nano-disassemblers produced more of themselves from the matter present in the star system. Though the entire system wouldn't be reduced to gas, the targeted planets and other objects would be reduced to dust. The nano-disassemblers could be programmed to shut off after a calculated time, or perhaps even to dive toward the sun to clean up the evidence. Or if the nano-disassemblers don't shut off, there's a horrible booby trap waiting for anyone foolish enough to investigate. MashtaterMashtater $\begingroup$ While I love this idea, it seems like the little critters would get pretty hot releasing all that energy. $\endgroup$ – MarkHu Mar 22 '18 at 0:51 $\begingroup$ Well, they may not be able to complete the mission if they are still running IPv6... xkcd.com/865 $\endgroup$ – zmerch Mar 23 '18 at 19:32 $\begingroup$ Most of the stuff in planets and stars is pretty stable. Where would the nanobots get all the energy needed to "turn it to dust", and how do they get rid of waste heat (not to mention all the other EM and charged particles flying around)? $\endgroup$ – Luaan Mar 24 '18 at 8:14 $\begingroup$ An example of this can be found in the "Moonseed" novel by Stephen Baxter $\endgroup$ – Gianluca Mar 29 '18 at 6:48 You need a way to make the star go novae. A wormhole adding several solar masses to de star or something converting part of the core to antimatter. The resulting blast should vaporise the star system in less than a day. Vinicius Zolin De JesusVinicius Zolin De Jesus $\begingroup$ Well, technically the extents of a solar system have a radius of more than one light day. So anything radiating from the star would take 300 to 1000 days to even be seen by the whole system, let alone destroyed by it. $\endgroup$ – Samuel Mar 22 '18 at 3:34 $\begingroup$ Rather than using Antimatter or anything else exotic, you could just use large quantities of Iron to disrupt the fusion rate and make the star artificially 'older' so that it skips to the Supernova stage... $\endgroup$ – Chronocidal Mar 22 '18 at 10:12 $\begingroup$ @chronocidal That requires that the star be very big, Solar mass stars don't go supernovae normally. $\endgroup$ – Vinicius Zolin De Jesus Mar 22 '18 at 19:05 $\begingroup$ @Samuel I'm restraining the concept of star system to the orbit or Neptune (around 250 light-minutes). If I'm not mistaken a supernovae blast go at relativistic speed so one day or two should be more than enough. Radiation pressure should arrive with the light and that should be energetic enough to kill everything in its path. 
$\endgroup$ – Vinicius Zolin De Jesus Mar 22 '18 at 19:11 $\begingroup$ @ViniciusZolinDeJesus Quite mistaken about the speed. The wave front travels at 8 miles per second. While fast, it wouldn't reach Neptune's orbit for just over 11 years. Some molecules would be travelling about five times faster, but anything on the order of days is not doing to happen. $\endgroup$ – Samuel Mar 22 '18 at 19:22 As an addendum to the other answers... nearly instantly That could be a problem. Using our solar system as an example, Neptune orbits at about 250 light minutes, so you have 3 options: Accept destruction is going to take the propagation time from the centre of the geometric area to furthest object or from the star to its furthest child. So destruction could be a minimum of hours. Use a destruction method that uses multiple distributed and synchronised sources. You'll have to handle relativity issues, but at least you can get that near instant from the right method. Figure one source for the star and one source for each planet you particularly don't like. Use a destruction method that ignores relativity and propagates faster than light. Unfortunately that's pretty unrealistic, so you're definitely in to the realm of science fiction. Something that operates directly on spacetime or "subspace" is probably your best bet. For option 1 my vote's for lobbing some antimatter or a singularity in to the star, depending on how much destruction you need. For option 2 my vote's for a nice Von Neumann grey goo, the nano disassembler approach, or some repurposed mining equipment. For option 3, just grab a nice wormhole generator and see how many event horizons you can pack in to the system, and connect the other ends to somewhere fun like a black hole, some kind of powerful energetic source like an engineered gamma burst or a high speed quasar. A nice combination of gravity and destructive radiation. Even if you don't vaporise it completely, you can probably sterilise the system and detonate the star. Certainly the result of something like that would cause you plenty of chaos even if the destructive amount is a little less. KaitharKaithar $\begingroup$ Get near a star that's about to go nova, and use strong magnetic fields (perhaps powered by a magnetar?) to collect and focus a burst of gamma rays at the individual planets in your target solar system (leading the shots and actually taking correct aim would be possible with sufficiently advanced logistics). That way you don't need to cause any proximal stars to go nova (which may not be possible even with exceedingly advanced technology), and you could perform the feat from the safety of another solar system (i.e. no one would notice you). $\endgroup$ – forest Mar 22 '18 at 3:28 $\begingroup$ @forest the tricky bit is that you getting a gamma burst from one star to another is not a quick thing without some kind of spacetime tweak power, and if you have that kind of power you should be able to break trigger at least a partial nova on demand, it should be sufficient to spike the core gravity for just long enough to cause the star to contract decently and then when it's released the radiation pressure will cause, at the very least, a large mass ejection. $\endgroup$ – Kaithar Mar 26 '18 at 15:07 Do you know the difference between matter and antimatter? No. Well, now that you got ultra violet secret 3 security clearance... it is simply the direction of the 4th level quantum spin. Energy content is the same. 
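The propagation times discussed in the answer above are easy to sanity-check. A quick back-of-the-envelope sketch follows; the 30 AU figure for Neptune and the 2,000 AU figure for a generous outer edge of the system are my own round numbers, not values from the thread.

```python
# Light-travel time from a star out to various radii of its system.
AU = 1.496e11      # metres per astronomical unit
C = 2.998e8        # speed of light, m/s

for label, r_au in [("Neptune's orbit (~30 AU)", 30), ("outer system (~2,000 AU)", 2000)]:
    t = r_au * AU / C
    print(f"{label}: {t / 60:.0f} light-minutes, {t / 3600:.1f} hours")
# -> roughly 250 light-minutes (~4.2 h) to Neptune and ~280 hours (~11.5 days) to 2,000 AU,
#    so even a light-speed destruction front is "nearly instant" only for the inner system.
```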
This is very relevant because - you know the work we did with artificial wormholes and hyperjumps? The dead ends. Happens if you create a hyperfield with a very small calibration error, it can invert the 4th level quantum spin on all matter within it. Takes little energy and depending how you set it up, it converts up to nearly 199% of the matter in it's range into antimatter. And yes, this can be weaponized. The warhead is actually quite small - you dont really need to get into hyperjump level energy levels. And you just need to keep the field up for a nanosecond. The biggest joke is that if you want to build a star buster, the shielding to get the warhead into the core of the sun is like 100% as heavy as the warhead. How you think we got rid of the rebels in UAX-249? That sudden sun eruption was... not exactly unexpected given that we are now short one sunbuster projectile. Get the idea? Matter and Antimatter have the same energy level, so come up with some technobabble and voila, free matter to antimatter conversion. Possibly in a large scale. Which makes this anything, from a posibly solar busting projectile to actually free energy - the old problem that antimatter is NOT an energy source (because you need to generate it first,something star trek never cleared up). I used that many many years ago in a SF role playing campaign. Very slow FTL (unless you used up ridiculous amounts of fuel, very ineconomic, or antimatter, very expensive), but there was this military secret that... if you just manipulated a hyperdrive a little, you could acutally make free antimatter. Ultimate WMD. Use a lot more energy and a lot more complex system and you got a 99.999% conversion in a way that you could actually syphon off the antimatter - for use in military starships. How did noone realize it? Well, they imported it from a black ops company they ran (which supposedly had those huge close to the star antimatter generation plants) - and used all the budget saved for not exactly official purposes ;) TomTomTomTom If somebody repurposed an Alcubierre drive to send a black hole into a solar system, that would effectively accomplish the same. Plus, if you make alcubierre drives ubiquitous, this possibility is demonstrated to the whole galaxy's population, who could readily do the same, which makes controversy much bigger than the fate of just one system. As far as vaporization goes, you could use the same mechanism to move the black hole away later, leaving bare space in its wake. If you did it fast enough, it would look like one huge projectile passed through and vaporized it, achieving (at least in appearance) the same effect as having a single weapon vaporize it all. And if you're looking to leave no trace, you have to put everything behind an event horizon somehow, due to the conservations of mass and energy. Other than sending it faster-than-light beyond the observable universe, a black hole is the only way to achieve true tracelessness. $\begingroup$ This would require the existence of negative mass, which has not been proven to exist. Because of that, I don't know if this would even count as theoretically possible. $\endgroup$ – forest Mar 22 '18 at 3:29 $\begingroup$ @forest Isn't that the basically the definition of theoretically possible (that and that some theory predicts it or at least does not completely rule it out)? $\endgroup$ – Graipher Mar 22 '18 at 5:31 $\begingroup$ @Graipher That means it might be theoretically possible. 50% light travel is theoretically possible. 
Using the Higgs field in a way that relies on it having 50 GHz wavelength is not theoretically possible, as its wavelength is not 50 GHz. $\endgroup$ – forest Mar 22 '18 at 5:59 Try firing a strangelet bomb at the star. It won't turn the star itself entirely into a cloud of gas or plasma — there will be a strange star remaining at the center of the ensuing explosion — but the energy released should do the trick for the rest of the solar system. I have no way of actually calculating the energy released, but it should easily be enough to vaporize the entire solar system. Given that any potential drone ship used to deliver the bomb would be obliterated along with the rest of the solar system, the identity of the perpetrator nation would be difficult to determine. The effects of a strangelet bomb will propagate at the speed of light outward from the central star — the solar system is vaporized not by the bomb itself, but by the extremely intense and energetic radiation emitted, which travels at the speed of light. Short of simultaneously bombing different parts of the solar system, this is as close to "nearly instantly" as it is possible to achieve. Of course, the compounding issue for anything that vaporizes a solar system is what will happen to surrounding systems in a few years when the radiation released by the event arrives. If the event is too energetic, it will cause disruption and damage to unintended solar systems nearby, potentially resembling the effects of a nearby supernova. This may or may not be advisable for the perpetrator nation, depending on whether they or their allies occupy any nearby systems. Aidan F. Pierce $\begingroup$ One thing that deserves to be noted: We don't know how quickly the strangelets will expand to convert the rest of the star. It might turn the star into a strange one in a matter of seconds, or it might spend months gradually expanding. Scientists don't know enough about the properties of strangelets to know how quickly that happens. $\endgroup$ – Jarred Allen Mar 22 '18 at 23:33 Your scientists and engineers have developed a material that can (temporarily; it doesn't need to be that long) withstand the temperatures in the star. Make one (or more) large missiles out of it, with a magnetic containment field full of antimatter. Once the outer shell melts and the anti-matter is exposed... BLAMMO!! RonJohn $\begingroup$ Well, surviving in a star for a while isn't that far fetched. It's estimated that if the Earth ends up in the outer shell of the Sun when it expands into a super-giant, it will take a few million years for the Earth to vaporise. But that's not the problem - the problem is how utterly powerless an antimatter missile would be. I didn't do the math, but I wouldn't be surprised if even if you sent an antimatter Earth into the Sun, it wouldn't do much to disrupt the star long-term. Some surface disruption, yes, maybe even some ejection, but nothing "star-breaking". $\endgroup$ – Luaan Mar 24 '18 at 8:18 Kinetic projectile There's literally no limit to the amount of energy you can put into one. It just asymptotically approaches the speed of light as you add more energy. Just keep going until it's enough to destroy the star. The star shrapnel will take care of everything else. Be warned that, at near the speed of light, collisions with tiny objects can seriously dent or destroy your projectile. Even individual atoms in the way may be a problem. Better to send a handful of projectiles to be sure.
Emilio M BumacharEmilio M Bumachar $\begingroup$ I mean, if your projectile is moving fast enough, getting dented or destroyed just means there's now a big cloud of relativistic plasma heading towards your target. $\endgroup$ – timuzhti Apr 13 '18 at 6:23 One possible method would be artificial wormholes. It wasn't specified how much of the target solar system had to be obliterated. Was it just the area containing the planets, or all the way out to the outer comets halfway to the closest stars? If you could generate an artificial wormhole with a mouth a few billion miles wide just ahead of the solar system as it orbits the galactic center, the solar system will enter the mouth of the wormhole and exit from the other mouth of the wormhole far, far away and unable to bother you. Unfortunately, since the orbital speed of the solar system will be just a few hundred kilometers per Earth second and there are only about 31,000,000 seconds in an Earth year, it might take several years for the target solar system to completely fall into the wormhole mouth. So you may have to give the mouth of the wormhole a very fast velocity toward the target solar system in order to envelope it in weeks or days. Or you will have to create many, possibly hundreds, of different artificial wormholes at once and position their mouths much closer to the orbiting planets, moons, dwarf planets, etc. to make them enter the wormhole mouths in minutes, hours, or days. Another technique would be to create wormhole mouths that are only ten, twenty, or thirty percent as wide as the planets and moons they are formed in front of. So the wormhole mouths will transport cylinders of matter from the onrushing planets to distant regions of space, leaving the spherical planets and moons with missing cylinders through their centers if the wormhole mouths are positioned correctly. Spherical planets and moons are spherical because their gravity is stronger than the strength of their materials and pulls them into a spherical shape. So if the centers of the planets and moons are removed, the matter in their outer layers will no longer be supported and will fall inward toward the mathematical centers of the planets and moons. And when the tremendous masses of matter falling at great speeds collide with each other in the central regions of the hollowed out bodies, the collisions should be many orders of magnitude greater than the greatest asteroid impacts in Earth's history. I guess that most of those astronomical bodies will be totally turned into plasma by the energy released by those collapses, plasma that might be moving fast enough to escape from the gravity of the former bodies. Type II supernovas are caused by core collapses in massive stars. So forming a wormhole mouth in front of a moving ordinary star that transports a cylinder shaped section out of the star should cause the star to collapse. That should result in a much, much smaller version of type II supernova. But of course even a tiny insignificant fraction of a type II supernova should be a very devastating event that further ensures the vaporization of all the remaining matter in that solar system and will probably heat up all that plasma to escape velocity. Then, of course, shut off all of the artificial wormholes so no evidence remains. M. A. GoldingM. A. Golding You could destroy all life within a system if you could create a gamma ray burst of sufficient intensity. 
Normally gamma ray bursts occur when stars collapse to form neutron stars or black holes - for example the infall of material into a black hole drives a pair of relativistic jets out along the rotational axis. This can occur when two stars spiral into each other to become an object with too much mass to remain a normal star, and must collapse to degenerate matter or a singularity. Gamma ray bursts are really one of the most deadly phenomena in the universe, and even though one would not notice much change in the appearance of the earth, it would destroy all life and possibly strip off a lot of the atmosphere. Gamma ray bursts are kind of the galactic scale EMP weapons. Gamma ray bursts also can be beamed long distances in one specific direction in a narrow jet. You can imagine purposely using this to focus on a particular star system. However energetically it takes major cataclysms to create them, e.g. formation of black holes. RobotbugsRobotbugs $\begingroup$ Welcome to WorldBuilding! If you have a moment please take the tour and visit the help center to learn more about the site. Have fun! $\endgroup$ – Secespitus Mar 22 '18 at 16:29 Vibrational energy, and a lot of it! If we're talking about a solar system roughly the size of our own, with the intended effect being effectively crumbling it into gas and dust, the best I can think of would be a local and extraordinarily strong tidal pull from a passing body. This would have to be extremely massive, like a pair of orbiting neutron stars or black holes. The vibration felt by the entire system as they moved in and out of phase could easily crumble any solid matter within it, particularly if the orbit was at an extreme speed. One such system, PSR J1311-3430 (a millisecond pulsar), completes a full orbit every 93 minutes. When both stars are in alignment with the system, it would feel the pull of all present mass at once; whereas when 90° out of alignment, a decent portion of the pull from each star would be canceled by the other. Of course, actually getting your hands on the requisite mass and transporting a couple of neutron stars is no small feat; but it should be doable with some canon scifi technology. Wormholes, mass effects, and such; maybe even some kind of 100 AU wide pulsating artificial gravity beam. In any case, after only a couple of hours (maybe substantially less), nothing living would be left in that system, and as a warcrime-worthy bonus it would likely be an extremely painful death; after a day it would be complete chaos and gaseous debris. Michael Eric OberlinMichael Eric Oberlin Time dilation device. I lifted this scifi concept from Stargate. It works for your purposes and offers narrative possibilities too. http://stargate.wikia.com/wiki/Time_dilation_device The Time dilation device was a device of Asgard design, conceived to artificially change the normal passage of time... Within the bubble, time could be slowed down by a factor of ten to the fourth power. This means that one year within the time dilation field would be 10,000 years for everyone else ... During the Asgard-Replicator war, the Asgard summoned the Replicators to the planet Hala and trapped them in a time dilation field... However, the Replicators managed to stop the time-dilation device and used it to increase time for themselves, thriving and replicating many thousands of times. Time moved faster inside the bubble once the Replicators reverse engineered it. In your scenario this is what is used against the solar system. 
Time moves much faster in the bubble. When the bubble turns off, quintillions of years have passed inside. All that remains is a gas and loose elementary particles - a solar system dying the heat death at the end of time. The detectives can figure out what happened by first realizing that not only has there not been an explosion, absolutely nothing is radioactive in this system any more. They then find evidence of proton decay - something which should not happen in our universe for a long time yet. They realize that all the matter in the system is extremely old. The perpetrators of the event counter that no crime has been committed - from the standpoint within the time dilation field, the worlds went on as normal, and all inhabitants, and their descendants, and their civilizations, and the stars their plants orbited lived out their natural lives. The Outer Limits twist: within the time dilation field, some of the inhabitants figured out what was going on. They could not turn off the time dilation field they were in but they could duplicate a smaller one and esconce themselves inside, slowing the passage of time within to nothing. When the big fast field turns off, the small inner field (later in the story) turns off too. They emerge. They have had hundreds of thousands of their years to improve their technology before turning on their inner field. These beings are very different from who lived there before. $\begingroup$ This can dangerously backfire - it would give the guys inside a lot of time for development, and they might emerge next Tuesday with a massive technological advantage. In fact, it would be a bit surprising if civilizations didn't use the same technology to "uplift" themselves :P Maybe they just don't like the stars going away? $\endgroup$ – Luaan Mar 26 '18 at 7:46 The Lost Fleet series deals with this somewhat, (spoilers): A hyper gate collapse could theoretically result in a super-nova like energy release and annihilate the system. Hyper gates have an extreme amount of energy held in tension, and a chaotic release would be devastating. ScarySpiderScarySpider $\begingroup$ I hope those gates are really far away from the system's star. The gravitational influence of so much energy would be pretty crazy even a light-year away. Though from what I can find, it's really supposed to be nova-scale, not supernova-scale - which is a huge difference. Still not what you'd put on the "edge of the system", though - that wouldn't be good for the planets at all. $\endgroup$ – Luaan Mar 24 '18 at 8:35 $\begingroup$ Id suggest reading the book. A bit dry at times, but good. $\endgroup$ – ScarySpider Mar 27 '18 at 23:49 Stellar muon bomb. Muon catalyzed fusion allows meaningful rates of fusion of deuterium-tritium at room temperature (and lower). Stars are hot enough and dense enough to fuse "normal" hydrogen. (Citation needed?) At room temperature, muons increase the rate of proton-deuteron fusion about 38 orders of magnitude. One might be concerned that there is very little molecular hydrogen in stars. This is both true and false. In Sol, the largest fraction of (fluorescent) molecular hydrogen occurs in the chromosphere, around 150 km above the top of the photosphere. The number density exceeds $10^{13} \,\mathrm{cm}^{-3}$ in a 200 km thick layer over the entire surface of the Sol. (ibid., p. 4. See also the pretty pictures in fig. 2 of an associated paper showing the complicated distribution of molecular hydrogen in the chromosphere.) 
The total number of $H_2$ molecules known to be "on the surface of the sun" is $$ 6 \times 10^{12} \,\mathrm{km}^2 \cdot 200 \,\mathrm{km} \cdot 10^{15} \frac{\mathrm{cm}^3}{\mathrm{km}^3} \cdot 10^{13} \,\mathrm{cm}^{-3} \approx 10^{42} \text{.} $$ (There are about $10^{63}$ atoms in the sun, so this number isn't wildly too big, a good thing to check for such unfamiliarly big numbers. The "real" number of hydrogen molecules could easily be 1000-times larger.) These hydrogen molecules have roughly the mass of Mars's moon Phobos. So even if you were to fuse all of it at once, although the CME would be quite spectacular, the star would hardly notice. Something that is not known is what muons would do to the metallic hydrogen phase at the core of stars. It is plausible that muons would cause the lattice spacing of the molecules to decrease substantially, catalyzing fusion in a manner similar to that at room temperature but, instead of a single muon catalyzing fusion of the two members of a single molecule, a single muon would accelerate fusion over an entire delocalized region of the metal. That is, a sufficiently intense beam of near-light-speed (muon half-life is 2.2 microseconds in their comoving frame) muons could cause the portion of the core they reach to engage in fusion about $10^{30}$-times faster (or more, possibly much more than the $10^{38}$-times for room temperature fusion). The mass of the core is about one-third the mass of Sol, roughly $10^{63}$ hydrogen masses. If you could saturate the entire core, you could induce the Sun to perform all the of the rest of its lifetime hydrogen burning in a few femtoseconds. Typical numbers for the power of the sun are around $10^{26} \,\mathrm{W}$. Increasing by $30$ orders of magnitude for $1 \,\mathrm{fs}$ gives $10^{41} \,\mathrm{J}$, the gravitational binding energy of the sun. So, this process is releasing about the right amount of energy. (I'm not convinced one needs to actually impart escape velocity to all particles. Even one-tenth on average would produce a nebula that would persist for a very long time on human scales.) This would also dump about $10^{15} \mathrm{J}/\mathrm{m}^2$ of energy on the Earth, about the energy of $1 \,\mathrm{Mt}$ of TNT per square meter. The presented cross-section of the Earth is about $10^{13} \,\mathrm{m}^2$, so the initial flash would only carry about $0.1\%$ of the gravitational binding energy of Earth. (However, the rather large uptick in solar wind will ablate the remainder rather quickly.) To be clear, this process is roughly the least energetic process to achieve your stated goal. Only exciting a few percent of the core or only exciting a beam or narrow cone through the star will not produce the desired effect. Each muon has rest mass of $100 \,\mathrm{MeV}/c^2$ and it is likely we need more than $10^{50}$ of them if they are imparted with zero velocity relative to the star. (At room temperature, each can participate in 100-1000 fusions before being captured by an alpha -- at core densities, this number should be higher and delocalization should make it higher still. Until someone builds a high intensity muon gun and fires it at a working fusion reactor, we don't any of these numbers.) However, to excite the entire core at once, we postulated relativistic muons. 
Determining the effect of the heating coming from stopping all those muons exceeds my stamina for this problem, but suffice it to say that each muon dumps a few $100 \,\mathrm{MeV}/c$ of momentum when it is captured, which is a few thousand times the activation energy for fusion in the core. So it would be reasonable to describe this as a heat gun which instantly completes stellar fusion. Equivalently, we could reduce the flux of muons by up to 3 orders of magnitude and still get the desired effect. This still necessitates converting about $10^{-10}$ Sol masses (about $10^{20} \,\mathrm{kg}$) to energy 100% efficiently. This is roughly the mass of a rock having a $300 \,\mathrm{km}$ diameter (roughly the mass of $200$ Death Stars).

What is not known and could make this process much easier: muons are easily produced by the decay of charged pions, pions are easily produced by hadron-hadron collisions (i.e., fusion), and the rate of pion production depends on the energy of the fusion reactions, which we can now control by the intensity of our relativistic muons. So, can we arrange for our muon catalyzed stellar fusion to produce copious quantities of muons? If so, we could use a much smaller rock and/or a more realistic muon production efficiency. Eric Towers

Everyone here is answering your question quite literally, offering you weapon concepts, which is great. Now if what you want is the destruction (regardless of pieces left floating in space) you could take this problem in a different and simpler way. If you know a little about Newton's laws, one of them could help you: Newton's law of universal gravitation. This law states that a particle attracts every other particle in the universe with a force which is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers. Now to speak in simple terms, if Earth (for example) stopped orbiting the sun, it would (fast enough for us to be unable to notice) fall into it because of the attraction between them. That's where I wanted to draw your attention. If you could stop (or at least greatly slow down) the planets' orbits around the sun, they would fall into it and be destroyed. This article explains stuff about the orbiting of the earth around the sun and the consequences if it stopped.

It's thought that the collision that formed the Moon pretty much made the Earth melt all over - and did barely anything to the orbital speed of the Earth (not itself surprising, given that they were gravitationally bound in the first place). Any impact large enough to substantially affect the Earth's orbital speed would have drastic effects on the Earth. In fact, the Earth's gravitational binding energy is only about three orders of magnitude higher than the kinetic energy of its orbit. And it's not like you're bouncing billiard balls - the collision will not be elastic. – Luaan Mar 24 '18 at 8:29

@Luaan Thanks for noticing, I'll remove this part of the advice. If you have any other piece of advice to add, feel free to mention it. – Rolexel Mar 26 '18 at 6:23

I'm going to go for something that's real and might be possible with current technology. You put a space telescope in orbit around the solar system you want to destroy. The space telescope looks closely at all the stars around.
When you see the light of some stars shift because a primordial black hole is between you and that star, you can figure out the distance and location of that primordial black hole. Then, you fly your space telescope close to the black hole. When the space telescope starts to be sucked in by the black hole, its engines turn on and pull it back away. The black hole is slowly being pulled in the direction of the space telescope. Over time, you can pull the black hole onto a trajectory that causes it to eventually hit the sun of the solar system you want to destroy. Once the black hole is attracted by the sun's gravity, it should start to move toward the sun faster and faster. It might take a few weeks, but it's a sneak attack that nobody expects. Although this is a bit of a stretch as far as really being possible, it makes for good science fiction. Russell Hankins

I assume you have FTL. Create a microscopic black hole on a ship that is in hyperspace right before it drops out of it, right in the middle of the star. Due to the radiation pressure and extreme temperatures it generates, it might be enough to send the star's matter flying into the planets, turning them into molten pieces of rock. Cem Kalyoncu

If it "ate" the whole star, it would no longer evaporate quickly. As for sucking all matter in the system... no. That's not how gravity works. You didn't change the mass of the star, so all the objects will continue to orbit it the same way they always did. It would just get very dark (and possibly, with lots of gamma light while the star is consumed). If it's small enough to evaporate faster than it grabs more matter from the star, it wouldn't produce much effect either. Even in the "eat the star" case, I'm not sure how fast that could happen - do you have some calculations to back that up? – Luaan Mar 26 '18 at 7:52

You are right about the gravity, I didn't think about that part. As for evaporation, I think it will not be able to stay stable for long. After all, stars in the Sun's mass range are not able to turn into black holes. Most likely it will give off quite a bit of gamma rays. Still, I don't think any of the planets could survive that event. – Cem Kalyoncu Mar 27 '18 at 9:24

Stars as massive as the Sun can't turn into black holes on their own, but that doesn't mean that Sun-mass black holes aren't stable - in fact, if you made a black hole from the Moon (about 133 um in diameter!), it would be in an almost perfect equilibrium with the cosmic microwave background radiation. In other words, even if you removed all the matter and radiation in the solar system, the tiny black hole would neither grow nor shrink (until the CMB cooled further). Needless to say, the Sun is considerably more massive than the Moon. – Luaan Mar 27 '18 at 18:48

I am not well versed in Hawking radiation, but according to the online calculator, it will take quite a bit: 10^66 years to evaporate. Thus my initial guess is off. I learnt quite a bit about it while writing this comment: the microscopic black hole will not be able to accrete any material due to having higher radiation pressure than its gravitational pull. But I still think the system can be wiped out this way. A careful calculation can help to design a black hole size that can give off quite a lot of radiation and excite the particles in the star to send enough energy to scorch the planets around it.
– Cem Kalyoncu Mar 28 '18 at 21:04

Use iron to make the star die. Iron is unique--when it's created, it begins the countdown to the star's death. With hydrogen, helium, lithium, etc., the star can still use nuclear fusion to create heavier and heavier elements. Iron, however, absorbs energy rather than releasing it when it fuses. It won't fuse into heavier elements, so it decreases the fusion output of the star. A star's life is marked by a balance between gravity and fusion. If fusion decreases, gravity wins, the star collapses and then . . .
A: Goes super/hypernova.
B: Blows off its outer layer and fizzles.
C: Goes supernova and then collapses into a black hole (note: only for massive stars.)
You'd need a lot of iron to do it that quickly, but a planet or two's worth would probably work. FoxElemental

A few other answers have suggested dropping a black hole into the Sun. This would certainly destroy the Sun by consuming it, but there's another, much more spectacular, mechanism at play. And that is accretion. When matter falls into a black hole, it won't fall straight in unless aimed incredibly precisely. Instead, an object dropped into a black hole will more often be shredded by the black hole's tidal forces, then form into a disk of gas orbiting the black hole. The gas's angular momentum keeps it from falling inward right away, but it can bleed off that angular momentum by knocking into particles orbiting just a bit farther outward, transferring momentum and letting the inner edge of the disk creep ever closer to the event horizon. This whole turbulent momentum-exchange process produces heat: a lot of heat. There's a lot of potential energy stored in a mass suspended above a black hole, after all, and when the mass falls in, that energy has to go somewhere. In fact, an accretion disk can convert around 10% to over 40% of the mass of the infalling matter into energy that can radiate away from the hole, far more than the 0.7% attainable by nuclear fusion. Imagine that: a weapon capable of converting 10% of the sun's mass into energy. If Jupiter's mass-energy is comparable to a supernova, then 10% of the sun's... is just insane.

Now, there are a few caveats here. For one thing, it's not instantaneous. Black holes can only consume matter at a certain rate. The accretion disk emits radiation, and that radiation exerts a force on any matter that it hits, pushing it away. When that force equals the force of gravity pulling matter inward, they balance out, and the hole can't eat any faster. Some matter will continue to slip in, as that's what powers the radiation pressure; but the rest of the star will be kept out. This is called the Eddington limit. It increases linearly with the mass of the black hole, which means that bigger black holes can eat faster. And since black holes grow by eating, a black hole dropped into a star will grow exponentially until there's nothing left for it to consume. So you could kick-start the process by using a large black hole, or use a small black hole and let it act as a sort of apocalyptic time bomb.
Over time, the star would grow brighter, larger, and redder as the black hole outstrips the thermonuclear reaction in its core (which will happen around when the hole has absorbed about 0.1% of the star's mass), until the rate at which the hole's consuming matter exceeds the rate at which that matter can heat up and expand, whereupon the star will become smaller, hotter, and ever more luminous until there's nothing left to shield the rest of the solar system from being scoured clean by the incredibly intense radiation from the accretion disk. And I have absolutely no idea what sort of time scale this would take place over. I've made a couple of attempts at the math, but I'm fairly sure I messed it up somewhere.

Another caveat is that much of that radiation may be directed directly out of the plane of the solar system. We might wind up essentially creating a gamma-ray burst. Sometimes, when massive stars collapse into black holes, much of the energy of the supernova is directed into two narrow beams of gamma rays perpendicular to the accretion disk. The same thing might happen when a black hole fully consumes a smaller star, depending on how quickly the star is spinning.

The third caveat is that this won't fully vaporize the solar system: there will for sure be a stellar-mass black hole left behind, and perhaps also some planets. I have no idea whether or not they could survive this kind of event, although their biospheres almost certainly wouldn't. And if the biospheres somehow did survive the blast, there would be no sun afterwards, and the planets would freeze.
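For a very rough sense of the growth rate, here is a Python sketch of the Eddington-limited e-folding time, assuming a standard ~10% radiative efficiency and a purely hypothetical seed mass; it ignores everything messy about accretion inside a star, so treat the outputs as order-of-magnitude only:

import math

# e-folding time for Eddington-limited black hole growth (the Salpeter time).
# Assumes ~10% radiative efficiency; the 1e12 kg seed mass is hypothetical.
G, c, m_p, sigma_T = 6.674e-11, 3.0e8, 1.67e-27, 6.65e-29
eps = 0.1

# L_Edd = 4*pi*G*M*m_p*c/sigma_T and dM/dt = (1-eps)/eps * L_Edd/c^2,
# so the e-folding time is independent of the hole's mass:
t_fold_yr = eps / (1 - eps) * sigma_T * c / (4 * math.pi * G * m_p) / 3.15e7
print(f"e-folding time: ~{t_fold_yr:.1e} yr")          # a few times 10^7 yr

e_folds = math.log(2e30 / 1e12)                         # hypothetical seed -> solar mass
print(f"~{e_folds:.0f} e-foldings, ~{e_folds * t_fold_yr:.1e} yr in total")

On these assumptions a small seed really is a slow time bomb rather than an instant kill.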
Why is the velocity (vector) the same value as the speed (scalar)?
Thread starter: ggandy

anuttarasammyak

ggandy said: Summary:: Why is the velocity (vector) the same value as the speed (scalar) in a parabolic motion? I think the displacement is smaller than the distance, so the velocity must be smaller than the speed.

In OP's case of [tex]x=v_xt[/tex] [tex]y=-\frac{1}{2}gt^2[/tex]
Distance = [tex]\int_0^T \sqrt{dx^2+dy^2} = \int_0^T \sqrt{v_x^2+g^2t^2}\, dt[/tex]
Displacement = [tex](v_xT, -\frac{1}{2}gT^2)[/tex]
From your sketch it is obvious that Distance > Displacement, i.e. [tex]\int_0^T \sqrt{v_x^2+g^2t^2}\, dt > \sqrt{v_x^2T^2+\frac{1}{4}g^2T^4}[/tex]
speed = d Distance / dT = [tex]\sqrt{v_x^2+g^2T^2}[/tex]
velocity = (d Displacement_x / dT, d Displacement_y / dT) = [tex](v_x, -gT)[/tex]
We see that speed equals the magnitude of velocity. In general, infinitesimal Distance = magnitude of infinitesimal Displacement: [tex] dl = \sqrt{dx^2+dy^2}[/tex] [tex] \frac{dl}{dt} = \sqrt{(\frac{dx}{dt})^2+(\frac{dy}{dt})^2}[/tex]

Doc Al

Be careful not to confuse average velocity with instantaneous velocity.

Doc Al said: ...
Then in my sketch, does the velocity mean instantaneous velocity?

The calculation for ##\vec{v}## (velocity) is for the instantaneous velocity, yes. The magnitude of the instantaneous velocity is always equal to the instantaneous speed. However, your assessment that "The displacement is smaller than the distance, hence the velocity should be smaller than the speed" is correct only if you replace the word "velocity" with "magnitude of the average velocity" and the word "speed" with "average speed". It isn't correct for the magnitude of the instantaneous velocity and the instantaneous speed, which are always equal, as I said in the previous paragraph.

Delta2 said: ...
Then in my sketch, the value of the velocity from the blue dotted line means the magnitude of the average velocity? And the value of the speed from the red dotted line means the average speed? Hence the magnitude of the average velocity is smaller than the average speed?

The blue line shows the displacement, which you can use to calculate the average velocity. Not sure what you mean. The value that you calculated is the instantaneous velocity after one second. To calculate the average speed of the projectile you'd have to find the total distance traveled (distance, not displacement). That's what you are looking to compare with the average velocity.

Then if I calculate the average speed from the total distance traveled, would it be bigger than the average velocity?

Yes. In your sketch you calculate the instantaneous velocity and instantaneous speed after 1 second. If you calculate the average velocity and the average speed for the duration of this 1 second, you'll find that the average speed is greater than the magnitude of the average velocity.

As a clear and extreme case, say a body starts from the origin at time 0, goes around a path and comes back to the origin at time T. There is no displacement, so Displacement / T = 0 < Length of path / T. Then let us do a Galilean (or Lorentz for SR) transformation so that the starting point is x=0, time=0 and the goal point is x=X, time=T. The displacement is a straight line, meaning inertial, constant-velocity motion. The paths of the body are curves. The line is the shortest one among the curves.
So in this new frame also Displacement < Length of path (= Distance, though I myself am not accustomed to saying so), and Displacement / T < Length of path / T (= average speed).

About the terminology, I confess my coarse usage was: path length is the length that a body moved along in some way (= Distance here); distance is the length of the line connecting the two points, the shortest one (= Displacement here). "A circle is a shape consisting of all points in a plane that are a given distance from a given point, the centre;" in this Wiki definition of a circle, the word distance is used, not displacement. Displacement is rather a small change or difference of molecule coordinates in vibration or elasticity; in many cases a body would come back to its original position.

Thanks a lot. Now I see that I just calculated only the instantaneous speed and velocity.

anuttarasammyak said: "A circle is a shape consisting of all points in a plane that are a given distance from a given point, the centre;" Here distance is used, not displacement. Displacement is rather a small change or difference of coordinates in vibration or elasticity; in many cases a body would come back.

Now I think that the straight line is not the real motion in gravity; the curved line is the real motion in gravity. In my sketch the values of the speed and velocity are just instantaneous values. So if I calculate the lengths of the straight line and the curved line, I would get the results for the average speed and velocity. Then I would know that the average speed is greater than the average velocity.

For your peace of mind you may think that in infinitesimal time all motions are line motions, where speed and (magnitude of) velocity coincide. In post #10, whether the motions are real, natural or artificial, e.g. grab and move, does not matter. But I should not have referred to the Galilean or Lorentz transformation under gravity; that would cause interesting but off-topic discussion for this thread.

I appreciate your kind explanation.

Sure. I was going to give the same example that @anuttarasammyak gave as an extreme case: Imagine a car driving around a racetrack at 100 mph and returning to its starting point: The average speed was 100 mph; the average velocity was zero. Another example: Toss a ball straight up and catch it when it comes down.

Come to think of it, from a purely mathematical viewpoint this thread is about the inequality $$\frac{1}{T}\left\|\int_0^T \mathbf{v}(t)\,dt\right\|\leq\frac{1}{T}\int_0^T\left\|\mathbf{v}(t)\right\|dt,$$ which is just a property of integration and the modulus (norm) ##\|\mathbf{v}\|## of a vector, and which is a generalization of the property $$\left|\int f(x)\,dx\right|\leq\int|f(x)|\,dx$$ for real-valued functions of a real variable.

Leo Liu

Speed is just the magnitude of the velocity vector. In this case the displacement is less than the distance because some components of the velocity vectors throughout the motion could cancel out one another. Bonus: In a circular motion, the displacement is essentially 0, while the distance traveled is not 0. In this case neither the velocity nor the speed is 0 or ##\left\langle 0, 0 \right\rangle##, because the velocity changes direction and the speed is constant.

Thank you for the good examples.

Leo Liu said: ...
I'm sorry, I can't see what you mean. In your second line, "cancel out one another": what are the components that are canceled out? In your last line, "speed is constant": why is the speed constant in gravity?

Sorry if that wasn't clear.
The infinitesimal displacement vectors can be added together, but they have directions, which makes the total displacement less than or equal to the distance. If you want some intuition, you can imagine a triangle with three sides a, b, c, in which sides a and b are infinitesimal displacements (##\lim_{x,y\to 0} a(x,y)##, ##\lim_{x,y\to 0} b(x,y)##, where x and y are the components of the vectors). The side c is the resultant vector of a and b, denoting the total displacement. So we can write ##\lim_{x,y\to 0} c(x,y) = \lim_{x,y\to 0} a(x,y) + \lim_{x,y\to 0} b(x,y)## (again, vectors can be added together). In comparison, the tiny distance traveled is the sum of the length of a and the length of b. An axiom of geometry tells us that the lengths of two sides of a triangle added together must be greater than the length of the other side; therefore, the length of the displacement vector c must be smaller than the distance, which is the length of a and the length of b added together.

Now back to your question--why does the length of the velocity vector equal the value of the speed? It is because both of them are instantaneous (not average) in this case. So it is not legitimate to divide (not rigorously speaking) c by a unit of time to get the instantaneous rate of change; otherwise you are calculating the average velocity over this tiny time interval. If we think a bit further, the speed at a point is equal to the tangent of the distance function (the change in the length of the vector, which is b-a if we assume the object travels from a to b), which is also equal to the length of the velocity vector. The displacement vectors merely give the velocity vector a direction.

In your last line, "speed is constant": why is the speed constant in gravity?
"In Circular Motion (Bonus)"

therefore, the length of the displacement vector c must be greater than the distance which is the length of a and the length of b added together.
You must mean smaller there, not greater. You are right that the triangle inequality is behind all this. The proof of the inequality in my post #16 is based on the triangle inequality for the modulus (norm) of a vector: $$ \|\mathbf{v_1}+\mathbf{v_2}\|\leq\|\mathbf{v_1}\|+\|\mathbf{v_2}\|$$

Thank you for your kind explanation, but I'm sorry, I still can't understand your added explanation. Maybe the reason I can't understand it is that it just consists of text without images. I'm going to try again to understand it, thank you.

If you zoom in on the trajectory of the object, it will become many small line segments. The diagram is simply a representation of a piece of the trajectory. Since the motion has direction, we just consider them as vectors.
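For anyone who wants to check the two claims in this thread numerically, here is a small Python sketch for the projectile case (the values v_x = 20 m/s, g = 9.8 m/s^2, T = 1 s are just an illustrative choice, not the OP's exact numbers):

import numpy as np

# Projectile: x = v_x*t, y = -g*t^2/2. Compare instantaneous speed with the
# magnitude of the velocity, and average speed with |average velocity|.
v_x, g, T = 20.0, 9.8, 1.0
t = np.linspace(0.0, T, 100_001)

vx = np.full_like(t, v_x)
vy = -g * t
speed = np.sqrt(vx**2 + vy**2)                 # instantaneous speed = |velocity|

# path length by trapezoidal integration of the speed
distance = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))
displacement = np.array([v_x * T, -0.5 * g * T**2])

print(speed[-1])                               # instantaneous speed at t = T
print(np.hypot(v_x, g * T))                    # |velocity| at t = T: identical
print(distance / T)                            # average speed ...
print(np.linalg.norm(displacement) / T)        # ... exceeds |average velocity|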
Machine learning and structural analysis of Mycobacterium tuberculosis pan-genome identifies genetic signatures of antibiotic resistance

Erol S. Kavvas ORCID: orcid.org/0000-0003-2525-08181, Edward Catoiu1, Nathan Mih1,2, James T. Yurkovich ORCID: orcid.org/0000-0002-9403-509X1,2, Yara Seif ORCID: orcid.org/0000-0001-8813-56791, Nicholas Dillon3,4, David Heckmann1, Amitesh Anand1, Laurence Yang1, Victor Nizet ORCID: orcid.org/0000-0003-3847-04223,4, Jonathan M. Monk1 & Bernhard O. Palsson ORCID: orcid.org/0000-0003-2357-67851,2,3

Bacterial genetics

Mycobacterium tuberculosis is a serious human pathogen threat exhibiting complex evolution of antimicrobial resistance (AMR). Accordingly, the many publicly available datasets describing its AMR characteristics demand disparate data-type analyses. Here, we develop a reference strain-agnostic computational platform that uses machine learning approaches, complemented by both genetic interaction analysis and 3D structural mutation-mapping, to identify signatures of AMR evolution to 13 antibiotics. This platform is applied to 1595 sequenced strains to yield four key results. First, a pan-genome analysis shows that M. tuberculosis is highly conserved with sequenced variation concentrated in PE/PPE/PGRS genes. Second, the platform corroborates 33 genes known to confer resistance and identifies 24 new genetic signatures of AMR. Third, 97 epistatic interactions across 10 resistance classes are revealed. Fourth, detailed structural analysis of these genes yields mechanistic bases for their selection. The platform can be used to study other human pathogens.

Advancements in genome sequencing technologies have made available thousands of drug-tested M. tuberculosis genomes in public databases. With available sequences expected to surpass 60,000 during the next 5 years (https://www.crypticproject.org/), there is impetus for new quantitative approaches that excel at analyzing massive datasets.
Methods that explicitly account for structure amongst features—such as those found in the field of machine learning—will be essential for addressing this M. tuberculosis data deluge1. To date, most approaches compare M. tuberculosis genome sequences against the H37Rv reference strain in order to identify single nucleotide polymorphisms (SNPs). Following SNP identification, most studies then focus on the subset of SNPs previously determined to be key resistance-determining mutations, specifically those within a handful of genes encoding proteins targeted by drugs2. While such studies have proven to be powerful for diagnostics3 and elucidating mutational steps to AMR2, they do not account for potential genome-wide mutations reflecting positive AMR selection, such as those found to be related to cell wall permeability, efflux pumps, and compensatory mechanisms4. Specific genome-wide functional analyses in M. tuberculosis have shown that ald loss-of-function5, ubiA gain-of-function6, and thyA loss-of-function7 mutations occur in off-target reactions, and confer resistance through modulation of metabolite pools. These results exemplify the complex interplay underlying AMR phenotypes that extends beyond the few genes currently utilized in diagnostic studies. In addition to the limitations of a narrow genetic view, the identification of other types of resistance-conferring mutations, such as deletions8,9, suggests that SNPs are no longer comprehensive in describing the mutational landscape of M. tuberculosis AMR evolution. Here, we apply a reference-agnostic machine learning approach complemented by both genetic interaction and protein structural analysis to deduce the variability in genetic content and AMR of 1595 M. tuberculosis strains. The complete analysis recapitulates known AMR mechanisms and infers specific selection pressures through directed hypotheses.

Characterizing the M. tuberculosis pan-genome

Our first goal was to characterize and understand the gene content of sequenced M. tuberculosis strains. We selected a representative set of 1595 M. tuberculosis strains for which AMR testing data was available from the PATRIC database10 and which come from a wide range of studies (see Supplementary Discussion). Strains were selected for their genetic, geographic, and AMR phenotypic diversity (Supplementary Fig. 1). The geographic diversity of these strains reflects areas heavily burdened by M. tuberculosis (Supplementary Fig. 1a). We constructed a phylogenetic tree for the 1595 strains using a robust set of lineage-defining SNPs11 (Supplementary Fig. 1b and Methods). Finally, strains were selected in order to provide a distribution across commonly used M. tuberculosis treatment regimens (Methods). Of these 1595 strains, 1282 strains had AMR testing data for isoniazid, rifampicin, streptomycin, and ethambutol (Supplementary Fig. 1c) and 946 (59%) were resistant to both isoniazid and rifampicin. Following the selection of strains, we determined the pan-genome (i.e., the union of all genes across the strains) represented by these data and analyzed the distribution of various genomic features (core genes, virulence factors, etc.). The pan-genome analysis described a general theme of high conservation (Supplementary Fig. 2, see Supplementary Discussion for further discussion of the M. tuberculosis pan-genome).

Assessing allele frequencies identifies key AMR genes

Although the M.
tuberculosis pan-genome clusters provide an informative view of the global genetic repertoire within a species, they lack the resolution necessary to discriminate between most AMR phenotypes. To elucidate fine-grained genetic variation indicative of AMR evolution, we separated each pan-genome cluster into groups of exact amino acid sequence variants, or alleles (Supplementary Fig. 3g). In contrast to alignment-based perspectives, the allele-based pan-genome does not reduce non-H37Rv variants to a collection of SNPs, but instead represents variants in their functional protein-coding form. This approach accounts for all protein-coding alleles in the M. tuberculosis pan-genome, thereby representing the extensive strain-to-strain variation observed in bacterial genomes without biasing the variations relative to a single reference genome. We used mutual information (MI)12 as an association metric to identify resistance-determining genes with this newly constructed variant pan-genome and the accompanying AMR dataset (Methods). Importantly, this approach identified primary resistance-conferring genes previously reported in the literature (Fig. 1). In addition to MI, we calculated associations using a chi-squared test and an ANOVA F-test, both of which identified similar sets of key AMR genes (P < 0.005; Bonferroni correction) (Supplementary Data 1). These results suggest that allele frequencies based on exact sequence (i.e., without a metric for genetic distance) are capable of identifying AMR genes, which has previously been shown with k-mer based approaches13,14,15. Identification of key resistance-conferring genes using mutual information. The pairwise mutual information (vertical axis) between the pan-genome alleles and antibiotic resistance was calculated across all possible pairs. The listed genes correspond to the pan-genome alleles that hold the most information about the listed drug's AMR phenotype Machine learning identifies known and new resistance genes Although simple and effective, pairwise association tests (i.e., MI, chi-squared, and ANOVA F-test) do not simultaneously account for multiple alleles because the pairwise calculations consider variants independently of one another. Thus, we tailored a support vector machine (SVM)—a method that inherently accounts for structure amongst the features—to uncover AMR-conferring genes (Methods). Using the allele presence–absence across strains as the features, the SVM identified an additional seven known AMR gene–antibiotic relations absent from the top 40 ranked alleles determined by pairwise associations, including those associated with complex resistance (Table 1). In particular, ubiA, a resistance gene recently found to confer high level resistance to ethambutol6, appeared as a strong signal across the ensemble of SVM simulations—despite not being accounted for in contemporary M. tuberculosis diagnostics (Supplementary Data 2). Table 1 Known AMR genes uncovered by machine learning The SVM method revealed an abundance of AMR-implicated genes involved in metabolic pathways (119/317, 37.5%) (Supplementary Data 2). In fact, the majority of the known AMR determinants are metabolic enzymes (24/33, 73%). We found over 20 genes related to cell wall processes (26/317, 8.2%), which is consistent with previous findings of convergent AMR evolution in M. tuberculosis4. Furthermore, many high-signal AMR genes, such as pbpA and mmpS3, have recently been identified as determinants of intrinsic M. tuberculosis AMR16. 
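A minimal sketch of the pairwise association tests described earlier in this section (mutual information and a chi-squared test with a Bonferroni correction); the allele matrix X and phenotype vector y below are random placeholders rather than the study's pan-genome and drug-test data:

import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.stats import chi2_contingency

# X: binary allele presence/absence (strains x alleles); y: binary AMR phenotype.
# Both are random placeholders standing in for the real data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1282, 500))
y = rng.integers(0, 2, size=1282)

mi = np.array([mutual_info_score(X[:, j], y) for j in range(X.shape[1])])

p_chi2 = np.empty(X.shape[1])
for j in range(X.shape[1]):
    table = [[np.sum((X[:, j] == a) & (y == b)) for b in (0, 1)] for a in (0, 1)]
    p_chi2[j] = chi2_contingency(table)[1]
p_chi2 = np.minimum(p_chi2 * X.shape[1], 1.0)      # Bonferroni correction

top40 = np.argsort(mi)[::-1][:40]                   # top-40 MI-ranked alleles
print(top40[:5], mi[top40[:5]], p_chi2[top40[:5]])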
The full list of identified genes for each drug is provided (Supplementary Data 2). Machine learning uncovers genetic interactions Beyond identifying AMR genes, four key attributes of our ensemble SVM learning approach enable analysis of genetic interactions underlying variable AMR phenotypes (Methods and Supplementary Fig. 4): (1) the weighting of a particular allele in a specific SVM hyperplane scales with its contribution to a particular AMR phenotype, (2) the sign of the weighting (positive or negative) corresponds to the contribution of that allele to the AMR phenotype (i.e., positive weights correspond to resistance while the negative weights correspond to susceptibility), (3) the magnitude and sign of an allele weighting is dependent upon the magnitudes and signs of other alleles within the same hyperplane, and (4) the use of bootstrapping (i.e., randomized subsampling of the population with replacement), and stochastic gradient descent ensures variability in the weights, signs, and set of alleles for each SVM hyperplane. Motivated by attributes 3 and 4, we hypothesized that two genes may interact if the weights, signs, and appearance of their alleles are significantly correlated across the ensemble of SVM hyperplanes (Methods). Therefore, to identify genetic interactions contributing to AMR in M. tuberculosis strains, we constructed a correlation matrix of allele weights across the ensemble of randomized SVM hyperplanes (Supplementary Data 3) and filtered for the top 60 highest gene–gene correlations for eight AMR classifications. The resulting set of gene–gene pairs were interrogated through logistic regression modeling, selecting those gene pairs with statistically significant allele–allele interactions (P < 0.05; Benjamini–Hochberg correction) (Methods and Supplementary Fig. 4). This approach uncovered 94 potential genetic interactions (Supplementary Fig. 4). We can use the evolution of ethambutol resistance as a case study to examine the output of our approach. Epistasis analysis of ethambutol AMR genes implicated interactions between embB, ubiA, and embR; all genes known to contribute to ethambutol resistance6,17,18. Although the embR alleles appeared few times across the multiple SVM simulations, their appearance was highly correlated with alterations in the sign and weight of the ubiA allele (see Supplementary Figure 6). This implies that embR is only a predictive feature within the context of ubiA, which may result from the weak penetrance of embR alleles within M. tuberculosis (Fig. 2a). Logistic regression modeling identified significant allele–allele interactions between ubiA and embR alleles (Supplementary Fig. 4). We investigated these interactions through a co-occurrence table of the genes, where each cell corresponds to the number of resistant strains with both alleles over the total number of strains with both alleles (Fig. 2a). The log odds ratio (LOR)—a measurement of the association of the co-occurrence of both alleles with AMR phenotype—was used to color each cell in the co-occurrence table (Fig. 2, see Methods). We observed that the resistant-dominant ubiA alleles (i.e., those with high positive LOR), 2 and 4, occurred exclusively in the background of nonsusceptible-dominant embR alleles (Fig. 2a). Interestingly, in contrast to embB and ubiA, no embR allele appeared as a clear resistance determinant (Fig. 2a). 
Furthermore, neither embR nor ubiA were significantly associated with ethambutol AMR in pairwise association tests (Table 1 and Supplementary Data 1), showing that our ensemble-based machine learning approach uncovers M. tuberculosis AMR complexity. In addition to these known AMR determinants of ethambutol, our analysis implicated ubiA interactions with Rv3848 in ethambutol resistance (Table 2 and Supplementary Data 4). Interestingly, the resistant-dominant allele of Rv3848 occurs exclusively in the background of the AMR-neutral ubiA allele 3, hinting at an alternative route of high-level ethambutol resistance.

Allele co-occurrence tables of correlated AMR genes. Co-occurrence of epistatic genes identified in a ethambutol and b isoniazid. For the rows on the bottom and on the far right, #R refers to the total number of strains that have the allele and are resistant to the specific drug. Total refers to the total number of strains that have that allele that were tested on that specific drug. Each cell is colored by the log odds ratio (LOR) with respect to the AMR phenotype. The numbers in the bottom right of each allele co-occurrence box describe the number of unique sublineages comprised by the strains with both alleles (Methods). The alleles enclosed by a purple box represent those chosen as features by the support vector machine (SVM). Note that in some cases the rows and columns do not sum up to the total strains due to rare cases when strains lack those alleles (Methods).

Table 2 Newly proposed AMR genes

For identified isoniazid AMR genes, the co-occurrence table highlighted cases where either katG or inhA genes provide the dominant mode of resistance (Fig. 2b). Specifically, the incidence of susceptible katG alleles 1, 2, 5, and 6 (i.e., low LOR) with the resistant inhA alleles 2 and 3 (i.e., high LOR) showed that isoniazid resistance in our dataset arose from either katG or inhA mutations, but not both. Aside from these two highly studied isoniazid AMR determinants, epistatic interactions between katG and oxcA appeared with a high signal and further displayed an interesting co-occurrence relationship with katG (Fig. 2b). This epistatic interaction for oxcA has not been previously described; specifically, alleles 3 and 7 of oxcA appear exclusively in isoniazid-resistant strains. While the AMR phenotypes for the strains containing these alleles may be attributed to the presence of the resistance-dominant katG alleles 3 and 7, as is often offered in studies to "explain resistance", the variation in AMR phenotypes across the different alleles was determined to be significant by the machine learning algorithm and thus motivated further investigation. Co-occurrence tables of epistatic AMR genes are provided for the ten antibiotic classifications (Supplementary Data 5).

Structural analysis suggests drivers of selection

Although the machine learning results agree with experimental literature, it remains unclear whether the uncovered genetic features are either true determinants of AMR or possible artifacts of the statistical learning algorithm. To gain additional insight into whether or not the uncovered alleles are causal in AMR evolution, we mapped the alleles of the 254 AMR genes to protein structures using both experimental crystal structures (20/254) and predicted homology models (50/254) with the ssbio Python package (Methods and Supplementary Data 6)19. Out of the 254 genes, 217 had available protein sequence annotations (i.e., binding domains, secondary structures, etc.).
First, we established a positive control by mapping the alleles of known AMR genes to protein structures and verified that resistance-conferring alleles were located in annotated structural regions that indicate the known mechanism of action (Supplementary Fig. 7). For example, structural mapping of the isoniazid AMR-determinant, inhA, showed that the resistance-dominant alleles of 2 and 3 are located within two NAD-binding domains (Fig. 3a). The incidence of these two alleles in proximal NAD-binding domains is congruent with the experimentally derived mechanism of action, which describes the bactericidal effect of tight binding between the isoniazid-NAD adduct and inhA20,21. Moreover, the resistance-conferring mutations in the NAD-binding domains explains the previously described allele co-occurrence of susceptible katG alleles 1, 2, 5, and 6, with resistant inhA alleles 2 and 3, because the isoniazid-NAD adduct results from binding to katG, which would only occur if the M. tuberculosis strain lacks the resistance-conferring katG mutation that disables the isoniazid binding opportunity. With established confidence through case–controls, we set out to analyze the implicated and uncovered AMR genes. 3D and annotated protein structure mutation maps for identified AMR genes. a 3D protein structures with mapped mutations are shown for inhA, embR, and oxcA. The colors adjacent to and within the structural mutation table correspond to domains and mutations displayed on the protein structure, respectively. b Mutation tables for seven new AMR genes. The colors in the mutation table correspond to the incidence of an annotated structural feature located below the table. The two rows directly below the mutation table are colored according to the log odds ratio between the allele frequency and AMR phenotype. Two AMR classes are shown for Rv3471c and Rv3041c Revisiting the ethambutol case study, we noticed that the susceptible-dominant embR alleles shared an SNP that is 14.6 Å away from the DNA-binding domain (Fig. 3a). Given that embR is a positive regulator of embB22 and that the expression of embB decreases in the presence of ethambutol6, the SNP suggests a relative increase over alleles 1 and 3 in expression of the ethambutol target, embB, through increased DNA binding. For oxcA, the resistance-dominant alleles, 3 and 7, uniquely share mutations at residue 253, which is contained in the thiamin diphosphate-dependent enzyme M-terminal domain and is 4.51 Å proximal to a mutation at residue 224 shared by most alleles (Fig. 3). Notably, oxcA is an essential oxalyl-CoA decarboxylase enzyme that converts toxic oxalyl-CoA to CO2 and formyl-CoA, and plays a role in low pH adaptation in E. coli23. The totality of studies describing the poisonous effect of glyoxylate24, significant acid stress in the macrophage environment, use of CO2 as a carbon source25, and the importance of glyoxylate metabolism in antibiotic tolerance26, all suggest that the uncovered resistance-conferring adaptations in oxcA increase depletion of oxalyl-CoA through increased binding affinity of the thiamin diphosphate cofactor. Without structural models, sequence annotations of structural features enabled the delineation of resistant and susceptible allele mutations to unique structural domains—highlighting an advantage of our exact-variant perspective (Fig. 3b). We provide a list of newly implicated AMR genes along with their associated antibiotic, key mutation frequency, and structural protein features (Table 2). 
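The residue-to-feature distances quoted above (e.g., 14.6 Å and 4.51 Å) come from this kind of structural mapping; a minimal sketch of such a distance calculation with Biopython, where the PDB file name, chain ID, and residue numbers are hypothetical placeholders:

from Bio.PDB import PDBParser

# 'model.pdb', chain 'A', and the residue numbers below are placeholders.
structure = PDBParser(QUIET=True).get_structure("model", "model.pdb")
chain = structure[0]["A"]

def ca_distance(res_i, res_j):
    # Distance in angstroms between the alpha-carbons of two residues;
    # subtracting two Bio.PDB atoms returns their distance.
    return chain[res_i]["CA"] - chain[res_j]["CA"]

# e.g., a mutated residue vs. an annotated domain or binding-site residue
print(ca_distance(253, 224))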
Resistant and susceptible alleles are globally stratified

Since our set of M. tuberculosis strains spans multiple continents, we geographically contextualized our set of SVM-derived AMR genes towards delineating possible country-specific adaptations (Table 2). We observed that resistant and susceptible alleles of the identified AMR genes were stratified amongst specific countries of origin: resistant-dominant alleles were primarily located in Belarus, South Africa, and South Korea, while susceptible alleles were primarily located in India (Table 2). The geographic locality of ethambutol, rifampicin, and isoniazid resistant alleles suggests a genetic basis underlying the successful proliferation of M. tuberculosis in Belarus—a country with the highest prevalence of multidrug resistant (MDR) strains ever recorded27. We observed that the resistant alleles associated with para-aminosalicylic acid (PAS) were based in the high-burden MDR country of South Korea. Since PAS was a key component in the standard MDR treatment regimen of South Korea28, these alleles may represent specific adaptations to post-MDR PAS treatment that could be leveraged to better optimize the regimen. In total, these results portray a geographic basis for M. tuberculosis AMR evolution and demonstrate that our phylogenetically-agnostic machine learning approach is capable of capturing population behavior, which often confounds AMR predictions29,30.

The data deluge on M. tuberculosis and its AMR characteristics is likely to continue unabated until all M. tuberculosis strains isolated from patients are sequenced with associated metadata to guide clinical management. A reference-agnostic computational platform needs to be developed to receive, warehouse and continually analyze this data. We have taken the first step toward developing a computational platform to meet this challenge. The platform was applied to 1595 sequenced strains to yield results in four categories: pan-genome properties, identification of genes conferring antibiotic resistance, their epistatic interactions, and protein structure based mechanistic insights. The pan-genome properties derived by our computational platform reflect the current understanding of M. tuberculosis genetic variability. The other three categories of results are intertwined. We recovered 33 known AMR genes and uncovered an additional 24 novel genetic targets. This demonstrates the platform's ability to generate hypotheses that may expand our knowledge of the genetic basis of AMR in M. tuberculosis. Some of these new targets are surprising (e.g., Rv3471c) and some are understandable (e.g., oxcA), but all provide an impetus for more detailed experimental studies (Supplementary Discussion). The third and fourth categories of results are interconnected and detail intricate features underlying M. tuberculosis AMR evolution. The 74 epistatic interactions revealed are new but in many cases involve known gene partners (e.g., ubiA). In other cases, these new epistatic interactions involve novel gene products (e.g., Rv2090). This novelty, reinforced by structural insights, informs a new line of experimental inquiry (Supplementary Discussion). The larger implications of these intricacies are threefold: (1) genetic background contributes to AMR phenotypic variation, but may be subtle (e.g., embR); (2) high-level resistance mutations are prevalent in off-target genes, such as transmembrane proteins (e.g., Rv3848); and (3) high-level resistance mutations localize to countries with poor M.
tuberculosis management (i.e., Belarus). These features point to the adverse effects of prolonged treatment31. While our framework successfully identifies genetic AMR signatures, there are limitations to our approach that future efforts may expand upon. For one, our platform utilizes prior knowledge of known gene–antibiotic relationships and thus does not provide a means to uniquely deconvolve out an association of a region with a specific drug (Supplementary Discussion). In addition, while our structural analysis provided a foundation for hypothesizing potential evolutionary drivers, it did not provide further support to the causality of an allele. Novel statistical methods may leverage variations in structural features towards supporting causal alleles. Furthermore, our approach lacks the ability to understand systemic relationships connecting the alleles on a mechanistic level, such as interacting changes in biochemical flux. Future efforts may integrate genome-scale models of pathogens towards elucidating and understanding the genetic signatures of antibiotic resistance32. Taken together, the platform presented here meets the pressing need for disparate data-type analysis enabled by rapidly growing data available for M. tuberculosis pathogenesis and AMR. It both recovers known AMR features (i.e., positive control) and reveals new ones. This platform utilizes a unique combination of pan-genomic analysis, machine learning, structural analysis, and geographic contextualization. These data types are likely to become available for all urgent and serious threat human prokaryotic pathogens in the near future. Similar results to those presented here are thus likely to appear on a pathogen-specific basis in the coming years. M. tuberculosis strain dataset The selected set of M. tuberculosis strains are representative of various antimicrobial resistance phenotypes, geographic isolation sites, and genetic diversity. References for the published and unpublished data sets are provided (Supplementary Discussion, Supplementary Data 7). The sequencing data for the TB Antibiotic Resistance Catalog (TB-ARC) projects (Supplementary Data 7) were generated at the Broad institute. Additional information for each of these unpublished projects can be found at the Broad Institute website. All data were acquired from the PATRIC database. M. tuberculosis pan-genome construction and QA/QC We employed QA/QC of the constructed 1595 pan-genome by initially filtering out outlier strains. The initial selection of 1603 strains was reduced to 1595 upon review of both the cluster size distribution and the number of unique clusters across the set of all strains (Supplementary Fig. 3a, b). We found only four strains in the PATRIC database that had either a very low (<2000) or high number (>5500) of clusters. The final selection of 1595 strains has a cluster size distribution between 3900 and 4400, and a reasonable unique cluster distribution where the number of unique clusters did not exceed 160 (note that unique is defined here as being in only one strain). The pan-genome of all 1595 strains was constructed by clustering protein sequences based on their sequence homology using the CD-hit package (v4.6). CD-hit clusters protein sequences based on their sequence identity33. CD-hit clustering was performed with 0.8 threshold for sequence identity and a word length of 5. 
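A sketch of the CD-HIT clustering call implied by these parameters (the file names are placeholders; -c and -n match the stated identity threshold and word length):

import subprocess

# Cluster all strains' protein sequences at 80% identity with word length 5.
# The FASTA input and output prefix are placeholder names.
subprocess.run(
    [
        "cd-hit",
        "-i", "all_strain_proteins.faa",   # concatenated protein FASTA, all strains
        "-o", "pan_genome_clusters",       # representatives + .clstr cluster file
        "-c", "0.8",                       # sequence identity threshold
        "-n", "5",                         # word length
    ],
    check=True,
)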
Pan-genome core and unique cutoff determination

We determined the core and unique pan-genome through sensitivity analysis, plotting the pan-genome core and unique percentages against the cutoff values. The cutoffs were chosen to be at the point where the second derivative of the curve is the largest. The curve represents the change in pan-genome core percentage with respect to changes in the number of strains a gene must be found in to be defined as core (Supplementary Fig. 3c, d).

Phylogenetic tree and categorization of lineages

We created a robust phylogenetic tree of the 1595 strains using SNPs at the core genome. Specifically, we chose a set of 2803 core genes that appeared in at least 1593 strains and included the H37Rv reference strain (83332.12). We used needle34 to align sequences within the 2803 pan-genome clusters (a cluster is representative of a particular locus) to the H37Rv reference allele. We built a binary SNP matrix using all of the SNPs identified from the 2803 genes (21,206 SNPs in total), and then estimated a maximum-likelihood phylogeny using RAxML version 835. The tree was visualized using iTOL36. We used an existing SNP typing scheme11 for categorizing the strains into lineages and sublineages. Specifically, we used a total of 141 SNPs for identifying lineages and sublineages for our 1595 TB strains. These SNPs were previously determined to be sufficient for categorizing lineages11. Of these SNPs, 61 were in nonsynonymous sites and the other 70 were SNPs found in drug resistance genes. These 141 SNPs comprised a total of 74 genes. The presence of these SNPs was then used to categorize the strains into the defined lineages. Of the 1595 strains, 1366 strains were categorized and 229 were uncategorized. The remaining 229 strains were categorized according to their proximity to strains with lineage-defining SNPs, with proximity defined according to our core genome SNP phylogeny. We have included the frequency of lineage variants in order to help readers discern between epistatic alleles and those in tight linkage (Supplementary Data 8). Implicated co-occurring alleles that span different lineages are unlikely to be in tight linkage (i.e., hitchhikers). The numeric subscripts shown in Fig. 2—describing the number of unique sublineages for each allele–allele pair—were determined as the maximum number of unique sublineages at a single branch amongst all lineage/sublineage branches. For example, an allele co-occurrence which has strains in both lineages 1.1 and 1.2 counts as two sublineages. An allele co-occurrence which has strains in lineages 1.1, 1.1.2, 1.1.3, 1.1.3.1, 1.1.3.2, and 1.1.3.3 counts as three sublineages (1.1.3.1, 1.1.3.2, and 1.1.3.3). If an allele co-occurrence has strains in sublineages 4.1, 4.1.2, and 4.1.2.1, then only one sublineage is counted, since the strains can be traced through a single lineage (4.1 to 4.1.2 to 4.1.2.1).

Pan-genome-wide correlation analysis

We performed pairwise association analysis for all alleles in the pan-genome and for the 13 antibiotics to identify key AMR genes. We utilized MI, chi-squared tests, and ANOVA F-tests. MI has many statistical benefits, which include being a nonparametric method that can quantify nonlinear relationships, unlike Pearson's correlation which measures a linear relationship. MI has proven to be a natural and powerful means to equitably quantify statistical associations in large datasets37.
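The sublineage-counting rule described above can be written as a short function (a sketch; the lineage labels are the illustrative examples from the text):

from collections import defaultdict

def max_sublineages(lineages):
    # Maximum number of distinct sublineages hanging off any single parent
    # branch, following the counting rule described above.
    children = defaultdict(set)
    for lin in set(lineages):
        parts = lin.split(".")
        parent = ".".join(parts[:-1]) if len(parts) > 1 else "root"
        children[parent].add(lin)
    return max(len(kids) for kids in children.values())

print(max_sublineages(["1.1", "1.2"]))                                    # 2
print(max_sublineages(["1.1", "1.1.2", "1.1.3",
                       "1.1.3.1", "1.1.3.2", "1.1.3.3"]))                 # 3
print(max_sublineages(["4.1", "4.1.2", "4.1.2.1"]))                       # 1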
The pairwise MI was calculated for each column vector in the unique variant pan-genome with each drug susceptibility vector (Supplementary Fig. 3g). The discrete entropy calculations were carried out using the Non-Parametric Entropy Estimation Toolbox (NPEET, https://github.com/gregversteeg/NPEET). Since both vectors are binary, the naive implementation of discrete entropy estimation used in NPEET is sufficient. The top 40 MI associations for 11 drugs are recorded (Supplementary Data 1). Associations were similarly calculated with chi-squared and ANOVA tests. P values were adjusted using the Bonferroni multiple-hypothesis testing correction. These statistical tests and corrections were implemented using the python package, statsmodels38. The top 40 associations determined by chi-squared and ANOVA F-test were recorded for 10 AMR classifications (Supplementary Data 1).

Allele feature selection through support vector machines

The support vector machine (SVM) attempts to account for all variants together by learning a multidimensional hyperplane that best separates the susceptible and resistant strains. The resulting hyperplane is a function of all exact-variant vectors in the pan-genome. Since the goal is not to predict resistance with high accuracy, but to instead extract key insights from the data, we take a feature selection approach by equipping the linear SVM with an L1-norm penalty and a stochastic gradient descent optimization algorithm using the scikit-learn package. The L1-norm enforces sparsity in the decision function, which is ideal for feature selection. The stochastic gradient descent algorithm, in conjunction with the L1-norm, returns a different set of significant features each run. Since the chosen SVM does not reach the same solution each time, we look at the ensemble of 200 SVM feature selection simulations. Furthermore, we performed bootstrapping by randomly selecting a subpopulation representing 80% of the training data for each SVM simulation. Prior to simulation, we took out the primary resistance-conferring gene of an antibiotic from the machine learning analysis of other antibiotics in order to amplify the signal of other genes—a preprocessing step previously utilized in AMR gene identification studies5 (Supplementary Table 3). For example, all katG alleles were only accounted for as features in the machine learning analysis for isoniazid. Furthermore, we removed all mobile element proteins, PE/PPE/PE-PGRS proteins, transposases, and hypothetical proteins from consideration in the machine learning analysis because they primarily appear in the accessory and unique pan-genome of M. tuberculosis and may confound the results. Finally, we balanced the class weight in the SVM algorithm in order to account for the imbalance of resistant and susceptible strains seen for each drug in our dataset. Features were selected from the SVM based on a threshold value. The value was determined through tenfold cross-validation where the threshold value was optimized through grid search (Supplementary Table 3). The use of bootstrapping in the machine learning algorithm may account for biased subpopulations in the data, which often confounds GWAS analysis for M. tuberculosis29,30.

Filtering of gene sets for epistatic analysis

Leveraging machine learning towards identification of genetic interactions, we constructed a correlation matrix of allele weights across the ensemble of randomized SVM hyperplanes for each antibiotic (Supplementary File 3).
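A minimal sketch of this bootstrapped, L1-penalized linear SVM ensemble using scikit-learn's SGDClassifier (hinge loss); X and y are random placeholders, while the ensemble size and 80% subsampling follow the description above:

import numpy as np
from sklearn.linear_model import SGDClassifier

# X: binary allele presence/absence (strains x alleles); y: AMR phenotype.
# Both are random placeholders standing in for the real pan-genome data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1282, 500)).astype(float)
y = rng.integers(0, 2, size=1282)

weights = []
for i in range(200):                                       # ensemble of SVMs
    idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=True)  # bootstrap
    clf = SGDClassifier(loss="hinge", penalty="l1",
                        class_weight="balanced", random_state=i)
    clf.fit(X[idx], y[idx])
    weights.append(clf.coef_.ravel())

W = np.vstack(weights)                                     # models x alleles
mean_w = W.mean(axis=0)
varying = W.std(axis=0) > 0
corr = np.corrcoef(W[:, varying], rowvar=False)            # allele weight correlations
print(np.argsort(np.abs(mean_w))[::-1][:10])               # top-ranked alleles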
We limited our machine learning analysis to AMR classifications that achieved an average AUC (i.e., average area under the ensemble of receiver operating characteristic curves) greater than 0.80 (Supplementary Fig. 5). We selected the top 100 gene–gene correlations that include genes in the top 25 ranked SVM alleles for each antibiotic. We limited the correlations to those involving the top 25 ranked alleles in order to avoid cases in which low-weighted alleles that appear sparsely alongside other low-weighted alleles produce spuriously significant correlations. The resulting set of gene–gene pairs was then analyzed using a logistic regression model in order to determine statistically significant interactions. The filtering of potential gene–gene pairs prior to classical quantitative epistasis analysis addresses the problem of combinatorial explosion of pairwise interaction terms in conventional techniques. Epistatic analysis with logistic regression models We utilized logistic regression to identify significant epistatic interactions. A logistic regression model was built for each potential gene–gene pair previously determined by the ensemble SVM correlation analysis. The variables of the gene–gene logistic regression model were composed of both allele and allele–allele interaction variables: $$Y \sim \beta_0 + \sum\nolimits_i \beta_i a_i + \sum\nolimits_j \beta_{I+j} b_j + \sum\nolimits_{i,j} \beta_{I+J+k} a_i b_j,$$ where i and j index the alleles for genes a and b, respectively, I and J are the total numbers of alleles for genes a and b, respectively, Y is the binary AMR phenotype, k indexes each unique interaction term aibj, and β is the regression coefficient corresponding to each predictor. The interaction terms were limited to cases in which the two alleles co-occur in at least one strain. The interaction variable was the element-wise product of the two allele presence–absence vectors. In order to account for collinearity among the variables, we applied the following three filtering criteria (note that ai is interchangeable with bj): (1) if the allele ai presence–absence vector is identical to the interaction aibj presence–absence vector, remove the aibj interaction variable from the logistic regression model; (2) if the allele ai presence–absence vector is identical to the allele bj presence–absence vector, remove both allele variables as well as the allele–allele interaction variable aibj; (3) if the allele ai presence–absence vector equals the sum of all interaction variables involving that allele (i.e., aibj for all j), remove the allele variable but keep the interaction variables. We filtered for allele–allele interactions with P value < 0.05 after Benjamini–Hochberg multiple-testing correction. The resulting set of gene–gene interactions encompassing significant allele–allele interactions was portrayed through allele co-occurrence tables (Supplementary Data 5). Logistic regression and statistical tests were implemented using the python package statsmodels38.
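As a concrete illustration of one such pairwise model, the sketch below fits a logistic regression with two allele presence/absence vectors and their interaction using statsmodels. It shows a single allele per gene for simplicity, whereas the study included all alleles of both genes, and the function name and arguments are ours:

```python
import numpy as np
import statsmodels.api as sm

def fit_pairwise_interaction(allele_a, allele_b, phenotype):
    """Fit a logistic regression with two allele terms and their interaction.

    allele_a, allele_b: binary presence/absence vectors (one allele per gene
    for simplicity; the study included all alleles of both genes).
    phenotype: binary AMR phenotype for one antibiotic.
    Returns the fitted statsmodels result; the interaction p-value is the
    quantity that was later Benjamini-Hochberg corrected across pairs.
    """
    allele_a = np.asarray(allele_a, dtype=float)
    allele_b = np.asarray(allele_b, dtype=float)
    interaction = allele_a * allele_b  # element-wise co-occurrence indicator
    X = sm.add_constant(np.column_stack([allele_a, allele_b, interaction]))
    return sm.Logit(np.asarray(phenotype, dtype=float), X).fit(disp=0)
```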
Calculation of log odds ratio in allele co-occurrence tables The odds ratio of each cell in the allele co-occurrence tables was determined as follows: $$\mathrm{OR} = \frac{\mathrm{BR} \times \mathrm{NS}}{\mathrm{NR} \times \mathrm{BS}},$$ where BR is the number of strains that have both alleles and are resistant to the specified antibiotic, NR is the number of strains that do not have both alleles and are resistant to the specified antibiotic, BS is the number of strains that have both alleles and are susceptible to the specified antibiotic, and NS is the number of strains that do not have both alleles and are susceptible to the specified antibiotic. For a single allele, the odds ratio was calculated in the same way, with each variable representing the single-allele case. If any of the four values (BR, BS, NR, and NS) was zero, 0.5 was added to each value in order to ensure a finite value when computing the logarithm of the odds ratio. Missing alleles in allele co-occurrence table counts The lack of specific alleles shown in the allele co-occurrence table is due to strains missing some alleles. For example, embB allele 5 is found in 147 strains but only 144 strains have both embB allele 5 and ubiA allele 2 (Fig. 2). Specifically, the three strains missing the three ubiA alleles are the following PATRIC strains, as described by their genome identifiers: 1423432.3, 1448794.3, and 1448824.3. Searching the PATRIC database for either ubiA or Rv3806c returns 0 hits for these organisms. While it is unlikely that these strains truly lack this allele, such limitations are not due to the analysis but instead result from the selection of strains. These events happen quite rarely and were accounted for in the partitioning of pan-genome portions; the large sample size nevertheless allowed the key genes to be recapitulated. Structural protein analysis of identified AMR genes For identified AMR genes, the ssbio software was used to gain gene-specific, protein sequence- and structure-based information about residue-level changes (SNPs and deletions) present in the M. tuberculosis alleles19. Each AMR gene was mapped to a reference protein sequence file obtained from UniProt39, and sequence-based metadata identifying protein-specific features (e.g., active sites, secondary structures, and mutations in studied wild-type strains) was used to determine the occurrence of allele-specific AMR mutations within the gene feature set (Supplementary Data 6). When available, AMR genes were additionally mapped to experimentally obtained protein structures from the RCSB Protein Data Bank or to homology structures generated using the Iterative Threading ASSEmbly Refinement (I-TASSER) platform40,41. To help elucidate the mechanistic effects of AMR mutations, both the AMR mutations and the residue-level feature set were mapped to these structures and visualized using the NGLview Jupyter notebook plugin42. The structural information was used to calculate distances between each mutation and each annotated protein feature (Supplementary Data 6). The computational platform is provided as a GitHub code repository. All data utilized in this study are publicly available in the PATRIC database. Identifiers for the 1595 genomes are provided in the Supplementary Information (Supplementary Data 7). References for the published and unpublished data sets can be found in the Supplementary Information (Supplementary Data 7).
The sequencing data for the TB Antibiotic Resistance Catalog (TB-ARC) projects (Supplementary Data 7) were generated at the Broad Institute. Additional information for each of these unpublished projects can be found at the Broad Institute website (https://olive.broadinstitute.org/projects/tb_arc). Davis, J. J. et al. Antimicrobial resistance prediction in PATRIC and RAST. Sci. Rep. 6, 27930 (2016). Manson, A. L. et al. Genomic analysis of globally diverse Mycobacterium tuberculosis strains provides insights into the emergence and spread of multidrug resistance. Nat. Genet. 49, 395–402 (2017). Walker, T. M. et al. Whole-genome sequencing for prediction of Mycobacterium tuberculosis drug susceptibility and resistance: a retrospective cohort study. Lancet Infect. Dis. 15, 1193–1202 (2015). Farhat, M. R. et al. Genomic analysis identifies targets of convergent positive selection in drug-resistant Mycobacterium tuberculosis. Nat. Genet. 45, 1183–1189 (2013). Desjardins, C. A. et al. Genomic and functional analyses of Mycobacterium tuberculosis strains implicate ald in d-cycloserine resistance. Nat. Genet. 48, 544–551 (2016). Safi, H. et al. Evolution of high-level ethambutol-resistant tuberculosis through interacting mutations in decaprenylphosphoryl-[beta]-d-arabinose biosynthetic and utilization pathway genes. Nat. Genet. 45, 1190–1197 (2013). Zheng, J. et al. para-Aminosalicylic acid is a prodrug targeting dihydrofolate reductase in Mycobacterium tuberculosis. J. Biol. Chem. 288, 23447–23456 (2013). Moradigaravand, D. et al. dfrA thyA double deletion in para-aminosalicylic acid-resistant Mycobacterium tuberculosis Beijing strains. Antimicrob. Agents Chemother. 60, 3864–3867 (2016). Martinez, E., Holmes, N., Jelfs, P. & Sintchenko, V. Genome sequencing reveals novel deletions associated with secondary resistance to pyrazinamide in MDR Mycobacterium tuberculosis. J. Antimicrob. Chemother. 70, 2511–2514 (2015). Wattam, A. R. et al. PATRIC, the bacterial bioinformatics database and analysis resource. Nucleic Acids Res. 42, D581–D591 (2014). Coll, F. et al. A robust SNP barcode for typing Mycobacterium tuberculosis complex strains. Nat. Commun. 5, 4812 (2014). Shannon, C. E. A mathematical theory of communication. Bell Syst. Tech. J. 27, 623–656 (1948). Earle, S. G. et al. Identifying lineage effects when controlling for population structure improves power in bacterial association studies. Nat. Microbiol. 1, 16041 (2016). Lees, J. A. et al. Sequence element enrichment analysis to determine the genetic basis of bacterial phenotypes. Nat. Commun. 7, 12797 (2016). Jaillard, M. et al. Representing genetic determinants in bacterial GWAS with compacted De Bruijn graphs. Preprint at https://www.biorxiv.org/content/early/2017/03/03/113563 (2017). Xu, W. et al. Chemical genetic interaction profiling reveals determinants of intrinsic antibiotic resistance in Mycobacterium tuberculosis. Antimicrob. Agents Chemother. 61, e01334–17 (2017). Xu, Y., Jia, H., Huang, H., Sun, Z. & Zhang, Z. Mutations found in embCAB, embR, and ubiA genes of ethambutol-sensitive and -resistant Mycobacterium tuberculosis clinical isolates from China. Biomed. Res. Int. 2015, 951706 (2015). Brossier, F. et al. Molecular analysis of the embCAB locus and embR gene involved in ethambutol resistance in clinical isolates of Mycobacterium tuberculosis in France. Antimicrob. Agents Chemother. 59, 4800–4808 (2015). Mih, N. et al. ssbio: a Python framework for structural systems biology.
Bioinformatics 34, 2155–2157 (2018). Rozwarski, D. A., Grant, G. A., Barton, D. H., Jacobs, W. R. Jr & Sacchettini, J. C. Modification of the NADH of the isoniazid target (InhA) from Mycobacterium tuberculosis. Science 279, 98–102 (1998). Rawat, R., Whitty, A. & Tonge, P. J. The isoniazid-NAD adduct is a slow, tight-binding inhibitor of InhA, the Mycobacterium tuberculosis enoyl reductase: adduct affinity and drug resistance. Proc. Natl Acad. Sci. USA 100, 13881–13886 (2003). Sharma, K. et al. Transcriptional control of the mycobacterial embCAB operon by PknH through a regulatory protein, EmbR, in vivo. J. Bacteriol. 188, 2936–2944 (2006). Werther, T. et al. New insights into structure–function relationships of oxalyl-CoA decarboxylase from Escherichia coli. FEBS J. 277, 2628–2640 (2010). Puckett, S. et al. Glyoxylate detoxification is an essential function of malate synthase required for carbon assimilation in Mycobacterium tuberculosis. Proc. Natl Acad. Sci. USA 114, E2225–E2232 (2017). Beste, D. J. V. et al. 13C metabolic flux analysis identifies an unusual route for pyruvate dissimilation in mycobacteria which requires isocitrate lyase and carbon dioxide fixation. PLoS Pathog. 7, e1002091 (2011). Nandakumar, M., Nathan, C. & Rhee, K. Y. Isocitrate lyase mediates broad antibiotic tolerance in Mycobacterium tuberculosis. Nat. Commun. 5, 4306 (2014). Skrahina, A. et al. Alarming levels of drug-resistant tuberculosis in Belarus: results of a survey in Minsk. Eur. Respir. J. 39, 1425–1431 (2012). Park, J. S. Issues related to the updated 2014 Korean guidelines for tuberculosis. Tuberc. Respir. Dis. 79, 1–4 (2016). Power, R. A., Parkhill, J. & de Oliveira, T. Microbial genome-wide association studies: lessons from human GWAS. Nat. Rev. Genet. 18, 41–50 (2017). Chen, P. E. & Shapiro, B. J. The advent of genome-wide association studies for bacteria. Curr. Opin. Microbiol. 25, 17–24 (2015). Gagneux, S. et al. The competitive cost of antibiotic resistance in Mycobacterium tuberculosis. Science 312, 1944–1946 (2006). Kavvas, E. S. et al. Updated and standardized genome-scale reconstruction of Mycobacterium tuberculosis H37Rv, iEK1011, simulates flux states indicative of physiological conditions. BMC Syst. Biol. 12, 25 (2018). Li, W. & Godzik, A. Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences. Bioinformatics 22, 1658–1659 (2006). Rice, P., Longden, I. & Bleasby, A. EMBOSS: the European Molecular Biology Open Software Suite. Trends Genet. 16, 276–277 (2000). Stamatakis, A. RAxML version 8: a tool for phylogenetic analysis and post-analysis of large phylogenies. Bioinformatics 30, 1312–1313 (2014). Letunic, I. & Bork, P. Interactive tree of life (iTOL) v3: an online tool for the display and annotation of phylogenetic and other trees. Nucleic Acids Res. 44, W242–W245 (2016). Kinney, J. B. & Atwal, G. S. Equitability, mutual information, and the maximal information coefficient. Proc. Natl Acad. Sci. USA 111, 3354–3359 (2014). Seabold, S. & Perktold, J. Statsmodels: econometric and statistical modeling with python. In Proc. 9th Python Science Conference (eds van der Walt, S. & Millman, J.) 57 (SciPy, 2010). The UniProt Consortium. UniProt: the universal protein knowledgebase. Nucleic Acids Res. 45, D158–D169 (2017). Berman, H. M. et al. The protein data bank. Nucleic Acids Res. 28, 235–242 (2000). Yang, J. et al. The I-TASSER Suite: protein structure and function prediction. Nat.
Methods 12, 7–8 (2015). Nguyen, H., Case, D. A. & Rose, A. S. NGLview—Interactive molecular graphics for Jupyter notebooks. Bioinformatics 34, 1241-1242 (2017). Musser, J. M. et al. Characterization of the catalase-peroxidase gene (katG) and inhA locus in isoniazid-resistant and susceptible strains of Mycobacterium tuberculosis by automated DNA sequencing: restricted array of mutations associated with drug resistance. J. Infect. Dis. 173, 196–202 (1996). Torres, J. N. et al. Novel katG mutations causing isoniazid resistance in clinical M. tuberculosis isolates. Emerg. Microbes Infect. 4, e42 (2015). Taniguchi, H. et al. Rifampicin resistance and mutation of the rpoB gene in Mycobacterium tuberculosis. FEMS Microbiol. Lett. 144, 103–108 (1996). de Vos, M. et al. Putative compensatory mutations in the rpoC gene of rifampin-resistant Mycobacterium tuberculosis are associated with ongoing transmission. Antimicrob. Agents Chemother. 57, 827–832 (2013). Louw, G. E. et al. Rifampicin reduces susceptibility to ofloxacin in rifampicin-resistant Mycobacterium tuberculosis through efflux. Am. J. Respir. Crit. Care Med. 184, 269–276 (2011). Telenti, A. et al. The emb operon, a gene cluster of Mycobacterium tuberculosis involved in resistance to ethambutol. Nat. Med. 3, 567–570 (1997). Scorpio, A. & Zhang, Y. Mutations in pncA, a gene encoding pyrazinamidase/nicotinamidase, cause resistance to the antituberculous drug pyrazinamide in tubercle bacillus. Nat. Med. 2, 662–667 (1996). Nair, J., Rouse, D. A., Bai, G.-H. & Morris, S. L. The rpsL gene and streptomycin resistance in single and multiple drug-resistant strains of Mycobacterium tuberculosis. Mol. Microbiol. 10, 521–527 (1993). Wong, S. Y. et al. Mutations in gidB confer low-level streptomycin resistance in Mycobacterium tuberculosis. Antimicrob. Agents Chemother. 55, 2515–2522 (2011). Von Groll, A. et al. Fluoroquinolone resistance in Mycobacterium tuberculosis and mutations in gyrA and gyrB. Antimicrob. Agents Chemother. 53, 4498–4500 (2009). Fivian-Hughes, A. S., Houghton, J. & Davis, E. O. Mycobacterium tuberculosis thymidylate synthase gene thyX is essential and potentially bifunctional, while thyA deletion confers resistance to p-aminosalicylic acid. Microbiology 158, 308–318 (2012). Morlock, G. P., Metchock, B., Sikes, D., Crawford, J. T. & Cooksey, R. C. ethA, inhA, and katG loci of ethionamide-resistant clinical Mycobacterium tuberculosis isolates. Antimicrob. Agents Chemother. 47, 3799–3805 (2003). Wang, F. et al. Identification of a small molecule with activity against drug-resistant and persistent tuberculosis. Proc. Natl Acad. Sci. USA 110, E2510–E2517 (2013). Nakatani, Y. et al. Role of alanine racemase mutations in Mycobacterium tuberculosis d-cycloserine resistance. Antimicrob. Agents Chemother. 61 e01575–17 (2017). Eschenburg, S., Priestman, M. & Schönbrunn, E. Evidence that the fosfomycin target Cys115in UDP-N-acetylglucosamine enolpyruvyl transferase (MurA) is essential for product release. J. Biol. Chem. 280, 3757–3763 (2004). Gopal, P. et al. Pyrazinamide resistance is caused by two distinct mechanisms: prevention of coenzyme A depletion and loss of virulence factor synthesis. ACS Infect. Dis. 2, 616–626 (2016). Philalay, J. S., Palermo, C. O., Hauge, K. A., Rustad, T. R. & Cangelosi, G. A. Genes required for intrinsic multidrug resistance in Mycobacterium avium. Antimicrob. Agents Chemother. 48, 3412–3418 (2004). Bisson, G. P. et al. 
Upregulation of the phthiocerol dimycocerosate biosynthetic pathway by rifampin-resistant, rpoB mutant Mycobacterium tuberculosis. J. Bacteriol. 194, 6441–6452 (2012). Li, G. et al. Study of efflux pump gene expression in rifampicin-monoresistant Mycobacterium tuberculosis clinical isolates. J. Antibiot. 68, 431–435 (2015). Jang, J. et al. Efflux attenuates the anti-bacterial activity of Q203 in Mycobacterium tuberculosis. Antimicrob. Agents Chemother. doi: 10.1128/AAC.02637-16 (2017). Vilchèze, C. et al. Mycothiol biosynthesis is essential for ethionamide susceptibility in Mycobacterium tuberculosis. Mol. Microbiol. 69, 1316–1329 (2008). Li, X.-Z., Elkins, C. A. & Zgurskaya, H. I. Efflux-Mediated Antimicrobial Resistance in Bacteria: Mechanisms, Regulation and Clinical Implications (Springer, New York, 2016). Danilchanka, O., Mailaender, C. & Niederweis, M. Identification of a novel multidrug efflux pump of Mycobacterium tuberculosis. Antimicrob. Agents Chemother. 52, 2503–2511 (2008). We thank Anand Sastry for helpful discussions regarding machine learning. This research was supported by the NIH NIAID grant (1-U01-AI124316-01), and the NIH NIGMS (award U01GM102098). Department of Bioengineering, University of California, San Diego, La Jolla, CA, USA Erol S. Kavvas, Edward Catoiu, Nathan Mih, James T. Yurkovich, Yara Seif, David Heckmann, Amitesh Anand, Laurence Yang, Jonathan M. Monk & Bernhard O. Palsson Bioinformatics and Systems Biology Program, University of California, San Diego, La Jolla, CA, USA Nathan Mih, James T. Yurkovich & Bernhard O. Palsson Department of Pediatrics, University of California, San Diego, La Jolla, CA, USA Nicholas Dillon, Victor Nizet & Bernhard O. Palsson Skaggs School of Pharmacy and Pharmaceutical Sciences, University of California, San Diego, La Jolla, CA, USA Nicholas Dillon & Victor Nizet E.K., J.M.M. and B.O.P. conceived and designed the study. E.K. conducted all analysis, with contributions from E.C., N.M., D.H., and J.M.M. E.K., Y.S. and J.M.M. performed the pan-genome analysis. E.K. and D.H. performed the epistatic interaction analysis. E.C. and N.M. developed the 3D protein structural analysis pipeline. E.K., J.T.Y., E.C., N.M., Y.S., N.D., A.A., L.Y., D.H., V.N., J.M.M. and B.O.P. provided study oversight, wrote the manuscript, and edited the manuscript. J.M.M. and B.O.P. managed the study. All authors reviewed and approved the final manuscript. Correspondence to Jonathan M. Monk or Bernhard O. Palsson. Kavvas, E. S., Catoiu, E., Mih, N. et al. Machine learning and structural analysis of Mycobacterium tuberculosis pan-genome identifies genetic signatures of antibiotic resistance. Nat. Commun. 9, 4306 (2018). https://doi.org/10.1038/s41467-018-06634-y
A convex formulation for joint RNA isoform detection and quantification from multiple RNA-seq samples Elsa Bernard1, 2, 3, Laurent Jacob4, Julien Mairal5, Eric Viara6 and Jean-Philippe Vert1, 2, 3 © Bernard et al. 2015 Detecting and quantifying isoforms from RNA-seq data is an important but challenging task. The problem is often ill-posed, particularly at low coverage. One promising direction is to exploit several samples simultaneously. We propose a new method for solving the isoform deconvolution problem jointly across several samples. We formulate a convex optimization problem that allows information to be shared between samples and that we solve efficiently. We demonstrate the benefits of combining several samples on simulated and real data, and show that our approach outperforms pooling strategies and methods based on integer programming. Our convex formulation to jointly detect and quantify isoforms from RNA-seq data of multiple related samples is a computationally efficient approach to leverage the hypothesis that some isoforms are likely to be present in several samples. The software and source code are available at http://cbio.ensmp.fr/flipflop. Isoform, Alternative splicing, Multi-task estimation Most genes in eukaryotic genomes are subject to alternative splicing [1], meaning they can give rise to different mature mRNA molecules, called transcripts or isoforms, by including or excluding particular exons, retaining introns or using alternative donor or acceptor sites. Alternative splicing is a regulated process that not only greatly increases the repertoire of proteins that can be encoded by the genome [2], but also appears to be tissue-specific [3, 4] and regulated in development [5], as well as implicated in diseases such as cancers [6]. Hence, detecting isoforms in different cell types or samples is an important step to understand the regulatory programs of the cells or to identify splicing variants responsible for diseases. Next-generation sequencing (NGS) technologies can be used to identify and quantify these isoforms, using the RNA-seq protocol [7–9]. However, identification and quantification of isoforms from RNA-seq data, sometimes referred to as the isoform deconvolution problem, is often challenging because RNA-seq technologies usually only sequence short portions of mRNA molecules, called reads. A given read sequenced by RNA-seq can therefore originate from different transcripts that share a particular portion containing the read, and a deconvolution step is needed to assign the read to a particular isoform, or at least to estimate globally which isoforms are present and in what quantities based on all sequenced reads. When a reference genome is available, the RNA-seq reads can be aligned to it using a dedicated splice mapper [10–12], and the deconvolution problem for a given sample consists of estimating a small set of isoforms and their abundances that explain well the observed coverage of reads along the genome. One of the main difficulties lies in the fact that the number of candidate isoforms is very large, growing essentially exponentially with the number of exons. Approaches that try to perform de novo isoform reconstruction based on the read alignment include Cufflinks [13], Scripture [14], IsoLasso [15], NSMAP [16], SLIDE [17], iReckon [18], Traph [19], MiTie [20], and FlipFlop [21].
However, the problem is far from being solved and is still challenging, due in particular to identifiability issues (the fact that different combinations of isoforms can correctly explain the observed reads), especially at low coverage, which limits the statistical power of the inference methods: as a result, the performance reported by the state of the art is often disappointingly low. One promising direction to improve isoform deconvolution is to exploit several samples at the same time, such as biological replicates or time course experiments. If some isoforms are shared by several samples, potentially with different abundances, the identifiability issue may vanish and the statistical power of the deconvolution methods may increase due to the availability of more data for estimation. For example, the state-of-the-art methods CLIIQ [22] and MiTie [20] perform joint isoform deconvolution across multiple samples, by formulating the problem as an NP-hard combinatorial problem solved by mixed integer programming. MiTie avoids an explicit enumeration of candidate isoforms using a pruning strategy, which can drastically speed up the computation in some cases but remains very slow in other cases. The Cufflinks/Cuffmerge [13] method uses a more naive and straightforward approach, where transcripts are first predicted independently on each sample, before being merged (with some heuristics) into a unique set. In this paper, we propose a new method for isoform deconvolution from multiple samples. When applied to a single sample, the method boils down to FlipFlop [21]; thus, we simply refer to the new multi-sample extension of the technique as FlipFlop as well. It formulates the isoform deconvolution problem as a continuous convex relaxation of the combinatorial problem solved by CLIIQ and MiTie, using the group-lasso penalty [23, 24] to impose shared sparsity of the models estimated on each sample. The group-lasso penalty allows a few isoforms to be selected among many candidates jointly across samples, while assigning sample-specific abundance values. By doing so, it shares information between samples but still considers each sample to be specific, without learning a unique model for all samples together as a merging strategy would do. Compared to CLIIQ or MiTie, FlipFlop addresses a convex optimization problem efficiently, and involves an automatic model selection procedure to balance the fit of the data against the number of detected isoforms. We show experimentally, on simulated and real data, that FlipFlop is more accurate than simple pooling strategies and than other existing methods for isoform deconvolution from multiple samples. The deconvolution problem for a single sample can be cast as a sparse regression problem of the observed reads against expressed isoforms, and solved by penalized regression techniques like the Lasso, where the ℓ 1 penalty controls the number of expressed isoforms. This approach is implemented by several of the referenced methods, including IsoLasso [15] and FlipFlop [21]. When several samples are available, we propose to generalize this approach by using a convex penalty that leads to small sets of isoforms jointly expressed across samples, as we explain below. Multi-dimensional splicing graph The splicing graph for a gene in a single sample is a directed acyclic graph with a one-to-one mapping between the set of possible isoforms of the gene and the set of paths in the graph.
The nodes of the graph typically correspond to exons, sub-exons [15, 17, 20] or ordered sets of exons [21, 25]—the definition we adopt here as it allows to properly model long reads spanning more than 2 exons [21]. The directed edges correspond to links between possibly adjacent nodes. When working with several samples, we choose to build the graph based on the read alignments of all samples pooled together. Since the exons used to build the graph are estimated from read clusters, this step already takes advantage of information from multiple samples, and leads to a more accurate graph. We associate a list of read counts, as many as samples, with each node of the graph. In other words, we extend the notion of splicing graph to the multiple-sample framework, using a shared graph structure with specific count values on each node. Our multi-dimensional splicing graph is illustrated in Fig. 1. Multi-dimensional splicing graph with three samples. Each candidate isoform is a path from source node s to sink node t. Nodes denoted as grey squares correspond to ordered set of exons. Each read is assigned to a unique node, corresponding to the exact set of exons that it overlaps. Note that more than 2 exons can constitute a node, properly modeling reads spanning more than 2 exons. A vector of read counts (one component per sample) is then associated to each node of the graph. Note also that some components of a vector can be equal to zero Throughout the paper, we call G=(V,E) the multi-dimensional splicing graph where V is the set of vertices and E the set of edges. We denote by \(\mathcal {P}\) the set of all paths in G. By construction of the graph, each path \(p \in \mathcal {P}\) corresponds to a unique candidate isoform. We denote by \({y_{v}^{t}}\) the number of reads falling in each node v∈V for each sample t∈{1,…,T}, where T is the number of samples. We denote by \({\beta _{p}^{t}} \in \mathbb {R}_{+}\) the abundance of isoform p for sample t. Finally, we define for every path p in \(\mathcal {P}\) the T-dimensional vector of abundances \(\boldsymbol \beta _{p} = \left [{\beta _{p}^{1}}, {\beta _{p}^{2}}, \ldots, {\beta _{p}^{T}}\right ]\), and denote by \(\boldsymbol \beta = \left [\boldsymbol \beta _{p}\right ]_{p \in \mathcal {P}}\) the matrix of all abundances values with \(|\mathcal {P}|\) rows and T columns. Joint sparse estimation We propose to estimate β through the following penalized regression problem: $$ \min_{\boldsymbol \beta} ~~ \mathcal{L}(\boldsymbol \beta) + \lambda \sum_{p \in \mathcal{P}} \Arrowvert\, \boldsymbol \beta_{p} \,\Arrowvert{\!~\!}_{2} ~~~ \text{such that}~~ \boldsymbol \beta_{p} \geq 0 ~~\text{for all}\,{p} \in \mathcal{P}, $$ where \(\mathcal {L}\) is a convex smooth loss function defined below, \(\Arrowvert \, \boldsymbol \beta _{p} \,\Arrowvert _{2}=\sqrt {\sum _{t=1}^{T} \left ({\beta _{p}^{t}}\right)^{2}}\) is the Euclidean norm of the vector of abundances of isoform p across the samples, and λ is a non-negative regularization parameter that controls the trade-off between loss and sparsity. The ℓ 1,2-norm \(\|\boldsymbol \beta \|_{1, 2} = \sum _{p \in \mathcal {P}} \Arrowvert \, \boldsymbol \beta _{p} \,\Arrowvert _{2}\), sometimes called the group-lasso penalty [23], induces a shared sparsity pattern across samples: solutions of (1) typically have entire rows equal to zero [23], while the abundance values in the non-zero rows can be different among samples. 
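To make this row-wise sparsity concrete, the following minimal Python sketch (illustrative names and toy numbers; the released FlipFlop package itself is implemented differently) evaluates the ℓ 1,2 penalty and its proximal operator, which sets an entire isoform row to zero when its abundance vector has a small norm:

```python
import numpy as np

def group_lasso_penalty(beta):
    """ell_{1,2} penalty: sum over isoforms (rows) of the Euclidean norm
    of their abundance vectors across samples (columns)."""
    return np.sum(np.linalg.norm(beta, axis=1))

def prox_group_lasso(beta, threshold):
    """Proximal operator of threshold * ell_{1,2}: rows whose norm is
    below the threshold are set exactly to zero, others are shrunk,
    which produces the shared sparsity pattern across samples."""
    norms = np.linalg.norm(beta, axis=1, keepdims=True)
    scale = np.maximum(1.0 - threshold / np.maximum(norms, 1e-12), 0.0)
    return beta * scale

# Toy example: 4 candidate isoforms x 3 samples
beta = np.array([[0.1, 0.2, 0.05],
                 [2.0, 1.5, 3.0],
                 [0.0, 0.3, 0.1],
                 [1.0, 0.0, 0.8]])
print(prox_group_lasso(beta, threshold=0.5))  # rows 1 and 4 survive
```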
This shared sparsity-inducing effect corresponds exactly to our assumption that only a limited number of isoforms are present across the samples (non-zero rows of β). It can be thought of as a convex relaxation of the number of isoforms present in at least one sample, which is used as a criterion in the combinatorial formulations of CLIIQ and MiTie. We define the loss function \(\mathcal {L}\) as the sum of the T sample losses, thus assuming independence between samples, as reads are sampled independently from each sample. The loss is derived from the Poisson negative log-likelihood (the Poisson model has been successfully used in several RNA-seq studies [16, 21, 26, 27]), so that the general loss is defined as $$\mathcal{L}(\boldsymbol \beta) = \sum_{t=1}^{T} \sum_{v \in V} \left[\delta_{v}^{t} - y_{v}^{t} \log \delta_{v}^{t} \right] ~\text{with}~ \delta_{v}^{t} = N^{t} l_{v} \sum_{p\in\mathcal{P} : p \ni v} \beta_{p}^{t},$$ where N t is the total number of mapped reads in sample t and l v is the effective length of node v, as defined in [21]. The sum \(\sum {\beta _{p}^{t}}\) over all \(p \in \mathcal {P}\) that contain node v represents the sum of expressions in sample t of all isoforms involving node v. Candidate isoforms Since \(|\mathcal {P}|\) grows exponentially with the number of nodes in G, we need to avoid an exhaustive enumeration of all candidate isoforms \(p \in \mathcal {P}\). FlipFlop efficiently solves problem (1) in the case where T=1, i.e., the ℓ 1-regularized regression \(\min_{\boldsymbol \beta_{p} \in \mathbb{R}_{+}} \mathcal{L}(\boldsymbol \beta) + \lambda \sum_{p \in \mathcal{P}} \beta_{p}\), using network flow techniques, without requiring an exhaustive path enumeration and leading to a polynomial-time algorithm in the number of nodes. Unfortunately, this network flow formulation does not extend trivially to the multi-sample case. We therefore resort to a natural two-step heuristic: we first generate a large set of candidate isoforms by solving T+1 one-dimensional problems (the T independent ones, plus the one corresponding to all samples pooled together) for different values of λ and taking the union of all selected isoforms; we then solve (1) restricted to this union of isoforms. This approach can potentially miss isoforms which would be selected by solving (1) over all paths \(p\in \mathcal {P}\) but are not selected for any single sample or when pooling all reads to form a single sample; however, it allows (1) to be approximated efficiently. We observe that it leads to good results in various settings in practice, as shown in the experimental part. We solve (1) for a large range of values of the regularization parameter λ, obtaining solutions from very sparse to more dense (a sparse solution involves few non-zero abundance vectors β p ). Each solution, i.e., each set of selected isoforms obtained with a particular λ value, is then re-fitted against individual samples, without regularization but keeping the non-negativity constraint, so that the estimated abundances do not suffer from shrinkage [28]. The solution with the largest BIC criterion [29], where the degrees of freedom of a group-lasso solution are computed as explained in [23], is finally selected. Note that although the same list of isoforms selected by the group-lasso is tested on each sample, the refitting step lets each sample pick the subset of isoforms it needs among the list, meaning that all samples do not necessarily share all isoforms at the end of the deconvolution.
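Under the same assumptions, the objective of problem (1), i.e., the Poisson loss plus the group-lasso penalty, can be written down directly. The sketch below is illustrative only (argument names and shapes are ours) and is not the network-flow solver used by FlipFlop:

```python
import numpy as np

def poisson_group_objective(beta, counts, node_lengths, lib_sizes,
                            path_membership, lam):
    """Objective of problem (1): Poisson loss plus group-lasso penalty.

    beta: (n_isoforms, n_samples) nonnegative abundances
    counts: (n_nodes, n_samples) read counts y_v^t
    node_lengths: (n_nodes,) effective lengths l_v
    lib_sizes: (n_samples,) total mapped reads N^t
    path_membership: (n_nodes, n_isoforms) binary matrix, 1 if node v
        belongs to candidate isoform p
    lam: regularization parameter lambda
    """
    # delta_v^t = N^t * l_v * sum over isoforms containing v of beta_p^t
    delta = (path_membership @ beta) * node_lengths[:, None] * lib_sizes[None, :]
    delta = np.maximum(delta, 1e-12)  # avoid log(0) for empty nodes
    loss = np.sum(delta - counts * np.log(delta))
    penalty = lam * np.sum(np.linalg.norm(beta, axis=1))
    return loss + penalty
```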
We show results on simulated human RNA-seq data with both increasing coverage and increasing number of samples, with different simulation settings, and on real RNA-seq data. In all cases, reads are mapped to the reference with TopHat2 [10]. We compare FlipFlop implementing the group-lasso approach (1) to the simpler strategy of pooling all samples together, running single-sample FlipFlop [21] on the merged data, and performing a fit for each individual sample data against the selected isoforms. We also assess the performance of MiTie [20] and of the version 2.2.0 of the Cufflinks/Cuffmerge package [13]. Performances on isoform identification are summarized in terms of Fscore, the harmonic mean of precision and recall, as used in other RNA-seq studies [20, 22]. Of note, in all the following experiments, we consider a de novo setting, without feeding any of the methods with prior transcript annotations (i.e., MiTie and FlipFlop first reconstruct sub-exons and build the splicing graph, then perform isoform deconvolution). Influence of coverage and sample number The first set of simulations is performed based on the 1329 multi-exon transcripts on the positive strand of chromosome 11 from the RefSeq annotation [30]. Single-end 150 bp reads are simulated with the RNASeqReadSimulator software (available at http://alumni.cs.ucr.edu/~liw/rnaseqreadsimulator.html). We vary the number of reads from 10 thousand – 10 million per sample (corresponding approximately to sequencing depth from 1 to 1000 ×) and the number of samples from 1 – 10. All methods are run with default parameters, except that we fix region-filter to 40 and max-num-trans to 10 in MiTie as we notice that choosing these two parameter values greatly increases its performances (see Additional file 1: Figure A.1 for a comparison between MiTie with default parameters or not). Figure 2 shows the Fscore in two different settings: the Equal setting corresponds to a case where all samples express the same set of transcripts at the same abundances (in other words each sample is a noisy realization of a unique abundance profile), while in the Different setting the abundance profiles of each sample are generated independently. Hence in that case the samples share the same set of expressed transcripts but have very different expression values (the maximum correlation between two abundance vectors is 0.088). Human simulations with increasing coverage and number of samples In all cases and for all methods, the higher the coverage or the number of samples, the higher the Fscore. In the Equal case, the group-lasso and merging strategies give almost identical results, which shows the good behavior of the group-lasso, as pooling samples in that case corresponds to learning the shared abundance profile. In the Equal case again, for all methods the different Fscore curves obtained with increasing number of samples converge to different plateaux. None of these levels reaches a Fscore of 100, but the group-lasso level is the highest (together with the merging strategy). In the Different case, the group-lasso shows equal or higher Fscore than the merging strategy, with a great improvement when the coverage or the number of samples increases. The group-lasso also outperforms the Cufflinks/Cuffmerge method for all numbers of samples when the coverage is larger than 80. When using more than 5 samples the group-lasso shows greater Fscore as soon as the coverage is bigger than 15 (see table B.1 of the supplementary material for statistical significance). 
Finally, the group-lasso outperforms MiTie for all numbers of samples and all coverages. Of note, the group-lasso performances are better in the Different setting than in the Equal setting, showing that our multi-sample method can efficiently deal with diversity among samples. We also investigate the influence of the read length on the performance of the compared methods in the Different setting. Figure 3 shows the obtained Fscore when using either 2 or 5 samples with a fixed 100 × 10^4 coverage, while read length varies from 75 to 300 bp. Because we properly model long reads in our splicing graph, the group-lasso performance greatly increases with the read length, proportionally much more than other state-of-the-art methods. When using 5 samples and long 300 bp reads, the group-lasso reaches a very high Fscore of 90 (compared to 84 for the second best Cufflinks/Cuffmerge method), showing that our method is very well adapted to RNA-seq designs with long reads and several biological replicates. Human simulations with various read lengths Note finally that our method generalizes to paired-end reads. We show in Additional file 1: Figure C.1 a comparison of the tested methods on simulations in the Different setting using both paired and single-end reads at comparable coverage. Influence of hyper-parameters with realistic simulations The second set of simulations is performed using a different and more realistic simulator, the Flux Simulator [31], in order to check that our approach performs well regardless of the choice of the simulator. Coverage and single-end read length are respectively fixed to 10^5 reads and 150 bp, and we run experiments for one to five samples. We study the influence of hyper-parameters on the performances of the compared methods, and show that our approach leads to better results with optimized parameters as well. Hyper-parameters are first tuned on a training set of 600 transcripts from the positive strand of chromosome 11, which is subsequently left aside from the evaluation procedure after tuning. We start by jointly optimizing a set of pre-processing hyperparameters. We then keep the combination that leads to the best training Fscore, and we jointly optimize a set of prediction hyperparameters. More specifically, we optimize 7 values of 3 different pre-processing or prediction parameters (hence 7^3 different combinations in both cases), except that for MiTie we add 2 values of one pre-processing parameter and 3 values of a fourth prediction parameter (hence optimizing over 9 × 7^2 and 3 × 7^3 parameter combinations). A more detailed description of the optimized parameters is given in tables D.1 and D.2 of the supplementary material. Fscore is shown in Fig. 4 for 600 other test transcripts, for both default and tuned settings (except that again we set region-filter to 40 and max-num-trans to 10 in MiTie instead of using all default parameters, as it greatly improves its performance; see Additional file 1: Figure A.2 for a comparison of several versions of MiTie). For all methods and for both default and tuned settings, performances increase with the number of samples. Except for Cufflinks/Cuffmerge for the last three sample numbers, all methods improve their results after tuning of their hyper-parameters. When using default parameter values, the group-lasso shows the largest Fscore for the first three sample numbers, while Cufflinks/Cuffmerge is slightly better for the very last sample number.
When using tuned parameter values, the group-lasso approach outperforms all other methods for the first three sample numbers, and is slightly better than or equal to the default version of Cufflinks/Cuffmerge for the last two sample numbers. Fscore results on the Flux Simulator simulations Experiments with real data We use five samples from time course experiments on D. melanogaster embryonic development. Each sample corresponds to a 2-hour period, from 0–10 h (0–2 h, 2–4 h, …, 8–10 h). Data are available from the modENCODE [32] website. For each given period we pooled all 75 bp single-end technical replicate reads available, ending up with approximately 25 – 45 million mapped reads per sample. A description of the samples is given in table C.1. Data from the same source were also used in the MiTie paper [20]. Because the exact true sets of expressed transcripts are not known, we validated predictions based on public transcript annotations. We built a comprehensive reference using three different databases available on the UCSC genome browser [33], namely the RefSeq [30], Ensembl [34] and FlyBase [35] annotations. More specifically, we took the union of the multi-exon transcripts described in the three databases, while considering transcripts with the same internal exon/intron structure but with different lengths of the first or last exon as duplicates. Reads were mapped to the reference transcriptome in order to restrict predictions to known genomic regions, and we performed independent analyses on the forward and reverse strands. All methods are run with default parameters. Figure 5 shows the Fscore per sample when FlipFlop, MiTie, and Cufflinks are run independently on each sample or when multi-sample strategies are used. Results on the forward and reverse strands are extremely similar. All methods give better results than their independent versions, and the performances of the multi-sample approaches increase with the number of samples used. Again, the group-lasso strategy of FlipFlop seems more powerful than the pooling strategy, and gives a better Fscore than MiTie and Cufflinks/Cuffmerge in that context. Fscore results on the modENCODE data Considering running time, each method was run on a 48 CPU machine at 2.2 GHz with 256 GB of RAM using 6 threads (all tools support multi-threading). When using only a single sample and 6 threads, Cufflinks, FlipFlop and MiTie respectively completed in ∼4.2 min, ∼9.5 min and ∼26.6 min, while when using 5 samples and 6 threads, Cufflinks/Cuffmerge, FlipFlop with group-lasso and MiTie took ∼0.45 h, ∼1 h and ∼25 h (see Additional file 1: Figure G.1). We describe an example as a proof of concept that multi-sample FlipFlop with the group-lasso approach (1) can be much more powerful in some cases than its independent FlipFlop version, and than the merging strategy of Cufflinks/Cuffmerge. Figure 6 shows transcriptome assemblies of gene CG15717 on the first three modENCODE samples presented in the previous section, denoted as 0–2 h, 2–4 h and 4–6 h on the figure. For each sample, we display the read coverage along the gene, the junctions between exons, and the single-sample FlipFlop and Cufflinks predictions. At the bottom of the figure, we show the 6 RefSeq records as well as the multi-sample predictions obtained with FlipFlop or with Cuffmerge. A predicted transcript is considered as valid if all its exon/intron boundaries match a RefSeq record (✓ and ✗ denote validity or not).
The estimated abundances in FPKM are given on the right-hand side of each predicted transcript. Of note, the group-lasso predictions come with estimated abundances (one specific value per sample), whereas Cufflinks/Cuffmerge only reports the structure of the transcripts. Transcriptome predictions of gene CG15717 from 3 samples of the modENCODE data. Sample names are 0–2 h, 2–4 h and 4–6 h. Each sample track contains the read coverage (light grey) and junction reads (red) as well as FlipFlop predictions (light blue) and Cufflinks predictions (light green). The bottom of the figure displays the RefSeq records (black) and the multi-sample predictions of the group-lasso (dark blue) and of Cufflinks/Cuffmerge (dark green). For single-sample predictions, FlipFlop and Cufflinks report the same number of transcripts for each sample (respectively 2, 2 and 3 predictions for samples 0–2 h, 2–4 h and 4–6 h), with the same number of valid transcripts, except for the first sample where FlipFlop makes 2 good guesses against 1 for Cufflinks. This difference might be due to the fact that FlipFlop not only tries to explain the read alignment as Cufflinks does, but also the coverage discrepancies along the gene. For multi-sample predictions, FlipFlop gives much more reliable results, with 4 validated transcripts (among 4 predictions), while Cufflinks/Cuffmerge makes only 1 good guess out of 2 predictions. FlipFlop uses evidence from all samples together to find transcripts with, for instance, missing junction reads in one of the samples (such as the one with 30, 7 and 20 FPKM) or lowly expressed transcripts (such as the one with 0, 0.5 and 2 FPKM). Cufflinks/Cuffmerge explains all read junctions but does not seek to explain the multi-sample coverage, which seems important in that example. Importantly, one can note that the results of multi-sample group-lasso FlipFlop are different from the union of all single-sample FlipFlop predictions (the union coincides here with the results of FlipFlop on the merged sample; data not shown). This illustrates the fact that designing a dedicated multi-sample procedure can lead to more statistical power than merging individual results obtained on each sample independently. We display an additional example in Additional file 1: Figure H.1. We proposed a multi-sample extension of FlipFlop, which implements a new convex optimization formulation for RNA isoform identification and quantification jointly across several samples. Experiments on simulated and real data show that an appropriate method for joint estimation is more powerful than a naive pooling of reads across samples. We also obtained promising results compared to MiTie, which tries to solve a combinatorial formulation of the problem. Accurately estimating isoforms in multiple samples is an important preliminary step to differential expression studies at the level of isoforms [36, 37]. Indeed, isoform deconvolution from single samples suffers from high false positive and false negative rates, making the comparison between different samples even more difficult if isoforms are estimated from each sample independently. Although the FlipFlop formulation of joint isoform deconvolution across samples provides a useful solution to define a list of isoforms expressed (or not) in each sample, variants of FlipFlop specifically dedicated to the problem of finding differentially expressed isoforms may also be possible by changing the objective function optimized in (1).
Finally, as future multi-sample applications such as jointly analyzing large cohorts of cancer samples or many cells in single-cell RNA-seq are likely to involve hundreds or thousands of samples, more efficient implementations involving in particular distributed optimization may be needed. This work was supported by the European Research Council [SMAC-ERC-280032 to J-P.V., E.B.]; the European Commission [HEALTH-F5-2012-305626 to J-P.V., E.B.]; and the French National Research Agency [ANR-09-BLAN-0051-04, ANR-11-BINF-0001 to J-P.V., E.B., ANR-14-CE23-0003-01 to J.M, L.J.]. Additional file 1 This file provides additional results on the simulated experiments, as well as a detailed description of the real data, and more illustrative examples. (PDF 1505 kb) EB, LJ, JM and JPV conceived the study. EB, LJ, EV and JM implemented the method. EB performed the experiments. EB, LJ, JM and JPV wrote the manuscript. All authors read and approved the final manuscript. MINES ParisTech, PSL Research University, CBIO-Centre for Computational Biology, Fontainebleau, 77300, France Institut Curie, Paris, 75005, France INSERM U900, Paris, 75005, France Laboratoire Biométrie et Biologie Evolutive, Université de Lyon, Université Lyon 1, CNRS, INRA, UMR5558 Villeurbanne, France Inria, LEAR Team, Laboratoire Jean Kuntzmann, CNRS, Université Grenoble Alpes, 655, Avenue de l'Europe, Montbonnot, 38330, France Sysra, 91330 Yerres, France Pan Q, Shai O, Lee LJ, Frey BJ, Blencowe BJ. Deep surveying of alternative splicing complexity in the human transcriptome by high-throughput sequencing. Nat Genet. 2008; 40(12):1413–5.View ArticlePubMedGoogle Scholar Nilsen TW, Graveley BR. Expansion of the eukaryotic proteome by alternative splicing. Nature. 2010; 463(7280):457–63.View ArticlePubMedPubMed CentralGoogle Scholar Wang ET, Sandberg R, Luo S, Khrebtukova I, Zhang L, Mayr C. Alternative isoform regulation in human tissue transcriptomes. Nature. 2008; 456(7721):470–6.View ArticlePubMedPubMed CentralGoogle Scholar Xu Q, Modrek K, Lee C. Genome-wide detection of tissue-specific alternative splicing in the human transcriptome. Nucleic Acids Res. 2002; 30(17):3754–766.View ArticlePubMedPubMed CentralGoogle Scholar Kalsotra A, Cooper TA. Functional consequences of developmentally regulated alternative splicing. Nat Rev Genet. 2011; 12(10):715–29.View ArticlePubMedPubMed CentralGoogle Scholar Pal S, Gupta R, Davuluri RV. Alternative transcription and alternative splicing in cancer. Pharmacol Ther. 2012; 136(3):283–94.View ArticlePubMedGoogle Scholar Mortazavi A, Williams BA, McCue K, Schaeffer L. Mapping and quantifying mammalian transcriptomes by RNA-Seq. Nat Methods. 2008; 5(7):621–8.View ArticlePubMedGoogle Scholar Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009; 10(1):57–63.View ArticlePubMedPubMed CentralGoogle Scholar Martin JA, Wang Z. Next-generation transcriptome assembly. Nat Rev Genet. 2011; 12(10):671–82.View ArticlePubMedGoogle Scholar Trapnell C, Patcher L, Salzberg SL. TopHat: discovering splice junctions with RNA-Seq. Bioinformatics. 2009; 25(9):1105–11.View ArticlePubMedPubMed CentralGoogle Scholar Li H, Durbin R. Fast and accurate short read alignment with burrows-wheeler transform. Bioinformatics. 2009; 25(14):1754–60.View ArticlePubMedPubMed CentralGoogle Scholar Dobin A, Carrie A, Schlesinger F, Drenkow J, Zaleski C, Sonali J, et al. STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 
2013; 29(1):15–21.View ArticlePubMedGoogle Scholar Trapnell C, Williams BA, Pertea G, Mortazavi AM, Kwan G, van Baren MJ, et al. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat Biotechnol. 2010; 28(5):511–5.View ArticlePubMedPubMed CentralGoogle Scholar Guttman M, Garber M, Levin JZ, Donaghey J, Robinson J, Adiconis X, et al. Ab initio reconstruction of cell type-specific transcriptomes in mouse reveals the conserved multi-exonic structure of lincrnas. Nat Biotech. 2010; 28(5):503–10.View ArticleGoogle Scholar Li W, Feng J, Jiang T. IsoLasso: a LASSO regression approach to RNA-Seq based transcriptome assembly. J Comput Biol. 2011; 18(11):1693–1707.View ArticlePubMedPubMed CentralGoogle Scholar Xia Z, Wen W, Chang CC, Zhou X. NSMAP: a method for spliced isoforms identification and quantification from RNA-Seq. BMC Bioinformatics. 2011; 12:162.View ArticlePubMedPubMed CentralGoogle Scholar Li JJ, Jiang CR, Brown JB, Huang H, Bickel PJ. Sparse linear modeling of next-generation mRNA sequencing (RNA-Seq) data for isoform discovery and abundance estimation. Proc Natl Acad Sci USA. 2011; 108(50):19867–19872.View ArticlePubMedPubMed CentralGoogle Scholar Mezlini AM, Smith EJM, Fiume M, Buske O, Savich G, Shah S, et al. iReckon: Simultaneous isoform discovery and abundance estimation from RNA-seq data. Genome Res. 2013; 23(3):519–29.View ArticlePubMedPubMed CentralGoogle Scholar Tomescu AI, Kuosmanen A, Rizzi R, Makinen V. A novel min-cost flow method for estimating transcript expression with rna-seq. BMC Bioinformatics. 2013; 14(Suppl 5):15.Google Scholar Behr J, Kahles A, Zhong Y, Sreedharan VT, Drewe P, Ratsch G. Mitie: Simultaneous rna-seq based transcript identification and quantification in multiple samples. Bioinformatics. 2013; 29(20):2529–38.View ArticlePubMedPubMed CentralGoogle Scholar Bernard E, Jacob L, Mairal J, Vert JP. Efficient rna isoform identification and quantification from rna-seq data with network flows. Bioinformatics. 2014; 30(17):2447–455.View ArticlePubMedPubMed CentralGoogle Scholar Lin YY, Dao P, Hach F, Bakhshi M, Mo F, Lapuk A, et al. Cliiq: Accurate comparative detection and quantification of expressed isoforms in a population In: Raphael BJ, Tang J, editors. WABI. Lecture Notes in Computer Science. Berlin Heidelberg: Springer-Verlag: 2012. p. 178–89.Google Scholar Yuan M, Lin Y. Model selection and estimation in regression with grouped variables. J R Stat Soc Ser. 2006; 68(1):49–67.View ArticleGoogle Scholar Lounici K, Pontil M, Tsybakov AB, van de Geer S. Taking advantage of sparsity in multi-task learning. In: Proceedings of the 22nd Conference on Information Theory. Madison: Omnipress: 2009. p. 73–82.Google Scholar Montgomery SB, Sammeth M, Gutierrez-Arcelus M, Lach RP, Ingle C, Nisbett J. Transcriptome genetics using second generation sequencing in a Caucasian population. Nature. 2010; 464(7289):773–7.View ArticlePubMedGoogle Scholar Jiang H, Wong WH. Statistical inferences for isoform expression in RNA-Seq. Bioinformatics. 2009; 25(8):1026–32.View ArticlePubMedPubMed CentralGoogle Scholar Salzman J, Jiang H, Wong WH. Statistical modeling of RNA-Seq data. Stat Sci. 2011; 26(1):62–83.View ArticleGoogle Scholar Tibshirani R. Regression shrinkage and selection via the Lasso. J Roy Stat Soc B. 1996; 58(1):267–88.Google Scholar Schwarz G. Estimating the dimension of a model. Ann Stat. 1978; 6(2):461–4. 
doi:10.2307/2958889http://dx.doi.org/10.2307/2958889.View ArticleGoogle Scholar Pruitt KD, Tatusova T, Maglott DR. Ncbi reference sequence (refseq): a curated non-redundant sequence database of genomes, transcripts and proteins. Nucleic Acids Res. 2005; 33(supp1):501–4.Google Scholar Griebel T, Zacher B, Ribeca P, Raineri E, Lacroix V, Guigo R, et al. Modelling and simulating generic rna-seq experiments with the flux simulator. Nucleic Acids Res. 2012; 40(20):10073–83.View ArticlePubMedPubMed CentralGoogle Scholar Celniker ES, Dillon LAL, Gerstein MB, Gunsalus KC, Henikoff S, Kerpen GH, et al. Unlocking the secrets of the genome. Nature. 2009; 459(7249):927–30.View ArticlePubMedPubMed CentralGoogle Scholar Karolchik D, Hinrichs AS, Furey TS, Roskin KM, Sugnet CW, Haussler D. The ucsc table browser data retrieval tool. Nucleic Acids Res. 2004; 32(supp1):493–6.View ArticleGoogle Scholar Cunningham F, Amode MR, Barrell D, Beal K, Billis K, Brent S, et al. Ensembl 2015. Nucleic Acids Res. 2015; 43(D1):662–9.View ArticleGoogle Scholar Marygold SJ, Leyland PC, Seal RL, Goodman JL, Thurmond J, Strelets VB, et al. Flybase: improvements to the bibliography. Nucleic Acids Res. 2013; 41(D1):751–7.View ArticleGoogle Scholar Anders S, Reyes A, Huber W. Detecting differential usage of exons from rna-seq data. Genome Res. 2012; 22:2008–017.View ArticlePubMedPubMed CentralGoogle Scholar Trapnell C, Hendrickson DG, Sauvageau M, Goff L, Rinn JL, Patcher L. Differential analysis of gene regulation at transcript resolution with rna-seq. Nat Biotechnol. 2013; 31(1):46–53.View ArticlePubMedGoogle Scholar Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
August 2012, 5(4): 753-764. doi: 10.3934/dcdss.2012.5.753

Multiple solutions for a Neumann-type differential inclusion problem involving the $p(\cdot)$-Laplacian

Antonia Chinnì 1 and Roberto Livrea 2
1. Department of Science for Engineering and Architecture (Mathematics Section), Engineering Faculty, University of Messina, Messina, 98166, Italy
2. Department MECMAT, Engineering Faculty, University of Reggio Calabria, Reggio Calabria, 89100, Italy

Received April 2011; Revised August 2011; Published November 2011

Abstract: Using a multiple critical points theorem for locally Lipschitz continuous functionals, we establish the existence of at least three distinct solutions for a Neumann-type differential inclusion problem involving the $p(\cdot)$-Laplacian.

Keywords: three-critical-points theorem, variable exponent Sobolev space, $p(x)$-Laplacian, critical points of locally Lipschitz continuous functionals, differential inclusion problem.

Mathematics Subject Classification: Primary: 35J20, 35R7.

Citation: Antonia Chinnì, Roberto Livrea. Multiple solutions for a Neumann-type differential inclusion problem involving the $p(\cdot)$-Laplacian. Discrete & Continuous Dynamical Systems - S, 2012, 5 (4): 753-764. doi: 10.3934/dcdss.2012.5.753
Teaching Asymptotes

Yes, I've read a number of definitions:

In analytic geometry, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as they tend to infinity. Some sources include the requirement that the curve may not cross the line infinitely often, but this is unusual for modern authors.

And for most examples, such as $y=1/x$, it seems clear what's going on and how to discuss the asymptote. Yesterday, a student asked me about this function -

Only after I told her it was a discontinuous function, and that when drawing it she should show an open circle at x=0 on both lines to indicate "not a point", did she tell me the teacher said that $y=1$ and $y=-1$ were asymptotes. I think there's a mistake here, as there's nothing approaching the lines; the lines are the equation. Just as a first-degree binomial, $y=x+2$, is just a line, no asymptotes there.

This prompted the follow-on -

And the question of whether today's usage allows crossing to occur. In this example, the distance itself, from the first definition, will cross zero infinitely many times.

Looking at the quoted definition, what does "modern authors" mean? Do I need to be concerned that an older book will call the X axis of the second graph "Not Asymptote", but a newer one is fine?

calculus graphing
JTP - Apologise to Monica

In my opinion as a mathematician, "asymptote" is a somewhat informal term, and like most informal terms they break when stretched too much or when you look too hard at them. I tend only to use it in settings where it's traditionally used, such as describing hyperbolas, or graphs of nonlinear rational functions. In other settings, I would use other terms. I often use other terms even in these settings. – user797

Asymptote - if $f(x)-g(x)\to0$ as $x\to a$, then $f(x)$ is asymptotic to $g(x)$ as $x\to a$. Similarly, $g(x)$ is asymptotic to $f(x)$ as $x\to a$. (IMO)

Personally, I do not see the problem with intercepting any amount of times. If you wish to modify the definition of an asymptote to fit this, I would probably use limit superior and limit inferior.

Modified Asymptotes - if $\limsup\limits_{x\to a}f(x)-g(x)=\liminf\limits_{x\to a}f(x)-g(x)=0$, then $f(x)$ is asymptotic to $g(x)$ as $x\to a$ and vice versa regardless of interceptions.

IMO, I have never seen something marked as non-asymptotic because of interceptions. If you wanted, you could still squeeze theorem the asymptote using limit superior and limit inferior, so it shouldn't be any problem. Also note that classifying it as non-asymptotic is probably less helpful than classifying it as an asymptote.

Also, while you are dealing with the more geometric asymptotes, there are also asymptotes that are more "functional" (as I would put it) in the fields of "Asymptotic Analysis" and "Asymptotic Theory", along with the concept of "Big 'O' Notation".
Such things allow different meanings to "asymptote", for example:
$$\lim_{x\to\infty}\frac{f(x)}{g(x)}=1\implies f(x)\sim g(x)$$
A famous example of when the above does not agree with our original definition of asymptote is Stirling's Approximation:
$$\lim_{n\to\infty}\frac{\sqrt{2\pi n}\left(\frac ne\right)^n}{n!}=1,\qquad\lim_{n\to\infty}\sqrt{2\pi n}\left(\frac ne\right)^n-n!=-\infty$$
The idea can be thought of in the much more simple example of comparing $f(x)=10^x+x$ and $g(x)=10^x$. Clearly $f(6)=1\text{ Meg}+6$ while $g(6)=1\text{ Meg}$ (Meg=million), but what is $6$ in comparison to a million? It is negligible. But our original definition of asymptote does not consider the $6$ as negligible in comparison to the million...

Big $O$ notation is also useful, for example,
$$\frac{x^5+7}{x^2-2}=x^3+O(x^2)$$
all this is really saying is that the rational function has a dominant $x^3$ growth/behavior, and the rest of the behavior is of $x^2$ order or less. This can quickly help determine the behavior of extremely complicated functions by extracting the dominant behavior of a function for either limits or approximations, etc.
Simply Beautiful Art

That takes care of the second example. Much appreciated. Is a straight line its own asymptote? if $f(x)-g(x)\to0$ as $x\to a$, is great, but if it's always zero? – JTP - Apologise to Monica

@JoeTaxpayer Hm, I'm not really sure. I mean, the whole idea of asymptotes is that $f(x)\ne g(x)$, but one can treat it as $f(x)\approx g(x)$ for approximations and use of things like squeeze theorem. So I guess it really depends on the context of whether or not $f(x)=g(x)$ is allowed. I mean, it doesn't "approach" $0$, but it is most certainly true that the limit is "equal" to $0$... – Simply Beautiful Art

Right. In math we expect precision, and for a definition to be complete and unambiguous. I'm OK with convention changing. When I was in school, a 7-sided figure was a septagon. At 51, I found it's called a heptagon now. Just an example. – JTP - Apologise to Monica

@JoeTaxpayer: Every function is asymptotic to itself, yes. (In other words, "asymptotic" is a strictly weaker condition than "equal".) A definition of "asymptotic" that involves exceptions for functions that intersect or are eventually equal to their asymptotes would be completely at odds with standard mathematical usage. – Daniel Hast

One thing to keep in mind is how the analytic notion in your edit differs from the geometrical notion of slant asymptotes, which is used in curve sketching. For example, $f(x)=\frac{x^2+1}{x-1}\sim x$ but the geometrical slant asymptote of $y=f(x)$ is $y=x+1$. Also all linear functions of the same slope are asymptotic to one another, but they have differing slant asymptotes, since a line is its own linear asymptote.
Thinking about the etymology in this way might tempt one to think that the mathematical definition of an asymptote perhaps should include some restriction to rule out the notion of a line being asymptotic to itself. But it should not and does not. What we have here is just a peculiar and interesting drift over time from the etymological meaning of a word to a meaning and usage that is quite different.

I think a more likely cause is the use of the word "approaches" (as in the OP's highlighted definition) to commonly indicate "limit". We might possibly consider a constant function to have a "degenerate limit", which has the same confusion as other types of degeneracy (triangles, conics, etc.). As mathematicians we like a definition to encompass many things (because proofs are thus more concise), but in daily life people want to distinguish those with separate mental objects. – Daniel R. Collins

@Daniel Collins: A reason why the etymology explanation might be equally likely is found in the history of the parallel postulate beginning with Proclus and leading to the Playfair version of the parallel postulate. In this realm, asymptotic lines are lines that do not meet. In what would eventually emerge as the hyperbolic plane, asymptotic lines are divided into limiting parallels and ultra parallels. What some of these authors termed as an "asymptotic line" was termed by Heath as a "non-secant line."

Historically true, but surely that's not a meme in most of our students' minds.

I don't think that a horizontal line being asymptotic to itself is a "trick question we just never really run into" (as you suggest in the comments). Probability theory gives a natural context in which this sort of thing occurs (albeit in a way which is more than just a horizontal line). A standard result is that a continuous function $F$ defined on $(-\infty, \infty)$ is a cumulative distribution function of a continuous random variable if and only if it is monotone non-decreasing and is asymptotic to $0$ as it approaches $-\infty$ and $1$ as it approaches $\infty$. For some distributions (e.g. a normal distribution) these are asymptotes as you typically think of them -- you approach $0$ or $1$ but never get there for any finite value. On the other hand, if the distribution is for a random variable with bounded support then the asymptotes $0$ and $1$ will be achieved long before $\pm \infty$. For example, the graph of the cdf of the standard uniform variable on $[0,1]$ looks like

Most of the graph consists of horizontal lines which are equal to the asymptotic values. I don't see any reason to consider this a counterexample to the claim that cumulative distribution functions are always asymptotic to $0$ and $1$.
John Coleman
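As a side note: the ratio-versus-difference distinction drawn in the asymptotic-analysis answer above (Stirling's approximation) is easy to check numerically. The short Python sketch below is an illustration added here, not part of the thread; it works on a log scale so large factorials do not overflow, and it shows the ratio tending to $1$ while the absolute difference keeps growing.

    import math

    # Compare Stirling's approximation s(n) = sqrt(2*pi*n) * (n/e)**n with n!,
    # working with logarithms so that large n do not overflow.
    for n in [10, 100, 1000]:
        log_stirling = 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)
        log_factorial = math.lgamma(n + 1)                 # log(n!)
        ratio = math.exp(log_stirling - log_factorial)     # s(n)/n!, tends to 1
        # log of the positive difference n! - s(n); it keeps growing with n
        log_diff = log_factorial + math.log1p(-ratio)
        print(n, round(ratio, 6), round(log_diff, 1))

Running this, the second column approaches $1$ while the third (the log of the difference) increases without bound, which is exactly the sense in which $\sqrt{2\pi n}(n/e)^n\sim n!$ holds while the difference diverges.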
CHAPTER III Applications of the Derivative (Draft - 2014 work in progress) © 2014 M. Flashman

III.B. First Derivative Analysis Draft Version 4/19/14

In Chapter I.G we looked informally at some of the possible analysis based on the first and second derivatives. In that work we relied on the sense that this analysis made in the interpretations and the visualizations of some simple examples. In the next two sections we will review that analysis with more examples, providing further justification for the bases of this type of analysis.

Review of an informal approach: Traveling along a road on the way from San Francisco to Los Angeles we usually watch our trip mileage change and use this to estimate how much distance remains before we reach our destination. The mileage numbers get larger as time passes, and the distance to Los Angeles is diminishing. If we were to treat the distance traveled as a function of time, it should make sense to claim that our speed is the derivative of this function, that it is positive, and that this is connected directly to the fact that the distance traveled is increasing. The distance remaining till Los Angeles can also be treated as a function, and its derivative will be the same in magnitude as our speed, but opposite in sign, as this distance is diminishing as we proceed to our destination. As we saw in Chapter I.G, it appears informally from the interpretation that a positive derivative indicates an increasing function while a negative derivative is evidence for a function that has decreasing values.

Differential Estimates: If you use the differential to estimate the values of a function $f$ close to a point $a$ where the derivative $f '(a)$ is positive and $x$ is larger than $a$, it should make sense that $f(x)$ will be larger than $f(a)$. Likewise, if the derivative is negative then the differential estimate suggests the value of $f(x)$ will be less than $f(a)$. These very simple and sensible situations point again to the importance of knowing the sign of the derivative to determine whether the values of the function are increasing or decreasing close to a point in the domain.

Of course with the ever increasing graphing capabilities of computers and hand-held calculators, the role of the calculus in visualizing the graph of a function has changed. Before the technology of graphing was available, if variables satisfied some function relationship then information about the derivative would be useful to draw a quick sketch of the graph to visualize the relationship. Now the graphing technology supplies this visualization without much effort or thought in many situations. The calculus today provides the background that explains in some sense why we see what we see while the technology supplies the graph. It also can alert us to features of the graph that the technology may have missed. Because of the problems of scaling and the basic nature of graphing technology, which only samples the values of the function to create its pictures, technology can provide only the illusion of a complete picture.

In this section we will apply the ability to find and analyze the derivative of a function to gain information that explains the behavior of that function. From this we will be able to gain further insight into the extremes of the function. We begin the work of the section with some definitions, then we'll state and apply the major results, and finally we'll justify the major results using an important theorem in the theoretical structure of the calculus called the Mean Value Theorem.
III.B.1. Increasing and Decreasing Functions.

The description of a function as increasing or decreasing should be familiar to you. We include the more formal definition at this stage for some review.

Definition. Suppose $f$ is defined for all points in an interval $I$ and $a$ and $b$ are any two points in $I$.

We say that $f$ is (strictly) increasing on the interval $I$ if $a < b$ implies $f(a) < f(b)$ for all possible choices of $a$ and $b$ in the interval.

We say that $f$ is non-decreasing on the interval $I$ if $a < b$ implies $f(a) \le f(b)$ for all possible choices of $a$ and $b$ in the interval.

We say that $f$ is (strictly) decreasing on the interval $I$ if $a < b$ implies $f(a) > f(b)$ for all possible choices of $a$ and $b$ in the interval.

We say that $f$ is non-increasing on the interval $I$ if $a < b$ implies $f(a) \ge f(b)$ for all possible choices of $a$ and $b$ in the interval.

Interpretations:

1. In the motion interpretation an increasing function is always moving in an upward direction on the mapping figure. Arrows drawn between the source and target on an interval where the function is increasing will never cross each other in the region between the source and the target. See Figure 1. Non-decreasing functions may remain at the same value on the target for some time interval, but once they pass above a value in the target they never return to it in an interval where they are non-decreasing.

2. In the graphical interpretation an increasing function on an interval has a graph which has points higher on the graph as the points move from left to right. Once the graph lies above or on any horizontal line, it will not be found below it or even at the same level to the right of that point while over the interval. See Figure 2. The graphs of non-decreasing functions may appear in some sections like horizontal lines, but they may not appear below the level of a horizontal line at a point to the right of any point at which that level is reached.

3. The first definitions apply precisely to the type of function that arises as a (cumulative) probability distribution function $F$ from a random variable $X$. Recall that such a function is defined at $A$ by finding the probability that $X \le A$. If $A < B$, certainly $F(A) \le F(B)$ since the likelihood of $X$ being less than or equal to $B$ is certainly no less than that of $X$ being less than or equal to $A$.

4. An economic interpretation of an increasing function might consider how costs increase with increased levels of production. On the other hand the annual salary of many employees can be seen as a non-decreasing function of time.

5. Similar interpretations are possible for the concepts of decreasing and non-increasing, but are left for the reader to make. See Figures 3 and 4.

The following two key results relate the derivative to increasing/non-decreasing behavior of a function in a way which should seem familiar and make sense at this stage:

Theorem III.B.1. (P) If $f$ is a differentiable function on an interval $I$ with $f '(x) > 0$ for all $x$ in the interval, then $f$ is increasing on the interval. If $f '(x) \ge 0$ for all $x$ in the interval, then $f$ is non-decreasing on the interval.

Theorem III.B.2. (N) If $f$ is a differentiable function on an interval $I$ with $f '(x) < 0$ for all $x$ in the interval, then $f$ is decreasing on the interval.
If $f '(x)\le 0$ for all $x$ in the interval, then $f$ is non-increasing on the interval.

____________________________________________________________________

Here is an example of the application of these two theorems to analyze a function's behavior. Notice that the analysis uses the continuity of the derivative function with the intermediate value theorem to determine the intervals where the function is increasing and decreasing.

Example III.B.1. Find any intervals of the real numbers for which the function $f(x) = x^4-8x^2$ is increasing. For which intervals is the function decreasing?

Solution: By the previous theorems we need to find those intervals where $f '(x) > 0$ and those for which $f '(x) < 0$. We first find that $f '(x) = 4 x^3 - 16 x$ and determine the critical points, i.e., where $f '(x) = 0$. These are $x = -2, 0,$ and $2$. Since $f '(x)$ is a continuous function, an application of the intermediate value theorem (Chapter I.I.B.) shows that the sign of $f '(x)$ can change only at one of the critical points. Testing the value of $f '$ at non-critical points will determine whether the derivative is positive or negative between the critical points. With some simple calculations you can see easily that $f '(10) > 0, f '(-10) < 0, f '(-1) > 0,$ and $f '(1) < 0$. Thus the derivative is positive for the intervals $(-2,0)$ and $(2,\infty)$ and is negative for the intervals $(-\infty,-2)$ and $(0,2)$. We visualize this situation in the following sign chart figure for $f'$.

$f '(x)$:  - - - - -  0  + + + + + +  0  - - - - - -  0  + + + + +
$x$:  -2  -1  0  1  2  3  4
Figure 5: Sign Chart for $f'(x)$

Based on this analysis of the derivative we can say now that the function $f$ is increasing on the intervals $(-2,0)$ and $(2,\infty)$ and is decreasing for the intervals $(-\infty,-2)$ and $(0,2)$. Note as well at this stage the values of the function at the critical points are $f (-2) = -16$, $f (0) = 0$, and $f (2) = -16$. We can combine this information with the previous figure to arrive at an enhanced sign chart figure showing the full analysis of the function using the first derivative. [Increasing is indicated by $\nearrow$ while decreasing is indicated by $\searrow$.]

$f$ is:  $\searrow$  $-16$  $\nearrow$  $0$  $\searrow$  $-16$  $\nearrow$
Figure 6: Enhanced Sign Chart for $f'(x)$

The results of this analysis are illustrated in the graph of $f$ as shown in the GeoGebra graph and mapping diagram and are consistent with the values of the function shown in the accompanying table in Figure 7.

Example III.B.2. Let $f (x) = x \ln(x)$ with $x>0$. Find any intervals for which this function is increasing.

Solution: Our analysis begins by finding $f '(x) = \ln(x) + 1$. Again the crucial piece of work is to find the critical points. Solving $0 = \ln(x) + 1$ we see that $f '(x) = 0$ only when $\ln(x)= -1$, i.e., when $x=\frac 1e$. Using intermediate value theorem analysis for the derivative again allows us to see what the sign of the derivative is on the intervals determined by this critical point. We have $f '(.1) = \ln(.1)+1 < 0$ and $f '(1) = \ln(1) + 1 = 1$, so the derivative is positive only on the interval $(\frac 1e,\infty )$ as Figure 8 illustrates. Applying Theorem P we see that $f$ is increasing on the interval $(\frac 1e, \infty)$. See Figure 8 and Figure 9.

$f '(x)$:  - - - - -  0  + + + + + + + + + + + + + + +
$x$:  0  $\frac 1e$  .5  1  2
$f$ is:  $\searrow$  $-\frac 1e$  $\nearrow$

Example III.B.3. Let $f(x) = \sin(x) + \cos(x)$.
Find any intervals between $0$ and $2\pi$ for which this function is increasing.

Solution: Our analysis begins by finding the derivative, $f '(x) = \cos(x) - \sin(x)$. Again the crucial piece of work is to find the critical points. Solving $0 = \cos(x) - \sin(x)$ we see that $f '(x) = 0$ only when $\sin(x) = \cos(x)$, i.e., when $x = \pi/4$ and $5\pi/4$. Using intermediate value theorem analysis for the derivative again allows us to see what the sign of the derivative is on the intervals determined by these critical points. We can see that $f '(0)=1, f '(\pi) = -1$, and $f '(2\pi) = 1$, so the derivative is positive only on the intervals $[0,\pi/4)$ and $(5\pi/4, 2\pi]$ as Figure *** illustrates. Applying Theorem P we see that $f$ is increasing on the intervals $[0,\pi/4)$ and $(5\pi/4, 2\pi]$. See Figures 10 and 11.

$f '(x)$:  + + + +  0  - - - - - - - -  0  + + + + + + +
$x$:  0  $\pi/4$  $5\pi/4$  $2\pi$
$f$ is:  $\nearrow$  $\sqrt 2$  $\searrow$  $-\sqrt 2$  $\nearrow$
Figure 10: Enhanced Sign Chart for $f'(x)$

Exercises III.B.1: Generic GeoGebra for First Derivative Analysis Exercises.
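For readers who prefer a computer algebra check to GeoGebra, the short Python/SymPy sketch below (an illustration added here, not part of Flashman's draft) reproduces the first-derivative analysis of Example III.B.1: it finds the critical points of $f(x)=x^4-8x^2$ and samples the sign of $f'$ once in each interval, mirroring the intermediate value theorem argument in the text.

    import sympy as sp

    x = sp.symbols('x')
    f = x**4 - 8*x**2                 # the function from Example III.B.1
    fp = sp.diff(f, x)                # f'(x) = 4x^3 - 16x

    critical_points = sorted(sp.solve(sp.Eq(fp, 0), x))   # expect [-2, 0, 2]
    print("critical points:", critical_points)
    print("values there:", [f.subs(x, c) for c in critical_points])

    # Sample f' once in each interval determined by the critical points.
    for t in [-10, -1, 1, 10]:
        sign = "+" if fp.subs(x, t) > 0 else "-"
        print(f"f'({t}) has sign {sign}")

The printed signs (-, +, -, +) agree with the sign chart in Figure 5, and the values at the critical points agree with $f(-2)=f(2)=-16$ and $f(0)=0$.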
Stummer, Wolfgang; Lao, Wei
Limits of Bayesian decision related quantities of binomial asset price models (English). Kybernetika, vol. 48 (2012), issue 4, pp. 750-767
MSC: 62C10, 91B25, 94A17 | MR 3013397
Keywords: Bayesian decisions; power divergences; Cox--Ross--Rubinstein binomial asset price models

Abstract: We study Bayesian decision making based on observations $\left(X_{n,t} : t\in\{0,\frac{T}{n},2\frac{T}{n},\ldots,n\frac{T}{n}\}\right)$ ($T>0, n\in \mathbb{N}$) of the discrete-time price dynamics of a financial asset, when the hypothesis is a special $n$-period binomial model and the alternative is a different $n$-period binomial model. As the observation gaps tend to zero (i.e. $n \rightarrow \infty$), we obtain the limits of the corresponding Bayes risk as well as of the related Hellinger integrals and power divergences. Furthermore, we also give an example for the "non-commutativity" between Bayesian statistical and optimal investment decisions.
M3 Combinatorics
Beads in a Line, With Constraints

Defining a Convenient Sequence, Recursion
Recursion Step-by-Step

How come for the example in Day 10, we represented "a sub n" or "a sub n-1" as the possibilities we START WITH when trying to find a recursive pattern. But for Day 11, instead we represent "B sub n" or "B sub n-1" as possibilities we END WITH. On a real test, how would we know when to use which strategy? And how were you guys able to determine which one would suit better for each problem? Thanks!

Alternate Solution Using Tiling

You're right that the "doesn't end in white" problem has a lot to do with "extending to 7 spaces"! If you go back to the video at around 1:43, you can see that Professor Loh starts talking about this. He says that using "B" and "WB" blocks can get you a lot of good strings of 6 beads... but there's the slight problem that it doesn't count the ones that end in W! And that's why we extend it to 7 spaces! Why does this fix the problem? Well, those 7 spaces will always end in black, right? So, the actual "things" we are considering are all those combinations of beads in the first 6 spaces! You can see that the 6th bead could definitely be white, so that solves the issue.

Variation on Beads in a Line

@rz923 Thank you for supporting this suggestion! This would be a really clever addition to the problem. I will note it down.

Building the Recursion

lol weird

@The-Blade-Dancer Sure! Let's take a quick look at the question statement again. We're trying to create a line of \(6\) beads, either black or white, and the only constraint is that we can have at most two consecutive white beads. Earlier, in Parts 1 through 3, Prof. Loh showed us a solution using recursion (building upon a smaller line of beads in order to get the next bigger string). In Part 4, he's going to resurrect a method that he introduced in Day 10, tiling. It's a remarkably different but totally valid way of doing this problem.
This recursion illustration hopefully looks familiar by now. We're still not caring anymore about whether the individual squares (beads, in reality) plastered together inside the rectangular tiles are black or white. We just think of them as \(1 \times 1\) tiles, \(1 \times 2\) tiles, and \(1 \times 3\) tiles. In general, the formula for the number of ways to make a line of length \(n\), for this problem, is $$ a_n = a_{n-1} + a_{n-2} + a_{n-3} $$ which, again, isn't Fibonacci either, but it's very similar! From there, you can calculate the ways to make a string of length \(n\) if \(n\) is very small, and then use the recursion to work your way up to finding \(a_6\) for a string of length \(6.\)
July 2016, 36(7): 4063-4075. doi: 10.3934/dcds.2016.36.4063

A formula of conditional entropy and some applications

Xiaomin Zhou 1
Wu Wen-Tsun Key Laboratory of Mathematics, USTC, Chinese Academy of Sciences, School of Mathematics, University of Science and Technology of China, Hefei, Anhui 230026, China

Received June 2015; Revised November 2015; Published March 2016

Abstract: In this paper we establish a formula of conditional entropy and give two examples of applications of the formula.

Keywords: Measure-theoretical entropies, conditional entropies, measure decomposition.

Mathematics Subject Classification: Primary: 37B05, 37B20, 37A1.

Citation: Xiaomin Zhou. A formula of conditional entropy and some applications. Discrete & Continuous Dynamical Systems - A, 2016, 36 (7): 4063-4075. doi: 10.3934/dcds.2016.36.4063
The mixed $H_2$ and $H_\infty$ control problem
A.A. Stoorvogel, H.L. Trentelman

In: S. Hosoe (Ed.), Robust Control (Proceedings of a workshop, Tokyo, Japan, June 23-24, 1991), Lecture Notes in Control and Information Sciences, Vol. 183. Berlin: Springer, 1992, pp. 202-209.
18.305 - Advanced Analytic Methods (Fall, 2019)
Problem Sets

Problem Set 1 (due Sep. 9, Mon)
Chapter 1, Prob 1a, 1b; Chapter 1, Prob 2b, 2d, 2e.

Problem Set 2 (due Sep 16, Mon)
Consider the Laplace equation which holds in a two dimensional strip: $(\dfrac{\partial^{2}}{\partial x^{2}}+\dfrac{\partial^{2}}{\partial y^{2}})u(x,y)=0,$ $-\infty$ < $x$ < $\infty,$ $0$ < $y$ < $a,$ The function $u(x,y)$ satisfies the boundary conditions $u(x,0)=f(x),$ $u(x,a)=0,$ $-\infty$ < $x$ < $\infty,$ ($f(x)$ is a given function) as well as $u(\pm\infty,y)=0.$ Find $u(x,y).$

Consider the equation $(i\dfrac{\partial}{\partial t}+\dfrac{\partial^{2}}{\partial x^{2}})\Psi(x,t)=\rho(x,t),$ where $\rho(x,t)$ is a given function and where $\Psi(x,t)$ vanishes at the infinity of space, i.e., $\Psi(\pm\infty,t)=0.$ $\Psi(x,t)$ is required to satisfy the initial condition $\Psi(x,0)=f(x).$ Find $\Psi(x,t).$

Problem Set 3 (due Sep 25, Wed)
The wavefunction of the electron $\Psi(x,t)$ satisfies the Schrodinger equation $(i\dfrac{\partial}{\partial t}+\dfrac{\partial^{2}}{\partial x^{2}})\Psi(x,t)=U(x,t)\Psi(x,t),$ where $U(x,t)$ is a given function (physically, it is the potential which acts on the electron). The wavefunction $\Psi(x,t)$ satisfies the same boundary conditions and initial condition as those given in prob 2. Use the result in prob 2 to convert the Schrodinger equation into an integral equation. Find the perturbation series for $\Psi(x,t).$

Consider the heat equation $\dfrac{\partial T(x,t)}{\partial t}-\dfrac{\partial^{2}T(x,t)}{\partial x^{2}}=\rho(x,t),$ $-\infty$ < $x$ < $\infty,$ $t$ > $0,$ where $T(x,t)$ is the temperature of a one-dimensional rod and $\rho(x,t)$ is a given source function. Let the initial temperature be $T(x,0)=f(x)$. Find the Green function $G(x-x^{\prime},t-t^{\prime})$ for the heat equation above in a closed form (not as an integral) and express $T(x,t)$ with it.

The Green function for the wave equation is given by the integral $G(x,t)=\int_{-\infty}^{\infty}\dfrac{dk}{(2\pi)}e^{ikx}\cos kt.$ Evaluate this integral. What does this result mean physically?

Problem Set 4 (due Sep 30, Mon)
Let $y$ satisfy the equation $y''+x^{2}y=0$ and the initial conditions $y(x_{0})=1,$ $y^{\prime}(x_{0})=0$ where $x_{0}$ > $0.$ Find the WKB approximation of $y(x)$ for $x>x_{0}$. For what values of $x$ do you expect it to be a good approximation? Use the computer to obtain the numerical values of $y(x)$ as a function of $x$ for $x_{0}=1,5,10.$ Compute also the numerical values of the WKB approximation of $y(x)$. Compare the computer result with the approximate result.

Problem 4, Chapter 7.

Problem Set 5 (due Oct 16, Wed)
Evaluate the integral $F_{\epsilon}(x)\equiv\int_{-\infty}^{\infty}e^{ikx}e^{-\epsilon k^{2}} \dfrac{dk}{2\pi}.$ Show that in the limit $\epsilon\rightarrow0,$ $F_{\epsilon}(x)$ becomes the Dirac delta function.

Consider the Schrodinger equation $[\dfrac{d^{2}}{dx^{2}}+\lambda^{2}(E-g\left\vert x\right\vert )]\Psi(x)=0,$ $-\infty$ < $x$ < $\infty,$ $\lambda^{2}=m/(2\pi^{2}h^{2}),$ $h=6.626\times10^{-27}$ erg-sec, and $g$ is a constant. Use the WKB method to determine, approximately, the energy eigenvalues.

Find the approximate energy eigenvalues for the radial Schrodinger equation $[\dfrac{d^{2}}{dr^{2}}+\lambda^{2}(E+\dfrac{e^{2}}{r})]\Psi(r)=0$, $\infty>r>0.$ In the above, $e$ is the electric charge. Also, $-\dfrac{e^{2}}{r}$ is the attractive Coulomb potential that the positively charged hydrogen nucleus provides the negatively charged electron.
Note that the energy $E$ is negative and that the boundary condition at $r=0$ is $\Psi(0)=0$. The exact result is $E=-\dfrac{e^{4}\lambda^{2}}{4(n+1)^{2}},\ n=0,1,2,\cdots.$ Which of the eigenvalues you find are good approximations of the exact result?

Problem Set 6 (due Oct 21, Mon)
Evaluate the following integrals: $\int_{-\infty}^{\infty}\dfrac{dx}{(x^{2}+9)(x-3i)(x-5i)}.$ $\int_{0}^{2\pi}\dfrac{d\theta}{(2+\sin\theta)^{2}}.$

The Green function for the Schrodinger equation of one spatial dimension satisfies $(i\dfrac{\partial}{\partial t}+\dfrac{\partial^{2}}{\partial x^{2}})G(x-x^{\prime},t-t^{\prime})=\delta(x-x^{\prime})\delta(t-t^{\prime}),$ where $G(x-x^{\prime},t-t^{\prime})$ vanishes for $t$ < $t^{\prime}.$ Find $G(x-x^{\prime},t-t^{\prime})$ by expressing the Dirac delta functions with their Fourier integrals, i.e. $\delta(t-t^{\prime})=\int_{-\infty}^{\infty}e^{-ik_{0}(t-t^{\prime})} dk_{0}/(2\pi)$ etc. and carrying out the integration over $k_{0}$.

Find the leading term for the following integrals for $\lambda>>1$: $\int_{0}^{\pi/2}e^{-\lambda\cos t}dt,$ $\int_{-1}^{1}e^{\lambda t^{2}}dt,$ $\int_{-\infty}^{\infty}e^{\lambda(x^{2}-x^{4})}dx.$ Use the computer to evaluate the numerical values of these integrals and compare them with the numerical values of their leading terms as a function of $\lambda$.

Problem Set 8 (due Nov 4, Mon)
As we know, the Gamma function is defined by $\Gamma(1+\lambda)=\int_{0}^{\infty}e^{-t}t^{\lambda}dt.$ Find the asymptotic form of $\Gamma(1+\lambda)$ when $\lambda>>1.$ Hint: you will need to change the variable of integration (why?)

The Laplace transform of the function $f(x),$ $0$ < $x$ < $\infty$, is defined as $L(s)=\int_{0}^{\infty}dx$ $e^{-sx}f(x).$ Find the asymptotic form of $L(s)$ when $s>>1.$ Verify your result with the examples of $f(x)=\sin x$ and $f(x)=\cos x.$

Find the asymptotic form for $I(k)=\int_{-\infty}^{\infty}e^{-ikx}e^{-x^{4}}dx,$ $k>>1.$ (The integral above is the Fourier transform of $e^{-x^{4}}$.)

Problem Set 9 (due Nov 13, Wed)
Find the leading asymptotic form of the integral $I(\lambda)=\int_{-\infty}^{\infty}e^{i\lambda t}e^{-it^{3}/3}dt,$ $\lambda>>1.$

Determine the unshaded regions of $f(z)=e^{iz^{5}}$ in the infinity of which $f(z)$ vanishes.

Find both $y_{in}$ and $y_{out}$ for the equation $\epsilon y''-y^{\prime}+(1+x^{3})y=0$, $0$ < $x$ < $1$, $y(0)=1$, $y(1)=3$, $\epsilon$ << $1$. Where is the boundary layer and what is its width?

Problem Set 10 (due Nov 18, Mon)
Markov Chain Monte Carlo

The goal of Markov Chain Monte Carlo (MCMC) is to generate random samples from complicated high dimensional distributions about which we have incomplete information. For example, it might be that we don't know the normalizing constant of the distribution, as we saw in the code breaking example of the previous section.

Suppose the distribution from which we want to generate a sample is called $\pi$. We are going to assume that $\pi$ is a probability distribution on a finite set, and you should imagine the set to be large. MCMC relies on a few observations.

Let $X_0, X_1, \ldots$ be an irreducible aperiodic Markov Chain on a finite state space. Then the distribution of $X_n$ converges to a stationary distribution as $n$ gets large. If we can create a Markov Chain $\{X_n\}$ that has the desired distribution $\pi$ as its stationary distribution, then we can simulate draws from $\pi$ (or close enough to it) by running the chain for a long time and using the values $X_n$ for large $n$.

To create a transition matrix that results in $\pi$ as the stationary distribution, the easiest way is to try to ensure that the detailed balance equations are solved. The detailed balance equations are equivalent to

$$ \frac{\pi(j)}{\pi(i)} ~ = ~ \frac{P(i, j)}{P(j, i)}, ~~ i \ne j $$

The right hand side only involves the transition probabilities of the chain that we want to create. The left hand side only involves ratios of the terms in $\pi$, and therefore can be checked even if we don't know the constant that normalizes $\pi$.

Metropolis Algorithm

Exactly who proposed the first algorithm to create such a Markov Chain is the subject of some debate. A general version was proposed by Hastings. Here we will describe an earlier version attributed to Metropolis and co-authors in 1953.

The goal is to create a transition matrix $\mathbb{P}$ so that $\pi$ and $\mathbb{P}$ together solve the detailed balance equations. The algorithm starts with any symmetric, irreducible transition matrix $\mathbb{Q}$ on the state space. For example, if the state space is numerical you could start with, "Wherever the chain is, it picks one of the three closest values (including itself) with probability $1/3$ each." For a pair of states $i$ and $j$, the transition probability $Q(i, j)$ is called the proposal probability.

The algorithm then introduces additional randomization to create a new chain that is irreducible and aperiodic and has $\pi$ as its stationary distribution. Here are the rules that determine the transitions of the new chain. Suppose the chain is at $i$ at time $n$, that is, suppose $X_n = i$.

Pick a state $j$ according to the proposal probability $Q(i, j)$. This $j$ is the candidate state to which your chain might move.

Define the acceptance ratio

$$ r(i, j) = \frac{\pi(j)}{\pi(i)} $$

If $r(i, j) \ge 1$, set $X_{n+1} = j$.

If $r(i, j) < 1$, toss a coin that lands heads with chance $r(i, j)$. If the coin lands heads, set $X_{n+1} = j$. If the coin lands tails, set $X_{n+1} = i$.

Repeat all the steps, with $X_{n+1}$ as the starting value.

Thus the new chain either moves to the state picked according to $\mathbb{Q}$, or it stays where it is. We say that it accepts a move to a new state based on $\mathbb{Q}$ and $r$, and otherwise it doesn't move. The new chain is irreducible because the proposal chain is irreducible. It is aperiodic because it can stay in place. So it has a steady state distribution.
The algorithm says that this steady state distribution is the same as the distribution $\pi$ that was used to define the ratios $r(i, j)$.

How to Think About the Algorithm

Before we prove that the algorithm works, let's examine what it is doing in the context of decoders.

First notice that we are requiring $\mathbb{Q}$ to be symmetric as well as irreducible. The symmetry requirement makes sense as each detailed balance equation involves transitions $i \to j$ as well as $j \to i$.

Fix any starting decoder and call it $i$. Now you have to decide where the chain is going to move next, that is, what the next decoder is going to be. The algorithm starts this process off by picking a decoder $j$ according to $\mathbb{Q}$. We say that $\mathbb{Q}$ proposes a move to $j$.

To decide whether or not the chain should move to $j$, remember that the distribution $\pi$ contains the likelihoods of all the decoders. You want to end up with decoders that have high likelihood, so it is natural to compare $\pi(i)$ and $\pi(j)$. The algorithm does this by comparing the acceptance ratio $r(i, j) = \pi(j)/\pi(i)$ to 1.

If $r(i, j) \ge 1$, the likelihood of $j$ is at least as large as that of $i$, so you accept the proposal and move to $j$.

If $r(i, j) < 1$, the proposed decoder $j$ has less likelihood than the current $i$, so it is tempting to stay at $i$. But this risks the chain getting stuck at a local maximum. The algorithm provides a chance to avoid this, by tossing a biased coin. If the coin lands heads, the chain moves to $j$ even though $j$ has a lower likelihood than the current decoder $i$. The idea is that from this new position there might be paths to decoders that have the highest likelihoods of all.

The Algorithm Works

We will now show that the detailed balance equations are solved by the desired limit distribution $\pi$ and the transition matrix $\mathbb{P}$ of the chain created by the Metropolis algorithm. Take any two distinct states $i$ and $j$.

Case 1: $\pi(i) = \pi(j)$

Then $r(i, j) = 1$. By the algorithm, $P(i, j) = Q(i, j)$ and also $P(j, i) = Q(j, i) = Q(i, j)$ by the symmetry of $Q$. Therefore $P(i, j) = P(j, i)$ and the detailed balance equation $\pi(i)P(i, j) = \pi(j)P(j, i)$ is satisfied.

Case 2: $\pi(j) < \pi(i)$

Then $r(i, j) < 1$, so

$$ P(i, j) ~=~ Q(i, j)r(i, j) ~=~ Q(j, i)\frac{\pi(j)}{\pi(i)} ~~~~ \text{ by the symmetry of } Q \text{ and definition of }r $$

Now $r(j, i) > 1$, so the algorithm says $P(j, i) = Q(j, i)$. Therefore

$$ P(i, j) ~ = ~ P(j, i)\frac{\pi(j)}{\pi(i)} $$

which is the same as

$$ \pi(i)P(i, j) ~ = ~ \pi(j)P(j, i) $$

Case 3: $\pi(j) > \pi(i)$

Reverse the roles of $i$ and $j$ in Case 2.

That's it! A simple and brilliant idea that provides a solution to a difficult problem. In lab, you will see it in action when you implement the algorithm to decode text.
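The argument above can also be checked empirically. What follows is a minimal sketch in Python, not the lab implementation: the state space, the unnormalized weights standing in for $\pi$, and the neighbour proposal are all invented for illustration. Only the ratios $\pi(j)/\pi(i)$ enter the update, which is the point of the method.

```python
import numpy as np

# Unnormalized weights standing in for pi on the state space {0, 1, ..., 9}.
# The normalizing constant is never needed: only ratios weights[j] / weights[i] are used.
weights = np.array([1, 2, 4, 8, 4, 2, 1, 2, 4, 2], dtype=float)
n_states = len(weights)
rng = np.random.default_rng(0)

def propose(i):
    """Symmetric proposal Q: step to i-1, i, or i+1 (wrapping around), each with chance 1/3."""
    return (i + rng.integers(-1, 2)) % n_states

def metropolis(n_steps, start=0):
    """Run the Metropolis chain and return the empirical distribution of visited states."""
    x = start
    visits = np.zeros(n_states)
    for _ in range(n_steps):
        j = propose(x)
        r = weights[j] / weights[x]        # acceptance ratio r(i, j) = pi(j) / pi(i)
        if r >= 1 or rng.random() < r:     # accept with probability min(1, r)
            x = j
        visits[x] += 1
    return visits / n_steps

empirical = metropolis(200_000)
target = weights / weights.sum()
print(np.round(target, 3))
print(np.round(empirical, 3))   # should be close to the target after many steps
```

After enough steps the two printed rows should agree to within simulation noise, even though the chain never used the normalizing constant of the weights.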
Patterns of infections and antimicrobial drugs' prescribing among pregnant women in Saudi Arabia: a cross sectional study Mohamed A. Baraka ORCID: orcid.org/0000-0002-7645-24201,2 na1, Lina Hussain AlLehaibi3 na1, Hind Nasser AlSuwaidan4, Duaa Alsulaiman5, Md. Ashraful Islam6, Badriyah Shadid Alotaibi7, Amany Alboghdadly8, Ali H. Homoud9, Fuad H. Al-Ghamdi10, Mastour S. Al Ghamdi11 & Zaheer-Ud-Din Babar12 Journal of Pharmaceutical Policy and Practice volume 14, Article number: 9 (2021) Cite this article 53 Accesses Antimicrobial agents are among the most commonly prescribed drugs in pregnancy due to the increased susceptibility to infections during pregnancy. Antimicrobials can contribute to different maternal complications. Therefore, it is important to study their patterns in prescription and utilization. The data regarding this issue is scarce in Saudi Arabia. Therefore, the aim of this study is to generate data on the antimicrobial agents that are most commonly prescribed during pregnancy as well as their indications and safety. This is a retrospective study focusing on pregnant women with a known antimicrobial use at Johns Hopkins Aramco Healthcare (JHAH). The sample included 344 pregnant women with a total of 688 antimicrobial agents prescribed. Data was collected on the proportion of pregnant women who received antimicrobial agents and on the drug safety during pregnancy using the risk categorization system of the U.S. Food and Drug Administration (FDA). The results showed that urinary tract infections (UTIs) were the most reported (59%) infectious diseases. Around 48% of pregnant women received antimicrobial medications at some point during pregnancy. The top two antimicrobial agents based on prescription frequency were B-lactams (44.6%) and azole anti-fungals (30%). The prescribed drugs in the study were found to be from classes B, C and D under the FDA risk classification system. The study revealed a high proportion of antimicrobials prescribed during pregnancy that might pose risks to mothers and their fetuses. Future multicenter studies are warranted to evaluate the rational prescription of antimicrobial medications during pregnancy. Pregnancy is a critical period for women. Exposure to medications during this period might lead to adverse events that affect not only the pregnant woman but possibly the fetus [1]. Antimicrobials are commonly used among pregnant women because they are prone to different types of infections due to the lower immunity during that period [2]. On the other hand, antimicrobials remain important in reducing maternal mortality related to infections [3]. According to the published literature, the most commonly reported infections among pregnant women are respiratory tract infections, urinary tract infections and sexually transmitted infections [4, 5]. In fact, data on the use of antimicrobials in pregnancy for different indications needs to be studied to improve evidence-based care for this special population [6]. In fact, anti-infective drugs aren't easy to deal with, since its overuse and misuse could lead to antimicrobial resistance. In 2019, WHO considered antimicrobial resistance as one of the top ten threats to global health. Therefore, all physicians and patients must be cautious while prescribing and using these medications [7, 8]. According to the Centers for Disease Control and Prevention (CDC), around 70% of women reported taking a minimum of one prescribed medication throughout their pregnancy. 
Amoxicillin was one of the most frequently used prescription drugs [9]. Another Omani study revealed that only 63% of prescribed antimicrobial agents were selected appropriately, and 79% of infections were treated empirically, while only 21% of patients were treated based on an obtained microorganism culture. It was also reported that 12% of empirical antimicrobials have been changed to match culture results. The most frequently prescribed antimicrobials were Piperacillin/tazobactam followed by Amoxicillin/clavulanic acid and clarithromycin [10]. Another retrospective study was conducted in an antenatal clinic in rural Ghana. The study reported that around two-thirds of pregnant women attending the clinic received antibiotic prescriptions. The most commonly prescribed antibiotics were categorized under classes B, C and D in the FDA risk classification system. The results of the study showed that 3.5% of antibacterial prescriptions were filled without proper diagnosis or justification [11]. Data on the use of medication during pregnancy in a Nepali tertiary hospital in 2016 showed an increase in the use of all drugs in the third trimester, and 12.8% of the drugs used were antimicrobials. The four most prescribed antimicrobials included Cefixime, Amoxicillin, Metronidazole and Ceftriaxone. The majority of prescribed medications were from FDA pregnancy category B [12]. In 2012, a Canadian study reported a decline in the use of broad-spectrum antibiotics over the study period from 1998 to 2002. On the other hand, the use of other classes was escalating, including macrolides, quinolones, tetracyclines, antimycotics and antimicrobials that treat urinary tract infections. Use of Penicillins and Sulfonamides was also decreasing, while Cephalosporins, anti-protozoals and antimycobacterials showed no trend. Researchers concluded that compliance with evidence-based guidelines by Canadian clinicians could be an explanation for such trends [13]. While the studies mentioned above provide valuable information on the use of antimicrobials during pregnancy, there is unfortunately scarce information on this important topic in Saudi Arabia. To the best of our knowledge, this is the first study in Saudi Arabia that comprehensively considers patterns in prescription or use of antimicrobial drugs among pregnant women. We aim in our study to identify the most common types of infections among pregnant women in a Saudi hospital, to measure the amount of antimicrobials prescribed for pregnant women, and to assess the safety of prescribed antimicrobials during pregnancy according to FDA risk categorization. Study design and site This is a retrospective observational study that was conducted to collect data from pregnant women with known antimicrobial utilization during pregnancy at Johns Hopkins Aramco Healthcare (JHAH), which is located in the city of Dhahran in the Eastern province of Saudi Arabia. The data was collected from patients' electronic medical records (EMRs). Sampling technique and sample selection procedure Medical records for pregnant women who had delivered their babies either through vaginal delivery or Caesarean section (C-section) as confirmed through a positive Human Chorionic Gonadotropin test (hCG) at JHAH were identified. The total number of pregnant women from December 2017 to February 2019 with a positive hCG was 5124, and all of them had been screened. A total of 2440 of these patients had received antimicrobial prescriptions, and 1760 of them had met the inclusion criteria. 
After collecting the medical records, we used a systematic random sampling method by selecting every fifth file. We identified 344 valid files—which was a few more than the minimum number required as per the sample size calculation—with a total number of 688 antimicrobial agents prescribed. Calculation of sample size To determine the size of the sample for this study we used the power study method. This is a very useful and frequently used tool in health research for proving the adequacy of the sample size for a study. The proportion of pregnant women using antimicrobial drugs in Saudi Arabia is 3% [14]. Because the size of the population was unknown, we used the following formula to obtain an appropriate sample size: $$n = \frac{{\left( {Z_{1 - \beta } + Z_{{{\alpha \mathord{\left/ {\vphantom {\alpha 2}} \right. \kern-\nulldelimiterspace} 2}}} } \right)^{2} \left[ {p(1 - p)} \right]}}{{d^{2} }}$$ where n = required sample size; Z1−β = Z value at power 1 − β (minimum power 80%, value = 0.84); Zα/2 = standard normal value at a confidence level of 100 (1 − α) % (ideal value is 1.96 at 95% CI); p = referred proportion for the study 0.03 (3%); d = margin of error 0.05 (ideal value is 0.05 for estimated proportion in the range of 20–80%, and around 0.03 for less common or very common events [< 20% or > 80%]) [15]. Considering an 80% power of test, a 95% confidence interval, 3% marginal error, and 3% proportion rate, the formula gave us a sample size of 253.49. In practice, may need to enroll more participants to account for potential missing/non-response errors [16]. The formula for adjusting the sample size is $${n}_{1}=n/(1-d)$$ n = required sample size as per formula, n1 = adjusted sample size, d = the dropout rate. Considering a 20% missing/non-response error rate, the adjusted sample size was 316.87, which is the minimum number. Patients who had normal pregnancies, attended JHAH, received antimicrobial medications from December 2017 to February 2019, and delivered their babies at JHAH have been included. The age of the patients ranged between 15 and 50 years as some women may have married earlier than 18 years of age. Exclusion criteria Patients who underwent abortions, experienced ectopic pregnancies, received antimicrobials for normal delivery prophylaxis, and experienced post-cesarean delivery prophylaxis have been excluded (as shown in Fig. 1) Sample selection procedures Demographic data, clinical data, anti-infective medications and comorbidities were collected for pregnant women who met the study inclusion criteria. If a patient was prescribed antimicrobial agents at any point in the pregnancy, all antimicrobial courses during the pregnancy were considered. Definition of the study variables The age of the patient at the time the antimicrobial was received (gestational age) was categorized as 15–24, 25–34, 35–44, and equal to or older than 45 years. The pregnancy trimester was calculated after the patient's last menstrual period (LMP). Trimesters were divided as follows: first trimester (1–12 weeks), second trimester (13–27 weeks) and third trimester (28–40 weeks). The FDA has established risk categories designated with five letters to indicate the safety of drug use during pregnancy as A, B, C, D or X. Drugs under categories A or B are considered safe for use during pregnancy. Drugs in category C could be given if benefit outweighs risk. Drugs in categories D and X are considered harmful, especially those in category X, if any, which are absolutely contraindicated. 
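For readers who want to retrace the arithmetic of the sample-size calculation given above, the following short sketch reproduces it with the inputs stated in the text (80% power so $Z_{1-\beta}=0.84$, 95% confidence so $Z_{\alpha/2}=1.96$, $p=0.03$, margin of error $d=0.03$, 20% dropout); it is only a check of the reported numbers, not a general sample-size tool.

```python
# Check of the sample-size arithmetic reported in the Methods (values from the text).
z_power = 0.84     # Z at 80% power (Z_{1-beta})
z_conf = 1.96      # Z at 95% confidence (Z_{alpha/2})
p = 0.03           # anticipated proportion of antimicrobial use in pregnancy
d = 0.03           # margin of error used in the calculation
dropout = 0.20     # anticipated missing/non-response rate

n = (z_power + z_conf) ** 2 * p * (1 - p) / d ** 2
n_adjusted = n / (1 - dropout)

print(round(n, 2), round(n_adjusted, 2))   # 253.49 and 316.87, as reported
```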
Allergies toward drugs or food were also documented. Drug resistance is defined as a reduction in effectiveness of antimicrobials that happens when microorganisms change after exposure to antimicrobial drugs. The resistance of a drug to a pathogen was reported by physicians in patients' EMRs. The mode of delivery, either vaginal or by C-section, was also recorded. Indications of the prescribed antimicrobials, either for treatment of a known infection or as prophylaxis for pregnant women who were at high risk of infection, were also documented. Complications that may occur during the period of pregnancy were divided into complication for preterm pregnancies (when a baby is born before 37 weeks of pregnancy) or complication for abortions. In addition, data on fetal complications—whether the fetus was exposed to any complications or abnormalities during the pregnancy—was collected. The maternal co-morbidities variable was defined as the presence of one or more additional diseases or disorders occurring with an infection. The gravida variable describes the total number of confirmed pregnancies that a patient has had, regardless of the outcome. Living children refers to children who lived beyond neonatal period. Duration of medication, expressed in days, indicates duration of treatment with antimicrobials. Prescribed medications include antimicrobial agents used to kill or slow the growth of microbes, including antibacterial, antiviral, antifungal, and anti-parasitic drugs. Pattern of infection designates the type of infection a pregnant woman has experienced, including bacterial, fungal, viral, and parasitic diseases. Data was analyzed using the Statistical Package for Social Sciences (IBM SPSS software version 22). The results are represented using mean and standard deviation (SD) for continuous variables and frequencies and percentages for categorical variables. The study received ethical approval from the university Institutional Review Board (IRB) under the following number: (IRB-UGS-2018-5-048). It was also approved by the JHAH IRB under (IRB number 18-07). Our study revealed that 48% of pregnant women received antimicrobial prescriptions (Fig. 2). The mean age of the respondents was 19.19 (SD 6.33) years, ranging from 17 to 48 years. More than half (55.5%) of the respondents were 25–34 years old. However, the mean gestational age was 23.49 weeks (SD 10.35) as shown in Tables 1a and 2. Percentage of pregnant women who received antimicrobial prescriptions Table 1 a Baseline characteristics of study participants/pregnant women (n = 344); b Baseline characteristics of study variables related to prescribed antimicrobials (n = 688) Table 2 Mean, standard deviation (SD), 95% confidence interval (CI) of mean and range for some study variables The majority of patients did not experience any bacterial resistance (84.0%). There were 20 missing data for the drug resistance variable. According to the susceptibility tests that were obtained with cultures, microorganisms were most likely resistant to B-lactam antibiotics (38.5%), followed by multi-drug resistance (MDR) (32.7%) as shown in Table 1a. Around three quarters of participating pregnant women had completed vaginal delivery (71.2%). Gravida mean ± SD was 3.12 ± 2.37 with a range of 1 to 13 pregnancies as shown in Tables 1a and 2. Around half of the mothers were following up after their pregnancies without any comorbidities (56.9%). Complications experienced by pregnant women were mainly abortions (29.8%). 
Most of the babies had no complications after their mothers received antimicrobial drugs (86.3%). Mean ± SD for the number of living children was 2.36 ± 1.78 as shown in Tables 1a and 2. Around half of the antimicrobial prescriptions were issued during the third trimester (47.4%). The lowest percentage of antimicrobials prescribed was during the first trimester (21.6%). Three antimicrobial agents did not specify the trimester, as shown in Table 1b. FDA categorization for safety of drug use during pregnancy fell into classes B, C and D. However, antimicrobials in classes A and X were not prescribed. Most of the drugs fell into class B (66.4%), as shown in Table 1b. The vast majority of patients had no known allergies (NKA) (92.0%) to the prescribed drugs. Beta lactams antibiotics accounted for (21.8%) of active drug allergies among pregnant women, followed by dopaminergic drugs and quinolones with (20.0%) and (12.7%) respectively, as shown in Table 1b. Most of the pregnant women were prescribed antimicrobials to treat infections (99.0%), and only (1.0%) were prescribed as prophylaxis. Median duration for taking an antimicrobial was 7.0 (2.0) with a range of 1–120 days, as shown in Tables 1b and 2. Out of 5124 pregnant women who attended the hospital between December 2017 and February 2019, 2440 were exposed to antimicrobials for different indications, and 2684 women were not exposed. Pattern of infections affecting pregnant women The majority of patients in this study suffered from bacterial infections (64.0%). UTI from bacterial infection had the highest proportion (59.3%) followed by fungal infections (34.5%). There were 101 missing data as shown in Table 3. Table 3 Patterns of infection among pregnant women from the Eastern region, Saudi Arabia (n = 587) Patterns of antimicrobial prescriptions among pregnant women including systemic and/or topical routes The most frequent antimicrobial prescriptions among pregnant women were B-lactams (44.6%) followed by prescriptions of azoles (30.2%). The other antimicrobial prescriptions were not as frequent; these included macrolides (7.7%), quinolones (6.7%) and other antibiotics (6.5%), as shown in Table 4. Table 4 Pattern of prescribed antimicrobials among pregnant women from the Eastern region of Saudi Arabia (n = 688) More than half of the patients who have been prescribed antimicrobials, or 55.8%, were treated empirically. Microbiological cultures were requested for the remaining 44.2%, of which 12.5% revealed no growth. Escherichia coli bacteria were identified in 8.7% of performed cultures followed by mixed flora with 8.4%. Other pathogens were Streptococcus agalactiae with 3.5%, Klebsiella pneumoniae with 2.9%, Candida albicans with 2.7%, Extended-Spectrum Beta-Lactamase (ESBL)-producing E. coli with 1.1%, and Staphylococcus aureus with 0.9%. There were 32 missing data in this part. The details of these microbiological cultures are described in Table 5. Table 5 Descriptive statistics for microbiological culture among pregnant mothers from the Eastern region of Saudi Arabia (n = 656) Indications for antimicrobial use The most prevalent infectious diseases among pregnant women in JHAH were bacterial infections (predominantly UTI and RTIs) and fungal infections. Our findings conformed with the Canadian study as RTIs and UTIs were the most prevalent bacterial infections in both studies. However, RTIs ranked first in the Canadian study, whereas UTIs had the highest proportion in our study. 
This may be due to the differences in weather between the two countries. The weather in Canada may contribute to a higher prevalence of RTIs because of the long, harsh winters as mentioned in their study [17]. In Saudi Arabia, meanwhile, winter is much shorter and warmer. The general prevalence of UTIs in pregnant women may be due to the physiological changes that arise in the gestational period, when the uterus grows and blocks the drainage of urine from the bladder, thus creating a susceptible medium for infections [18]. The second most common microbial infection at JHAH was fungal. This can be explained by the physiological decline in immunity in addition to the hormonal fluctuations during pregnancy [2, 19]. Percentage of antimicrobial exposure Forty-eight percent of pregnant women in our study received antimicrobial medication during their pregnancies. This was higher than the prevalence of anti-infective use reported in a study conducted in 2012 in Quebec, Canada [13]. However, it was less than the documented antibiotic use in a recent study conducted in an antenatal clinic in rural Ghana [11]. Classes of prescribed antimicrobials Top two antimicrobial agents based on prescription frequency were B-lactams and azole antifungals. This conformed with the findings of the study conducted in Quebec, Canada, where Penicillins was the most prescribed antimicrobial class, while the next top three antimicrobials were macrolides, quinolones and antifungal agents respectively [13]. This is also in agreement with the findings of the Ghanaian study, where beta lactam antibiotics—i.e., Cephalosporins and Penicillins—also represented the majority of antibiotics used [11], and as reported in another study conducted in a hospital in Western Nepal [12]. The superiority of beta lactams at JHAH may be linked to the high number of UTI infections. Beta lactams are recommended for use during pregnancy to treat Asymptomatic Bacteriuria and UTIs [20]. Moreover, the frequent prescriptions of azole antifungals were related to the frequent fungal infections prevalent among pregnant women at JHAH. Based on FDA risk categorization, Penicillins and Cephalosporins are considered safe options in pregnancy. If used systemically, Azoles could be teratogenic in animals and humans. However, topical azoles are not absorbed, or are minimally absorbed, and hence are permitted at any stage of pregnancy [21, 22]. In our data, the prescription of azoles for pregnant women was systemic and topical. Pregnancy risk categories The majority of antimicrobial drugs prescribed to our participants belong to FDA category B (66.4%), followed by category C (32.6%) and category D (1.0%), which is considered harmful according to FDA recommendations. No antimicrobials from category A or X were documented in our study. Similar findings were revealed in the study conducted in an antenatal clinic in rural Ghana between 2011 and 2015, where the antimicrobials taken by pregnant women were mainly from FDA category B (69.6%), with fewer drugs prescribed from categories C (2.9%) and D (0.5%). Antimicrobials in categories A and X were not prescribed in the Ghanaian study [11]. In addition to the risks categorized by the FDA, some cases of exposure to potentially harmful drugs prescribed inappropriately against therapeutic guidelines have been identified. For example, one of the pregnant women was diagnosed with acne excoriee in the first trimester and received Minocycline 100 mg orally twice daily for 2 weeks. 
Tetracycline antibiotics (including doxycycline and minocycline) are known to exert toxic effects on fetal teeth and bones as they bind to calcium orthophosphate and undergo active deposition in teeth and bones of the fetuses. It has been documented that oral antibiotics such as Erythromycin, Azithromycin, Cephalexin and Amoxicillin are more appropriate for treating acne during pregnancy [23]. In another example, a pregnant woman in her first trimester had a diagnosis of UTI caused by Candida and received Fluconazole 200 mg orally once daily for 1 week. A recent study reported that the use of oral fluconazole in the first trimester is associated with musculoskeletal malformations, and the researchers recommended the use of topical azoles as an alternative treatment [24]. Drug use per trimester In our study, antimicrobials were prescribed in all three trimesters with more frequent prescription in the third trimester. A similar study shows total medication use during pregnancy was maximum in the third trimester with an average of (3.88) drugs per patient, followed by the second trimester with (3.05) drugs per patient and the first trimester with (3.01) drugs per patient [12]. These data might be explained by the general perception that there is little risk for development of major malformation in the fetus beyond the organogenesis phase in the first trimester [25, 26]. For this reason, physicians in our institution might have been more comfortable prescribing medications in the third trimester. Another study [27] revealed that the prevalence of prescribed medications was higher in the first trimester (47.0%), which is considered to be the critical period for most major congenital abnormalities [28]. However, at JHAH, most study participants who received anti-infective medication in the first trimester did so before confirming pregnancy. Microbiological culture and empiric treatment Almost half of the patients had microbiological cultures prior to the initiation of antimicrobial agents, which revealed negative results in 12.5% of the cultures. However, in the positive cultures, the most predominant microorganism was Escherichia coli, followed by Streptococcus agalactiae, Klebsiella pneumoniae, Candida albicans, ESBL E. coli and Staphylococcus aureus. Al Yamani et al. reported similar results in which the most common organisms in their hospital were gram-negative bacteria (E. coli, followed by Klebsiella pneumoniae, Pseudomonas aeruginosa, methicillin-resistant Staphylococcus aureus [MRSA], and Acinetobacter). In their institution, the practice differed in obtaining the cultures before the antimicrobial course initiation, and cultures were collected from only one-quarter of their patients [10]. This may reflect the attitude of JHAH practitioners in considering the bacterial culture and its importance before prescribing antimicrobials as availability of pathogens and antimicrobial susceptibility testing can be helpful for antimicrobial stewardship programs [29]. Clinical and policy impact of the study The present study describes the overall practices of prescribing antimicrobial agents in pregnant women and the most common types of infectious diseases occurring during pregnancy in a Saudi Arabian hospital. These findings are expected to help in generating knowledge about better utilization of antimicrobials for pregnant women, thereby improving the prescription of antimicrobials for pregnant women through safe selection of antimicrobial regimens. 
They may also shed light on the prescription patterns of different antimicrobials among pregnant women. Awareness and educational programs are warranted to help healthcare providers rationalize prescription of antimicrobials for pregnant women. Study limitations and recommendations for future research The current study has several limitations. First, prescription of antimicrobials during pregnancy was evaluated in a single center, and every antimicrobial prescription was considered an encounter and was counted as a separate file for statistical purposes. Therefore, caution is required for generalizing the findings to the entire population. Second, the poor documentation in some encounters has led to a lack of information regarding treatment indications. Therefore, we could not explain or connect the use of antimicrobials to the disease state or maternal and fetal consequences nor to check appropriateness of antimicrobials prescribing against published guidelines. Moreover, this resulted in missing data that hindered the correlation of antimicrobial use with demographic data. Nevertheless, the study investigated the prescription patterns of antimicrobial agents during pregnancy. It highlighted the general prescription practices and most common infections at the JHAH hospital in the city of Dhahran. Our study revealed a high frequency of antimicrobials prescribed during pregnancy that might pose risks to mothers and their fetuses. Different approaches are needed to increase awareness among healthcare providers as well as pregnant women about the common types of infections during pregnancy and how to prevent them. The study has identified a gap in training and a need for educational programs to avoid prescribing antimicrobials in FDA categories C and D unless well indicated and benefits outweigh risks. Further studies are warranted in order to identify factors associated with such antimicrobial prescription and to generalize the results to the rest of the population. All data and materials are available upon request to any scientist wishing to use them. JHAH: Johns Hopkins Aramco Healthcare UTIs: RTIs: Respiratory tract infections URTIs: LRTIs: Lower respiratory tract infections C-section: FDA: EMRs: hCG: LMP: SPSS: Statistical Package for Social Sciences IRB: IQR: Interquartile range NSAIDs: Non-steroidal anti-inflammatory drugs NKA: No known allergy Multi-drug resistance MRSA: ESBL E. coli : Extended-Spectrum Beta-Lactamase producing E. coli E. coli : Daw JR, Hanley GE, Greyson DL, Morgan SG. Prescription drug use during pregnancy in developed countries: a systematic review. Pharmacoepidemiol Drug Saf. 2011;20(9):895–902. PubMed PubMed Central Google Scholar Theiler RN. Evidence-based antimicrobial therapy in pregnancy: long overdue. Clin Pharmacol Ther. 2009;86(3):237–8. CAS PubMed PubMed Central Article Google Scholar Lockitch G. Maternal–fetal risk assessment. Clin Biochem. 2004;37(6):447–9. Geerlings SE. Clinical presentations and epidemiology of urinary tract infections. Urin Tract Infect Mol Pathog Clin Manag. 2017;27–40. Ray K, Bala M, Bhattacharya M, Muralidhar S, Kumari M, Salhan S. Prevalence of RTI/STI agents and HIV infection in symptomatic and asymptomatic women attending peripheral health set-ups in Delhi, India. Epidemiol Infect. 2008;136(10):1432–40. Beigi RH. The importance of studying antimicrobials in pregnancy. In: Seminars in perinatology. Elsevier; 2015. p. 556–60. Ten threats to global health in 2019 Air pollution and climate change. World Health Organization (WHO). 
2019. p. 1–18. https://www.who.int/emergencies/ten-threats-to-global-health-in-2019. Antibiotic resistance Antibiotic resistance. World Heal Organ. 2010;(July):2005–6. Thorpe PG, Gilboa SM, Hernandez-Diaz S, Lind J, Cragan JD, Briggs G, Kweder S, Friedman JM, Mitchell AA HM. Research on medicines and pregnancy. Centers for Disease Control and Prevention. 2020. p. 1–4. https://www.cdc.gov/pregnancy/meds/treatingfortwo/research.html. Al-Yamani A, Khamis F, Al-Zakwani I, Al-Noomani H, Al-Noomani J, Al-Abri S. Patterns of antimicrobial prescribing in a tertiary care hospital in Oman. Oman Med J. 2016;31(1):35. PubMed PubMed Central Article Google Scholar Mensah KB, Opoku-Agyeman K, Ansah C. Antibiotic use during pregnancy: a retrospective study of prescription patterns and birth outcomes at an antenatal clinic in rural Ghana. J Pharm Policy Pract. 2017;10(1):24. Devkota R, Khan GM, Alam K, Regmi A, Sapkota B. Medication utilization pattern for management of pregnancy complications: a study in Western Nepal. BMC Pregnancy Childbirth. 2016;16(1):272. Santos F, Sheehy O, Perreault S, Ferreira E, Bérard A. Trends in anti-infective drugs use during pregnancy. J Popul Ther Clin Pharmacol. 2012;19(3). Raheel H, Alsakran S, Alghamdi A, Ajarem M, Alsulami S, Mahmood A. Antibiotics and over the counter medication use and its correlates among arab pregnant women visiting a tertiary care hospital in Riyadh, Saudi Arabia. Pak J Med Sci. 2017;33(2):452–6. Gorstein J, Sullivan KM, Parvanta I, Begin F. Indicators and methods for cross-sectional surveys of vitamin and mineral status of populations. Micronutr Initiat Centers Dis Control Prev. 2007;53. Sakpal T. Sample size estimation in clinical trial. Perspect Clin Res. 2010;1(2):67. McCormick T, Ashe RG, Kearney PM. Urinary tract infection in pregnancy. Obstet Gynaecol. 2008;10(3):156–62. Africa C, Nel J, Stemmet M. Anaerobes and bacterial vaginosis in pregnancy: virulence factors contributing to vaginal colonisation. Int J Environ Res Public Health. 2014;11(7):6979–7000. Sebastian F. Infectious diseases in obstetrics and gynecology. Boca Raton: CRC Press; 2008. Lee H, Le J. PSAP 2018 BOOK 1 Urinary tract infections. PSAP 2018 B 1-Infect Dis. 2018;(Sobel 2014):7–28. Pilmis B, Jullien V, Sobel J, Lecuit M, Lortholary O, Charlier C. Antifungal drugs during pregnancy: an updated review. J Antimicrob Chemother. 2014;70(1):14–22. Patel VM, Schwartz RA, Lambert WC. Topical antiviral and antifungal medications in pregnancy: a review of safety profiles. J Eur Acad Dermatol Venereol. 2017;31(9):1440–6. Chien AL, Qi J, Rainer B, Sachs DL, Helfrich YR. Treatment of Acne in pregnancy. J Am Board Fam Med. 2016;29(2):254–62. Zhu Y, Bateman BT, Gray KJ, Hernandez-Diaz S, Mogun H, Straub L, et al. Oral fluconazole use in the first trimester and risk of congenital malformations: population based cohort study. BMJ. 2020;369:m1494. Picciano MF. Pregnancy and lactation: physiological adjustments, nutritional requirements and the role of dietary supplements. J Nutr. 2003;133(6):1997S-2002S. Bookstaver PB, Bland CM, Griffin B, Stover KR, Eiland LS, McLaughlin M. A review of antibiotic use in pregnancy. Pharmacother J Hum Pharmacol Drug Ther. 2015;35(11):1052–62. Berard A, Sheehy O. The Quebec Pregnancy Cohort—prevalence of medication use during gestation and pregnancy outcomes. PLoS ONE. 2014;9(4):e93870. Czeizel AE. The first trimester concept is outdated. Congenit Anom (Kyoto). 2001;41(3):204. Baraka MA, Alsultan H, Alsalman T, Alaithan H, Islam MA, Alasseri AA. 
Health care providers' perceptions regarding antimicrobial stewardship programs (AMS) implementation—facilitators and challenges: a cross-sectional study in the Eastern province of Saudi Arabia. Ann Clin Microbiol Antimicrob. 2019;18(26):1–10. We would like to thank the deanship of scientific research of AAU & IAU and all clinicians who participated in the study. Our thanks also go to our colleagues in Pharmacy Practice department for their help during the survey validation process and for their constructive feedback during the proposal discussion and approval in the department. Without the help of our colleagues in JHAH who helped in the study conception and data selection and collection, we could not accomplish such interesting project. Our deep thanks also go to our colleagues in Princess Nourah University for their precious contribution to the study design, conduction and writing the manuscript. This project was granted by the Health Science Research Center at Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This research project was funded by the Health Sciences Research Center, King Abdullah bin Abdulaziz University Hospital, Princess Nourah bint Abdulrahman University, through the Research Funding Program, Grant No (G18-00021). The funding center had no role in the design of the study and collection, analysis, and interpretation of data or in writing the manuscript. Mohamed A. Baraka and Lina Hussain AlLehaibi contributed equally to first author Clinical Pharmacy Department, College of Pharmacy, Al Ain University, Al Ain Campus, Al Ain, United Arab Emirates Mohamed A. Baraka Clinical Pharmacy Department, College of Pharmacy, Al-Azhar University, Cairo, Egypt First Health Cluster in Eastern Province, Dammam Medical Complex, Dammam, 32245, Saudi Arabia Lina Hussain AlLehaibi College of Clinical Pharmacy, Imam Abdulrahman Bin Faisal University, P.O. Box. 1982, Dammam, 31441, Saudi Arabia Hind Nasser AlSuwaidan King Fahd Hospital of the University (KFHU), Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia Duaa Alsulaiman Pharmacy Practice Department, College of Clinical Pharmacy, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia Md. Ashraful Islam Department of Pharmaceutical Sciences, College of Pharmacy, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia Badriyah Shadid Alotaibi College of Pharmacy, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia Amany Alboghdadly Clinical Pharmacy Service, Johns Hopkins Aramco Healthcare, Dhahran, Saudi Arabia Ali H. Homoud Pharmacy Department at Johns Hopkins Aramco Healthcare, Dhahran, Saudi Arabia Fuad H. Al-Ghamdi Department of Pharmacology, College of Clinical Pharmacy, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia Mastour S. Al Ghamdi Department of Pharmacy, School of Applied Sciences, University of Huddersfield, Huddersfield, UK Zaheer-Ud-Din Babar MAB and LHA have equally contributed to the study design, conduction, analysis, writing and all other project phases. DA, MAI, HNA, BSA, AA and ZB have contributed to the conception and design of the study. MAB, AHH, FHA, MSAG contributed to the generation, collection, assembly, analysis and/or interpretation of data. All authors especially MAB and ZB have contributed to deigning, drafting and revision of the manuscript. All authors read and approved the final manuscript. Correspondence to Mohamed A. Baraka. 
The study received an ethical approval from the University IRB under the following number (IRB-UGS-2018-5-048). It has also been approved by the Johns Hopkins Aramco Healthcare (JHAH) institution Review Board (IRB) under (IRB number 18-07). All authors have given verbal consent for publication. All authors declare that they have no any kind of competing interests. Baraka, M.A., AlLehaibi, L.H., AlSuwaidan, H.N. et al. Patterns of infections and antimicrobial drugs' prescribing among pregnant women in Saudi Arabia: a cross sectional study. J of Pharm Policy and Pract 14, 9 (2021). https://doi.org/10.1186/s40545-020-00292-6 Antimicrobial stewardship programs Drug utilization pattern
Is there a matrix whose permanent counts 3-colorings?

Actually, I suppose that the answer is technically "yes," since computing the permanent is #P-complete, but that's not very satisfying. So here's what I mean: Kirchhoff's theorem says that if you take the Laplacian matrix of a graph, and chop off a row and a column, then the determinant of the resulting matrix is equal to the number of spanning trees of the graph. It would be nice to have some analogue of this for other points of the Tutte polynomial, but this is in general too much to ask: the determinant can be computed in polynomial time, but problems such as counting 3-colorings are #P-hard. However, if we use the permanent instead of the determinant, we don't run into these complexity-theoretic issues, at least. So, given a graph G on n vertices, can we construct a matrix of size around $n \times n$ whose permanent is the number of 3-colorings of G?

(The secret underlying motivation here is a vague personal hope that we can extend the analogy between the Laplacian matrix and the Laplacian operator [no, the naming isn't a total coincidence] to analogies between other matrices and general elliptic operators, and then prove some sort of "index theorem," which could [even more speculatively, here] help us understand why graph isomorphism is hard, prove or construct a counterexample to the reconstruction conjecture, prove the Riemann hypothesis, and achieve world peace forever.)

graph-colorings
Harrison Brown

Have you tried using the deletion-contraction recurrence? Is there an analogue of expansion by minors for the permanent? – Qiaochu Yuan
@Qiaochu: Yeah, expansion by minors essentially works the same way, except you drop the sign changes. Maybe I'm slow, but is this really enough to construct such a matrix? – Harrison Brown
Harrison, the fact that 3-coloring is #P does not mean that there is a description of the number of 3-colorings of a graph as a permanent.

"The number of edge 3-colorings of a planar cubic graph as a permanent" by David E. Scheim gives a permanent formula for the number of edge-3-colorings of planar cubic graphs (i.e. the number of 3-colorings of graphs which are line graphs of cubic planar graphs). This is generalized (using some results of Ellingham and Goddyn) to the case of n-colorings of n-regular planar graphs in "Colorings and orientations of matrices and graphs" by Uwe Schauz. This paper interprets Ryser's permanent formula as a statement about colorings and gives a "matrix form" of a theorem of Alon and Tarsi. This doesn't answer your question but I hope you find the above references interesting. On the other hand, about the fact that the Laplacian matrix generalizes to the Laplace operator on graphs, I wanted to mention that, in turn, it generalizes to the Laplacian on vector bundles on graphs. I learned about this generalization in the talk that Kenyon gave this year at the JMM. This new approach generalizes Kirchhoff's theorem from spanning trees to cycle-rooted-spanning-forests. – Gjergji Zaimi

A la Tait, this means we can count the number of vertex 4-colourings of a fully-triangulated planar graph using the permanent of a matrix. – Adam P. Goucher
I do not know too much about the topic, but Valiant also defined the VNP classes: http://delivery.acm.org/10.1145/810000/804419/p249-valiant.pdf?key1=804419&key2=2034149521&coll=GUIDE&dl=GUIDE&CFID=65423911&CFTOKEN=83322167 There is also a notion of reducibility there, and PERMANENT (with many other counting problems) is complete while the number of colorings is not there, so probably there is no simple reduction, but unfortunately I am really not familiar with these notions. – domotorp

I'm afraid I can't answer your excellent question... but perhaps you'll be interested in this vaguely relevant remark. It gives a formula (originally by MacMahon) for the number of $n \times n$ Latin squares, but has a graph-theoretic interpretation that is related. Let $G$ be the "rook's graph", that is, the simple graph with vertex set $\{(i,j):1 \leq i,j \leq n\}$ and an edge between $(i,j)$ and $(i',j')$ whenever $i=i'$ or $j=j'$. Define the $n \times n$ square matrix $X=(x_{ij})$ where $x_{ij}$ are variables. Then the number of $n$-colourings of $G$ is the coefficient of $\prod_{i=1}^n \prod_{j=1}^n x_{ij}$ in $\text{per}(X)^n$ (this is also the number of $n \times n$ Latin squares). – Douglas S. Stones
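Not a new answer, just a small computational illustration of the two objects being contrasted in the question. The sketch below (Python; the helper names are invented for illustration) counts spanning trees via the determinant of the reduced Laplacian, as in Kirchhoff's theorem, and evaluates permanents and 3-coloring counts by brute force, which is feasible only for tiny graphs since both computations are #P-hard in general.

```python
import numpy as np
from itertools import permutations, product
from math import prod

def spanning_tree_count(adj):
    """Kirchhoff's theorem: drop one row and column of the Laplacian, take the determinant."""
    a = np.asarray(adj, dtype=float)
    laplacian = np.diag(a.sum(axis=1)) - a
    return round(np.linalg.det(laplacian[1:, 1:]))

def permanent(m):
    """Permanent by brute force over permutations (exponential; fine for tiny matrices)."""
    n = len(m)
    return sum(prod(m[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def count_3_colorings(adj):
    """Brute-force count of proper 3-colorings, for comparison with the permanent."""
    n = len(adj)
    return sum(
        all(not (adj[i][j] and c[i] == c[j]) for i in range(n) for j in range(i + 1, n))
        for c in product(range(3), repeat=n)
    )

K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
C5 = [[0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0], [0, 0, 1, 0, 1], [1, 0, 0, 1, 0]]

print(spanning_tree_count(K4))      # 16 = 4^(4-2), Cayley's formula
print(count_3_colorings(K4))        # 0: K4 needs four colours
print(count_3_colorings(C5))        # 30 proper 3-colorings of the 5-cycle
print(permanent([[1, 1, 1]] * 3))   # 6 = 3!, permanent of the all-ones matrix
```

For K4 the determinant of the reduced Laplacian already gives the 16 spanning trees, while no 3-colorings exist; the question asks whether some comparably simple matrix construction could make a permanent produce the 3-coloring count.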
Toward design of the antiturbulence surface exhibiting maximum drag reduction effect V. Krieger, R. Perić, J. Jovanović, H. Lienhart, A. Delgado Journal: Journal of Fluid Mechanics / Volume 850 / 10 September 2018 Print publication: 10 September 2018 The flow development in a groove-modified channel consisting of flat and grooved walls was investigated by direct numerical simulations based on the Navier–Stokes equations at a Reynolds number of $5\times 10^{3}$ based on the full channel height and the bulk velocity. Simulations were performed for highly disturbed initial flow conditions leading to the almost instantaneous appearance of turbulence in channels with flat walls. The surface morphology was designed in the form of profiled grooves aligned with the flow direction and embedded in the wall. Such grooves are presumed to allow development of only the statistically axisymmetric disturbances. In contrast to the rapid production of turbulence along a flat wall, it was found that such development was suppressed over a grooved wall for a remarkably long period of time. Owing to the difference in the flow structure, friction drag over the grooved wall was more than 60 % lower than that over the flat wall. Anisotropy-invariant mapping supports the conclusion, emerging from analytic considerations, that persistence of the laminar regime is due to statistical axisymmetry in the velocity fluctuations. Complementary investigations of turbulent drag reduction in grooved channels demonstrated that promotion of such a state across the entire wetted surface is required to stabilize flow and prevent transition and breakdown to turbulence. To support the results of numerical investigations, measurements in groove-modified channel flow were performed. Comparisons of the pressure differentials measured along flat and groove-modified channels reveal a skin-friction reduction as large as $\text{DR}\approx 50\,\%$ owing to the extended persistence of the laminar flow compared with flow development in a flat channel. These experiments demonstrate that early stabilization of the laminar boundary layer development with a grooved surface promotes drag reduction in a fully turbulent flow with a preserving magnitude as the Reynolds number increases.
Genetic diversity in European and Chinese pig breeds – the PigBioDiv project S. Blott, M. SanCristobal, C. Chevalet, C.S. Haley, G. Russell, G. Plastow, K. Siggens, M.A.M. Groenen, M.-Y. Boscher, Y. Amigues, K. Hammond, G. Laval, D. Milan, A. Law, E. Fimland, R. Davoli, V. Russo, G. Gandini, A. Archibald, J.V. Delgado, M. Ramos, C. Désautés, L. Alderson, P. Glodek, J.-N. Meyer, J.-L. Foulley, L. Andersson, R. Cardellino, N. Li, L. Huang, K. Li, L. Ollivier Journal: BSAP Occasional Publication / Volume 30 / 2004 Characterisation of genetic diversity in a large number of European pig populations has been undertaken with EC support. The populations sampled included local (rare) breeds, national varieties of the major international breeds, commercial lines and the Chinese Meishan breed. A second phase of the project will sample a further 50 Chinese breeds.
Neutral genetic markers (AFLP and microsatellites), with individual or bulk typing, were used and compared. DNA from 59 European pig populations was extracted on samples of about 50 individuals per population. Individuals were typed for 50 microsatellites and for 148 AFLP bands. A subset of 25 populations was typed for 20 microsatellites on pools of DNA. Allele frequencies were estimated by direct allele counting for the co-dominant markers. Frequencies of AFLP negative alleles (absent bands) were obtained by taking the square root of absent band frequencies. Within-breed variability was summarised using standard statistics: expected and observed heterozygosity, mean observed and effective numbers of alleles, and F statistics. Between-breed diversity analysis was based on a bootstrapped Neighbor-Joining (NJ) tree derived from Reynolds distances (DR). The standard distance of Nei (DS) was also calculated. Dissection of ancestral genetic contributions to Creole goat populations N. Sevane, O. Cortés, L. T. Gama, A. Martínez, P. Zaragoza, M. Amills, D. O. Bedotti, C. Bruno de Sousa, J. Cañon, S. Dunner, C. Ginja, M. R. Lanari, V. Landi, P. Sponenberg, J. V. Delgado, The BioGoat Consortium Journal: animal / Volume 12 / Issue 10 / October 2018 Published online by Cambridge University Press: 08 January 2018, pp. 2017-2026 Goats have played a key role as source of nourishment for humans in their expansion all over the world in long land and sea trips. This has guaranteed a place for this species in the important and rapid episode of livestock expansion triggered by Columbus' arrival in the Americas in the late 1400s. The aims of this study are to provide a comprehensive perspective on genetic diversity in American goat populations and to assess their origins and evolutionary trajectories. This was achieved by combining data from autosomal neutral genetic markers obtained in more than two thousand samples that encompass a wide range of Iberian, African and Creole goat breeds. In general, even though Creole populations differ clearly from each other, they lack a strong geographical pattern of differentiation, such that populations of different admixed ancestry share relatively close locations throughout the large geographical range included in this study. Important Iberian signatures were detected in most Creole populations studied, and many of them, particularly the Cuban Creole, also revealed an important contribution of African breeds. On the other hand, the Brazilian breeds showed a particular genetic structure and were clearly separated from the other Creole populations, with some influence from Cape Verde goats. These results provide a comprehensive characterisation of the present structure of goat genetic diversity, and a dissection of the Iberian and African influences that gave origin to different Creole caprine breeds, disentangling an important part of their evolutionary history. Creole breeds constitute an important reservoir of genetic diversity that justifies the development of appropriate management systems aimed at improving performance without loss of genomic diversity. A model to infer the demographic structure evolution of endangered donkey populations F. J. Navas, J. Jordana, J. M. León, C. Barba, J. V. 
Delgado Journal: animal / Volume 11 / Issue 12 / December 2017 Stemming from The Worldwide Donkey Breeds Project, an initiative aiming at connecting international researchers and entities working with the donkey species, molecularly tested pedigree analyses were carried out to study the genetic diversity, structure and historical evolution of the Andalusian donkey breed since the 1980s to infer a model to study the situation of international endangered donkey breeds under the remarkably frequent unknown genetical background status behind them. Demographic and genetic variability parameters were evaluated using ENDOG (v4.8). Pedigree completeness and generation length were quantified for the four gametic pathways. Despite mean inbreeding was low, highly inbred animals were present in the pedigree. Average coancestry, relatedness, and non-random mating degree trends were computed. The effective population size based on individual inbreeding rate was about half when based on individual coancestry rate. Nei's distances and equivalent subpopulations number indicated differentiated farms in a highly structured population. Although genetic diversity loss since the founder generations could be considered small, intraherd breeding policies and the excessive contribution of few ancestors to the gene pool could lead to narrower pedigree bottlenecks. Long average generation intervals could be considered when reducing inbreeding. Wright's fixation statistics indicated slight inbreeding between farms. Pedigree shallowness suggested applying new breeding strategies to reliably estimate descriptive parameters and control the negative effects of inbreeding, which could indeed, mean the key to preserve such valuable animal resources avoiding the extinction they potentially head towards, making the present model become an international referent when assessing endangered donkey populations. Pig cognitive bias affects the conversion of muscle into meat by antioxidant and autophagy mechanisms Y. Potes, M. Oliván, A. Rubio-González, B. de Luxán-Delgado, F. Díaz, V. Sierra, L. Arroyo, R. Peña, A. Bassols, J. González, R. Carreras, A. Velarde, M. Muñoz-Torres, A. Coto-Montes Journal: animal / Volume 11 / Issue 11 / November 2017 Published online by Cambridge University Press: 18 April 2017, pp. 2027-2035 Slaughter is a crucial step in the meat production chain that could induce psychological stress on each animal, resulting in a physiological response that can differ among individuals. The aim of this study was to investigate the relationship between an animal's emotional state, the subsequent psychological stress at slaughter and the cellular damage as an effect. In all, 36 entire male pigs were reared at an experimental farm and a cognitive bias test was used to classify them into positive bias (PB) or negative bias (NB) groups depending on their decision-making capabilities. Half of the animals, slaughtered in the same batch, were used for a complete study of biomarkers of stress, including brain neurotransmitters and some muscle biomarkers of oxidative stress. After slaughter, specific brain areas were excised and the levels of catecholamines (noradrenaline (NA) and dopamine (DA)) and indoleamines (5-hydroxyindoleacetic acid and serotonin (5HT)) were analyzed. 
In addition, muscle proteasome activity (20S), antioxidant defence (total antioxidant activity (TAA)), oxidative damage (lipid peroxidation (LPO)) and autophagy biomarkers (Beclin-1, microtubule-associated protein I light chain 3 (LC3-I) and LC3-II) were monitored during early postmortem maturation (0 to 24 h). Compared with PB animals, NB pigs were more susceptible to stress, showing higher 5HT levels (P<0.01) in the hippocampus and lower DA (P<0.001) in the pre-frontal cortex. Furthermore, NB pigs had more intense proteolytic processes and triggered primary muscle cell survival mechanisms immediately after slaughter (0 h postmortem), thus showing higher TAA (P<0.001) and earlier proteasome activity (P<0.001) and autophagy (Beclin-1, P<0.05; LC3-II/LC3-I, P<0.001) than PB pigs, in order to counteract the induced increase in oxidative stress, which was significantly higher in the muscle of NB pigs at 0 h postmortem (LPO, P<0.001). Our study is the first to demonstrate that a pig's cognitive bias influences the animal's susceptibility to stress and has important effects on postmortem muscle metabolism, particularly on the cell antioxidant defences and the onset of autophagy. These results expand the current knowledge regarding biomarkers of animal welfare and highlight the potential use of biomarkers of the proteasome, autophagy (Beclin-1, LC3-II/LC3-I ratio) and the muscle antioxidant defence (TAA, LPO) for detection of peri-slaughter stress. Abundance ratios & ages of stellar populations in HARPS-GTO sample E. Delgado Mena, M. Tsantaki, V. Zh. Adibekyan, S. G. Sousa, N. C. Santos, J. I. González Hernández, G. Israelian Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S330 / April 2017 In this work we present chemical abundances of heavy elements (Z>28) for a homogeneous sample of 1059 stars from the HARPS planet search program. We also derive ages using parallaxes from Hipparcos and Gaia DR1 to compare the results. We study the [X/Fe] ratios for different populations and compare them with models of Galactic chemical evolution. We find that thick disk stars are chemically disjunct for Zn and Eu. Moreover, the high-alpha metal-rich population presents an interesting behaviour, with clear overabundances of Cu and Zn and lower abundances of Y and Ba with respect to thin disk stars. Several abundance ratios present a significant correlation with age for chemically separated thin disk stars (regardless of their metallicity) but thick disk stars do not present that behaviour. Moreover, at supersolar metallicities the trends with age tend to be weaker for several elements. Genetic parameters of traits associated with the growth curve in Segureña sheep T. M. Lupi, J. M. León, S. Nogales, C. Barba, J. V. Delgado Journal: animal / Volume 10 / Issue 5 / May 2016 This paper studies the genetic importance of growth curve parameters and their relevance as selection criteria in breeding programmes of Segureña sheep. Logistic and Verhulst growth functions were chosen for their best fit to BW/age in this breed; the first showed the best general fit and the second the best individual fit. Live weights of 41 330 individuals from the historical archives of the National Association of Segureña Sheep Breeders were used in the analysis. The progeny of 1464 rams and 27 048 ewes were used to study the genetic and phenotypic parameters of growth curve parameters and derived traits.
Reproductive management in the population consists of controlled natural mating inside every herd, with a minimum of 15% of the females fertilized by artificial insemination with fresh semen; to maintain genetic connections among herds, all herd genealogies are screened with DNA markers. Estimates of growth curve parameters from birth to 80 days were obtained for each individual and each function by the non-linear regression procedure using IBM SPSS Statistics (version 21) with the Levenberg–Marquardt estimation method. (Co)variance components and genetic parameters were estimated by using the REML/Animal model methodology. The heritability of mature weight was estimated as 0.41±0.042 and 0.38±0.021 with the logistic and Verhulst models, respectively, and the heritability of other parameters ranged from 0.41 to 0.62 and 0.37 to 0.61 with the two models, respectively. A negative genetic correlation between mature weight and rate of maturing was found. Social class based on occupation is associated with hospitalization for A(H1N1)pdm09 infection. Comparison between hospitalized and ambulatory cases J. PUJOL, P. GODOY, N. SOLDEVILA, J. CASTILLA, F. GONZÁLEZ-CANDELAS, J. M. MAYORAL, J. ASTRAY, S. GARCIA, V. MARTIN, S. TAMAMES, M. DELGADO, A. DOMÍNGUEZ, the CIBERESP Cases and Controls in Pandemic Influenza Working Group Journal: Epidemiology & Infection / Volume 144 / Issue 4 / March 2016 This study aimed to analyse the existence of an association between social class (categorized by type of occupation) and the occurrence of A(H1N1)pdm09 infection and hospitalization for two seasons (2009–2010 and 2010–2011). This multicentre study compared ambulatory A(H1N1)pdm09 confirmed cases with ambulatory controls to measure risk of infection, and with hospitalized A(H1N1)pdm09 confirmed cases to assess hospitalization risk. Study variables were: age, marital status, tobacco and alcohol use, pregnancy, chronic obstructive pulmonary disease, chronic respiratory failure, cardiovascular disease, diabetes, chronic liver disease, body mass index >40, systemic corticosteroid treatment and influenza vaccination status. Occupation was registered literally and coded into manual and non-manual worker occupational social class groups. A conditional logistic regression analysis was performed. There were 720 hospitalized cases, 996 ambulatory cases and 1062 ambulatory controls included in the study. No relationship between occupational social class and A(H1N1)pdm09 infection was found [adjusted odds ratio (aOR) 0·97, 95% confidence interval (CI) 0·74–1·27], but an association (aOR 1·53, 95% CI 1·01–2·31) between occupational class and hospitalization for A(H1N1)pdm09 was observed. Influenza vaccination was a protective factor for A(H1N1)pdm09 infection (aOR 0·41, 95% CI 0·23–0·73) but not for hospitalization. We conclude that manual workers have a higher risk of hospitalization when infected by influenza than workers in other occupations, but they do not have a different probability of being infected by influenza. Characterization of commercial and biological growth curves in the Segureña sheep breed T. M. Lupi, S. Nogales, J. M. León, C. Barba, J. V. Delgado Journal: animal / Volume 9 / Issue 8 / August 2015 Non-linear models were analysed to describe both the biological and commercial growth curves of the Segureña sheep, one of the most important Spanish breeds.
We evaluated Brody, von Bertalanffy, Verhulst, logistic and Gompertz models, using historical data from the National Association of Segureña Sheep Breeders (ANCOS). These records were collected between 2000 and 2013 and comprised a total of 129 610 weight observations ranging from birth to adulthood. The aim of this research was to establish the mathematical behaviour of body development throughout this breed's commercial life (birth to slaughter) and biological life (birth to adulthood); comparison between both slopes gives important information regarding the best time for slaughter, informs dietary advice according to animals' needs, permits economic predictions of production and, by using the curve parameters as selection criteria, enables improvements in growth characteristics of the breed. Models were fitted according to the non-linear regression procedure of the statistical package SPSS version 19. Model parameters were estimated using the Levenberg–Marquardt algorithm. Candidate models were compared using the coefficient of determination, mean square error, number of iterations, Akaike information criterion and biological coherence of the estimated parameters. The von Bertalanffy and logistic models were found to be best suited to the biological and commercial growth curves, respectively, for both sexes. The Brody equation was found to be unsuitable for studying the commercial growth curve. Differences between the parameters in both sexes indicate a strong impact of sexual dimorphism on growth. This emphasizes the higher growth rate observed in females, indicating that they reach maturity earlier. Age-specific differences in influenza virus type and subtype distribution in the 2012/2013 season in 12 European countries J. BEAUTÉ, P. ZUCS, N. KORSUN, K. BRAGSTAD, V. ENOUF, A. KOSSYVAKIS, A. GRIŠKEVIČIUS, C. M. OLINGER, A. MEIJER, R. GUIOMAR, K. PROSENC, E. STAROŇOVÁ, C. DELGADO, M. BRYTTING, E. BROBERG Journal: Epidemiology & Infection / Volume 143 / Issue 14 / October 2015 The epidemiology of seasonal influenza is influenced by age. During the influenza season, the European Influenza Surveillance Network (EISN) reports weekly virological and syndromic surveillance data [mostly influenza-like illness (ILI)] based on national networks of sentinel primary-care providers. Aggregated numbers by age group are available for ILI, but not linked to the virological data. At the end of the influenza season 2012/2013, all EISN laboratories were invited to submit a subset of their virological data for this season, including information on age. The analysis by age group suggests that the overall distribution of circulating (sub)types may mask substantial differences between age groups. Thus, in cases aged 5–14 years, 75% tested positive for influenza B virus whereas all other age groups had an even distribution of influenza A and B viruses. This means that the interpretation of syndromic surveillance data without age group-specific virological data may be misleading. Surveillance at the European level would benefit from the reporting of age-specific influenza data. Numerical simulation of turbulent flow through Schiller's wavy pipe G. Daschiel, V. Krieger, J. Jovanović, A. Delgado Journal: Journal of Fluid Mechanics / Volume 761 / 25 December 2014 The development of incompressible turbulent flow through a pipe of wavy cross-section was studied numerically by direct integration of the Navier–Stokes equations.
Simulations were performed at Reynolds numbers of $4.5\times 10^{3}$ and $10^{4}$ based on the hydraulic diameter and the bulk velocity. Results for the pressure resistance coefficient $\lambda$ were found to be in excellent agreement with the experimental data of Schiller (Z. Angew. Math. Mech., vol. 3, 1922, pp. 2–13). Of particular interest is the decrease in $\lambda$ below the level predicted from the Blasius correlation, which fits almost all experimental results for pipes and ducts of complex cross-sectional geometries. Simulation databases were used to evaluate turbulence anisotropy and provide insights into structural changes of turbulence leading to flow relaminarization. Anisotropy-invariant mapping of turbulence confirmed that suppression of turbulence is due to statistical axisymmetry in the turbulent stresses. High Transmission and Low Resistivity Cadmium Tin Oxide Thin Films Deposited by Sol-Gel Carolina J. Diliegros Godines, Rebeca Castanedo Pérez, Gerardo Torres Delgado, Orlando Zelaya Ángel Journal: MRS Online Proceedings Library Archive / Volume 1675 / 2014 Published online by Cambridge University Press: 18 September 2014, pp. 151-156 Transparent conducting cadmium tin oxide (CTO) thin films were obtained from a mixture of CdO and SnO2 precursor solutions by the dip-coating sol-gel technique. The thin films studied in this work were made with 7 coats (∼200 nm) on Corning glass and quartz substrates. Each coating was deposited at a withdrawal speed of 2 cm/min, dried at 100°C for 1 hour and then sintered at 550°C for 1 hour in air. In order to decrease the resistivity values of the films, one set was annealed in a vacuum atmosphere and another set of films was annealed in an Ar/CdS atmosphere. The annealing temperatures (Ta) were 450°C, 500°C and 550°C, as well as 600°C and 650°C, when Corning glass and quartz substrates were used, respectively. X-ray diffraction (XRD) patterns of the films annealed in a vacuum showed only the presence of CTO crystals for 450°C ≤ Ta ≤ 600°C and CTO+SnO2 crystals for Ta = 650°C. The films annealed in the Ar/CdS atmosphere consisted only of CTO crystals, independent of Ta. The minimum resistivity value obtained was ∼4 × 10⁻⁴ Ω·cm (Rsheet = 20 Ω/□) for the films deposited on quartz and annealed at Ta = 600°C under an Ar/CdS atmosphere. The films deposited on quartz showed higher optical transmission (∼90%) in the UV-vis region than the films deposited on Corning glass substrates (∼85%). Owing to their optical and electrical characteristics, these films are good candidates as transparent electrodes in solar cells. Conserved peptide sequences bind to actin and enolase on the surface of Plasmodium berghei ookinetes J. HERNÁNDEZ-ROMANO, M. H. RODRÍGUEZ, V. PANDO, J. A. TORRES-MONZÓN, A. ALVARADO-DELGADO, A. N. LECONA VALERA, R. ARGOTTE RAMOS, J. MARTÍNEZ-BARNETCHE, M. C. RODRÍGUEZ Journal: Parasitology / Volume 138 / Issue 11 / September 2011 The description of Plasmodium ookinete surface proteins and their participation in the complex process of mosquito midgut invasion is still incomplete. In this study, using phage display, a consensus peptide sequence (PWWP) was identified in phages that bound to the Plasmodium berghei ookinete surface and, in selected phages, bound to actin and enolase in overlay assays with ookinete protein extracts. Actin was localized on the surface of fresh live ookinetes by immunofluorescence and electron microscopy using specific antibodies.
The overall results indicated that enolase and actin can be located on the surface of ookinetes, and suggest that they could participate in Plasmodium invasion of the mosquito midgut. Coupled Thermo-Hydro-Geochemical Models of Engineered Barrier Systems: The Febex Project J. Samper, R. Juncosa, V. Navarro, J. Delgado, L. Montenegro, A. Vázquez Journal: MRS Online Proceedings Library Archive / Volume 663 / 2000 Published online by Cambridge University Press: 21 March 2011, 561 FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project dealing with the bentonite engineered barrier designed for sealing and containment of waste in a high level radioactive waste repository (HLWR). It includes two main experiments: an in situ full-scale test performed at Grimsel (GTS) and a mock-up test operating since February 1997 at CIEMAT facilities in Madrid (Spain) [1,2,3]. One of the objectives of FEBEX is the development and testing of conceptual and numerical models for the thermal, hydrodynamic, and geochemical (THG) processes expected to take place in engineered clay barriers. A significant improvement in coupled THG modeling of the clay barrier has been achieved both in terms of a better understanding of THG processes and more sophisticated THG computer codes. The ability of these models to reproduce the observed THG patterns in a wide range of THG conditions enhances the confidence in their prediction capabilities. Numerical THG models of heating and hydration experiments performed on small-scale lab cells provide excellent results for temperatures, water inflow and final water content in the cells [3]. Calculated concentrations at the end of the experiments reproduce most of the patterns of measured data. In general, the fit of concentrations of dissolved species is better than that of exchanged cations. These models were later used to simulate the evolution of the large-scale experiments (in situ and mock-up). Some thermo-hydrodynamic hypotheses and bentonite parameters were slightly revised during TH calibration of the mock-up test. The results of the reference model reproduce simultaneously the observed water inflows and bentonite temperatures and relative humidities. Although the model is highly sensitive to one-at-a-time variations in model parameters, the possibility of parameter combinations leading to similar fits cannot be precluded. The TH model of the "in situ" test is based on the same bentonite TH parameters and assumptions as for the "mock-up" test. Granite parameters were slightly modified during the calibration process in order to reproduce the observed thermal and hydrodynamic evolution. The reference model properly captures relative humidities and temperatures in the bentonite [3]. It also reproduces the observed spatial distribution of water pressures and temperatures in the granite. Once the TH aspects of the model had been calibrated, predictions of the THG evolution of both tests were performed. Data from the dismantling of the in situ test, which is planned for the summer of 2001, will provide a unique opportunity to test and validate current THG models of the EBS. Relative breed contributions to neutral genetic diversity of a comprehensive representation of Iberian native cattle J. Cañón, D. García, J. V. Delgado, S. Dunner, L. Telo da Gama, V. Landi, I. Martín-Burriel, A. Martínez, C. Penedo, C. Rodellar, P. Zaragoza, C. Ginja Journal: animal / Volume 5 / Issue 9 / 05 August 2011 Published online by Cambridge University Press: 11 March 2011, pp.
1323-1334 This study is aimed at establishing priorities for the optimal conservation of genetic diversity among a comprehensive group of 40 cattle breeds from the Iberian Peninsula. Different sets of breed contributions to diversity were obtained with several methods that differ in the relative weight attributed to the within- and between-breed components of the genetic variation. The contributions to the Weitzman diversity and the expected heterozygosity (He) account for between- and within-breed variation only, respectively. Contributions to the core set, obtained for several kinship matrices, incorporate both sources of variation, as do the combined contributions of Ollivier and Foulley and those of Caballero and Toro. In general, breeds that ranked high in the different core set applications also ranked high in the contribution to the global He, for example, Sayaguesa, Retinta, Monchina, Berrenda en Colorado or Marismeña. As expected, the Weitzman method prioritised breeds with low contributions to the He, like Mallorquina, Menorquina, Berrenda en Negro, Mostrenca, Vaca Palmera or Mirandesa, all showing highly negative contributions to He – that is, their removal would significantly increase the average He. Weighting the within- and between-breed components with the FST produced a balanced set of contributions in which all the breeds ranking high in both approaches show up. Unlike the other methods, the contributions to the diversity proposed by Caballero and Toro prioritised a good number of Portuguese breeds (Arouquesa, Barrosã, Mertolenga and Preta ranking highest), but this might be caused by a sample size effect. Only Sayaguesa ranked high in all the methods tested. Considerations with regard to the conservation scheme should be made before adopting any of these approaches: in situ v. cryoconservation, selection and adaptation within the breeds v. crossbreeding or the creation of synthetic breeds. There is no general consensus with regard to balancing within- and between-breed diversity, and the decision of which source to favour will depend on the particular scenario. In addition to the genetic information, other factors, such as geographical, historical, economic and cultural ones, also need to be considered in the formulation of a conservation plan. All these aspects will ultimately influence the distribution of resources by the decision-makers. Growth of Hydrogenated Amorphous Silicon (A-Si:H) on Patterned Substrates for Increased Mechanical Stability Wan-Shick Hong, J. C. Delgado, O. Ruiz, V. Perez-Mendez Published online by Cambridge University Press: 21 February 2011, 209 Residual stress in hydrogenated amorphous silicon (a-Si:H) film has been studied. Deposition on a square island pattern reduced the stress when the lateral dimension of the islands became comparable to the film thickness. The overall stress was reduced by approximately 40% when the lateral dimension was decreased to 40 μm, but the adhesion was not improved much. However, substrates having a 2-dimensional array of inverted pyramids of 200 μm in lateral dimension produced overall stress 3∼4 times lower than that on the normal substrates. The inverted pyramid structure also had other advantages, including minimized delamination and increased effective thickness. Computer simulation confirmed that the overall stress can be reduced by deposition on the pyramidal structure. Dietary fat modifications and blood pressure in subjects with the metabolic syndrome in the LIPGENE dietary intervention study Hanne L.
Gulseth, Ingrid M. F. Gjelstad, Audrey C. Tierney, Danielle I. Shaw, Olfa Helal, Anneke M. J. v. Hees, Javier Delgado-Lista, Iwona Leszczynska-Golabek, Brita Karlström, Julie Lovegrove, Catherine Defoort, Ellen E. Blaak, Jose Lopez-Miranda, Aldona Dembinska-Kiec, Ulf Risérus, Helen M. Roche, Kåre I. Birkeland, Christian A. Drevon Journal: British Journal of Nutrition / Volume 104 / Issue 2 / 28 July 2010 Print publication: 28 July 2010 Hypertension is a key feature of the metabolic syndrome. Lifestyle and dietary changes may affect blood pressure (BP), but the knowledge of the effects of dietary fat modification in subjects with the metabolic syndrome is limited. The objective of the present study was to investigate the effect of an isoenergetic change in the quantity and quality of dietary fat on BP in subjects with the metabolic syndrome. In a 12-week European multi-centre, parallel, randomised controlled dietary intervention trial (LIPGENE), 486 subjects were assigned to one of the four diets distinct in fat quantity and quality: two high-fat diets rich in saturated fat or monounsaturated fat and two low-fat, high-complex carbohydrate diets with or without 1·2 g/d of very long-chain n-3 PUFA supplementation. There were no overall differences in systolic BP (SBP), diastolic BP or pulse pressure (PP) between the dietary groups after the intervention. The high-fat diet rich in saturated fat had minor unfavourable effects on SBP and PP in males. Chemical Fingerprinting and Chemical Analysis of Galactic Halo Substructure Steven R. Majewski, Mei-Yin Chou, Katia Cunha, Verne V. Smith, Richard J. Patterson, David Martínez-Delgado Journal: Proceedings of the International Astronomical Union / Volume 5 / Issue S265 / August 2009 We present high-resolution spectroscopic measurements of the abundances of the α-like element titanium (Ti) and s-process elements yttrium (Y) and lanthanum (La) for M giant candidates of (a) the Sagittarius (Sgr) dwarf spheroidal + tidal tail system, (b) the Triangulum-Andromeda (TriAnd) Star Cloud, and (c) the Galactic Anticenter Stellar Structure (GASS, or Monoceros Stream). All three systems show abundance patterns unlike the Milky Way but typical of dwarf galaxies. The Sgr system abundance patterns resemble those of the Large Magellanic Cloud. GASS/Mon chemically resembles Sgr but is distinct from TriAnd, a result that does not support previous suggestions that TriAnd is a piece of the Monoceros Stream. Stellar Populations in Luminous and Ultraluminous Infrared Galaxies R. M. González Delgado, R. Cid Fernandes, E. Pérez, J. Rodríguez-Zaurín, C. Tadhunter, O. Dors, V. Muñoz Marín, M. Villar-Martín The goal of this work is to determine the properties of the stellar populations in a sample of LIRGs and ULIRGs. Using the ages as a clock we investigate: a) whether LIRGs-ULIRGs evolve into Radio Galaxies and QSOs; b) whether cool LIRGs-ULIRGs can evolve into warm LIRGs-ULIRGs; c) the merger sequence deduced from the morphological studies is reflected in the properties of the stellar populations. Using evolutionary synthesis models with high spectral resolution stellar libraries we have found that the intermediate age stellar population dominates at optical wavelengths. The stellar population in LIRGs is similar to ULIRGs and ULIRGs-QSOs transition objects. Variations in the mitochondrial cytochrome c oxidase subunit I gene indicate northward expanding populations of Culicoides imicola in Spain J.H. Calvo, C. Calvete, A. Martinez-Royo, R. Estrada, M.A. Miranda, D. 
Borras, V. Sarto I Monteys, N. Pages, J.A. Delgado, F. Collantes, J. Lucientes Journal: Bulletin of Entomological Research / Volume 99 / Issue 6 / December 2009 Culicoides imicola is the main vector for bluetongue (BT) and African horse sickness (AHS) viruses in the Mediterranean basin and in southern Europe. In this study, we analysed partial mitochondrial cytochrome c oxidase subunit I (COI) gene to characterize and confirm population expansion of Culicoides imicola across Spain. The data were analysed at two hierarchical levels to test the relationship between C. imicola haplotypes in Spain (n=215 from 58 different locations) and worldwide (n=277). We found nineteen different haplotypes within the Spanish population, including 11 new haplotypes. No matrilineal subdivision was found within the Spanish population, while western and eastern Mediterranean C. imicola populations were very structured. These findings were further supported by median networks and mismatch haplotype distributions. Median networks demonstrated that the haplotypes we observed in the western Mediterranean region were closely related with one another, creating a clear star-like phylogeny separated only by a single mutation from eastern haplotypes. The two, genetically distinct, sources of C. imicola in the Mediterranean basin, thus, were confirmed. This type of star-like population structure centred around the most frequent haplotype is best explained by rapid expansion. Furthermore, the proposed northern expansion was also supported by the statistically negative Tajima's D and Fu's Fs values, as well as predicted mismatch distributions of sudden and spatially expanding populations. Our results thus indicated that C. imicola population expansion was a rapid and recent phenomenon.
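The expansion signal described above rests on standard population-genetic summary statistics such as Tajima's D. As a purely illustrative aside (not code from the study, which used its own software), the sketch below computes Tajima's D from a sample size, a count of segregating sites and a mean number of pairwise differences, following the standard Tajima (1989) formula; the input values are made up, and a clearly negative D is the kind of signal interpreted above as evidence of recent, rapid expansion.

```python
import math

def tajimas_d(n, S, pi):
    """Tajima's D from sample size n, segregating sites S, and
    mean pairwise differences pi (Tajima, 1989)."""
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    theta_w = S / a1                          # Watterson's estimator of theta
    var = e1 * S + e2 * S * (S - 1)           # variance of (pi - theta_w)
    return (pi - theta_w) / math.sqrt(var)

# Hypothetical values: 215 sequences, 18 segregating sites, low pairwise diversity.
print(tajimas_d(215, 18, 1.2))   # negative D is consistent with recent expansion
```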
CommonCrawl
Running at a Constant Speed Alignments to Content Standards: 6.RP.A.3 6.RP.A.3.b A runner ran 20 miles in 150 minutes. If she runs at that speed, (a) How long would it take her to run 6 miles? (b) How far could she run in 15 minutes? (c) How fast is she running in miles per hour? (d) What is her pace in minutes per mile? The purpose of this task is to give students experience in reasoning with equivalent ratios and unit rates from both sides of the ratio. This prepares for studying proportional relationships in Grade 7, and in Grade 8 for understanding that a proportional relationship can be viewed as a function in two different ways, depending on which variable is regarded as the input variable and which as the output variable. Solution: Using a table

Column             A    B   C    D   E   F
Number of Minutes  150  15  7.5  30  45  60
Number of Miles    20   2   1    4   6   8

The values in column B were found by dividing both values in column A by 10. The values in column C were found by dividing both values in column B by 2. The other columns contain multiples of the values in column B. If we look in column E, we can see that it would take her 45 minutes to run 6 miles. If we look in column B, we can see that she could run 2 miles in 15 minutes. If we look in column F, we can see that she is running 8 miles every 60 minutes (which is 1 hour), so she is running 8 miles per hour. If we look in column C, we can see that her pace is 7.5 minutes per mile. Solution: Finding a unit rate If we divide 150 by 20, we get the unit rate for the ratio 150 minutes for every 20 miles. $$150\div20=7.5$$ So the runner is running 7.5 minutes per mile. We can multiply this unit rate by the number of miles: $$7.5 \frac{\text{minutes}}{\text{mile}}\times 6\text{ miles} = 45\text{ minutes}$$ Thus it will take her 45 minutes to run 6 miles at this pace. If it takes her 45 minutes to run 6 miles, it will take her $45\div3=15$ minutes to run $6\div3=2$ miles at the same pace. If it takes her 15 minutes to run 2 miles, it will take her $4\times15=60$ minutes to run $4\times2=8$ miles at the same pace. Since 60 minutes is 1 hour, she is running at a speed of 8 miles per hour. We found her pace in minutes per mile in part (a).
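The same answers can also be checked mechanically from the two unit rates. The short sketch below is an illustrative aside (not part of the original task); the variable names are ours.

```python
# Illustrative check of the unit-rate reasoning (not part of the original task).
miles, minutes = 20, 150

pace = minutes / miles          # minutes per mile: 150 / 20 = 7.5
speed = miles / minutes * 60    # miles per hour:   20 / 150 * 60 = 8.0

print(pace * 6)      # (a) time for 6 miles -> 45.0 minutes
print(15 / pace)     # (b) distance in 15 minutes -> 2.0 miles
print(speed)         # (c) speed -> 8.0 miles per hour
print(pace)          # (d) pace -> 7.5 minutes per mile
```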
CommonCrawl
What are some famous rejections of correct mathematics? Dick Lipton has a blog post that motivated this question. He recalled the Stark-Heegner Theorem: There are only a finite number of imaginary quadratic fields that have unique factorization. They are $\sqrt{d}$ for $d \in \{-1,-2,-3,-7,-11,-19,-43,-67,-163 \}$. From Wikipedia (link in the theorem statement above): It was essentially proven by Kurt Heegner in 1952, but Heegner's proof had some minor gaps and the theorem was not accepted until Harold Stark gave a complete proof in 1967, which Stark showed was actually equivalent to Heegner's. Heegner "died before anyone really understood what he had done". I am also reminded of Grassmann's inability to get his work recognized. What are some other examples of important correct work being rejected by the community? NB. There was a complementary question here before. soft-question big-list ho.history-overview Steve Huntsman $\begingroup$ There is, of course, the Cantor-Kronecker debacle, but I think this is too well-known to merit an answer. $\endgroup$ – Qiaochu Yuan Feb 3 '10 at 1:26 $\begingroup$ @Qiaochu--I think the Hilbert existence proof ( en.wikipedia.org/wiki/David_Hilbert#The_finiteness_theorem ) is in the same class. $\endgroup$ – Steve Huntsman Feb 3 '10 at 1:43 $\begingroup$ I don't think the proofs of the four-color theorem and Kepler conjecture were really rejected, but they merit a footnote. $\endgroup$ – Steve Huntsman Feb 3 '10 at 2:01 $\begingroup$ See also Emch, A. "Rejected Papers of Three Famous Mathematicians". National Mathematics Magazine 11, 186 (1937), in which papers of Schläfli, Riemann, and De Jonquières are discussed. Available at jstor.org/pss/3028220 $\endgroup$ – Steve Huntsman Feb 3 '10 at 15:37 $\begingroup$ mathoverflow.net/questions/27284/mathematical-controversies/… $\endgroup$ – Steve Huntsman Jun 7 '10 at 13:34 Tarski ran into some trouble when he tried to publish his result that the Axiom of Choice is equivalent to the statement that an infinite set $X$ has the same cardinality as $X \times X$. From Mycielski: "He tried to publish his theorem in the Comptes Rendus but Frechet and Lebesgue refused to present it. Frechet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest. And Tarski said that after this misadventure he never tried to publish in the Comptes Rendus." fredjalves $\begingroup$ On p. 215 of his book on the AC, Moore tells the same story, but with Lebesgue and Hadamard. He cites a personal communication from Tarski as source. $\endgroup$ – Péter Komjáth Jun 16 '10 at 6:01 From wikipedia: Higher homotopy groups were first defined by Eduard Čech in 1932 (Čech 1932, p. 203). (His first paper was withdrawn on the advice of Pavel Sergeyevich Alexandrov and Heinz Hopf, on the grounds that the groups were commutative so could not be the right generalizations of the fundamental group.) Steven Gubkin $\begingroup$ Perhaps Alexandrov and Hopf were right. The higher homotopy groups are not the right generalisation of the fundamental group. The latter classifies covering spaces, but the higher homotopy groups have no corresponding property. $\endgroup$ – Tim Porter Feb 3 '10 at 15:23 $\begingroup$ There are the corresponding Whitehead towers where you "kill" the lowest-dimensional non-trivial homotopy group of a space by a fibration whose fibre is an Eilenberg-Maclane space. In the 1-dimensional case this is a covering space. 
An example of killing $\pi_2$ of the base would be the Hopf fibration $S^3 \to S^2$. It's maybe not as complete an analogy as you'd like? $\endgroup$ – Ryan Budney Feb 4 '10 at 15:26 The Mordell-Weil theorem, when submitted by Mordell to the London Mathematical Society's journal, was rejected. This theorem was the start of the whole set of investigations on elliptic curves, and indeed on arithmetic geometry. Andre Weil in his Ph.D. thesis created the subject of arithmetic of algebraic varieties and Galois cohomology, to prove his strengthened version of this theorem and to understand Mordell's calculations. I also believe that his motivation to re-write the foundations of algebraic geometry was partly the desire to give the Mordell-Weil theorem a cleaner form, though the officially stated motivation is to put his proof of the Riemann hypothesis for function fields over finite fields on a firm ground. And the subject grew and flowered through greats like Grothendieck, and one must remark the work of Faltings on the Mordell conjecture, in the same direction, proposed in the same paper, which could be proved only so many years later, after Weil failed in his Ph.D. time. Indeed, the proof of Fermat's last theorem also belongs to the same subject. Looking back, the rejection of Mordell's groundbreaking paper is so unbelievable. Excerpt from source: Mordell submitted his subsequent work on indeterminate equations of the third and fourth degree when he became a candidate for a Fellowship at St John's College, but he was not successful. His paper on this topic was rejected for publication by the London Mathematical Society but accepted by the Quarterly Journal. Mordell was bitterly disappointed at the way his paper had been received. He wrote at the time on an offprint of the paper:- This paper was originally sent for publication to the L.M.S. in 1913. It was rejected ... Indeterminate equations have never been very popular in England (except perhaps in the 17th and 18th centuries); though they have been the subject of many papers by most of the greatest mathematicians in the world: and hosts of lesser ones ... Such results as [those in the paper] ... marks the greatest advance in the theory of indeterminate equations of the 3rd and 4th degrees since the time of Fermat; and it is all the more remarkable that it can be proved by quite elementary methods. ... We trust that the author may be pardoned for speaking thus of his results. But the history of this paper has shown him that in his estimation, it has not been properly appreciated by English mathematicians. The details of Weil's work can be found in his autobiography, "Apprenticeship of a Mathematician". Anweshi $\begingroup$ Good story, but you've mixed two papers: the one that was rejected by LMS in 1913 is Mordell, L. J. Indeterminate equations of the third and fourth degrees. Quart. J., 170-186 (1914). The "Mordell-Weil theorem" paper is JFM 48.0140.03 Mordell, L. J. On the rational solutions of the indeterminate equations of the third and fourth degrees. Cambr. Phil. Soc. Proc. 21, 179-192 (1922). $\endgroup$ – Victor Protsak Jun 7 '10 at 4:40 Galois theory, maybe? Quoting Wikipedia: Galois returned to mathematics after his expulsion from Normale, although he was constantly distracted in this by his political activities.
After his expulsion from Normale was official in January 1831, he attempted to start a private class in advanced algebra which did manage to attract a fair bit of interest, but this waned as it seemed that his political activism had priority. Simeon Poisson asked him to submit his work on the theory of equations, which he submitted on January 17. Around July 4, Poisson declared Galois' work "incomprehensible", declaring that "[Galois'] argument is neither sufficiently clear nor sufficiently developed to allow us to judge its rigor"; however, the rejection report ends on an encouraging note: "We would then suggest that the author should publish the whole of his work in order to form a definitive opinion." While Poisson's rejection report was made before Galois' Bastille Day arrest, it took some time for it to reach Galois, which it finally did in October that year, while he was imprisoned. It is unsurprising, in the light of his character and situation at the time, that Galois reacted violently to the rejection letter, and he decided to forget about having the Academy publish his work, and instead publish his papers privately through his friend Auguste Chevalier. Apparently, however, Galois did not ignore Poisson's advice and began collecting all his mathematical manuscripts while he was still in prison, and continued polishing his ideas until he was finally released on April 29, 1832. $\begingroup$ Neumann's Ford Prize-winning book review of a book by Edwards on Galois theory, mathdl.maa.org/images/upload_library/22/Ford/Neumann407-411.pdf, implies that Poisson's complaints are justified: Galois' original paper is very hard to read, and Edwards' book-length treatment is mainly about rewriting its material in a more understandable way. $\endgroup$ – David Eppstein Feb 3 '10 at 1:30 $\begingroup$ @DE: Thanks for the link -- that was one of the best mathematical book reviews I've ever seen. $\endgroup$ – Pete L. Clark Feb 4 '10 at 3:43 $\begingroup$ I think it's also important to note that Galois died about a month after being released. It was then Liouville who filled in the details and presented Galois' discoveries to the community. $\endgroup$ – j0equ1nn Jan 6 '15 at 7:25 $\begingroup$ An updated link to the review is maa.org/sites/default/files/pdf/upload_library/22/Ford/…. $\endgroup$ – dvitek Oct 9 '17 at 14:26 Smale's eversion of the 2-sphere was first thought to be an "obvious counterexample" to a result he proved in his 1958 thesis. See the Wikipedia article "Smale's paradox" for further information. John Stillwell 71% $\begingroup$ I can't find anything in the Wikipedia article about Smale's eversion looking like an "obvious counterexample" to anything. Can you give us some more sources or information? $\endgroup$ – Vectornaut Feb 3 '10 at 1:27 $\begingroup$ Sorry, I didn't make myself clear. Smale proved a general result about immersions of spheres of arbitrary dimension (I don't know the details). His advisor Raoul Bott pointed out that the general result implied that the 2-sphere could be smoothly turned inside-out in $\mathbb{R}^3$, which he thought was obviously wrong. But Smale's general result was correct, and this led to the discovery of the explicit eversions of the 2-sphere mentioned in the Wikipedia article. 
$\endgroup$ – John Stillwell Feb 3 '10 at 1:51 $\begingroup$ @JohnStillwell I think it's worthwhile to incorporate this comment into the answer for completeness (remember: comments are not meant to be of lasting value $\endgroup$ – Danu Nov 6 '15 at 22:11 Wigner's (motivated by quantum mechanics) classification of the irreducible unitary representations of the Poincaré group (the group of automorphisms of R4 with the Lorentz metric) was not accepted by the "American Journal of mathematics" on the ground of not being mathematically interesting. Later it was published by "Annals of mathematics" on von Neumann's suggestion. In this work Wigner introduced the method of induced representations and "normal subgroup analysis". Both became very important building blocks in the representation theory of Lie groups. David Bar Moshe $\begingroup$ I wish I could remember where I first heard another version, where rejection was by Physical Review (different slant!). Articles by Irving Segal (1996, p. 459) and Wigner himself (1979; pdf) say the paper was rejected by "a Springer physics journal", resp. "one of our mathematical journals". $\endgroup$ – Francois Ziegler Oct 17 '17 at 0:22 $\begingroup$ @Francois Ziegler, Thank you very much for the corrections. I have heard this story a long time ago from Prof. Louis Michel. I don't remember which version it was. $\endgroup$ – David Bar Moshe Nov 8 '17 at 20:02 Ross Street's groundbreaking paper on higher categorical descent and higher nerves R. Street, The algebra of oriented simplexes, J. Pure Appl. Algebra 49 (1987) 283-335 was first submitted to Topology but Greame Segal rejected it without review. Street had expected that Segal would appreciate a paper on higher categorical nerve, in view of the fundamental work of Segal on nerves and classifying spaces published in Topology and Publ. IHES. But curiously, Segal considered Street's work on higher nerves of little relevance to topology. The contemporary merger of homotopy theory and higher category theory in the works of Joyal, Simpson, Lurie, Rezk, Cisinski, Jardine and others has of course proven Segal to be wrong in that statement. Zoran Skoda Ludwig Schläfli discovered the regular polytopes in $\mathbb{R}^4$, including the 24-cell, 120-cell, and 600-cell, among many results of n-dimensional geometry, between 1850 and 1852. He wrote up his results in a big manuscript, Theorie der vielfachen Kontinuität, which was rejected by the Vienna Akademie, and also in Berlin. It was finally published after his death, in 1901. In the meantime, the regular polytopes had been rediscovered by Stringham in 1880. See Coxeter's Regular Polytopes and the Wikipedia article on Schläfli. You all know about the journal of rejected math papers: http://www.rejecta.org $\begingroup$ This is awesome! $\endgroup$ – Victor Protsak Jun 7 '10 at 4:19 $\begingroup$ I'm unsure if it's a practical joke, even. There's only the one issue! $\endgroup$ – Kevin O'Bryant Jun 10 '10 at 1:45 $\begingroup$ Sadly, it appears not to exist anymore. $\endgroup$ – Charles Rezk Jun 28 '13 at 21:48 $\begingroup$ @CharlesRezk It's on web.archive.org; I've added the link. $\endgroup$ – Mark Hurd May 21 '14 at 11:53 $\begingroup$ I thought this was a joke! I think it would be nicer if the full referee reports were published together with the paper and an open letter from the author. 
$\endgroup$ – alvarezpaiva May 21 '14 at 13:57 What about deBranges' proof of the Bieberbach conjecture: http://en.wikipedia.org/wiki/Louis_de_Branges_de_Bourcia Deane Yang $\begingroup$ I guess de Branges would probably say "My proof of the Riemann hypothesis". Has anyone been known to take a serious look at it? $\endgroup$ – Steve Huntsman Feb 3 '10 at 1:45 $\begingroup$ The Bieberbach conjecture is a good example. Regarding the Riemann Hypothesis, I recall that I have seen at least one paper that claimed to show that de Brange's approach (at the time) could not prove it. Doing a quick search, it was probably the Conrey and Li paper from 1998 mentioned on the Wikipedia page. Whether his approach has changed since then I do not know. $\endgroup$ – Lasse Rempe-Gillen Feb 3 '10 at 11:28 $\begingroup$ I believe Lagarias has studied it. See math.lsa.umich.edu/~lagarias/doc/debranges-houches.pdf $\endgroup$ – Will Orrick Feb 6 '10 at 2:28 $\begingroup$ He wished it upon himself: the first two rejected versions were wrong! $\endgroup$ – Victor Protsak Jun 7 '10 at 4:43 $\begingroup$ de Branges proof of the Bieberbach conjecture was never rejected. His paper of Riemann hypothesis was never submitted to a journal, thus it also could not be rejected. $\endgroup$ – Alexandre Eremenko Jan 29 '17 at 20:07 I just remembered Apery's theorem: Due to the wholly unexpected nature of the result and Apéry's blasé and very sketchy approach to the subject many of the mathematicians in the audience dismissed the proof as flawed. Ostrogradsky rejected Lobachevsky's paper "A concise outline of the foundations of geometry". $\begingroup$ Gauss also invented hyperbolic geometry, but himself rejected it. See Milnor's article on "150 years of hyperbolic geometry". I think there are also many other examples where Gauss's "few but ripe" policy made him reject amazing results he invented. Link (might need institutional access): ams.org/bull/1982-06-01/S0273-0979-1982-14958-8/home.html $\endgroup$ – Ilya Grigoriev Feb 4 '10 at 6:30 $\begingroup$ Also, Beltrami's 1868 "pseudosphere" model of hyperbolic geometry (more like the universal cover of the pseudosphere), which should have settled the issue, was delayed for a year. Apparently Beltrami was at first deterred by criticisms from Cremona. $\endgroup$ – John Stillwell Feb 4 '10 at 23:37 Qiaochu Yuan already mentioned in a comment Kronecker's negative impact on Cantor's first paper on set theory which was ahead of its time about 20 years, when published in 1874, and which had been delayed several months, unusually long (for that time) such that Cantor considered to withdraw it. Über eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen. Crelles Journal f. Mathematik Bd. 77, S. 258 - 262 (1874). But it is less well known that Cantor's paper PRINCIPIEN EINER THEORIE DER ORDNUNGSTYPEN, ERSTE MITTHEILUNG had to be withdrawn from Mittag-Leffler's Acta in 1884 and had to wait for publication until Ivor Grattan-Guinness published it in Acta Mathematica 124 (1970) 65 - 107. This paper contains a tremendous richness of proposed applications of set theory. Its enforced withdrawal seems to have given Cantor the first really hard stroke. $\begingroup$ Of course, Cantor himself originally rejected his set-theoretic arguments: "Je le vois, mais je ne le crois pas!" en.m.wikipedia.org/w/… $\endgroup$ – Matt F. 
Oct 17 '17 at 2:30 René Schoof once told me that when he submitted his PhD Thesis, the chapter containing his algorithm to compute the number of points of elliptic curves over finite fields did not appeal at all to the referee, who wondered whether such questions had some interest at all... History decided otherwise! According to the nLab category theory entry, Eilenberg and Mac Lane's paper introducing category theory, General theory of natural equivalences, was originally rejected before being published. Gauss essentially invented the Fast Fourier Transform in 1805, but the importance of his work was not understood for a century. "A 1965 paper by John Tukey and John Cooley [2] is generally credited as the starting point for modern usage of the FFT. However, a paper by Gauss published posthumously in 1866 [3] (and dated to 1805) contains indisputable use of the splitting technique that forms the basis of modern FFT algorithms. "Gauss was interested in the problem of computing accurate asteroid orbits from observations of their positions. His paper contains 12 data points on the position of the asteroid Pallas, through which he wished to interpolate a trigonometric polynomial with 12 coefficients. Instead of solving the resulting 12-by-12 system of linear equations by hand, Gauss looked for a shortcut. He discovered how to separate the equations into three subproblems that were much easier to solve, and then how to recombine the solutions to obtain the desired result. The solution is equivalent to estimating the DFT of the data with an FFT algorithm." http://www.mathworks.com/access/helpdesk/help/techdoc/math/brentm1-1.html "Recent studies of the history of the fast Fourier transform (FFT) algorithm, going back to Gauss[1], provide an example of exactly the opposite situation. After having been published and used over a period of 150 years without being regarded as having any particular importance, the FFT was re-discovered, developed extensively, and applied on electronic computers in 1965, creating a revolutionary change in the scale and types of problems amenable to digital processes." http://www.springerlink.com/content/g5762llr5knw8505/ Marko Amnell $\begingroup$ It was not really rejected, it is simply that until cheap computational power was around, it was a solution waiting for a problem. By the way, it was invented yet another time between Gauss and Cooley and Tukey by some mathematicians in the British admiralty, circa WWI. $\endgroup$ – David Lehavi Feb 4 '10 at 7:25 $\begingroup$ This cannot be completely true: Since Gauss used the method in an actual computation, it must have provided an advantage even then. Were these kinds of calculations rare, or was it just that Gauss didn't make his idea more public? $\endgroup$ – rem May 21 '14 at 15:43 Abel's work on elliptic functions was unappreciated by the referee Fourier. I think this is the Abel part of the famous Abel-Jacobi theorem. All in all, Abel could not find appreciation in his lifetime, and by the time he got a decent job, he was ill with tuberculosis and died in obscurity, without even money for a treatment, and without being able to marry his sweetheart Christina. I remember all this from E. T. Bell. I am however unable to dig up an online reference. Anweshi $\begingroup$ According to Bell, Legendre and Cauchy were asked to referee Abel's manuscript (on Abel's theorem) in 1826. They stalled, with the excuse that manuscript was not legible, and eventually Cauchy mislaid it. It was not published until 1841. 
$\endgroup$ – John Stillwell Feb 6 '10 at 22:06 $\begingroup$ If true, Stubhaug's "Niels Henrik Abel and his Times: Called Too Soon by Flames Afar" must have it. $\endgroup$ – Victor Protsak Jun 7 '10 at 4:49 Gelfand-Mazur — every real unital Banach algebra where every non-zero element is invertible is isomorphic to either $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$ — was first published without proof by Mazur. Mazur had a (rather short) proof, but the editor demanded he shorten it further. He refused to shorten it, and so it was published without proof. Later Gelfand published a proof of a weaker version (only for complex commutative Banach algebras), probably without knowing about Mazur's result. Jan Jitse Venselaar $\begingroup$ I don't understand: Mazur refused to do what? to publish his result without proof? but then, in the previous line, it is said "was first published without proof by Mazur". Can you please clarify? Otherwise, very good answer. $\endgroup$ – Joël May 21 '14 at 12:09 $\begingroup$ @Joël: The original phrasing was indeed a bit unclear, I've fixed it now. $\endgroup$ – Jan Jitse Venselaar May 27 '14 at 18:10 $\begingroup$ There is a related 1932 work on locally compact topological skewfields by Pontrjagin, with the "same" classification outcome, Über Stetige Algebraische Korper, Annals of Mathematics 33, No. 1 (Jan., 1932) 163-174 jstor.org/stable/1968110 $\endgroup$ – Zoran Skoda May 7 '15 at 17:22 The Monty Hall problem, surely? Benjol Supposedly Vizing had some difficulty publishing what is now known as Vizing's theorem (the result that every degree-d graph can be edge-colored with at most d+1 colors), leading to it eventually being published in a very obscure journal, Akademiya Nauk SSSR. Sibirskoe Otdelenie. Institut Matematiki. Diskretnyĭ Analiz. Sbornik Trudov. Soifer recounts the story in The Mathematical Coloring Book, pages 136–137. Steve Huntsman Riemann's work with curved spaces, particularly their applications to physics, was at least 40 years ahead of its time. He pushed forward ideas that space is perhaps curved and that forces such as gravity could be thought of as bending in space. He gave a few lectures on these ideas, but fellow mathematicians and physicists didn't really know what good could come of them and didn't pay much attention. Of course, Einstein finally solved the puzzle many years later. Also, Joseph Fourier was laughed at when he proposed the notion of Fourier series for solving the heat equation, particularly at the lack of rigor and the overall scope of its applications. Opinions on the matter changed a decade or two later when the theory began to root itself in rigor thanks to Dirichlet.
The success of an idea has a lot to do with whether or not the ambient culture is ready to hear it -- in this case it took Lorentz's work, the Michelson-Morley experiment and Maxwell's equations to "set the stage", I suppose in the opposite order to which I wrote them. :) $\endgroup$ – Ryan Budney Feb 4 '10 at 15:40 This does not fit the original poster's question but is vaguely related: I have heard it said that Carleson did not get the Fields Medal in '66 because his proof of Carleson's Theorem was too difficult to read and verify at the time. (Granted, the result was only published in '66.) And alas, he was too old to get it in '70. Ryan O'Donnell The idea of Van Kampen diagrams in group theory was developed by Van Kampen in the 1930s but no one really used them until the 1960s when they were rediscovered independently by Lyndon and Ol'Shanskii. Today they are an important tool in geometric group theory. Johannes Hahn $\begingroup$ I would say these were "ignored" rather than "rejected". $\endgroup$ – Steve Huntsman Feb 3 '10 at 14:02 $\begingroup$ Ol'shanskii did not rediscover van Kampen diagrams independently of Lyndon. The book of Lyndon and Schupp appeared first and was widely known. However, there were errors in L-S including the proof of the van Kampen lemma (about the diagrams), so many leading group theorists rejected the method in the 60-80s. Ol'shanskii did make everything completely rigorous in his book and earlier papers. $\endgroup$ – Mark Sapir Oct 28 '10 at 2:49 This is somehow related to, yet it is not about a specific result, but it's about a life. Namely, according to these sources: http://en.wikipedia.org/wiki/Nikolai_Luzin#The_Luzin_affair_of_1936, and http://www-history.mcs.st-andrews.ac.uk/Extras/Luzin.html, it was claimed in "Pravda", in the summer of '36, that Nikolai Luzin published "would-be scientific papers", within foreign journals (!). Also, he was accused of "praising weak work" (!!). And this accusation was certified by some sort of court, AFAIK. Great shame on these former students of Luzin! $\begingroup$ That was not a mathematical rejection, but a political one, not unlike the campaign led by Teichmüller against Landau and his own adviser Hasse (who was not Nazi enough for him). Some of the actors probably had opportunistic reasons as well. This is a very interesting subject, but I think it is off-topic. $\endgroup$ – darij grinberg Feb 4 '10 at 12:26 $\begingroup$ There is a recent book on the "Lusin affair" that compiled previously unavailable archive materials, "Дело Лузина". $\endgroup$ – Victor Protsak Jun 7 '10 at 4:15 Regarding Carleson's theorem, one of the Fields committee member is supposed to have said that "it would be an insult for Carleson, to give him the Fields medal for that". (I remember the anecdote from Paul Koosis's class, but I might have distorted it in details). To this day, I'm puzzled by what the committee member really meant. $\begingroup$ perhaps... that would be a good explanation! (i'm not a complex analyst, and i'm not familiar with carleson's work). $\endgroup$ – maks Feb 7 '10 at 7:33 It is only a slight exaggeration to say that the collected works of Hannes Alfven would fit this description. Andrew Mullhaupt $\begingroup$ Could you expand on that? 
$\endgroup$ – Jacques Carette Feb 19 '10 at 1:48 $\begingroup$ I think the easiest way to fit that into the limited characters allowed here is to point at the wikipedia page for Alfven waves: en.wikipedia.org/wiki/Alfv%C3%A9n_wave This was the rule, not the exception for Alfven. Even after his Nobel he didn't get much respect, (except from guys like Fermi and Chandrasekhar - you could do worse). It's also worth looking at his wikipedia page: en.wikipedia.org/wiki/Hannes_Alfv%C3%A9n $\endgroup$ – Andrew Mullhaupt Feb 19 '10 at 20:34 $\begingroup$ @AndrewMullhaupt: You can edit the answer instead of writing in comments, if the limit on length is an issue. $\endgroup$ – timur May 21 '14 at 12:54
CommonCrawl
An efficient authentication and key agreement protocol for IoT-enabled devices in distributed cloud computing architecture Huihui Huang ORCID: orcid.org/0000-0003-2663-82401, Siqi Lu1,2, Zehui Wu1 & Qiang Wei1 With the widespread use of the Internet of Things and cloud computing in smart cities, various security and privacy challenges may be encountered. The most basic problem is authentication between the participating entities, such as users, IoT devices, distributed servers and authentication centers. In 2020, Kang et al. improved an authentication protocol for IoT-enabled devices in a distributed cloud computing environment; its main purpose was to prevent the counterfeiting attacks possible in Amin et al.'s protocol, which was published in 2018. However, we found that Kang et al.'s protocol still has a fatal vulnerability: it is susceptible to an offline password guessing attack, through which a malicious user can easily obtain the master key of the control server. In this article, we extend their work to design a lightweight pseudonym-identity-based authentication and key agreement protocol using a smart card. To illustrate the security of our protocol, we used the security protocol analysis tools AVISPA and Scyther to prove that the protocol can defend against various existing attacks. We further analyze the authentication paths between the interacting participants in detail to ensure protection against impersonation attacks. In addition, based on a comparison of security functions and computational performance, our protocol is superior to the other two related protocols. As a result, the enhanced protocol will be efficient and secure in a distributed cloud computing architecture for the smart city. In recent years, Internet of Things (IoT) devices, such as sensor devices, RFID tags, actuators and smart objects, have increasingly been used in daily life to provide people with convenience. IoT-enabled devices are interconnected and interlinked in a heterogeneous wireless environment, in which they can continuously monitor and analyze sensor data from multifarious applications to achieve real-time automation of smart decision-making processes in smart cities. However, as is well known, IoT devices are resource-constrained while their applications are data-intensive. Thus, there should be a standard platform that can efficiently handle the large amount of heterogeneous data and devices, as both are growing exponentially [1]. To process such a large data repository generated from various IoT devices, cloud computing has emerged as a key technology [2,3,4]. Nowadays, several types of cloud service are provided by cloud providers, such as Software as a Service (SaaS) (e.g., IBM LotusLive), Platform as a Service (PaaS) (e.g., Google AppEngine) and Infrastructure as a Service (IaaS) (e.g., Amazon Web Services) [5]. However, there is a basic problem of how the private distributed cloud server authenticates the connected IoT devices. For example, the private information from IoT devices is stored in a distributed private cloud server, so that only legitimate users are allowed to access the sensitive information. Recently, many authentication protocols integrating IoT and distributed cloud computing have been proposed for secure access control on large-scale IoT networks [5,6,7,8,9,10,11,12,13].
Amin et al. [5] proposed an authentication protocol for IoT-enabled devices in a distributed cloud computing environment and showed many security vulnerabilities of two earlier authentication protocols proposed by Xue et al. [8] and Chuang and Chen [9]. However, Kang et al. [10] found that Amin et al.'s [5] protocol is vulnerable to counterfeit attacks and improved the protocol. Unfortunately, by studying a large number of authentication protocols [14], we discovered an off-line password guessing attack on Kang et al.'s protocol; that is, a malicious user can easily obtain the secret number of the master control server. This is a fatal vulnerability for the entire system. Thus, we extend their work by designing a lightweight dynamic pseudonym identity based authentication and key agreement protocol using a smartcard, which is proven to be efficient and secure. The rest of the paper is organized as follows. The methods and experimental setup of our article are briefly introduced in Sect. 2. In Sect. 3, we review Kang et al.'s protocol and point out its security weaknesses in detail. The enhanced protocol is proposed in Sect. 4. Results and discussion are given in Sect. 5. Finally, the article is concluded in Sect. 6. Methods and experimental In this paper, we consider the following scenario: a cloud computing service provider has built a distributed private cloud environment covering an entire smart city. Many IoT devices should be interconnected with each other via the nearest private cloud server, which records confidential information. The distributed cloud service can then realize high-speed computing and real-time communication with each IoT-enabled device to provide high-quality services [15, 16]. This scenario involves three main entities: the cloud computing provider, which is regarded as the control server CS; a single distributed private cloud server, namely \(S_m\); and each IoT-enabled device, which belongs to a user \(U_i\) in the smart city. We briefly describe this scenario in Fig. 1. Since the protocol is designed for IoT devices, which have tight computing resources and data-intensive workloads, the protocol only uses hash functions and X-or operations. IoT-enabled distributed cloud architecture in smart city. The real scenario of the IoT-enabled distributed cloud architecture in the smart city, which involves three main entities: the cloud computing provider, which is regarded as the control server CS; a single distributed private cloud server, namely \(S_m\); and each IoT-enabled device, which belongs to the user \(U_i\). In the experimental section, we used the security protocol analysis tools AVISPA and Scyther to simulate our proposed protocol and illustrate its security. We built AVISPA (version of 2006/02/13) and Scyther (v1.1.3) in a virtual machine running the Ubuntu operating system. Then, in the security analysis, we mainly use cryptographic reasoning to analyze in detail the authentication paths among \(U_i\), \(S_m\), and CS in our proposal, so as to protect against the most common impersonation attacks. Finally, security functionality and computational performance are concretely compared between our protocol and the other two protocols. Kang et al.'s protocol and its weaknesses In this section, we give an overview of Kang et al.'s [10] protocol and describe some security drawbacks of their protocol in detail.
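Since every computation in both Kang et al.'s protocol and our enhanced protocol reduces to the one-way hash h(·) and the bitwise X-or operation, a minimal sketch of these two primitives is given below before the phases are reviewed. SHA-256 as the instantiation of h(·), byte-string concatenation for "∥", and the 32-byte value sizes are illustrative assumptions, not part of either specification.

```python
# Minimal sketch of the two primitives used throughout: h(.) and XOR.
# SHA-256 is assumed as h(.); all names and sizes are illustrative.
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """Hash of the concatenation of the given byte strings (stands in for h(a || b || ...))."""
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings (the circled-plus operation)."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

# The masking trick used throughout both protocols: a nonce N hidden under a
# shared hash value K can be recovered by any party that also knows K,
# because (N xor K) xor K == N.
K = h(b"shared secret material")   # plays the role of values such as D_i or BS_m
N = secrets.token_bytes(32)        # a fresh random nonce such as N_i or N_m
masked = xor(N, K)                 # what is actually sent over the public channel
assert xor(masked, K) == N         # the legitimate receiver unmasks the nonce
```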
In Kang et al.'s protocol, there are 3 participants: an ordinary user \(U_i\), mth cloud providing servers \(S_m\), and the control server (CS). The server CS is a trusted third party responsible for registration and authentication of users and cloud servers. The notations used in this article are shown in Table 1. Table 1 Notations used in this paper Kang et al.'s protocol In this section, we introduce the registration, login, and authentication key agreement phases of Amin et al.'s [5] protocol, as their protocol only includes three parts. To facilitate analysis, the full implementation of Kang et al.'s protocol is shown in Fig. 2. Implementation of Kang et al.'s protocol. Implementation of the registration, login, and authentication key agreement phases in Kang et al.'s protocol Registration phase During server registration, the cloud server \(S_m\) sends the message \(\left\langle {B{S_m},d} \right\rangle\) to CS. After receiving it, CS computes \(PSI{D_m} = h\left( {SI{D_m}\parallel d} \right) ,\;B{S_m} = h\left( {PSI{D_m}\;\parallel SI{D_m}\parallel d} \right) \;\) and sends \(BS_m\) to \(S_m\) via a secure channel. Finally, \(S_m\) stores secret parameter \(\left\langle {B{S_m},d} \right\rangle\) into the memory. In the phase of user registration, the user \(U_i\) computes \({A_i} = {P_{i\;}} \oplus h\left( {{B_i}} \right)\), where \(B_i\) is the biometric of \(U_i\), and sends \(\left\langle {I{D_i},{A_i}} \right\rangle\) to the CS securely. On getting it, CS chooses a random number \(b_i\) and calculates the following operations: \(PI{D_i} = h\left( {I{D_i}\;\parallel {b_i}} \right)\), \({C_i} = h\left( {I{D_i}\;\parallel {A_i}} \right)\), \({D_i} = h\left( {PI{D_i}\;\parallel x} \right)\), \({E_i} = {D_{i\;}} \oplus {A_i}\) and \({\Delta _i} = h\left( {PI{D_i}\;\parallel I{D_i}\parallel x} \right)\). Finally, CS delivers a smart card recording the information \(\left\langle {{C_i},{{{\Omega }}_i},{\Delta _i},{E_i},h\left( \cdot \right) } \right\rangle\) to \(U_i\) in a secure channel. Login phase When wanting to access the information of the cloud server \(S_m\), \(U_i\) provides \(ID_i^{{{*}}}\), \(P_i^{{{*}}}\) and \(B_i^{{{*}}}\) to a card reader (CR). Then, CR calculates \(A_i^{{{*}}} = P_i^{{{*}}} \oplus h\left( {B_i^{{{*}}}} \right)\), \(C_i^{{{*}}} = h\left( {ID_i^{{{*}}}\parallel A_i^{{{*}}}} \right)\) and checks whether \(C_i^*\) is equal to \({C_i}\) . If \(C_i^* = {C_i}\) , CR produces a random number \(N_i\) and current timestamp \(T{S_i}\) to compute the following operations: \({b_i} = {{{\Omega }}_i} \oplus {A_i}\), \(PI{D_i} = h\left( {I{D_i}\parallel {b_i}} \right)\), \({D_i} = {E_i} \oplus {A_i}\), \({O_i} = I{D_i} \oplus D\), \({G_i} = h\left( {I{D_i}\parallel SI{D_m}\parallel {N_i}\parallel T{S_i}\parallel {D_i}} \right)\), \({F_i} = {\Delta _i} \oplus {N_i}\) and \({Z_i} = SI{D_m} \oplus h\left( {{D_i}\parallel {N_i}} \right)\). After that, CR submits the login message \({{\;}}\left\langle {{G_i},{F_i},{Z_i},{O_i},PI{D_i},T{S_i}} \right\rangle\) to the cloud server \(S_m\) over an public channel. Authentication key agreement phase This phase describes mutual authentication and key agreement among the participants, which can be divided into four steps as follows. Step 1:: When receiving the login message from \(U_i\), \(S_m\) first checks the time interval condition \(T{S_m} - T{S_i} < \Delta T\), where \(T{S_m}\) is \(S_m\)'s current timestamp and \(\Delta T\) is expected time interval during message transmission. 
If \(T{S_m} - T{S_i} \ge \Delta T\), \(S_m\) terminates the connection; otherwise, \(S_m\) takes a random number \(N_m\) to calculate $$\begin{aligned}&{{\;}}{J_i} = {B_m} \oplus {N_m}\\&{K_i} = h\left( {{N_m}\parallel B{S_m}\parallel PI{D_i}\parallel {G_i}\parallel T{S_m}} \right) \\ \end{aligned}$$ Next, \(S_m\) sends \(\left\langle {{J_i},{K_i},PSI{D_m},{G_i},{F_i},{Z_i},{O_i},PI{D_i},T{S_i},T{S_m}} \right\rangle\) to the control server CS via an public channel. After getting the message, CS checks time interval condition \(T{S_{CS}} - T{S_m} < \Delta T\), where \(T{S_{CS}}\) is CS's current timestamp. If \(T{S_{CS}} - T{S_m} < \Delta T\) , CS computes $$\begin{aligned}&{D_i} = h(PI{D_i}\parallel x)\\&I{D_i} = {O_i} \oplus {D_i}\\&{N_i} = {F_i} \oplus h(PI{D_i}\parallel I{D_i}\parallel x)\\&SI{D_m} = {Z_i} \oplus h({D_i}\parallel {N_i})\\&G_i^* = h(I{D_i}\parallel SI{D_m}\parallel {N_i}\parallel T{S_i}\parallel {D_i})\\ \end{aligned}$$ Then, CS checks \(G_i^{{{*}}}\) is equal to \({G_i}\) or not. If \(G_i^{{{*}}} = {G_i}\), CS thinks that the user \(U_i\) is legal; otherwise, it terminates the session. After that, CS calculates $$\begin{aligned}&B{S_m} = h(PSI{D_m}\parallel SI{D_m}\parallel y)\\&{N_i} = B{S_m} \oplus {J_i}\\&K_i^* = h({N_m}\parallel B{S_m}\parallel PI{D_i}\parallel {G_i}\parallel T{S_m})\\ \end{aligned}$$ for authenticating the cloud server \(S_m\). If \(K_i^{{{*}}} \ne {K_i}\), CS thinks the cloud server \(S_m\) is illegal and terminates the session; otherwise, CS randomly selects a number \({N_{CS}}\) and computes $$\begin{aligned}&{P_{CS}} = {N_m} \oplus {N_{CS}} \oplus h\left( {{N_i}\parallel {D_i}\parallel {F_i}} \right) \\&{R_{CS}} = {N_i} \oplus {N_{CS}} \oplus h\left( {B{S_m}\parallel {N_m}} \right) \\&{K_{CS}} = h\left( {{N_i}\parallel {N_m}\parallel {N_{CS}}} \right) \\&{Q_{CS}} = h\left( {({N_m} \oplus {N_{CS}})\parallel S{K_{CS}}} \right) \\&{V_{CS}} = h\left( {({N_i} \oplus {N_{CS}})\parallel S{K_{CS}}} \right) \\ \end{aligned}$$ where \({K_{CS}}\) is the secret session key between \(U_i\) and \(S_m\). Finally, CS sends \(\left\langle {{P_{CS}},{Q_{CS}},{\mathrm{{R}}_{CS}},{V_{CS}}{{\;}}} \right\rangle\) to \(S_m\) through public communication. When obtaining the message from CS, \(S_m\) calculates $$\begin{aligned}&{W_m} = h\left( {B{S_m}\parallel {N_m}} \right) \\&{N_i} \oplus {N_{CS}} = {R_{CS}} \oplus {W_m}\\&S{K_m} = h\left( {{N_i}\parallel {N_{CS}}\parallel {N_m}} \right) \\&V_{CS}^{{{*}}} = h\left( {({N_i} \oplus {N_{CS}})\parallel S{K_m}} \right) .\\ \end{aligned}$$ Next, \(S_m\) checks whether \(V_{CS}^{{{*}}}\) is equal to \({V_{CS}}{{\;}}\). If \(V_{CS}^{{{*}}} = {V_{CS}}{{\;}}\), \(S_m\) sends \(\left\langle {{P_{CS}},{Q_{CS}}} \right\rangle\) to the user \(U_i\). On receiving the reply message from \(S_m\), \(U_i\) computes $$\begin{aligned}&{L_i} = h\left( {{N_i}\parallel {D_i}\parallel {F_i}} \right) \\&{N_m} \oplus {N_{CS}} = {P_{CS}} \oplus {L_i}\\&S{K_i} = h\left( {{N_m}\parallel {N_{CS}}\parallel {N_i}} \right) \\&Q_{CS}^{{{*}}} = h\left( {({N_m} \oplus {N_{CS}})\parallel S{K_j}} \right) .\\ \end{aligned}$$ Then, the \(U_i\) checks the condition whether \(Q_{CS}^{{{*}}}\) is equal \({Q_{CS}}\) or not. If the condition is true, \(U_i\) confirms CS and \(S_m\) are authentic. Cryptanalysis of Kang et al.'s protocol In this section, we make cryptanalysis of the protocol proposed by Kang et al. [10] in details. For analysis, there are some valid assumptions that can be found in [17–20]. 
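To make these attacker assumptions concrete, the sketch below anticipates the off-line guessing attack described in the next subsection (Algorithm 1): once an attacker has eavesdropped \(O_{Eve}\) and \(PID_{Eve}\) from the public channel, recovering \(D_{Eve} = O_{Eve} \oplus ID_{Eve}\) reduces finding the master key x to exhaustively testing candidates x′ against \(h(PID_{Eve} \parallel x')\). SHA-256 stands in for h(·), and the tiny candidate list and all values are purely illustrative.

```python
# Sketch of the off-line guessing loop against D_Eve = h(PID_Eve || x).
# Hypothetical values; SHA-256 stands in for h(.), and a tiny candidate list
# stands in for the real exhaustive search over x.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

# What Eve knows: her own identity, plus O_Eve and PID_Eve captured from the channel.
ID_Eve = h(b"eve-identity")                # identity encoded/padded to 32 bytes (illustrative)
x_real = b"weak-master-key"                # unknown to Eve; used here only to build the example
PID_Eve = h(ID_Eve, b"b_Eve")
D_Eve_real = h(PID_Eve, x_real)
O_Eve = xor(ID_Eve, D_Eve_real)            # O_i = ID_i xor D_i, transmitted publicly

# Step 1: undo the mask with her own identity.
D_Eve = xor(O_Eve, ID_Eve)

# Step 2: test candidate master keys off-line until h(PID_Eve || x') matches D_Eve.
candidates = [b"123456", b"admin", b"weak-master-key", b"password"]
recovered = next((x for x in candidates if h(PID_Eve, x) == D_Eve), None)
print("recovered master key:", recovered)  # -> b'weak-master-key'
```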
Off-line password guessing attack The authors in [10] stated that their protocol is protected against off-line password guessing attacks. However, we discover that a malicious attacker can obtain the master secret key of CS after launching this attack. The details are described below: An attacker, namely Eve, first registers with the control server CS under the identity \(I{D_{Eve}}\) like a normal user. Next, he logs in and sends the message \(\left\langle {{G_{Eve}},{F_{Eve}},{Z_{Eve}},{O_{Eve}},PI{D_{Eve}},T{S_{Eve}}} \right\rangle\) to \(S_m\). Because the message is transmitted publicly, he can easily obtain the values \({{O_{Eve}}}\) and \({PI{D_{Eve}}}\), for example by using the Wireshark tool to capture the packets locally. According to the description in the login phase, Eve computes \({D_{Eve}} = {O_{Eve}} \oplus I{D_{Eve}}\), as marked as the "First flaw" in Fig. 2. Since \({D_i} = h\left( {PI{D_i}\;\parallel x} \right)\), the off-line password guessing attack can be implemented by Algorithm 1. Although the algorithm may take a long time to execute, Eve will be willing to keep trying, because the control server CS uses the key x to authenticate every user \(U_i\), so x is a crucial parameter for the whole system. Thus, the protocol proposed by Kang et al. is vulnerable to the above attack. Redundant design in the user registration phase In order to avoid the impersonation attack on Amin et al.'s [5] protocol, the authors compute \(B{S_m} = h\left( {PSI{D_m}{\;}\parallel SI{D_m}\parallel y} \right)\), which indicates that the identity \({SI{D_m}}\) and pseudoidentity \({PSI{D_m}{{\;}}}\) of \(S_m\) are bound to the secret key y of CS by the hash function. As proved in the security analysis section of [10], this technique is effective against the cloud server impersonation attack. Similarly, the authors claim that the operation \({\Delta _i} = h\left( {PI{D_i}\;\parallel I{D_i}\parallel x} \right)\) is also used to prevent a user from cheating CS with a false identity. Unfortunately, our further research discovered that this design is redundant in the user registration phase. As described in [8], an authentication scheme using a smart card mainly aims to resolve the problem that remote servers must otherwise store a verification table containing user identities and passwords. In the login phase of Kang et al.'s [10] protocol, only a legal \(U_i\) with the real identity \(I{D_i}\), password \(P_i\) and biometric \(B_i\) can access the card reader. Moreover, the operation \(PI{D_i} = h\left( {I{D_i}\parallel {b_i}} \right)\) makes clear that the pseudoidentity \(PI{D_i}\) is also bound to the real identity \(I{D_i}\) by the hash function during the subsequent login phase, and the value \(b_i\) is protected in the smart card. So, if \(U_i\) can log into the card reader, the control server CS can authenticate \(U_i\). This is why the smart card is used in this authentication protocol. Therefore, the operation \({\Delta _i} = h\left( {PI{D_i}\;\parallel I{D_i}\parallel x} \right)\) is a redundant design in Kang et al.'s protocol. The detailed description will be presented in Sect. 4.2. Inconvenient for password change Generally, it is essential for a legal \(U_i\) to be able to update the password. However, for the sake of brevity, the password change phase is not introduced in [10]. Furthermore, we discover that even if this phase were designed according to Kang et al.'s protocol [10], \(U_i\) has to re-register with the control server CS via a secure channel.
CS should deliver a new smartcard for the \(U_i\) or requires the \(U_i\) to mail the original smart card for replacement. Our following description will demonstrate that an existing \(U_i\) could not change password with his/her smart card locally. Assumed that, \(U_i\) can renew password with smart card during the login phase. Then the following these steps will be performed: After punching the smart card, \(U_i\) provides \(ID_i^{{{*}}}\), \(P_i^{{{*}}}\) and \(B_i^{{{*}}}\) to the card reader(CR). CR computes \(A_i^{{{*}}} = P_i^{{{*}}} \oplus h\left( {B_i^{{{*}}}} \right)\) and \(C_i^{{{*}}} = h\left( {ID_i^{{{*}}}\parallel A_i^{{{*}}}} \right)\). Then, it checks whether the condition \(C_i^*\) equals \({C_i}\). If \(C_i^* ={C_i}\) , the terminal prompts \(U_i\) for a new password. \(U_i\) enters a new password \(P_i^{new}\) to CR. When \(U_i\) logins to the card reader normally, CR executes the following operations according to the login phase of Kang et al's protocol: $$\begin{aligned}&A_i^{new} = P_i^{new} \oplus h\left( {{B_i}} \right) \\&C_i^{new} = h\left( {I{D_i}\parallel A_i^{new}} \right) \\&b_i^{new} = {{{\Omega }}_i} \oplus A_i^{new}\\&PID_i^{new} = h\left( {I{D_i}\parallel b_i^{new}} \right) \\&D_i^{new} = {E_i} \oplus A_i^{new}\\&O_i^{new} = I{D_i} \oplus D_i^{new}\\&b_i^{new} = {{{\Omega }}_i} \oplus A_i^{new}\\&G_i^{new} = h\left( {I{D_i}\parallel SI{D_m}\parallel {N_i}\parallel T{S_i}\parallel D_i^{new}} \right) \\&{F_i} = {\Delta _i} \oplus {N_i}\\&{Z_i} = SI{D_m} \oplus h\left( {{D_i}\parallel {N_i}} \right) \\ \end{aligned}$$ Obviously, since \(b_i^{new} \ne {b_i}\), where \({b_i}\) is produced by CS; so \(PID_i^{new} \ne PI{D_i}\), where \(PI{D_i} = h\left( {I{D_i}\parallel {b_i}} \right)\). What's more, since \({\Delta _i} = h\left( {PI{D_i}{{\;}}\parallel I{D_i}\parallel x} \right)\), the value \({\Delta _i}\) is also changed. If \(U_i\) does not register again for substituting the recorded values \(\left\langle {{C_i},{{{\Omega }}_i},{\Delta _i},{E_i},h\left( \cdot \right) {{\;}}} \right\rangle\) in the smart card, CS could not authenticate \(U_i\) in the subsequent communication phase. Therefore, it is inconvenient for password change in Kang et al.'s improved protocol. Our protocol This section introduces an enhanced authentication and key agreement protocol for the IoT-enabled devices in distributed cloud computing environment, as Fig. 1 is showing in smart city. The current scenario involves 3 main entities: the server control CS, the cloud server \(S_m\) and each IoT-enabled device, which belong to the user \(U_i\). There are 5 phases in our enhanced protocol: (1) Registration phase, (2) login phase, (3) authentication and key agreement phase, (4) password change phase, (5) Identity update phase. The detailed implementation of the first three phases is showed in Fig. 3. Implementation of our protocol. Implementation of the registration, login, and authentication key agreement phases in the proposed protocol Firstly, the control server CS randomly produces two high-entropy numbers x and y, which x is used as the secret key only known to CS for authenticate all \(U_i\) and y is used as another secret key only known to CS for authenticate all \(S_m\), respectively [21–23]. Then, any cloud server and user can register with CS. In addition, the secure channel referred to in this phase can be the Internet Key Exchange Protocol version 2(IKEv2) [13] or Secure Socket Layer Protocol (SSL) [24]. 
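A minimal sketch of this setup step is given below: x and y are simply two independent high-entropy random strings held only by CS, from which the registration values \(D_i = h(PID_i \parallel x)\) and \(BS_m = h(PSID_m \parallel SID_m \parallel y)\) used in the two registration phases that follow are derived. The 256-bit secret length and the SHA-256 instantiation of h(·) are assumptions made only for illustration.

```python
# Sketch of the control server's one-time setup, assuming 256-bit secrets and SHA-256 as h(.).
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

class ControlServer:
    def __init__(self) -> None:
        self.x = secrets.token_bytes(32)   # master secret for authenticating users U_i
        self.y = secrets.token_bytes(32)   # master secret for authenticating cloud servers S_m

    def derive_user_secret(self, PID_i: bytes) -> bytes:
        """D_i = h(PID_i || x), later hidden on the smart card as E_i = D_i xor A_i."""
        return h(PID_i, self.x)

    def derive_server_secret(self, PSID_m: bytes, SID_m: bytes) -> bytes:
        """BS_m = h(PSID_m || SID_m || y), returned to S_m over the secure channel."""
        return h(PSID_m, SID_m, self.y)

cs = ControlServer()   # x and y never leave CS; only the derived values are handed out
```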
Cloud server registration phase During the cloud server registration, \(S_m\) sends the message\(\left\langle {SI{D_m},d} \right\rangle\) to CS, where \({SI{D_m}}\) is its identity and d is a random number. On receiving the message, CS calculates \(PSI{D_m} = h\left( {SI{D_m}\parallel d} \right)\), \(B{S_m} = h\left( {PSI{D_m}{{\;}}\parallel SI{D_m}\parallel y} \right)\) and sends \(\left\langle {B{S_m}} \right\rangle\) back to \(S_m\) via a secure channel. Finally, \(S_m\) stores secret parameter \(\left\langle {B{S_m},d} \right\rangle\) into the memory. User registration phase When a user \(U_i\) wishes to register with CS, \(U_i\) selects desired identity \(I{D_i}\) and password \(P_i\) to enter his/her IoT-enabled device such as a card reader [25, 26]. Then, the device collects \(U_i\)'s biometric \(B_i\) and generates a random number b to compute \(PI{D_i} = h\left( {I{D_i}\parallel b} \right)\), \({A_i} = {P_{i{{\;}}}} \oplus h\left( {{B_i}} \right)\) and \({{{\Omega }}_i} = b \oplus {A_i}\). Next, it sends \(\left\langle {I{D_i},PI{D_i},{A_i}} \right\rangle\) to CS in a secure channel. After receiving the message, CS verifies the authenticity of the user's identity \(I{D_i}\). If \(I{D_i}\) is illegal, CS rejects \(U_i\)'s registration. Otherwise, CS calculates \({C_i} = h\left( {PI{D_i}\parallel {A_i}} \right)\), \({D_i} = h\left( {PI{D_i}{{\;}}\parallel x} \right)\) and \({E_i} = {D_{i{{\;}}}} \oplus {A_i}\). Then, CS writes the data \(\left\langle {{C_i},{\mathrm{{E}}_i},h\left( \cdot \right) {{\;}}} \right\rangle\) to a smart card and delivers it to \(U_i\) through private communication. When obtains the smart card, \(U_i\) inserts it to IoT-enabled device and inputs \(I{D_i}\) and \(P_i\) to the device again. Then, the device writes \({{{\Omega }}_i}\) to the smart card. Finally, the smart card records the informations \(\left\langle {{C_i},{{{\Omega }}_i},{\mathrm{{E}}_i},h\left( \cdot \right) \;} \right\rangle\). If \(U_i\) wants to get information from the private cloud server \(S_m\), \(U_i\) inserts the smart cart into the IoT-enabled device and provides \(ID_i^*\),\(P_i^*\) and \(B_i^*\). The device computes \(A_i^{{{*}}} = P_i^{{{*}}} \oplus h\left( {B_i^{{{*}}}} \right)\) and \(C_i^{{{*}}} = h\left( {ID_i^{{{*}}}\parallel A_i^{{{*}}}} \right)\). Then, it verifies if \(C_i^*\) is equal \({C_i}\). If \(C_i^* = {C_i}\), the device authenticates the real \(U_i\); otherwise, it rejects this login of \(U_i\). Next, the device generates an at least 128 bits random number \(N_i\) and executes the follow operations: $$\begin{aligned}&b = {{{\Omega }}_i} \oplus {A_i}\\&PI{D_i} = h\left( {I{D_i}\parallel b} \right) \\&{D_i} = {E_i} \oplus {A_i}\\&{G_i} = h\left( {PI{D_i}\parallel SI{D_m}\parallel {N_i}\parallel T{S_i}\parallel {D_i}} \right) \\&{F_i} = {D_i} \oplus {N_i}\\&{Z_i} = SI{D_m} \oplus h\left( {{D_i}\parallel {N_i}} \right) \\ \end{aligned}$$ where \(T{S_i}\) is the current timestamp of the device, \(SI{D_m}\) is the private server \(S_m\)'s identity. After that, the device transmits \(\left\langle {{G_i},{F_i},{Z_i},PI{D_i},T{S_i}} \right\rangle\) to \(S_m\) via a public channel. Authentication and key agreement phase In this phase, the mutual authentication and key agreement among three parties is mainly achieved through four-way handshake. In the first handshake, after receiving \(U_i\)'s login message, \(S_m\) calculates its own verification condition to append with the login message and sends them to CS. 
In the second handshake, on receiving the message from \(S_m\), CS verifies the legitimacy of \(U_i\) and \(S_m\). If they are legit, \(S_m\) produces itself authentication conditions for \(U_i\) and \(S_m\) respectively, and sends the conditions to \(S_m\). In the third handshake, \(S_m\) selects verification conditions related to itself to verify CS and sends the remaining message to \(U_i\). In the fourth handshake, \(U_i\) verifies the legitimacy of CS. If any party fails to pass the authentication, the session will be ended in this phase. As a result, the entire authentication path \(\left( {{U_i} \rightarrow \;{S_m} \rightarrow SC \rightarrow {S_m} \rightarrow {U_i}} \right)\) is established. In the meantime, a shared secret key SK is negotiated to encrypt the subsequent communication traffic between \(U_i\) and \(S_m\). The detailed description is as follows: On receiving the login message, \(S_m\) first checks the condition whether \(T{S_m} - T{S_i} < \Delta T\) holds or not, If \(T{S_m} - T{S_i} < \Delta T\), \(S_m\) terminates the connection; otherwise, \(S_m\) produce a 128 bits random number \(N_m\) and calculates $$\begin{aligned}&{J_i} = {B_m} \oplus {N_m}\\&{K_i} = h\left( {{N_m}\parallel B{S_m}\parallel PI{D_i}\parallel {G_i}\parallel T{S_m}} \right) \\ \end{aligned}$$ Then, \(S_m\) sends \(\left\langle {{J_i},{K_i},PSI{D_m},{G_i},{F_i},{Z_i},{O_i},PI{D_i},T{S_i},T{S_m}} \right\rangle\) to the control server CS publicly. After getting the message, CS also checks whether \(T{S_{CS}} - T{S_m} < \Delta T\) or not. If \(T{S_{CS}} - T{S_m} < \Delta T\), CS computes $$\begin{aligned}&{D_i} = h\left( {PI{D_i}\parallel x} \right) \\&I{D_i} = {O_i} \oplus {D_i}\\&{N_i} = {F_i} \oplus h\left( {PI{D_i}{{\;}}\parallel I{D_i}\parallel x} \right) \\&SI{D_m} = {Z_i} \oplus h\left( {{D_i}\parallel {N_i}} \right) \\&G_i^{{{*}}} = h\left( {I{D_i}\parallel SI{D_m}\parallel {N_i}\parallel T{S_i}\parallel {D_i}} \right) \\ \end{aligned}$$ Then, CS checks the condition whether \(G_i^{{{*}}}\) is equal \({G_i}\). If \(G_i^{{{*}}} = {G_i}\), CS authenticates the \(U_i\) is legal; otherwise, CS terminates the session. After that, CS calculates $$\begin{aligned}&B{S_m} = h\left( {PSI{D_m}{{\;}}\parallel SI{D_m}\parallel y} \right) \\&K_i^{{{*}}} = h\left( {{N_m}\parallel B{S_m}\parallel PI{D_i}\parallel {G_i}\parallel T{S_m}} \right) \\ \end{aligned}$$ Next, CS checks if \(K_i^{{{*}}}\) is equal \({K_i}\). If \(K_i^{{{*}}} \ne {K_i}\), CS thinks \(S_m\) is illegal and terminates the session; otherwise, CS randomly selects a 128 bits number \({{N_{CS}}}\) and computes $$\begin{aligned}&{P_{CS}} = {N_m} \oplus {N_{CS}} \oplus h\left( {{N_i}\parallel {D_i}\parallel {F_i}} \right) \\&{R_{CS}} = {N_i} \oplus {N_{CS}} \oplus h\left( {B{S_m}\parallel {N_m}} \right) \\&S{K_{CS}} = h\left( {{N_i}\parallel {N_m}\parallel {N_{CS}}} \right) \\&{Q_{CS}} = h\left( {({N_m} \oplus {N_{CS}})\parallel S{K_{CS}}} \right) \\&{V_{CS}} = h\left( {({N_i} \oplus {N_{CS}})\parallel S{K_{CS}}} \right) \\ \end{aligned}$$ where, \(S{K_{CS}}\) is the secret session key which can encrypt the following communicate message between \(U_i\) and \(S_m\). Finally, CS sends \(\left\langle {{P_{CS}},{Q_{CS}},{R_{CS}},{V_{CS}}} \right\rangle\) to \(S_m\) through public channel. 
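The heart of this second handshake, CS authenticating \(S_m\) through the registration value \(BS_m\), can be summarized in a few lines. The sketch below covers only the recovery of \(N_m\) from \(J_i\) and the \(K_i\) check; SHA-256 stands in for h(·), the identities, nonces and timestamp are illustrative byte strings, and \(SID_m\), which CS actually obtains from \(Z_i \oplus h(D_i \parallel N_i)\), is passed in directly for brevity.

```python
# Sketch of how CS verifies the cloud server S_m via the shared value BS_m.
# SHA-256 stands in for h(.); identities, nonces and the timestamp are illustrative.
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

y = secrets.token_bytes(32)                      # CS's master secret for cloud servers
SID_m, d = b"cloud-server-01", secrets.token_bytes(32)
PSID_m = h(SID_m, d)
BS_m = h(PSID_m, SID_m, y)                       # issued to S_m at registration

# --- S_m's side of the first handshake ---
N_m = secrets.token_bytes(32)
PID_i, G_i, TS_m = b"pid-of-user", b"g-from-login-msg", b"timestamp"
J_i = xor(BS_m, N_m)
K_i = h(N_m, BS_m, PID_i, G_i, TS_m)

# --- CS's side of the second handshake ---
BS_m_cs = h(PSID_m, SID_m, y)                    # re-derived from PSID_m, SID_m and y
N_m_cs = xor(BS_m_cs, J_i)                       # recover N_m from J_i
K_i_star = h(N_m_cs, BS_m_cs, PID_i, G_i, TS_m)
assert K_i_star == K_i                           # S_m is accepted as legitimate
```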
When obtaining the messge from CS, the \(S_m\) calculates $$\begin{aligned}&{W_m} = h\left( {B{S_m}\parallel {N_m}} \right) \\&{N_i} \oplus {N_{CS}} = {R_{CS}} \oplus {W_m}\\&V_{CS}^{{{*}}} = h\left( {({N_i} \oplus {N_{CS}})\parallel S{K_m}} \right) \\ \end{aligned}$$ Next, \(S_m\) checks whether \(V_{CS}^{{{*}}} = {V_{CS}}\) or not. If \(V_{CS}^{{{*}}} = {V_{CS}}\), \(S_m\) authenticates CS and sends \(\left\langle {{P_{CS}},{Q_{CS}}} \right\rangle\) to \(U_i\). $$\begin{aligned}&{L_i} = h\left( {{N_i}\parallel {D_i}\parallel {F_i}} \right) \\&{N_m} \oplus {N_{CS}} = {P_{CS}} \oplus {L_i}\\&S{K_i} = h\left( {{N_m}\parallel {N_{CS}}\parallel {N_i}} \right) \\&Q_{CS}^* = h\left( {({N_m} \oplus {N_{CS}})\parallel S{K_j}} \right) \\ \end{aligned}$$ Then, \(U_i\) checks whether \(Q_{CS}^{{{*}}}\) is equal \({Q_{CS}}\). If \(Q_{CS}^{{{*}}} = {Q_{CS}}\), \(U_i\) confirms that CS and \(S_m\) are authentic. At last, the 3 participants of \(U_i\) , \(S_m\) and CS negotiate a shared secret key $$\begin{aligned} SK = h\left( {{N_m}\parallel {N_{CS}}\parallel {N_i}} \right) \end{aligned}$$ Password change phase This phase is invoked whenever \(U_i\) wants to update his/her password without communicating with the control server CS. After inserting the smart card into the IoT-enabled device, \(U_i\) provides \(ID_i^*\), \(P_i^*\) and \(B_i^*\). Then, the device executes \(A_i^{{{*}}} = P_i^{{{*}}} \oplus h\left( {B_i^{{{*}}}} \right)\) and \(C_i^{{{*}}} = h\left( {ID_i^{{{*}}}\parallel A_i^{{{*}}}} \right)\). Then, it verifies if \(C_i^*\) is equal \({C_i}\) or not. If \(C_i^* = {C_i}\),the device prompts \(U_i\) for a new password \(P_i^{new}\) and generates a random number \(b_i^{new}\); otherwise, it rejects \(U_i\)'s password change. Then, it computes the following operations $$\begin{aligned}&A_i^{new} = P_i^{new} \oplus h\left( {{B_i}} \right) \\&C_i^{new} = h\left( {I{D_i}\parallel A_i^{new}} \right) \\&{{\Omega }}_i^{new} = b_i^{new} \oplus A_i^{new}\\&b = {{{\Omega }}_i} \oplus A_i^{{{*}}}\\&{D_i} = {E_i} \oplus A_i^{{{*}}}\\&E_i^{new} = {D_i} \oplus A_i^{new}\\ \end{aligned}$$ Finally, the device replaces recorded values \(\left\langle {{C_i},{{{\Omega }}_i},{\mathrm{{E}}_i}} \right\rangle\) with \(\left\langle {C_i^{new},{{\Omega }}_i^{new},E_i^{new}} \right\rangle\) in the smart card respectively. So, it is very convenient and fast for \(U_i\) to update password using smart card locally in our protocol. Identity update phase It is practical that a legal \(U_i\) updates his identity \({I{D_i}}\), such as the identity has expired. However, because the control server CS needs to verify the authenticity of the user's \({I{D_i}}\), \(U_i\) should re-register to CS through the secure channel in this phase. In this section, we defines the capabilities of the attacker and makes a discussion on security analysis of our protocol. Based on adversarial model, we use the security protocol analysis tools of Automated Validation of Infinite-State Systems (AVISPA) and Scyther to prove the protocol can defend the various existing attacks. Then, we detailedly analyze the authentication paths among the three participators to ensure security protection from the most common vulnerabilities of impersonation attacks. Finally, the performance comparisons of our protocol with others are described briefly. Adversarial model In this section, we give the threat attack model, which the main reference is Dolev-Yao adversary threat model [27–29]. 
The detailed descriptions of Dolev–Yao adversary threat model are as follows: Adversary can eavesdrop and intercept all messages passing through the network; Adversary can store and send the intercepted or self-constructed messages; Adversary can participate in the operation of the protocol as a legal subject. The power analysis or side-channel attacks can help the attacker to extract the secret information stored in user's smart card. Simulation of our protocol using security protocol analysis tools This section presents simulation of our protocol using security protocol analysis tools of AVISPA and Scyther, both of which are complete and standard formal automatic analysis tools. The detailed instructions of AVISPA can refer to [30–33] and Scyther to [34–36]. Simulation code description The first step in the use of simulation tools is to describe the target protocol in a formal language. This section introduces the AVISPA tool formal language HLPSL(High Level Protocol Specification Language) and the Scyther tool formal language SPDL(Security Protocol Description Language) to formally simulate our agreement. (1) The HLPSL simulation code of our protocol The HLPSL simulation code of our protocol involves 5 roles: "role user" simulates real user \(U_i\) ; "role server" simulates the cloud server \(S_m\); 'role control server" simulates the server control CS; "role session" represent the role of the four interactive handshakes; "role environment" represent high-level corner with intruder; "role goal" represents the purpose of simulation. Below we only briefly introduce the part HLPSL description of user roles, environmental roles and security goals, as showing in Fig. 4. Part HLPSL simulation code of our protocol. Figure includes two pictures. The b presents the HLPSL description of user roles. The role environment and the security goals in HLPSL code are showing in b In Fig. 4a, the user role process describes the parameters, initial states and transition that using at the beginning. The "transition" represents the acceptance of information and the sending of response information. "Channel (dy)" means that the attack mode is the Dolev–Yao attack model [37], in which the attacker can control of the network of the protocol. For example, an attacker can intercept, steal, modify, and replay the information transmitted on the channel in the protocol and even pretends to be a legal role in the protocol to perform operations to initiate an attack. The Fig. 4b presents the role environment and the security goals. The high-level role process includes global constants and a mixed role process of one or more sessions. Among them, the intruder may pretend to be a legitimate user and run certain role processes. There are also some sentences that describe the knowledge known to the intruder in initial state, generally including the name of the agent, all the keys shared by other agents, and all known functions. For the HLPSL modeling of security goals, we only give the confidentiality goal of HLPSL supporting one of the two goals of confidentiality and authentication. For confidentiality, the target instance indicates which values are kept secret among the declared roles. If it cannot be achieved, it means that the intruder has obtained a confidential value and can successfully attack the protocol. For authentication, the main purpose is to verify identity masquerading attacks. Although Amin et al. 
[5] claimed that their protocol can reach the three authentication security goals (the authentication_on alice_server_ni, the authentication_on server_aserver_ncs, the authentication_on aserver_alice_nm) [5], Kang et al. [10] pointed out the server cannot guarantee the cloud server chosen by the user, which is vulnerable to counterfeit attack. We will specifically demonstrate how our protocol resist this common attack in Sect. 4.2. Control server CS role in SPDL. The control server CS role in the SPDL code using Scyther tool (2) The SPDL simulation code of our protocol It is similar to HLPSL that the SPDL simulation code of our protocol includes 3 roles: "role U" simulates real user \(U_i\) ; "role S" simulates the cloude server \(S_m\); "role CS" simulates the server control CS. Here, we take the control server CS role as an example to introduce the SPDL code, which is presented in Fig. 5. After defining the variables required for session protocol, the full implementation of our protocol is represented by the collection of events that occur in CS. The "send" and "recv" events indicate that CS sends a message and receives one respectively. One of the advantages of the Scyther tool is that it flexibly describe target attributes, whether it is the confidentiality of a variable or the authentication of a certain subject to another subject. The Scyther tool can analyze and verify the security attributes that users are interested in. The description of the target attribute is completed through the "claim" event, which can be used to describe the authentication of roles and the confidentiality of variables. This section presents the simulation results of our protocol using two formal analysis tools. We personally build the AVISPA (Version of 2006/02/13) and Scyther(v1.1.3) in a virtual machine of an ubuntu operating system. Figure 6 presents the results of all the four back-end analysis tools provided by AVISPA to simulate the proposed protocols for all entities. The test results of OFMC, CL-AtSe, and SATMC modules show that our protocol is safe (SUMMARY SAFE), which means it can achieve the expected security goals; the TA4SP verification model represents INCONCLUSIVE, as the current TA4SP module does not support one-way hash function and the result of No ATTACK TRACE can be provided with the current version. When using the Scyther tool to simulate the protocol, we also use the Dolev-Yao attack model and the minimum number of execution rounds in the analysis parameters is set to 3. The simulation results of the Scyther tool is present in Fig. 7. Figure 7a shows the attack path of the Scyther tool's formal analysis under the Dolev-Yao model for our protocol. The reachability analysis report of our protocol messages is presented in Fig. 7b. The test results show that our proposed protocol does not have any threat of attack under this model. Therefore, we can assert that our protocol can resist the various common attacks, such as insider attack, replay attack, session key discloser attack and so on. Simulation results of the AVISPA tool under the four backends analysis. The results of all the four back-end analysis tools provided by AVISPA to simulate the proposed protocols for all entities. The test results of OFMC, CL-AtSe, and SATMC modules respectively Simulation results of the Scyther tool. Figure includes two pictures. The a shows the attack path of the Scyther tool's formal analysis under the Dolev–Yao model for our protocol. 
The reachability analysis report of our protocol messages is presented (b) Security analysis In the following, we mainly use cryptography knowledge to analyze in detail the authentication paths among \(U_i\), \(S_m\), and CS in our proposed, so as to protect against the most common attacks of impersonation attack [38,39,40,41]. (1) Mutual authentication between \(S_m\) and CS In the cloud server registration phase, \(S_m\) negotiates with CS to produce a value \(B{S_m} = h\left( {PSI{D_m}{{\;}}\parallel SI{D_m}\parallel y} \right)\), which can be regarded as the symmetric secret key for \(S_m\) and CS, since the value \(B{S_m}\) only can be calculate by \(S_m\) and CS. Therefore, \(S_m\) and CS can achieve mutual authentication through the symmetric secret key \(B{S_m}\) in the authentication phase, such as Kerberos protocol authentication. Moreover, since the identity \({SI{D_m}}\) and pseudoidentity \({PSI{D_m}}\) of \(S_m\) all bind up with the secret number y of the control server CS, CS will authenticate both identities of \(S_m\). Thus, our protocol can realize mutual authentication between \(S_m\) and CS in the authentication phase. Based on [5], we mark it with the following symbols: $$\begin{aligned} \text {In the authentication phase: }{S_m}\left( {SI{D_m}} \right) \Leftrightarrow {S_m}\left( {PSI{D_m}} \right) {{\;}}\mathop \Leftrightarrow \limits ^{B{S_m}} \;CS\left( y \right) \end{aligned}$$ (2) Mutual authentication between \(U_i\) and CS As discussed in Chapter 2.2.2, in order to avoid recording the \(U_i\)'s identity and password information on the control serverCS, CS distributes a smart card to \(U_i\) during the registration phase. The smart card records the values \(\left\langle {{C_i},{E_i},h\left( \cdot \right) } \right\rangle {{\;}}\) in our protocol. Firstly, as the only \(U_i\) that knows \({I{D_i}}\),\({{B_i}}\) and \({P_{i}}\) can computes \({C_i} = h\left( {I{D_i}\parallel {A_i}} \right)\), and \({A_i} = {P_{i{{\;}}}} \oplus h\left( {{B_i}} \right)\) for logging into the IoT-enabled device, the value \({C_i}\) recording in the smart card is mainly used to verify \(U_i\). So, we mark it with the following symbols: $$\begin{aligned} \text {In the user logined phase:} {U_i}\left( {I{D_i}} \right) \mathop \Leftrightarrow \limits ^{smart{{\;}}card\left( {{{\;}}{C_i}} \right) } \text { IoT-enabled device } \end{aligned}$$ The above symbol means that: with the help of value \({C_i}\) recording in the smart card, IoT-enabled devices can authenticate \(U_i\). On the other hand, the user trusts the IoT-enabled device obviously. Secondly, when \(U_i\) logins into the device, the device will compute \(b = {{{\Omega }}_i} \oplus {A_i}\),\(PI{D_i} = h\left( {I{D_i}\parallel b} \right)\) and\({D_i} = {E_i} \oplus {A_i}\). The value \({E_i}\) recording in the smart card can be regarded as an intermediate data in the process of authentication between the IoT-enabled device and CS. On the one hand, only the IoT-enabled device can compute \({D_i} = {E_i} \oplus {A_i}\) with the data \({E_i}\), if \(U_i\) logined into the device with \(B_i\) and \(P_i\). On the other hand, only CS that knows x and \(PI{D_i}\) can compute \({D_i} = h\left( {PI{D_i}{{\;}}\parallel x} \right)\), then computes \({A_i}{{\;}} = {D_{i{{\;}}}} \oplus {E_i}\) with the data \({E_i}\). Thus, IoT-enabled device and CS can realize mutual authentication in the help of the smart card in the user login phase. 
So, we mark it with the following symbols: $$\begin{aligned} \text {During the user login phase: IoT-enabled device }\mathop \Leftrightarrow \limits ^{smart{{\;}}card\left( {{{\;}}{E_i}} \right) } {{\;\;}}CS\left( x \right) \end{aligned}$$ Thirdly, as \(U_i\) logined into the IoT-enabled device, the device can compute \({D_i}\) with the value \({E_i}\). Then, the value \({D_i}\) can be the symmetric secret key for the IoT-enabled device and CS in the authentication, since only the IoT-enabled device and CS can calculate the value \({D_i}\). Therefore, the IoT-enabled device and CS can achieve mutual authentication through the symmetric secret key \({D_i}\) in the authentication phase. So, we mark it with the following symbols: $$\begin{aligned} \text {In the authentication phase: IoT-enabled device } \mathop \Leftrightarrow \limits ^{{D_i}} CS\left( x \right) \end{aligned}$$ Based on the symbol (2), symbol (3) and symbol (4), we can deduce with the following symbol: $$\begin{aligned} \text {In the authentication phase: }{U_i}\left( {I{D_i}} \right) {{\;}}\mathop \Leftrightarrow \limits ^{{D_i}} {{\;}}\;CS\left( x \right) \end{aligned}$$ The above symbol means that: with the help of the smart card, \(U_i\) with the identity \({I{D_i}}\) can authenticate each other with CS in the authentication phase. In addtion, after receiving \(U_i\) registration message, CS should verify the authenticity of \(U_i\) 's identity \({I{D_i}}\) . When the identity \({I{D_i}}\) is confirmed to be legal, CS will perform subsequent operations and delivers a smart card to \(U_i\). Then, while \(U_i\) logined into the IoT-enabled device, the device computes \(PI{D_i} = h\left( {I{D_i}\parallel {b_i}} \right)\) , which makes clear that pseudoidentity \(PI{D_i}\) is bound with the real identity \(I{D_i}\) by hash function, and the value \(b_i\) is protected by \({\Omega _i}\) recording in the smart card. So, the \(U_i\)'s identity \(I{D_i}\) is indirectly controlled by \(U_i\)'s pseudoidentity \(PI{D_i}\), which is bound with the secret number x of the control server CS with operation \({D_i} = h\left( {PI{D_i}{{\;}}\parallel x} \right)\). Thus, we mark it with the following symbol: $$\begin{aligned} \text {In the authentication phase:} {U_i}\left( {I{D_i}} \right) {{\;}} \Leftrightarrow {U_i}\left( {PI{D_i}} \right) {{\;}}\mathop \Leftrightarrow \limits ^{{D_i}} {{\;}}\;CS\left( x \right) \end{aligned}$$ (3) Mutual authentication between \(U_i\) and \(S_m\) Just like the above part (2) analysis, we can mark with the following symbols in this part: $$\begin{aligned} \text {In the authentication phase: } {U_i}\left( {PI{D_i}} \right) {{\;}}\mathop \Leftrightarrow \limits ^{{N_i},SI{D_m}} CS\left( x \right) \end{aligned}$$ Since the values \({{N_i}}\) and \({SI{D_m}}\) are encrypted and transmitted by the symmetric secret key \(D_i\),where \({F_i} = {D_i} \oplus {N_i}\) and \({Z_i} = SI{D_m} \oplus h\left( {{D_i}\parallel {N_i}} \right)\). $$\begin{aligned} \text {In the authentication phase: } {S_m}\left( {PSI{D_m}} \right) {{\;}}\mathop \Leftrightarrow \limits ^{{N_m}} CS\left( y \right) \; \end{aligned}$$ Since the value \(N_m\) is encrypted and transmitted by the symmetric secret key \(B{S_m}\), where \({J_i} = B{S_m} \oplus {N_m}\). 
$$\begin{aligned} \text {In the authentication phase: } {U_i}\left( {PI{D_i}} \right) {{\;}}\mathop \Leftrightarrow \limits ^{{N_m} \oplus \;{N_{CS}}} CS\left( x \right) \end{aligned}$$ Since the value \({{N_m} \oplus {N_{CS}}}\) is encrypted and transmitted by the secret value \({{N_i}}\) and \({{D_i}}\), where \({P_{CS}} = {N_m} \oplus {N_{CS}} \oplus h\left( {{N_i}\parallel {D_i}} \right)\) . $$\begin{aligned} \text {In the authentication phase: }{S_m}\left( {PSI{D_m}} \right) \mathop \Leftrightarrow \limits ^{{N_i} \oplus \;{N_{CS}}} CS\left( y \right) \end{aligned}$$ Since the value \({{N_i} \oplus \;{N_{CS}}}\) is encrypted and transmitted by the secret value \(B{S_m}\) and \({{N_m}}\), where \({R_{CS}} = {N_i} \oplus \;{N_{CS}} \oplus h\left( {B{S_m}\parallel {N_m}} \right)\) . Therefore,we we can deduce with the following symbol: $$\begin{aligned} \text {In the authentication phase: }{U_i}\left( {PI{D_i}} \right) {{\;}}\mathop \Leftrightarrow \limits ^{S{K_i}} {{\;}}\;CS\left( {x,y} \right) \mathop \Leftrightarrow \limits ^{S{K_m}} {S_m}\left( {PSI{D_m}} \right) \end{aligned}$$ As the symbol (11) shows, our protocol realize mutual authentication between \(U_i\) and \(S_m\) through the mediator of CS. What's more, the 3 parties share the same session key \(SK = h\left( {{N_m}\parallel {N_{CS}}\parallel {N_i}} \right)\). As a result, we can assert that our protocol can effectively resist impersonation attacks. Performance comparisons In the following, we concretely compare our protocol with the other two protocols [5, 10] in terms of resistance to security functionality and computational performance. In the Table 2, we list the 9 general security requirements of a robust authentication protocol for IoT-enabled devices and cloud servers. The results in Table 2 show the superiorities of our protocol are User auditability, simple and secure password change, resist off-line password guessing attack, resist impersonation attack and protection of the biometric. Moreover, the Table 3 shows the number of times the hash function and XOR operation have cost in each phase of our protocol with other related protocol. From the total count in the last line, we can see that our protocol uses the hash function and XOR the least number of times. Thus, it is more suitable for the environment in which the applications are resource-constrained and data-intensive, such as IoT-enabled devices in the smart city. Table 2 Security functionality comparison of our protocol with the related protocols Table 3 Operations comparison among our scheme with other related schemes In this paper, we deeply researched the authentication protocols for IoT-enabled devices in distributed cloud computing environment. We discover that Kang et al.'s protocol has 3 security drawbacks, such as vulnerable to off-line password guessing attack, designed redundant in the user registration phase and inconvenient for password change. Then, we introduced a lightweight pseudonym identity based authentication and key agreement protocol using smart card. To illustrate the security of our protocol, the security protocol analysis tools of AVISPA and Scyther are used to prove the proposed protocol can defend the various existing attacks, such as repaly attack, weak password guessing attack, man-in-the-middle attack, session key discloser attack and so on. We further analyze the authentication paths among participants in our proposed with cryptography knowledge, so as to avoid the most common attacks of impersonation attack. 
Moreover, we concretely compare our protocol with the other two protocols in terms of resistance to security requirements and computational performance. Both results show that our protocl is superior to the other two related protocols. As a result, the enhanced protocol will be applicable in distributed cloud computing architecture for smart city. The corresponding author shall keep the analysis and full simulation code set. If necessary, the data set can be requested from the corresponding author according to reasonable requirements. IoT: SaaS: PaaS: IaaS: The server control CR: AVISPA: Automated validation of infinite-state systems HLPSL: High level protocol specification language SPDL: Security protocol description language OFMC: On-the fly model-checke AtSe: Attack searcher SATMC: SAT-based model-checke A4SP: Tree automatabased protocol analyzer A.B. Zaslavsky, C. Perera, D. Georgakopoulos, in Sensing as a Service and Big Data in International Conference on Advances in Cloud Computing, ACC-2012, Bangalore, India, (2012), p. 219 G. Ateniese, R. Burns, R. Curtmola, et al., in Provable Data Possession at Untrusted Stores in Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS'07), Virginia, VA, USA, (2007), p. 598–609 S.R. Chandra, Yf. WANG, Cloud things construction—the integration of internet of things and cloud computing. Future Gener. Comput. Syst. 56(C), 684–700 (2016) M. Díaz, C. Martín, B. Rubio, State-of-the-art, challenges, and open issues in the integration of Internet of things and cloud computing. J. Netw. Comput. Appl. 67, 99–117 (2016) R. Amin, N. Kumar, G.P. Biswas, R. Iqbal, V. Chang, A light weight authentication protocol for IoT-enabled devices in distributed cloud computing environment. Future Gener. Comput. Syst. 78, 1005–1019 (2018) S.K. Sood, A.K. Sarje, K. Singh, A secure dynamic identity based authentication protocol for multi-server architecture. J. Netw. Comput. Appl. 34(2), 609–618 (2011) X. Li, Y. Xiong, J. Ma, W. Wang, An efficient and security dynamic identity based authentication protocol for multi server architecture using smart cards. J. Netw. Comput. Appl. 35(2), 763–769 (2012) K. Xue, P. Hong, C. Ma, A lightweight dynamic pseudonym identity based authentication and key agreement protocol without verification tables for multi-server architecture. J. Comput. Syst. Sci. 80(1), 195–206 (2014) M.C. Chuang, M.C. Chen, An anonymous multi-server authenticated key agreement scheme based on trust computing using smartcards and biometrics. Expert Syst. Appl. 41(4), 1411–1418 (2014) B. Kang, Y. Han, K. Qian et al., Analysis and improvement on an authentication protocol for IoT-enabled devices in distributed cloud computing environment. Math. Probl. Eng. 2020(2), 1–6 (2020) MathSciNet MATH Google Scholar L. Zhou, X. Li, K. Yeh, C. Su, W. Chiu, Lightweight IoT based authentication scheme in cloud computing circumstance. Future Gener. Comput. Syst. 91, 1244–1251 (2019) O. Ruan, N. Kumar, D. He, J.-H. Lee, Efficient provably secure password-based explicit authenticated key agreement. Pervasive Mob. Comput. 24, 50–60 (2015) C. Kaufman, Internet Key Exchange (IKEv2) Protocol. RFC 4306, (2005) M.S. Hwang, L.H. Li, A new remote user authentication scheme using smart cards. IEEE Trans. Ind. Electron. 46(1), 28–30 (2000) A. Vilmos, C. Medaglia, A. Moroni, Vision and challenges for realising the internet of things[J]. hot working technology, 2010 A. Whitmore, A. Agarwal, L. Da Xu, The internet of things—a survey of topics and trends. Inf. Syst. 
Front. 17(2), 261–274 (2015). https://doi.org/10.1007/s10796-014-9489-2 C.C. Chang, T.C. Wu, Remote password authentication with smartcards. IEEProc. Comput. Digit. Tech. 38(3), 165–168 (1999) R. Amin, G.P. Biswas, A secure light weight scheme for user authentication and key agreement in multi-gateway based wireless sensor networks. Ad Hoc Netw. (2015). https://doi.org/10.1016/j.adhoc P. Kocher, J. Jaffe, B. Jun, Differential power analysis, in Proceedings of Advances in Cryptology, (1999), p. 388–397 T.S. Messerges, E.A. Dabbish, R.H. Sloan, Examining smart-card security under the threat of power analysis attacks. IEEE Trans. Comput. 51(5), 541–552 (2002) D.B. He, N. Kumar, J. Chen et al., Robust anonymous authentication protocol for healthcare applications using wireless medical sensor networks. Multimed. Syst. 21(1), 49–60 (2015). (14) W. Diffie, M.E. Hellman, New directions in cryptography. IEEE Trans. Inf. Theory 22(6), 644–654 (1976) K.Y. Yoo, E.J. Yoon, G.R. Alavalapati, Comment on efficient and secure dynamic ID-based remote user authentication scheme for distributed systems using smart cards. IET Inf. Secur. 11(4), 220–221 (2016) C. Heinrich, Transport layer security (TLS). Hit.bme.hu 31(4), 2009 (2005) J.S. Leu, W.B. Hsieh, Efficient and secure dynamic ID-based remote user authentication scheme for distributed systems using smart cards. IET Inf. Secur. (2014). https://doi.org/10.1049/iet-ifs.2012.0206 D. He, N. Kumar, N. Chilamkurti, A secure temporal-credential-based mutual authentication and key agreement scheme with pseudo identity for wireless sensor networks. Inf. Sci. 321, 263–277 (2015) D. Dolev, A. Yao, On the security of public key protocols. IEEE Trans. Inf. Theory 29(2), 198–208 (1983) D. He, Y. Zhang, D. Wang, K.-K. Raymond Choo, Secure and efficient two-party signing protocol for the identity-based signature scheme in the IEEE P1363 standard for public key cryptography. IEEE Trans. Dependable Secure Comput. 17(5), 1124–1132 (2020) Y. Zhang, D. He, X. Huang, D. Wang, K.-K.R. Choo, J. Wang, White-box implementation of the identity-based signature scheme in the IEEE P1363 standard for public key cryptography. IEICE Trans. Inf. Syst. E103–D(2), 188–195 (2020) AVISPA Team. (2014). AVISPA Tool. http://www.avispa-project.org. Accessed: Aug 2020 A. Armando et al., The AVISPA tool for the automated validation of Internet security protocols and applications, in 17th International Conference on Computer Aided Verification, (CAV05), Lecture Notes in Computer Science, (vol. 3576, Springer-Verlag, 2005), p. 281–285 L. Viganò, Automated security protocol analysis with the AVISPA tool. Electron. Notes Theor. Comput. Sci. 155, 61–86 (2006) Scyther Team. (2014). Scyther Tool. https://people.cispa.io/cas.cremers/scyther/index.html. Accessed: Aug 2020 C.J.F. Cremers, Scyther: semantics and verification of security protocols/door Casimier Joseph Franciscus Cremers (2006) C.J.F. Cremers, The scyther tool: verification, falsification, and analysis of security protocols, in International Conference on Computer Aided Verification (Springer, Berlin, Heidelberg, 2008) C.J.F. Cremers, P. Lafourcade, P. Nadeau, Comparing state spaces in automatic security protocol analysis, in Formal to Practical Security—Papers Issued from the 2005–2008 French-Japanese Collaboration (2009) D. He, S. Zeadally, N. Kumar, J.H. Lee, Anonymous authentication for wireless body area networks with provable security. IEEE Syst. J. PP(99), 1–12 (2016). https://doi.org/10.1109/JSYST.2016.2544805 L. Wu, Y. Zhang, K.K.R. 
Choo et al., Efficient and secure identity-based encryption scheme with equality test in cloud computing. Future Gener. Comput. Syst. 73(Aug.), 22–31 (2017) S.M. Bellovin, M. Merritt, Encrypted key exchange: password-based protocols secure against dictionary attacks, in Proceedings of IEEE Symposium on Security & Privacy, (1992), p.72–84 A. Groce, J. Katz, A new framework for efficient password-based authenticated key exchange, in 17th ACM Conference on Computer and Communications Security (CCS), (2010), p. 516–525 Q. Feng, D. He, Z. Liu, D. Wang, R. Choo, Multi-party signing protocol for the identity-based signature scheme in IEEE P1363 standard. IET Inf. Secur. 14(4), 443–451 (2020) This work was supported the National key research and development projects (No. 2019QY0501) and the National Natural Science Foundation of China (No. 2904020211). State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou, 450001, Henan, China Huihui Huang, Siqi Lu, Zehui Wu & Qiang Wei Henan Key Laboratory of Network Cryptography Technology, Zhengzhou, 450001, Henan, China Siqi Lu Huihui Huang Zehui Wu Qiang Wei In this paper, HH conceived and designed the study. The paper was written by HH and ZW. SL discussed the formal analysis and QW worked as the advisors to discuss. All authors read and revised the manuscript. Correspondence to Zehui Wu. Huang, H., Lu, S., Wu, Z. et al. An efficient authentication and key agreement protocol for IoT-enabled devices in distributed cloud computing architecture. J Wireless Com Network 2021, 150 (2021). https://doi.org/10.1186/s13638-021-02022-1 Received: 20 August 2020 AVISPA tool Scyther tool Security attack Distributed cloud architecture Security & Privacy Solutions for Smart City
BMC Chemistry
Equilibria in cobalt(II)–amino acid–imidazole system under oxygen-free conditions: effect of side groups on mixed-ligand systems with selected L-α-amino acids
Magdalena Woźniczka1, Andrzej Vogt2 and Aleksander Kufelnicki1
Chemistry Central Journal 2016, 10:14. © Woźniczka et al. 2016
Heteroligand Co(II) complexes involving imidazole and selected bio-relevant L-α-amino acids of four different groups (aspartic acid, lysine, histidine and asparagine) were formed by using a polymeric, pseudo-tetrahedral, semi-conductive Co(II) complex with imidazole–[Co(imid)2]n as starting material. The coordination mode in the heteroligand complexes was unified to one imidazole in the axial position and one or two amino acid moieties in the appropriate remaining positions. The corresponding equilibrium models in aqueous solutions were fully correlated with the mass and charge balance equations, without any of the simplified assumptions used in earlier studies. Precise knowledge of equilibria under oxygen-free conditions would enable evaluation of the reversible oxygen uptake in the same Co(II)–amino acid–imidazole systems, which are known models of artificial blood-substituting agents. Heteroligand complexes were formed as a result of proton exchange between the two imidazole molecules found in the [Co(imid)2]n polymer and two functional groups of the amino acid. Potentiometric titrations were confirmed by UV/Vis titrations of the respective combinations of amino acids and Co-imidazole. Formation of MLL′ and ML2L′ species was confirmed for asparagine and aspartic acid. For the two remaining amino acids, the accepted equilibrium models had to include species protonated at the side-chain amine group (as in the case of lysine: MLL′H, ML2L′H2, ML2L′H) or at the imidazole N1 (as in the case of histidine: MLL′H and two isomeric forms of ML2L′). Moreover, the Δlog10 β, log10 βstat, Δlog10 K, and log10 X parameters were used to compare the stability of the heteroligand complexes with their respective binary species. The large differences between the constant for the mixed-ligand complex and the constant based on statistical data, Δlog10 β, indicate that the heteroligand species are more stable than the binary ones. The parameter Δlog10 K, which describes the influence of the bonded primary ligand in the binary complex CoII(Himid) towards an incoming secondary ligand (L) forming a heteroligand complex, was negative for all the Amac ligands (except for histidine, which shows stacking interactions). This indicates that the mixed-ligand systems are less stable than the binary complexes with one molecule of imidazole or one molecule of amino acid, in contrast to Δlog10 β, which deals with binary complexes CoII(Himid)2 and CoII(AmacH−1)2 containing two ligand molecules. The high positive values of the log10 X disproportionation parameter were in good agreement with the results of the Δlog10 β calculations mentioned above. The mixed-ligand MLL′-type complexes are formed at pH values above 4–6 (depending on the amino acid used); however, the so-called "active" ML2L′-type complexes, present in the equilibrium mixture and known to be capable of reversible dioxygen uptake, attain maximum share at a pH around nine. For all the amino acids involved, the greater the excess of amino acid, the lower the pH where the given heteroligand complex attains maximum share.
The results of our equilibrium studies make it possible to evaluate the oxygenation constants in full accordance with the distribution of species in solution. Such calculations are needed to drive further investigations of artificial blood-substituting systems. Cobalt(II) L-α-Amino acid Oxygen-free ternary complexes Heteroligand Co(II)–L-α-amino acid–imidazole complexes are formed with protein amino acids in accessible coordination sites under an oxygen-free atmosphere [1]. Those paramagnetic, high-spin, mixed-ligand complexes of Co(II) contain six coordination sites. The structure is regarded as analogous to the binary amino acid complexes of Co(II) and other divalent metals, where the amino acid chelate rings are known to be in an equatorial trans-position [2]. The axial sites are occupied by imidazole (coordinated by N3) and a water molecule [3]. Due to the "trans-effect" of imidazole, these complexes are capable of the multiple cyclic uptake and release of molecular oxygen and therefore, are capable of imitating natural O2 carriers. In addition they exhibit a suitable temperature range (0–40 °C) for a full equilibrium displacement to the left or right, and are formed from ligands which are non-volatile and low-toxic. It is important to note that in order to obtain the heteroligand complex, a solid, semi-conductive polymeric complex [Co(imid)2]n is used as starting material to ensure the position of imidazole in one of the axial sites [4]. Existing literature data suggests that heteroligand complexes in the cobalt(II)–amino acid–imidazole systems are formed within the range pH 6–10 [1, 3, 5]. This indicates the deciding donor properties of the amino groups: they dissociate in basic medium. The structures of the mixed-ligand complexes have been confirmed inter alia by the molar neutralization coefficient of imidazole [5] released from the inner coordination sphere of the heteroligand complex, and by additional results obtained in the presence of O2 [6]. As one of the two imidazole molecules is known to be released to solution from the Co(imid)2 unit during formation of the mixed-ligand complex, it may be assumed that two amino acid ligands are coordinated via the amino group nitrogens and hydroxyl oxygens of the carboxyl groups. A water molecule or an OH− group may be expected as the remaining sixth donor. Earlier experiments carried out with analogous systems [6] suggest that the uptake of dioxygen does not change the pH, which would undoubtedly occur if O2 replaced a hydroxyl group. Therefore, the remaining sixth donor is evidently the oxygen of a water molecule. On the other hand, an alternative heteroligand complex, though inactive towards dioxygen uptake, may involve only one amino acid in the equatorial plane but three water molecules in the remaining sites. The stability constants of mixed-ligand cobalt(II) complexes, with amino acids as primary ligands and imidazole as secondary ligand, under oxygen-free conditions have so far been determined potentiometrically for glycine, DL-α-alanine and DL-valine [7], but the stability constants resulting from combined potentiometric and spectrophotometric titrations have been determined only for L-α-alanine (a monoaminocarboxylic acid) [8]. 
It should be emphasized that coordinating interactions in the cobalt(II)–amino acid–imidazole systems have been also investigated in solid state: with acetyl-DL-phenylglycine [9], N-acetyl, N-benzoyl and N-tosyl derivatives of amino acids [10] as well as in solution: with imidazole-4-acetic acid [11], bis(imidazolyl) derivatives of amino acids [12], 1,2-disubstituted derivative of L-histidine [13] and biomimetic models of coenzyme B12 [14]. In the present work, the investigations have been extended from our previous studies with L-α-alanine [8] to a number of amino acids representative of the other four groups: monoaminodicarboxylic acids (L-α-aspartic acid, Asp), diaminomonocarboxylic acids (L-α-lysine, Lys), amino acids with a heterocyclic ring (L-α-histidine, His), as well as amino acids with an amide side-chain group (L-α-asparagine, Asn). The forms of the ligands under study were specified by abbreviated names (Fig. 1). Prior to the experiments with these heteroligand systems, similar experiments using solutions of binary parent species had been performed under the same conditions by both methods used in the present study: pH-potentiometry and UV/Vis spectrophotometry. The essential value of determining the formation constants of the heteroligand species is that the procedure allows the stability constants, K O2, of the corresponding Co(II)–dioxygen complexes to be evaluated based on the full mass balance equations without any simplifying assumptions.
Fig. 1 Abbreviations used for naming the ligand forms
Heteroligand complexes are formed as a result of proton exchange between the two imidazole molecules found in the [Co(imid)2]n polymer and two functional groups of the amino acid [8]. Accurate protonation constants of the amino acids and imidazole (Table 1) and formation constants of the binary complexes are needed to determine the formation constants of heteroligand complexes in reactions (1) and (2), which are also given in Table 1. Under more acidic conditions, the predominating reaction is (charges omitted for clarity):
$$\mathrm{Co(imid)_2} + \mathrm{AmacH} = \underset{\mathrm{MLL'}}{\mathrm{Co(AmacH_{-1})Himid}} + \mathrm{Himid} \quad (1)$$
Then, along with alkalization, the predominating reaction is (charges omitted for clarity):
$$\mathrm{Co(imid)_2} + 2\,\mathrm{Amac} = \underset{\mathrm{ML_2L'}}{\mathrm{Co(AmacH_{-1})_2Himid}} + \mathrm{Himid} \quad (2)$$
Table 1 Logarithms of overall formation constants in the CoII(Himid)(L-α-Amac)n·H2O system and UV–Vis parameters. Columns: l′; refinement results (log10 βmll′h); σ b; λmax (ε), nm (L mol−1 cm−1). Rows include Co(H2O)62+ a, imidazole a, alanine a, asparagine and the other amino acids studied; most numerical entries are not recoverable in this copy (surviving fragments: −8.45(3), 12.13(1)). Temp. 25.0 ± 0.1 °C, I = 0.5 mol L−1 (KNO3). Programs: Hyperquad 2008 and HypSpec. Standard deviations at the last decimal points in parentheses. βmll′h = [MmLlL′l′Hh]/([M]m[L]l[L′]l′[H]h), where M = Co(II), L = AmacH−1, L′ = Himid, H = proton. a Results for Co(H2O)62+, Ala and imidazole taken from previous paper [8]. b σ: statistical residual parameter of Hyperquad [27]
L-α-Asparagine (Asn)
For the system with L-α-asparagine, two M/L/L′/H ratios have been suggested (Fig. 2). The exact coordination modes were assumed from previous literature reports and evidenced by successful refinement of the convergence between the experimental and theoretical titration curves, as well as by Vis spectroscopy. In both the heteroligand structures (ML2L′ and MLL′) chelation only occurs due to the carboxyl and amino groups at the α-carbon (Fig. 2).
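As a purely illustrative aside on the notation just introduced: the footnote to Table 1 defines the cumulative constants βmll′h used throughout this article. The short Python fragment below (with hypothetical numbers that are not entries of Table 1) shows the mass-action step by which such a constant converts the free concentrations into the concentration of a single complex; a full speciation calculation, as performed later with HySS, additionally enforces the mass-balance equations.

```python
# Mass-action relation behind the cumulative constants of Table 1:
#   [M_m L_l L'_l' H_h] = beta_mll'h * [M]^m [L]^l [L']^l' [H]^h
# All numbers below are hypothetical and serve only to illustrate how one
# point of a speciation diagram is obtained from free concentrations.

def species_conc(log10_beta, m, l, lp, h, free_M, free_L, free_Lp, free_H):
    """Equilibrium concentration of an M_m L_l L'_lp H_h species (mol/L)."""
    return (10.0 ** log10_beta) * free_M**m * free_L**l * free_Lp**lp * free_H**h

if __name__ == "__main__":
    pH = 9.0
    # hypothetical free concentrations (mol/L) and an assumed log10 beta of 12
    c_ml2lp = species_conc(12.0, m=1, l=2, lp=1, h=0,
                           free_M=1e-6, free_L=1e-4, free_Lp=1e-4, free_H=10**(-pH))
    print(f"[ML2L'] ~ {c_ml2lp:.2e} mol/L at pH {pH}")
```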
It is known that above pH 13, asparagine is a potentially tridentate ligand [15]. In such an alkaline medium, the amide-NH2 side group is deprotonated, which may lead to other coordination modes. However, within the pH range 9–10, used in the present study, asparagine behaves only as a bidentate ligand, in a similar way to alanine, among other amino acids [8]. The relevant determined stability constants and speciation diagram are presented in Table 1 and Fig. 3. Suggested coordination modes of the ternary Co(II)–Himid–L-α-Asn complexes: a ML2L′; b MLL′ Distribution diagram of complex species versus pH for a solution of Co[(imid)2]n and asparagine in molar ratio 1:5. C Co = 0.01 mol L−1. L–asparagine (AsnH-1), L′–imidazole (Himid) L-α-Aspartic acid (Asp) As it follows from the speciation in Fig. 5, the ML2L′ heteroligand complex with aspartic acid (Fig. 4a) predominates in basic medium (pH > 7). It may be suggested that in this case, one of the amino acid molecules coordinates the metal via two carboxyl groups and one amino group (in place of equatorial and axial H2O). However, the second L molecule forms chelates only via α-COO− and −NH2. The remaining carboxyl side group is not able to substitute imidazole from the opposite axial position due to the presence of a much weaker electron-pair donation than the imidazole N3 [16]. In turn, although only one amino acid molecule is involved in the formation of coordinative bonds in the MLL′ ternary complex (Fig. 4b), in this case, donation occurs via all the potential donors: α-COOH, β-COO− and α-NH2. As can be seen in the speciation diagram (Fig. 5), the MLL′ complex exists in ca 30 % at pH 6.5–7.0. Suggested coordination modes of the ternary Co(II)–Himid–L-α-Asp complexes: a ML2L′; b MLL′ Distribution diagram of complex species versus pH for a solution of Co[(imid)2]n and aspartic acid in molar ratio 1:5. C Co = 0.01 mol L−1. L–aspartic acid (AspH-1), L′–imidazole (Himid) L-α-Lysine (Lys) Three types of heteroligand complexes (MLL′H, ML2L′H, ML2L′H2) were confirmed in the lysine-containing systems. At higher pH values, the equilibrium set comprises a share of species with an amino acid molecule deprotonated at ε-NH2, owing to the proximity of the protonation constant of ε-NH2: 11.12 in logarithm (Table 1) and similar IUPAC data under analogous conditions [17]. Thus, the refinement results make it possible to propose three coordination modes (Fig. 6). Suggested coordination modes of the ternary Co(II)–Himid–l-α-Lys complexes: a MLL′H; b ML2L′H2; c ML2L′H. R = (CH2)4 In all of the species, lysine forms dative bonds with the central ion in the equatorial plane: via–COO− and α-NH2. The complexes arise along with deprotonation of ε-NH3 + (Fig. 6) but this group is not likely to coordinate because an eight-membered ring at the axial position would be an unstable structure. The formation constant of ML2L′H2 becomes very high (Table 1), and its share in solution (up to 60 %) is the highest within the measurable pH range (Fig. 7). Distribution diagram of complex species versus pH for a solution of Co[(imid)2]n and lysine in molar ratio 1:5. C Co = 0.01 mol L−1. L–lysine (LysH-1), L′–imidazole (Himid) L-α-Histidine (His) In the cobalt(II)–histidine–imidazole system, both experimental methods confirmed the presence of two heteroligand species: MLL′H and ML2L′ (Fig. 8). Histidine is a potentially tetradentate ligand but in the measurable pH range, the imidazole N1 proton (pK 14.29) does not dissociate [18]. 
It follows from the speciation diagram that the MLL′H complex is formed within pH 4–7 (Fig. 9a). As it has been already suggested by literature CD data [2], at this pH range, histidine contains a protonated imidazole ring, whereas dissociation occurs at the carboxyl and amine groups. These groups take up two of the equatorial sites; the remaining three positions (two equatorial and one axial) are occupied by the solvent molecules H2O (as in Fig. 8a). In the ML2L′ complex, predominating at pH > 7, the histidine imidazole N3 undergoes deprotonation. Numerous potentiometric, calorimetric and spectroscopic studies [2] carried out for the binary ML2 cobalt(II)–histidine system have indicated that this complex occurs in solution in the form of an isomer mixture. Hence, analogous to our ML2L′ heteroligand complex, there is a possibility of amine and imidazole nitrogen atoms being coordinated in the equatorial positions. Thus, the –COO− group of one of the histidines may be found in the axial position (Fig. 8b-I). Another probable form of this complex may occur also when a strongly dative imidazole N3 is coordinated in the axial position, substituting the H2O molecule, and then the –COO− group moves to the equatorial site (Fig. 8b-II). The resulting species distribution (Fig. 9a) indicates a higher maximum share of ML2L′ than the protonated MLL′H complex, predominating in the more acidic medium. Suggested coordination modes of the ternary Co(II)–Himid–L-α-His complexes: a MLL′H; b ML2L′ (in two isomeric forms, I and II) Distribution diagram of complex species versus pH for a solution of: a Co[(imid)2]n and histidine in molar ratio 1:5. C Co = 0.01 mol L−1. L–histidine (HisH-1), L′–imidazole (Himid); b Co(NO3)2 and histidine in molar ratio 1:5. C Co = 0.04 mol L−1. L–histidine (His−) The visible absorption spectra, presented by way of example for cobalt(II)–histidine–imidazole (Fig. 10a), show stepwise dissociation of the heteroligand system to binary complexes which can be attributed to acidification. Finally, the binary complexes decompose to the cobalt(II) aqua-ion of λ max = 512 nm (ε = 4.9), similar to our previous results for l-alanine [8]. For comparison, the literature data [19] referring to Co(H2O) 6 2+ are as follows: 515 nm (ε = 4.6). This band corresponds to a ligand field d–d transition T1g(F) → 4T1g(P) in admixture with a shoulder around 475 nm caused by spin forbidden transitions to doublet states. The hypsochromic shift becomes visible when comparing the spectra at higher and lower pH as a result of an exchange of the weaker σ donor (water) to much stronger function groups of the amino acids. Since the molar absorbance coefficients of binary complexes of cobalt with amino acids or imidazole are needed to study the equilibria with heteroligand complexes by Vis, they had to be determined independently prior to the calculations with the heteroligand species. Example absorption spectra of the binary Co(II)–histidine system are shown in Fig. 10b. The complexes of cobalt(II) and the ligand are formed along with alkalization starting from pH ca 3.5. Titration was carried out only to pH 6.12 due to precipitation following hydrolysis of the aqua-ion. It is important to note that the use of [Co(imid)2]n as the starting compound, as described before, allows heteroligand complexes existing high above pH 6 to be created (cf. speciation in Fig. 9a) and to obtain relevant absorption spectra at pH 8.64 (Fig. 10a). 
A comparison of the two spectrophotometric titrations shows differences within pH 5–6 as a result of the higher share of the CoL and CoLH species, of weaker ligand field power, in the absence of imidazole (cf. Fig. 10b, higher shoulders of curves 3 and 4 at ca 530 nm). When comparing the speciations of Fig. 9a, b, it can be seen that the share of CoL is ca 40 % and the share of CoL2 up to 90 % in the absence of imidazole, whereas in the titrations with [Co(imid)2]n the respective values are 20 and 60 %. Importantly, the absorption spectrum of the free Co(II) aqua-ion shows almost exactly the same shape in Fig. 10a, b, which indicates a lack of Co(II) oxidation to Co(III) during the experiments.
Fig. 10 Vis absorption spectra of: a the heteroligand system in a solution containing [Co(imid)2]n and histidine at 1:5 molar ratio (starting from a basic solution of pH 8.64; curve 1). C Co = 3.5 × 10−2 mol L−1. Curves 2–5 denote the spectra scanned after adding consecutive portions of acid; pH: 2, 6.15; 3, 5.09; 4, 4.92; 5, 4.20. Curve 6: absorption spectrum of the Co(II) aquo-ion. b The binary system in a solution containing Co(II) and histidine at 1:5 molar ratio (starting from an acid solution of pH 3.54; curve 2). C Co = 3.5 × 10−2 mol L−1. Curves 3–6 denote the spectra scanned after adding consecutive portions of base; pH: 3, 4.61; 4, 5.05; 5, 6.12; 6, precipitate. Curve 1: absorption spectrum of the Co(II) aquo-ion
Comparison of the stability constants of the heteroligand complexes in the CoII(Himid)(L-α-Amac)n system
It is essential to compare the values of the log10 βmll′h stability constants of the heteroligand complexes Co(II)(Amac)2(Himid), which are potential models of dioxygen carriers in solution. Given that one of them (Lys) contains the Amac ligand in a protonated AmacH or AmacH2 form, it was necessary in this case to subtract the protonation constant log10 β0101 or log10 β0102 from log10 β1211 or log10 β1212, respectively. Finally, it may be concluded from Table 1 that the comparable, corresponding stability constants of Co(II)(Amac)2(Himid) follow the series His > Asp > Lys > Ala > Asn, with values of 14.61 > 12.24 > 11.45 (11.65) > 9.94 > 9.70, respectively. It may be suggested that the effect of the stacking interaction between the aromatic ring of the amino acid and imidazole in CoII(Himid)(His) is responsible for the highest value among all of the Amac ligands. On the other hand, CoII(Asp)2(Himid) is the most favored heteroligand species among those with aliphatic side chains, most probably due to coordination of the carboxyl oxygen trans to the axial imidazole N3. Moreover, there are different methods allowing the stability of heteroligand complexes to be compared with that of the corresponding binary systems. One such method is calculation of the stabilization constant Δlog10 β [11, 20] (Eq. 3) from the difference between the experimental stability constant for the mixed-ligand complex (log10 β1110) and the constant based on statistical data (log10 βstat):
$$\Delta\log_{10}\beta = \log_{10}\beta_{1110} - \log_{10}\beta_{\mathrm{stat}} \quad (3)$$
$$\log_{10}\beta_{\mathrm{stat}} = \log_{10}2 + \tfrac{1}{2}\log_{10}\beta_{1200} + \tfrac{1}{2}\log_{10}\beta_{1020} \quad (4)$$
For lysine and histidine, heteroligand complexes in which one amino acid was always protonated were identified (Table 1). Therefore, Δlog10 β and log10 βstat were calculated on the basis of Eqs.
(5) and (6):
$$\Delta\log_{10}\beta = \log_{10}\beta_{1111} - \log_{10}\beta_{\mathrm{stat}} \quad (5)$$
$$\log_{10}\beta_{\mathrm{stat}} = \log_{10}2 + \tfrac{1}{2}\log_{10}\beta_{1202} + \tfrac{1}{2}\log_{10}\beta_{1020} \quad (6)$$
Table 2 presents the values of the stabilization constants for the four heteroligand complexes. Δlog10 β was not calculated for histidine because the stability constant of the binary complex containing two protonated amino acids (Table 1) is unavailable. The large differences between the experimental and calculated stability constants, Δlog10 β, indicate that the heteroligand species are more stable than the binary ones. The heteroligand complex with aspartic acid has the highest value of Δlog10 β, suggesting that formation of the binary complex involving two molecules of the tridentate ligand (juxtaposed to the binary species with bidentate alanine, asparagine and protonated lysine) is less favoured than the heteroligand complex with one tridentate ligand. This may be easily explained by the fact that the initial CoII(Himid) moiety has as many as five available coordination sites, which can be occupied by two carboxyl groups and one amino group.
Table 2 Evaluated values of Δlog10 β, Δlog10 K and log10 X used for comparison of the stability of the heteroligand CoII(Himid)(L-α-Amac)n complexes with their parent binary complexes. Columns: log10 β1110 (experimental); log10 βstat a (calculated); Δlog10 β b; Δlog10 K c; log10 X d. Most numerical entries are not recoverable in this copy (surviving fragments: −0.08 h, 1.30 i, 2.07 h). Footnotes: a log10 βstat = log10 2 + (1/2)log10 β1200 + (1/2)log10 β1020; b Δlog10 β = log10 β1110 − log10 βstat; c Δlog10 K = log10 β1110 − log10 β1010 − log10 β1100; d log10 X = 2 log10 β1110 − log10 β1200 − log10 β1020; e for log10 β1111; f log10 βstat = log10 2 + (1/2)log10 β1202 + (1/2)log10 β1020; g Δlog10 β = log10 β1111 − log10 βstat; h Δlog10 K = log10 β1111 − log10 β1010 − log10 β1101; i log10 X = 2 log10 β1111 − log10 β1202 − log10 β1020
Another very important parameter used to compare the stabilization of the heteroligand complexes with their binary systems is Δlog10 K [21]. It is calculated according to Eq. (7) as the difference between the stability constants of the deprotonated mixed-ligand complex, CoII(Himid)(AmacH−1), and of the two binary complexes, CoII(Himid) and CoII(AmacH−1):
$$\Delta\log_{10}K = \log_{10}\beta_{1110} - \log_{10}\beta_{1010} - \log_{10}\beta_{1100} \quad (7)$$
For the complexes containing protonated ligand forms (lysine and histidine), Δlog10 K is calculated as shown in Eq. (8) [11]:
$$\Delta\log_{10}K = \log_{10}\beta_{1111} - \log_{10}\beta_{1010} - \log_{10}\beta_{1101} \quad (8)$$
The parameter Δlog10 K describes the influence of the bonded primary ligand in the binary complex CoII(Himid) towards an incoming secondary ligand (L) forming a heteroligand complex. The negative values (Table 2) indicate that the mixed-ligand systems are less stable than the binary complexes with one molecule of imidazole or one molecule of amino acid, in contrast to Δlog10 β, which deals with the binary complexes CoII(Himid)2 and CoII(AmacH−1)2 containing two ligand molecules. More coordination positions are available for bonding the first ligand than the second ligand [21]. An exception is the positive value of Δlog10 K for the heteroligand complex MLL′H with histidine (Table 2).
By comparing the structure of this ligand with that of other amino acids, it can be seen that histidine has an aromatic ring containing N as a donor atom, which affects the stability of the heteroligand complex [21, 22]. Similar aromatic ring stacking has been observed in mixed-ligand complexes formed by two different ligands which contain aromatic rings [23]. At least one of these rings has to be incorporated in a flexible side chain, just as it occurs in the histidine containing MLL′H species. The other ring may also be of the flexible type or it may be rigidly fixed to the metal ion, as is the case with imidazole. Evidently, a stacking interaction occurring between aromatic ring of amino acid and imidazole in the CoII(Himid)(L-α-Amac) system leads to a higher stability of this heteroligand complex than the binary complex with one protonated histidine. The intramolecular ligand–ligand interaction may also be possible between the aliphatic chain of the amino acid and aromatic ring of the second ligand. Qualitative observations found that the extent of the intramolecular interaction in the complexes increases in the following series: aliphatic–aliphatic < aliphatic–aromatic < aromatic–aromatic [24]. This accounts for the lower stability of the mixed-ligand complexes containing aliphatic amino acids (negative value of Δlog10 K) in comparison with the heteroligand complex of the histidine. Another parameter which enables the stability of the mixed-ligand complexes to be determined is the disproportionation constant log10 X (Table 2) [11]. Like Δlog10 β, log10 X is based on the stability constants of the binary complexes with two ligand molecules (Eq. (9) for deprotonated amino acids and Eq. (10) for protonated lysine): $${ \log }_{ 10} X = \, ( 2 {\text{log}}_{ 10} \beta_{ 1 1 10} - { \log }_{ 10} \beta_{ 1 200} - { \log }_{ 10} \beta_{ 10 20} )$$ $${ \log }_{ 10} X = \, ( 2 {\text{log}}_{ 10} \beta_{ 1 1 1 1} - { \log }_{ 10} \beta_{ 1 20 2} - { \log }_{ 10} \beta_{ 10 20} )$$ Higher values of log10 X indicate more stable heteroligand complexes than their binary counterparts. Comparison of the calculated data log10 X with Δlog10 β values leads to the same conclusion. The experimental results make it possible to conclude that mixed-ligand complexes of MLL′ type are present in the equilibrium mixture created by [Co(imid)2]n and Amac already at pH >4–6. On the other hand, the heteroligand ML2L′ species, known as the "active complex", as it is able to take up dioxygen in a reversible manner, predominate within the higher pH range and attain their maximum share at pH ~9. Our present findings allow the oxygenation constants to be evaluated in full accordance with the species distribution in solution. Such calculations are required also in our laboratory as part of investigations of artificial blood-substituting systems. By knowing the formation constants of the heteroligand ML2L′ species, it is possible to compare their stability in solution. For the group of amino acids used in the present work, the highest value of stability constant was found for L-α-histidine with a heterocyclic side ring, which leads to relative high concentration of the "active" ML2L′ species in the equilibrium mixture. From among the other Amac ligands with aliphatic side groups, the highest stability constant of ML2L′ was evidenced for L-α-aspartic acid. Various methods allow a comparison of the stability of heteroligand complexes with that of the corresponding binary systems. 
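To make the bookkeeping behind these comparisons explicit, the short Python sketch below evaluates Eqs. (3), (4), (7) and (9) for a single deprotonated MLL′ species. It is only a worked illustration: the log10 β inputs are hypothetical and are not the values reported in Table 1 or Table 2.

```python
# Stability-comparison parameters for a heteroligand MLL' complex, following
# Eqs. (3)-(4), (7) and (9). Subscripts follow the mll'h convention of Table 1:
# beta_1110 = Co(AmacH-1)(Himid), beta_1200 = Co(AmacH-1)2, beta_1020 = Co(Himid)2,
# beta_1010 = Co(Himid), beta_1100 = Co(AmacH-1). All inputs are hypothetical.
from math import log10

def comparison_parameters(b1110, b1200, b1020, b1010, b1100):
    b_stat = log10(2) + 0.5 * b1200 + 0.5 * b1020     # Eq. (4)
    d_beta = b1110 - b_stat                           # Eq. (3)
    d_k    = b1110 - b1010 - b1100                    # Eq. (7)
    log_x  = 2 * b1110 - b1200 - b1020                # Eq. (9)
    return b_stat, d_beta, d_k, log_x

if __name__ == "__main__":
    b_stat, d_beta, d_k, log_x = comparison_parameters(
        b1110=7.8, b1200=9.1, b1020=5.2, b1010=2.6, b1100=4.9)
    print(f"log10 beta_stat = {b_stat:.2f}, Delta log10 beta = {d_beta:.2f}, "
          f"Delta log10 K = {d_k:.2f}, log10 X = {log_x:.2f}")
```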
The stabilization constant Δlog10 β, calculated on the basis of the difference between the experimental stability constant for the mixed-ligand complex MLL′ (log10 β1110) and the constant based on statistical data (log10 βstat), indicates that the heteroligand species are more stable than the binary ones. The parameter Δlog10 K, used to compare the stabilization of the heteroligand complexes with their binary systems through the difference between the stability constants of the deprotonated mixed-ligand complex, CoII(Himid)(AmacH−1), and of the two binary complexes, CoII(Himid) and CoII(AmacH−1), demonstrates the lower stability of the mixed-ligand complexes containing aliphatic amino acids (negative value of Δlog10 K) in comparison with the heteroligand complex of histidine. Also, both the disproportionation constant log10 X and the Δlog10 β value indicate that the heteroligand complexes are more stable than their binary counterparts. Furthermore, it is evident from our studies that an excess of amino acid in solution prior to reaction results in an increasing percentage of the CoII(Himid)(Amac)2 heteroligand species as compared with CoII(Himid)(Amac). The use of amino acid in excess also leads to a rise in the share of the binary cobalt(II)–amino acid compounds together with a decreasing share of the binary cobalt(II)–imidazole complexes. For all the amino acids involved, greater excesses of amino acid are associated with lower pH values for which the heteroligand complex reaches maximum share.
The procedure for preparation of the polymeric, pseudo-tetrahedral, semi-conductive Co(II) complex with imidazole, [Co(imid)2]n, as well as its analytical and IR spectroscopic identification, has already been reported in [3, 4, 8]. The purity of the amino acids used in this investigation, L-α-asparagine (Sigma Aldrich), L-α-aspartic acid (Fluka), L-α-lysine (Sigma Aldrich) and L-α-histidine (Fluka), was determined potentiometrically. Imidazole p.a. was purchased from Merck. Alkali (0.5 mol L−1 NaOH, carbonate-free) was purchased from Mallinckrodt Baker B.V. Cobalt(II) nitrate hexahydrate, potassium nitrate and nitric acid (POCh Gliwice) were also p.a. reagents. Argon (99.999 %) from Linde Gas (Poland) was used.
General potentiometric procedures
The protonation constants of the ligands and the formation constants of the binary complexes were determined by means of a MOLSPIN automatic titration kit (Molspin Ltd, Newcastle-upon-Tyne). A 250 μL Hamilton Bonaduz AG microsyringe was used with the auto burette. The titrant (0.5 mol L−1 NaOH) was taken from an external flask protected from CO2. The measurements were carried out with a combined OSH 10–10 electrode (Metron, Gliwice) in a thermostated vessel at an initial volume of 4.00 mL, constant temperature 25.0 ± 0.1 °C and ionic strength I = 0.5 mol L−1 (KNO3). Prior to the proper titrations, a two-buffer standardization of the electrode according to [25] was performed, and the measurement cell was then calibrated on the EMF versus −log10 [H+] scale by strong acid–strong base titration at the same constant temperature and constant ionic strength I = 0.5 mol L−1 (KNO3) according to [26]. The values of the standard electromotive force, E0 (which also takes into account the liquid junction potential), and of the slope, s, from the equation \(E = E_{0} - s \cdot 59.16 \cdot (-\log_{10}[\mathrm{H}^{+}])\) were then inserted in the input files used to evaluate the overall (concentration) formation constants.
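As a minimal sketch of how the calibration relation is used afterwards, the Python fragment below converts a measured EMF reading into −log10[H+] by inverting E = E0 − s·59.16·(−log10[H+]). The E0 and slope values are hypothetical placeholders; in practice they are the values obtained from the strong acid–strong base calibration described above.

```python
# Convert a measured electrode potential (mV) into -log10[H+] using the
# calibration relation E = E0 - s * 59.16 * (-log10[H+]).
# E0 and s below are invented for illustration; real values come from the
# strong acid-strong base calibration titration.

def minus_log_h(emf_mv: float, e0_mv: float, slope: float) -> float:
    """Return -log10[H+] corresponding to a measured EMF in mV."""
    return (e0_mv - emf_mv) / (slope * 59.16)

if __name__ == "__main__":
    E0, s = 385.0, 0.998          # hypothetical calibration parameters
    for emf in (200.0, 50.0, -100.0):
        print(f"E = {emf:7.1f} mV  ->  -log10[H+] = {minus_log_h(emf, E0, s):5.2f}")
```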
Potentiometric procedures in protonation of the amino acids and imidazole
The titrations of the amino acid and imidazole solutions were carried out under the same conditions as in the calibration procedures. Nitric acid was used as the mineral acid added prior to the titrations. The following acidified ligand solutions, each at three different concentrations, were used: asparagine, (1.5; 1.75; 2.0) × 10−2 mol L−1 at a ligand to mineral acid molar ratio of 1:0.9; aspartic acid, (1.0; 1.1; 1.2) × 10−2 mol L−1 at a ratio of 1:1.2; lysine, (1.0; 1.1; 1.2) × 10−2 mol L−1 at a ratio of 1:2; histidine, (1.0; 1.1; 1.2) × 10−2 mol L−1 at a ratio of 1:2; imidazole, (1.9; 2.0; 2.1) × 10−2 mol L−1 at a ratio of 1.5:2.
Potentiometric procedures in determination of CoII(Himid)n and CoII(L-α-Amac)n complexing equilibria under oxygen-free conditions
Solutions containing metal and ligand were prepared at three molar L:M ratios (from 2:1 to 5:1), with the exception of lysine (from 3:1 to 7:1) and imidazole (from 5:1 to 10:1). During preparation, the solutions, initially in the absence of cobalt, were acidified to varying extents (mineral acid to ligand molar ratios: asparagine and aspartic acid 0.1:1; lysine and histidine 1:1; imidazole 1.1:1). The titrations were carried out under pure argon.
Potentiometric procedures in determination of heteroligand cobalt(II)–L-α-amino acid–imidazole complexes under oxygen-free conditions
An isobaric laboratory set was used for the pH-metric and volumetric measurements. Prior to each experiment, the glass electrode was standardized with two buffers (pH 4.00 and pH 9.00, Russell pH Ltd) at 25.0 ± 0.1 °C. The thermostated measurement vessel with an aqueous solution of initial volume 30.0 mL contained the amino acid and potassium nitrate to maintain an ionic strength of I = 0.5 mol L−1. Prior to the experiments, the solutions of aspartic acid and histidine were alkalized with 0.5 mol L−1 NaOH (molar ratio of aspartic acid to base 1:1.2 and of histidine to base 1:0.25), whereas the solution of lysine was acidified with 0.5 mol L−1 HNO3 (molar ratio of ligand to mineral acid 1:0.5). The initial pH, measured with a PHM 85 precision pH-meter (Radiometer, Copenhagen) and a combined C2401 electrode, ranged within 5–6. A small glass vessel with 0.3 mmol of the polymeric [Co(imid)2]n complex (i.e. 0.3 mmol of Co and 0.6 mmol of imidazole) was hung from a glass rod over the solution surface, and the assembly was then tightly closed with a silicon stopper and purged with pure argon. During continuous gas flow and constant stirring, the polymer was inserted into the sample and then, after dissolution, each Co(imid)2 unit could form complexes with the amino acid. As a result of the evolution of one imidazole molecule into solution from each Co(imid)2 unit, the pH rose to about 9–10. Finally, three solutions with molar L:M ratios of 2:1, 3:1 and 5:1 (at a constant amount of [Co(imid)2]n) were titrated with 0.5 mol L−1 HNO3 until the pH decreased to ca 4. A change in color from deep-violet to pink was observed during the titration. The titration end point (a = 2) corresponded to neutralization of the total content of 0.6 mmol of imidazole involved in the Co(imid)2 units of the polymeric [Co(imid)2]n complex (Fig. 11). At a higher excess of the amino acids containing a third functional group (e.g. lysine), the end point moved towards higher values.
Fig. 11 Representative titrations of the [Co(imid)2]n–L-α-lysine system under argon. C Co = 0.01 mol L−1. Curves correspond to various amino acid-to-cobalt ratios
Spectrophotometric procedures in the determination of CoII(Himid)n and CoII(L-α-Amac)n complexing equilibria under oxygen-free conditions
The experiments were carried out by means of a Cary 50 Bio UV–Visible spectrophotometer, slit width 1.5 nm (Varian Pty. Ltd., Australia), equipped with a Peltier accessory (temp. 25.0 ± 0.1 °C). Acidified (HNO3) solutions of the ligands and Co(NO3)2 at ionic strength I = 0.5 mol L−1 (KNO3) were prepared in three molar ratios L:M, corresponding to the potentiometric measurements described before. All the investigations were made within the Co(NO3)2 concentration range (3.0–6.5) × 10−2 mol L−1. The solution, of initial volume 2.40 mL, was placed in a pre-weighed empty cell closed by a silicon stopper. Then, after argonation for 15–20 min, the solution was titrated with small aliquots (0.10–0.20 mL) of de-aerated 0.5 mol L−1 NaOH of known density, up to the precipitation caused by hydrolysis of the Co(II) aqua-ion at higher pH [29]. After each aliquot of alkali, the cell was weighed before recording the UV/Vis spectrum.
Spectrophotometric studies of the dissociation of the heteroligand cobalt(II)–L-α-amino acid–imidazole complexes
Solutions were prepared with a Co(imid)2 to amino acid molar ratio of 1:2, 1:3 or 1:5. The total concentration of cobalt amounted to 3.5 × 10−2 mol L−1. Appropriate amounts of amino acid, polymer and potassium nitrate (I = 0.5 mol L−1) were weighed directly in a silica cell. The cell was closed with a silicon stopper, rinsed with pure argon, and then the necessary volume of argonated water was added to make the sample up to 2.40 mL. The initial solutions of some amino acids were alkalized with 0.5 mol L−1 NaOH (aspartic acid and histidine) or acidified with 0.5 mol L−1 HNO3 (lysine) in order to attain an initial pH of ~9. The solution inside the cell was then titrated with portions of argonated 0.5 mol L−1 HNO3 of known density. Each dose of the titrant amounted to 0.10–0.20 mL. The cell with the silicon stopper was weighed after each titration step and then the UV/Vis absorption curve was recorded by the spectrophotometer at 25.0 ± 0.1 °C.
Following the potentiometric titrations of the amino acids and imidazole in the absence of the metal, of the amino acids and imidazole along with Co(II), and then of the amino acids in the presence of the [Co(imid)2]n polymer, all the formation constants were determined consecutively using the Hyperquad 2008 fitting procedure [27] at the same temperature and in the same medium. Goodness of fit was controlled by the objective function \(U = \sum_{i=1}^{m} W_{i} r_{i}^{2}\), where W is the weight and r the residual, equal to the difference between the observed and calculated values of the electromotive force EMF (Eexp − Etheoret); m is the number of experimental points and n the number of refined parameters. The weighting factor Wi is defined as the reciprocal of the estimated variance of the measurements, depending on the estimated variances of the EMF and volume readings. The normalized sum of squared residuals, σ = U/(m − n), was compared with a χ2 (chi-squared) test of randomness at a number of degrees of freedom equal to m − n. Our value for the ionic product of water under the corresponding conditions, pKw = 13.94, was in close accordance with the literature value of 13.97 [17]. For each system, data from the different titrations were taken together in a comprehensive file. Graphical simulations of speciation diagrams on the basis of the calculated constants were created using HySS 2009 [28].
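To make the fitting criterion concrete, the following sketch reproduces the two statistics defined above in plain Python. It is not the Hyperquad implementation: the EMF values, weights and number of refined parameters are invented solely to show the arithmetic of U and σ.

```python
# Weighted least-squares objective U = sum_i W_i * r_i**2 over the titration
# points and the normalised residual sigma = U / (m - n), as used to judge
# goodness of fit. All data below are hypothetical placeholders.

def goodness_of_fit(e_exp, e_calc, weights, n_params):
    residuals = [eo - ec for eo, ec in zip(e_exp, e_calc)]
    u = sum(w * r * r for w, r in zip(weights, residuals))
    m = len(residuals)
    return u, u / (m - n_params)

if __name__ == "__main__":
    e_exp  = [231.4, 198.2, 150.7, 95.3, 20.1]   # observed EMF, mV
    e_calc = [231.0, 198.9, 150.1, 96.0, 19.5]   # EMF back-calculated from the model
    w      = [4.0, 4.0, 3.5, 3.5, 3.0]           # weights (reciprocal estimated variances)
    U, sigma = goodness_of_fit(e_exp, e_calc, w, n_params=2)
    print(f"U = {U:.3f}, sigma = {sigma:.3f}")
```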
The equilibrium models with the potentiometrically determined formation constants were consecutively confirmed spectrophotometrically by using HypSpec (part of the Hyperquad suite, Protonic Software). The HypSpec program resolves a linear equation system based on the Lambert–Beer law, using the spectrophotometric data and known (or estimated) equilibrium constants, yielding the molar extinction coefficients of the individual absorbing species. Then, optionally, the program can be used to refine the estimated equilibrium constants from the spectrophotometric data.
Abbreviations
Amac: amino acid; imid: deprotonated imidazole (imidazolate), as bound in [Co(imid)2]n; Asp: L-α-aspartic acid; Lys: L-α-lysine; His: L-α-histidine; L = AmacH−1: amino acid ligand in fully deprotonated form, as commonly recognized in equilibrium studies; L′ = Himid: imidazole ligand found in the species of the equilibrium mixture; M = Co2+: metal center; L:M: amino acid ligand-to-metal concentration ratio; a: mmole of base/mmole of ligand; βmll′h = [MmLlL′l′Hh]/([M]m[L]l[L′]l′[H]h): cumulative stability constant; m, l, l′, h: numbers of metals (central ions), ligands and protons, respectively; concentrations in square brackets denote equilibrium concentrations.
Authors' contributions
All authors contributed equally to the development of the manuscript. MW carried out the potentiometric and UV–Vis spectroscopic analysis and participated in the results and discussion. AK suggested the research idea, participated in the results and discussion and coordinated the final formulation. AV provided the polymeric complex and participated in the discussion. All authors read and approved the final manuscript.
Acknowledgements
Financial support of this work by the Medical University of Łódź (Statute Fund No. 503/3-014-02/503-31-001, A. Kufelnicki) is gratefully acknowledged.
Author affiliations: Department of Physical and Biocoordination Chemistry, Faculty of Pharmacy, Medical University of Łódź, Muszyńskiego 1, 90-151 Łódź, Poland; Faculty of Chemistry, University of Wrocław, F. Joliot-Curie 14, 50-383 Wrocław, Poland
References
1. Jeżowska-Trzebiatowska B (1974) Complex compounds as models of biologically active systems. Pure Appl Chem 38:367–390
2. Kiss T (1990) Complexes of amino acids. In: Burger K (ed) Biocoordination chemistry: coordination equilibria in biologically active systems. Ellis Horwood Ltd., Chichester, pp 56–134
3. Jeżowska-Trzebiatowska B, Vogt A, Kozłowski H, Jezierski A (1972) New Co(II) complexes, reversibly binding oxygen in aqueous solution. Bull Acad Pol Sci, Ser Chim 20:187–192
4. ŚwiątekTran B, Kołodziej H, Tran VH, Baenitz M, Vogt A (2003) Magnetism of Co(C3H4N2)2(CO3)(H2O)2. Phys Stat Sol (a) 196:232–235
5. Kufelnicki A, Woźniczka M (2005) Effect of amino acid side groups on complexing equilibria in mixed cobalt(II)–amino acid–imidazole systems. Ann Polish Chem Soc 1:236–240
6. Kufelnicki A, Pająk M (2003) Dioxygen uptake by ternary complexes cobalt(II)–amino acid–imidazole. Ann Polish Chem Soc 2:467–471
7. Khatoon Z, Kabir-ud-Din (1989) Potentiometric studies on mixed-ligand complexes of cobalt(II) and nickel(II) with amino acids as primary ligands and imidazole as secondary ligand. Transit Met Chem 14:34–38
8. Woźniczka M, Pająk M, Vogt A, Kufelnicki A (2006) Equilibria in cobalt(II)–amino acid–imidazole system under oxygen-free conditions. Part I. Studies on mixed ligand systems with L-α-alanine. Polish J Chem 80:1959–1966
9. Abdel-Rahman LH, Battaglia LP, Cauzzi D, Sgarabotto P, Mahmoud MR (1996) Synthesis and spectroscopic properties of N-acetyl-DL-phenyl-glycinato complexes of cobalt(II), nickel(II) and copper(II). Crystal structures of bis(N-acetyl-DL-phenyl-glycinato)diaquobis-(N-methylimidazole)cobalt(II), bis(N-acetyl-DL-phenylglycinato)-diaquobis(imidazole)cobalt(II) and nickel(II). Polyhedron 15:1783–1791
10. Abdel-Rahman LH, Nasser LAE (2007) Synthesis and characterization of some new cobalt(II) and nickel(II) ternary complexes of N-acetyl, N-benzoyl and N-tosyl derivatives of amino acids. Transit Met Chem 32:367–373
11. Aljahdali M, El-Sherif AA, Shoukry MM, Mohamed SE (2013) Potentiometric and thermodynamic studies of binary and ternary transition metal(II) complexes of imidazole-4-acetic acid and some bio-relevant ligands. J Solut Chem 42:1028–1050
12. Várnagy K, Sóvágó I, Goll W, Süli-Vargha H, Micera G, Sanna D (1998) Potentiometric and spectroscopic studies of transition metal complexes of bis(imidazolyl) and bis(pyridyl) derivatives of amino acids. Inorg Chim Acta 283:233–242
13. Martins JG, Gameiro P, Barros MT, Soares HMVM (2010) Potentiometric and UV-visible spectroscopic studies of cobalt(II), copper(II), and nickel(II) complexes with N, N'-(S, S)-bis[1-carboxy-2-(imidazol-4-yl)ethyl]ethylenediamine. J Chem Eng Data 55:3410–3417
14. Pallavi P, Nagabab P, Satyanarayana S (2007) Biomimetic model of coenzyme B12: aquabis(ethane-1,2-diamine-κN, κN′)ethylcobalt(III): its kinetic and binding studies with imidazoles and amino acids and interactions with CT DNA. Helv Chim Acta 90:627–639
15. Vogt A, Kufelnicki A, Leśniewska B (1994) The distinctive properties of dioxygen complexes formed in the cobalt(II)–asparagine–OH− systems (in relation to other amino acids and mixed complexes with N-base). Polyhedron 13:1027–1033
16. Padmavathi M, Satyanarayana S (1999) Potentiometric and proton NMR studies on ternary metal(II) complexes containing thiaminepyrophosphate and a series of secondary ligands. Indian J Chem 38A:295–298
17. Pettit LD, Powell KJ (1993–2000) Stability Constants Database. IUPAC, Academic Software, Royal Society of Chemistry, London
18. Smith RM, Martell AE (1975) Critical stability constants, vol 2. Plenum Press, New York, p 144
19. Lever ABP (1984) Inorganic electronic spectroscopy, 2nd edn. Elsevier Science Publishers BV, Amsterdam, p 481
20. Aljahdali M, El-Sherif AA (2012) Equilibrium studies of binary and mixed-ligand complexes of zinc(II) involving 2-(aminomethyl)-benzimidazole and some bio-relevant ligands. J Solut Chem 41:759–1776
21. Huber PR, Griesser R, Sigel H (1970) Ternary complexes in solution. IX. The stability increasing effect of the pyridyl and imidazole groups on the formation of mixed-ligand-copper(II)-pyrocatecholate complexes. Inorg Chem 10:945–947
22. Walker FA, Sigel H, McCormick DB (1972) Spectral properties of mixed-ligand copper(II) complexes and their corresponding binary parent complexes. Inorg Chem 11:2756–2763
23. Fischer BE, Sigel H (1980) Ternary complexes in solution. 35. Intramolecular hydrophobic ligand-ligand interactions in mixed ligand complexes containing an aliphatic amino acid. J Am Chem Soc 102:2998–3008
24. Tabatala M, Tanaka M (1988) Enhanced stability of binary and ternary copper(II) complexes with amino acids: importance of hydrophobic interaction between bound ligands. Inorg Chem 27:3190–3192
25. Buck RP, Rondinini S, Covington AK, Baucke FGK, Brett CMA, Camões MF, Milton MJT, Mussin T, Naumann R, Pratt KW, Spitzer P, Wilson GS (2002) Measurement of pH. Definition, standards, and procedures. Pure Appl Chem 74:2169–2200
26. Irving HM, Miles MG, Pettit LD (1967) A study of some problems in determining the stoichiometric proton dissociation constants of complexes by potentiometric titrations using a glass electrode. Anal Chim Acta 38:475–488
27. Gans P, Sabatini A, Vacca A (1996) Investigation of equilibria in solution. Determination of equilibrium constants with the Hyperquad suite of programs. Talanta 43:1739–1753
28. Alderighi L, Gans P, Ienco A, Peters D, Sabatini A, Vacca A (1999) Hyperquad simulation and speciation (HySS): a utility program for the investigation of equilibria involving soluble and partially soluble species. Coord Chem Rev 184:311–318
29. Mukherjee GN, Ghosh TK (1994) Metal ion interactions with some antibiotic drugs of penicillin family. Part-IV. Equilibrium study on the complex formation of cobalt(II), nickel(II) and zinc(II) with ampicillin. J Indian Chem Soc 71:169–173
3GPP QoS-based scheduling framework for LTE Pablo Ameigeiras1, Jorge Navarro-Ortiz1, Pilar Andres-Maldonado1, Juan M. Lopez-Soler1, Javier Lorca2, Quiliano Perez-Tarrero3 & Raquel Garcia-Perez3 This paper proposes the design of a scheduling framework for the downlink of the Long Term Evolution (LTE) system with the objective of meeting the Quality of Service (QoS) requirements as defined by the QoS architecture of the 3G Partnership Project (3GPP) specifications. We carry out a thorough review of 3GPP specifications analyzing the requirements of the 3GPP QoS architecture. LTE bearers may be associated with a Guaranteed Bit Rate (i.e., GBR bearers) or not (i.e., non-GBR bearers). Additionally, the specifications establish a Packet Delay Budget (PDB) to limit the maximum packet transfer delay. To achieve our goal, we design a channel-aware service discipline for GBR bearers which is able to fulfill not only the GBR but also the PDB. Additionally, we also design an algorithm for prioritizing GBR and non-GBR bearers from different QoS Class Identifiers (QCIs) following 3GPP QoS rules. We compare the proposed framework with two reference schedulers by means of network-level simulations. The results will show the ability of the proposed framework to address the QoS requirements from 3GPP specifications while providing an interesting performance from a spectral efficiency viewpoint. Numerous operators worldwide have commercially launched Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks in recent years. The 3G Partnership Project (3GPP) has defined these radio access technologies with the objective of providing improved network capacity, coverage, and latency. LTE is based on Orthogonal Frequency Division Multiple Access (OFDMA) where the resource blocks are assigned by an eNodeB scheduler, which therefore plays a key role in the system performance. LTE-A also includes various features that enhance LTE performance. 3GPP specifications have also standardized a Quality of Service (QoS) architecture [1] different from the one defined for previous 3G radio access networks. This architecture is based on the fundamental concept of the bearer, which is assigned to one predefined QoS class. This class-based association determines the final QoS attributes of the provided services to the subscriber groups. The architecture defines relative priorities for the QoS classes, and depending on its class, the bearer is associated with a Guaranteed Bit Rate (i.e., GBR bearer) or not (i.e., non-GBR bearer). The objective of our investigation is the design of a scheduling framework for the downlink of the LTE system which satisfies the QoS requirements defined by the QoS architecture of 3GPP specifications [1, 2]. To the best of the authors' knowledge, no previous work has achieved that objective. For that purpose, we will review the QoS concept in 3GPP specifications. We will show that 3GPP specifications establish a per-QoS-class upper bound for the delay of the data packets transferred by a bearer. This bound is named Packet Delay Budget (PDB) by the 3GPP specifications, and they establish it as the primary goal of the scheduling framework. We will review the terms indicated in 3GPP specifications regarding the satisfaction of the PDB for GBR and non-GBR bearers. For these reasons, we propose in this paper an innovative scheduling framework for the downlink of LTE. The novelty of our design is that it aims at globally addressing the QoS requirements as defined by 3GPP specifications. 
For this, we carry out the following contributions: (i) a thorough review of 3GPP specifications analyzing the QoS requirements imposed by the 3GPP QoS architecture and their implications on the scheduling design. From this review, we identify the Packet Delay Budget as a key requirement to be fulfilled. Additionally, we identify that if the PDB cannot be fulfilled for all bearers then prioritization between bearers of different QoS classes should be triggered. (ii) The design of a channel-aware service discipline for GBR bearers that is able to fulfill not only the Guaranteed Bit Rate but also the Packet Delay Budget. The discipline incorporates a delay-dependent factor based on a sigmoid function. The benefit of this delay-dependent factor is that a parameter controls its upper bound. This facilitates the prioritization of bearers of different QoS classes compared to other service disciplines.(iii) The design of an algorithm for prioritizing GBR and non-GBR bearers from different QoS Class Identifiers (QCIs) following 3GPP QoS rules when the PDB cannot be met for all bearers. We have evaluated the proposed framework by means of network-level simulations and compared it with two reference schedulers. For the evaluation, we have considered scenarios with different load levels and traffic mixes of real-time, progressive video, and elastic traffic. The results will show the ability of the proposed framework to satisfy the QoS requirements from 3GPP specifications while providing an interesting performance from a spectral efficiency viewpoint. The rest of the paper is organized as follows: Section 2 provides an overview of related works. Section 3 presents the system model. Section 4 describes the QoS concept in 3GPP and discusses its implications on the scheduling design. Section 5 presents the proposed QoS scheduling design and Section 6 its performance results by means of simulations. Section 7 finally draws the main conclusions. The literature on downlink scheduling for LTE and OFDMA systems is extensive. Sadiq et al. [3], Capozzi et al. [4], and Dardouri and Bouallegue [5] provide interesting overviews of related prior work. QoS-aware strategies for real-time traffic Concentrating on QoS-aware scheduling algorithms, the Modified-Largest Weighted Delay First (M-LWDF) [6], the Exponential/Proportional Fair (EXP/PF), the Exp rule, and the Log rule are relevant proposals described in those overviews. Despite these scheduling algorithms providing very interesting performance benefits [3–5], they are not well suited to fulfilling the QoS requirements of 3GPP specifications. They were originally designed for a scenario with real-time traffic only. Therefore, they require to be extended to support non-real-time traffic and a strategy to provide relative prioritization between traffic classes. Moreover, although these strategies increase the user's priority when the Head of Line (HOL) packet delay increases, they may be enhanced by emphatically increasing the user's priority when the HOL packet delay approaches its upper bound. The urgency of the HOL packet delay is addressed in the Delay-Prioritized Scheduling (DPS) algorithm [7] by prioritizing a user according to δ=D−w, where w denotes the user's HOL packet delay and D the packet delay upper bound. However, DPS has also been designed for a real-time traffic scenario only. QoS-aware strategies for heterogeneous traffic Some other algorithms have been developed for a scenario with a mix of real-time and non-real-time traffic. 
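Both the real-time metrics surveyed above and the mixed-traffic schedulers discussed next combine a channel-quality term with a delay term in a per-user priority. As a purely illustrative sketch (hypothetical numbers, and not the exact formulation of any cited work nor of the framework proposed in this paper), the following Python fragment evaluates a proportional-fair ratio, the DPS urgency δ = D − w, and an M-LWDF-style product for two users:

```python
# Toy per-user priority terms of the kind used by channel- and delay-aware
# schedulers: a proportional-fair ratio, the DPS urgency delta = D - w, and an
# M-LWDF-style product of HOL delay and the channel term. Values are invented.

def priority_terms(users):
    terms = {}
    for u in users:
        pf = u["rate_inst"] / u["rate_avg"]            # channel-aware PF term
        urgency = u["pdb_ms"] - u["hol_delay_ms"]      # DPS: delta = D - w
        mlwdf = u["alpha"] * u["hol_delay_ms"] * pf    # M-LWDF-style metric
        terms[u["id"]] = dict(pf=pf, dps_urgency=urgency, mlwdf=mlwdf)
    return terms

if __name__ == "__main__":
    users = [
        dict(id="UE1", rate_inst=12.0, rate_avg=4.0, hol_delay_ms=60.0, pdb_ms=100.0, alpha=0.05),
        dict(id="UE2", rate_inst=3.0,  rate_avg=5.0, hol_delay_ms=95.0, pdb_ms=100.0, alpha=0.05),
    ]
    for uid, t in priority_terms(users).items():
        print(uid, t)
```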
The Rate-Level-Based Scheduling (RLBS) [8] algorithm prioritizes a user according to their δ, their spectral efficiency, and a GBR-related factor if the user supports a GBR bearer. However, it is not appropriate to apply a packet delay upper bound to non-real-time traffic that is delay tolerant (see Section 4.3). In [9], Ai et al. propose to compute the service order of real-time users based on the factor D−β·w, where β is an adaptive delay adjustment. In the last step, the algorithm assigns the remaining resource blocks to the non-real-time users. In [10], Iturralde et al. propose to implement a virtual token mechanism along with either the M-LWDF or the EXP/PF scheduling algorithms for real-time traffic. Their strategy provides very interesting performance for real-time flows, but it penalizes non-real-time flows. In [11], Nasralla and Martini suggest a modification to Iturralde's proposal by incorporating the queue size and the HOL packet delay in the priority computation for both real-time and non-real-time traffic. Their results show a balanced performance between the flows of the different traffic clases. In [12], Capozzi et al. propose the Frame Label Scheduler (FLS), which is composed of two levels. At the highest level, FLS calculates every frame (i.e., 10 ms) the total amount of data that real-time flows should transmit in the following frame in order to satisfy their delay constraints. At the lowest level, every Transmission Time Interval (TTI) (i.e., 1 ms) FLS assigns resource blocks to each real-time flow following a Maximum Throughput policy. Then, Proportional Fair is used to share the spare spectrum among best effort users. QoS-aware strategies for bit rate guarantees Other algorithms have focused on providing a guaranteed bit rate. In [13], Mongha et al. present a decoupled time/frequency domain scheduler intended for providing a target bit rate to GBR bearers and fairness control to non-GBR bearers. The proposal of Zaki et al. in [14] is based on the previous one, but it includes a relative prioritization between the bearers of the different QoS classes. Mongha's and Zaki's proposals do not increase the user's priority when the HOL packet delay approaches its upper bound, and therefore, they are not well suited to satisfy the PDB. In [15], Gora proposes a QoS-aware resource management for LTE-A-based relay networks. Although his proposal concentrates on multi-hop relays, it presents valuable utility functions to satisfy the GBR and/or the PDB for real-time traffic. However, his proposal has been neither designed nor evaluated in a traffic mix scenario with several QCIs. All these proposals presented in Sections 2.2 and 2.3 are not well suited to fulfilling the QoS requirements of 3GPP specifications. They either do not provide relative prioritization between QoS classes or they provide relative prioritization without following the rules imposed by 3GPP specifications. Furthermore, the large majority of these related works do not include users' birth and death processes in their evaluation models, which may strongly impact their obtained results (see [16]). Let us consider the OFDMA downlink transmission of an LTE cell where a base station transmits data to a set of users. Let K = {1,.,k,..,K} represent both the set of users and its cardinality at an arbitrary epoch in time. The notation used in the paper is given in Table 1. The base station carries out the transmission towards the K users in TTIs of fixed duration T = 1 ms. 
The total available bandwidth for transmission is divided into S resource blocks of fixed size. Each resource block is composed of 12 consecutive subcarriers with a subcarrier spacing of 15 kHz. Table 1 Notation used in this paper The base station transmission power is assumed to be constant across all subcarriers. The channel model assumes small-scale fading, shadow fading, and path loss. The signal received by each user is corrupted by additive white Gaussian noise (AWGN) and intercell interference. The resulting signal-to-interference-plus-noise ratio SINR_k[n,s] of user k on resource block s and TTI n is used to compute its achievable transmission rate R_k[n,s] based on a finite set of Modulation and Coding Schemes (MCSs). The base station is assumed to have instantaneous and perfect knowledge of the achievable transmission rate R_k[n,s] of all the users based on the Channel Quality Indicator (CQI) reports. The scheduling algorithm uses this information to assign bandwidth resources to the users. The granularity of the bandwidth assignment equals one resource block: $$ \Psi_{k}[n,s]=\left\{ \begin{array}{ll} 1 & \text{resource block } s \text{ is assigned to user } k \text{ in TTI } n\\ 0 & \text{otherwise} \end{array} \right. $$ The base station applies link adaptation, and it employs a single MCS for all resource blocks assigned to user k in TTI n. If user k successfully decodes the radio block in TTI n, it is assumed to correctly receive R_MCS·T bits of data for each resource block s of the radio block. R_MCS denotes the rate of the selected MCS. The base station has one queue buffer for each user. Let q_k[n] denote the number of bits of all packets stored in the queue of user k. Whenever the base station receives a packet destined for user k, it stores the packet in the user's queue buffer and it updates q_k[n]. Additionally, it starts a timer for the received packet. This timer is used to obtain the HOL packet delay w_k[n] when the packet reaches the front of the queue. If user k successfully decodes the radio block in TTI n, the queue q_k[n] corresponding to the flow of user k is decremented according to the transmitted bits: $$ q_{k}[n]=q_{k}[n-1]-\sum\limits_{s=1}^{S} R_{\text{MCS}}\cdot T \cdot \Psi_{k}[n,s] $$ Every user in set K is assigned one bearer to transport its data packets, and let k denote not only the user index but also the index of its assigned bearer. Let us also denote M = {1, …, m, …, M} to represent both the set of QCIs and its cardinality. The assignment of bearer k to QCI_m will be denoted as k ∈ QCI_m. QoS concept in 3GPP The EPS bearer concept The Evolved Packet System (EPS) bearer is the basis of the QoS in LTE [1]. It provides a logical channel between the user equipment (UE) and a Packet Data Network (PDN) for transporting IP traffic [17]. The EPS bearer is the level of granularity for bearer-level QoS control. All packets transferred by an EPS bearer are treated equally by forwarding functionalities (e.g., scheduling policy and queue management policy) [1]. EPS bearers can be classified as GBR or non-GBR bearers. An EPS bearer is named a GBR bearer if the EPS permanently allocates dedicated network resources (more specifically, a GBR value) to it. Otherwise, it is named a non-GBR bearer. The 3GPP QoS concept includes four per-EPS-bearer parameters, three of which are relevant for scheduling purposes [1]: QoS Class Identifier (QCI): is a reference scalar used to specify node-specific parameters that control bearer-level packet forwarding treatment.
Guaranteed Bit Rate (GBR): is the bit rate the GBR bearer is expected to provide. Maximum Bit Rate (MBR): is the maximum bit rate a GBR bearer can provide (e.g., a traffic shaper may discard the excess traffic). In Release 8, the MBR of a GBR bearer shall be set equal to the GBR [1]. The last two parameters are applicable only to GBR bearers. Standardized QCI characteristics The QCI determines the packet forwarding treatment that the EPS has to apply to the traffic conveyed by the bearer. The 3GPP specification [2] defines a set of standardized QCI characteristics associated with the QCI values. Three of these characteristics are relevant for scheduling: Resource type (GBR or non-GBR): determines if the bearers associated with a given QCI are GBR or not. Priority: via its QCI, a bearer is associated with a priority level, which can be used to prioritize between bearers. Note that a QCI with a lower priority parameter has preference over a QCI with a higher priority parameter. Packet Delay Budget (PDB): it defines the upper limit of the delay suffered by a packet between the UE and the Policy and Charging Enforcement Function (PCEF). Table 2 captures the mapping of standardized QCI values to their corresponding characteristics as well as example services for each QCI. Table 2 Standardized QCI values to standardized characteristic mapping [2] It is relevant to note that, according to [1], these standardized characteristics are simply criteria for the configuration of parameters in each node for each QCI; they are not signaled on any interface. Additionally, it is also worth mentioning that the PDB limits the total delay between the PCEF and the UE, and therefore, to calculate the budget applicable to the radio access network, the delay between the PCEF and the eNodeB must be subtracted. Hereafter, we will refer to the PDB assuming that the delay between the PCEF and the eNodeB has already been subtracted. Implications of the bearer QoS profile on the scheduling framework Let us further analyze the implications of the 3GPP QoS concept on the scheduling solution. Regarding GBR bearers, a 3GPP-QoS-compliant scheduling framework must be able to guarantee the GBR for the bearers with QCIs 1–4. Since in Release 8 the MBR shall be set equal to the GBR, the bit rate must be guaranteed up to the limit imposed by the MBR. As stated in [2], services mapped onto these QCIs "can assume that congestion related packet drops will not occur, and 98 percent of the packets shall not experience a delay exceeding the QCI's PDB" (except in the case of transient link outages). Therefore, we conclude that, for a GBR bearer, the scheduling framework must guarantee that the packet delay does not exceed the PDB for all incoming traffic up to the limit of the GBR (for at least 98 % of the packets). Regarding non-GBR bearers, as stated above, the network has no obligation to guarantee a given bit rate to the EPS bearers. This raises the question of how the PDB attribute should be interpreted for non-GBR bearers that are expected to support elastic traffic. If a capacity-limited communication link carries traffic (e.g., elastic TCP-based traffic), this path is not able to guarantee a given target delay for that traffic unless a restrictive rate shaping function (e.g., a leaky bucket) is applied [18]. However, in LTE, the MBR parameter is not applicable to non-GBR bearers.
In this respect, the 3GPP specification [2] states that "in general, the rate of congestion related packet drops can not be controlled precisely for Non-GBR traffic." Additionally, it indicates that a queue management function can contribute to controlling the packet drop rate. Regarding the satisfaction of the PDB for non-GBR traffic, the 3GPP specification [2] states that "98 percent of the packets that have not been dropped due to congestion should not experience a delay exceeding the QCI's PDB." Therefore, ultimately, we conclude that (i) the 3GPP specification acknowledges that non-GBR bearers (e.g., supporting elastic traffic) are expected to suffer packet drops due to congestion and (ii) the delay of packets that are not dropped should not exceed the PDB, but this delay limit is not as strict as for GBR bearers. Regarding the relative priority between bearers of different QCIs, the 3GPP specification [2] indicates that the scheduling between different bearers "shall primarily be based on the PDB." However, if the PDB cannot be met for all bearers with sufficient radio channel quality, then Priority shall be used as follows [2]: "in this case a scheduler shall meet the PDB of a bearer on Priority level N in preference to meeting the PDB of a bearer on Priority level N+1." Therefore, we conclude that a 3GPP-based scheduling framework has to be able to detect if the PDB cannot be fulfilled for all bearers (e.g., in case of heavy load) and, in that case, prioritize between bearers of different QCIs. Additionally, it is relevant to observe that [2] states that the prioritization should not be triggered if the PDB is not fulfilled by a UE with insufficient radio channel quality. Although [2] does not define any criterion to determine if a UE has sufficient channel quality, it is possible to use UE measurements to establish a radio channel criterion. QoS scheduling framework This section presents the scheduling framework based on the 3GPP guidelines and EPS bearer QoS attributes described above. First, for non-GBR bearers, this proposal includes a utility maximization scheduling discipline (based on [16]). Second, for GBR bearers, the proposal includes a delay-dependent scheduling discipline combined with a rate shaping function. Third, the proposal includes a novel algorithm that establishes relative precedence between QCIs when the PDB can no longer be met for all bearers. Scheduling for non-GBR bearers As can be seen in Table 2, TCP elastic traffic is to be mapped onto non-GBR bearers. The literature on scheduling for elastic traffic in OFDMA systems is extensive. Some of these works have designed scheduling disciplines aiming at maximizing the sum (over flows) of the average rate of the flows under power and/or minimum rate constraints [19–23]. Other works have concentrated on maximizing the sum of a utility function instead [24, 25]. However, it is not possible for a scheduling discipline to guarantee a target delay for elastic traffic unless a restrictive rate shaping function (e.g., a leaky bucket) is applied [18]. On the other hand, the quality of TCP-based services is typically determined by the throughput of the flow. For example, for interactive or background services, the quality typically depends on the service response time, which is ultimately determined by the throughput of the flow (see, for example, [26] for web browsing). For progressive video, the quality primarily depends on the rebuffering events [27], which are also determined by the throughput of the flow [28, 29].
For these reasons, we choose to schedule the traffic mapped on non-GBR bearers with a utility maximization scheduling discipline where the utility function is the α-fair function [16]. The scheduling metric is computed as follows: $$ P_{k}[n,s]=\frac{R_{k}[n,s]}{\left[\overline{r_{k}[n]}\right]^{\alpha}} $$ where P_k[n,s] represents the priority of bearer k on resource block s and TTI n, R_k[n,s] denotes the achievable transmission rate (obtained from the Channel Quality Indicator), \(\overline{r_{k}[n]}\) is the low-pass filtered data rate that bearer k has received until TTI n [24], and α is a factor that controls the degree of fairness. Results presented in [16] have shown that by setting α≈0.6 it is possible to achieve a user throughput at 5 % outage similar to the Proportional Fair algorithm but an average user throughput gain of approximately 60 %. Therefore, hereafter, it will be assumed that α is set to 0.6. Scheduling for GBR bearers As described in Section 4.3, the eNodeB has to ensure for GBR bearers that 98 % of the packets shall not suffer a delay that exceeds the QCI's PDB. This requires a tight control of the delay suffered by the packets in the eNodeB queues. Relevant QoS schedulers are the M-LWDF, the Exp rule, and the Log rule [3]. As an example, the Exp rule scheduler computes the user's priority as: $$ P_{k}[n,s]=b_{k} \cdot R_{k}[n,s] \cdot \exp\left(\frac{\widetilde{a_{k}}\, w_{k}[n]}{1+\sqrt{(1/K)\sum_{k} w_{k}[n]}}\right) $$ for any fixed positive b_k and \(\widetilde{a_{k}}\). w_k[n] represents the HOL delay of user k measured in TTI n. The parameter b_k can be set equal to \(1/\overline{r_{k}[n]}\). This way, the priority is computed as in the Proportional Fair discipline and then multiplied by a delay-dependent factor. Additionally, the HOL packet delay w_k[n] can be substituted by the queue length, which yields its queue-length-driven version. While the Exp rule and Log rule scheduling algorithms provide interesting performance benefits [3–5], they are not well suited for fulfilling the QoS requirements of GBR bearers described in Section 4.3. Firstly, although these strategies increase the user's priority when the HOL packet delay increases, they may be enhanced by emphatically increasing the user's priority when the HOL packet delay approaches its PDB. Secondly, and more importantly, the upper bound of the delay-dependent factor in these strategies cannot be easily controlled. This upper bound is reached when w_k[n] approaches the PDB (e.g., due to a temporary load increase). For example, in (4), this upper bound depends on the PDB of bearer k and the average HOL packet delay of all bearers. This makes it difficult to eventually prioritize bearers of a given QCI at the expense of bearers mapped onto QCIs with a higher priority parameter if the latter bearers suffer a large HOL packet delay. For these reasons, we propose to implement the delay-dependent factor based on a sigmoid function [30] instead of the exponential or logarithmic functions. The proposed delay-dependent factor is as follows: $$ f(w_{k})=\frac{c}{1+e^{-a_{k}(w_{k}-D)}} $$ The parameter a_k adjusts the slope of the sigmoid function, the parameter c establishes its upper bound, and the parameter D can be used to control the target packet delay. Although the sigmoid utility function is applied with respect to the bandwidth resource in [30], we apply it with respect to the packet delay to implement the delay-dependent factor. Figure 1 depicts f(w_k) for the case a_k = 75.
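To make the behavior of (5) concrete, the following minimal Python sketch evaluates the delay-dependent factor; the slope a = 75 matches the example of Fig. 1, while the values chosen here for c and D are illustrative assumptions only, not the parameters used in the later evaluation.

import math

def delay_factor(w, a=75.0, c=10.0, D=0.25):
    # Sigmoid delay-dependent factor of Eq. (5): f(w) = c / (1 + exp(-a*(w - D)))
    # w: HOL packet delay (s); a: slope; c: upper bound; D: target delay (s),
    # chosen below the PDB of the bearer's QCI (example values).
    return c / (1.0 + math.exp(-a * (w - D)))

# The factor stays close to 0 for small delays, equals c/2 at w = D, and
# saturates at the configurable upper bound c as w approaches and exceeds D.
for w in (0.0, 0.10, 0.20, 0.25, 0.35):
    print(f"w = {w:.2f} s -> f(w) = {delay_factor(w):.3f}")

In contrast to the exponential factor of (4), the saturation value here is fixed directly by c, which is what facilitates the relative prioritization between QCIs discussed later in Section 5.3.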
Delay-dependent factor f(w_k) with a_k = 75 Our scheduling proposal for GBR bearers computes the priority of bearer k by combining the delay-dependent factor f(w_k) with the scheduling metric for non-GBR bearers of (3) as follows: $$ P_{k}[n,s]=(1+f(w_{k}))\cdot \frac{R_{k}[n,s]}{\left[\overline{r_{k}[n]}\right]^{\alpha}} $$ P_k[n,s] in (6) provides a scheduling priority similar to non-GBR bearers (see Section 5.1) when the Head of Line delay w_k→0, but it increases when the Head of Line delay w_k approaches D. Hence, by appropriately setting the parameters a_k, D, and c, (6) is able to emphatically increase the priority P_k[n,s] of GBR users when their Head of Line delays approach their respective PDB. With this proposal, if the Head of Line delay w_k approaches the PDB (e.g., due to a temporary load increase), the delay-dependent factor will be upper bounded by parameter c, and therefore, it will cause a controlled increase of the priority P_k[n,s]. Furthermore, to prevent an excessive reduction of \(\overline{r_{k}[n]}\), caused by the starvation of a user, from also leading to an uncontrollable increase of P_k[n,s], the metric \(\overline{r_{k}[n]}\) will be lower bounded by a parameter r_lb. Besides the scheduling discipline controlled by the scheduling metric (6), we propose that a rate shaping function (e.g., a leaky bucket) limits the maximum bit rate of the traffic flow as imposed by the MBR parameter. The combination of the rate shaping function and the satisfaction of the PDB guarantees the satisfaction of the GBR parameter. QCI prioritization In this subsection, we present an algorithm for integrating the above-described scheduling disciplines for GBR and non-GBR bearers. Additionally, the algorithm implements the relative prioritization between QCIs described in Section 4.3. The integration mechanism modifies the priority of a bearer k that is mapped onto QCI_m by multiplying it by a factor \(F_{k}^{\text{QCI}_{m}}\). The new priority of the bearer k is given by: $$ P_{k}^{\text{QCI}_{m}}[n,s]=P_{k}[n,s]\cdot F_{k}^{\text{QCI}_{m}} $$ Then, the scheduler algorithm assigns resource block s in TTI n to the user k′ with the highest priority \(P_{k}^{\text{QCI}_{m}}[n,s]\): $$ k'[n,s]=\text{arg} \underset{k\in K}{\text{max}}\left[P_{k}^{\text{QCI}_{m}}[n,s]\right] $$ The factor \(F_{k}^{\text{QCI}_{m}}\) implements the relative prioritization between QCIs. It is computed in every TTI and ∀k∈K. The calculation of \(F_{k}^{\text{QCI}_{m}}\) is described next. As described in Section 4.3, the scheduler has to detect whether all bearers fulfill their target quality. For this purpose, we define a quality performance indicator Q_k for each bearer. For bearers mapped onto GBR QCIs, Q_k is an estimator of the average packet delay of bearer k. For bearers mapped onto non-GBR QCIs, Q_k is an estimator of the average data rate of bearer k. Q_k is then defined as follows: $$ Q_{k}[n]=\left\{ \begin{array}{ll} \overline{d_{k}}[n] = (1-\rho_{d})\cdot \overline{d_{k}}[n-1] +\rho_{d}\cdot \frac{q_{k}[n]}{\lambda_{k}} & \text{QCI}_{m}=1,2,3,4\\ \overline{r_{k}}[n] = (1-\rho_{r})\cdot \overline{r_{k}}[n-1]+\rho_{r}\cdot r_{k}[n] & \text{QCI}_{m}=5,6,7,8,9 \end{array} \right. $$ where q_k[n] denotes the number of bits in the queue of bearer k in TTI n and λ_k is an estimator of the average arrival bit rate on bearer k. Accordingly, r_k[n] represents the transmitted data rate in TTI n by bearer k. ρ_d and ρ_r are time averaging constants.
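As a purely illustrative sketch (not the authors' implementation), the following Python fragment combines the metrics of (3), (6), (7), and (8): the α-fair metric for non-GBR bearers, the sigmoid-weighted metric for GBR bearers with the lower bound r_lb, multiplication by the QCI prioritization factor, and the per-resource-block argmax assignment. The factor F_k^{QCI_m} is taken here as an input; the algorithm that computes it from condition (10) is sketched after the next passage. All parameter values and data structures are assumptions made for this example.

import math

ALPHA = 0.6            # fairness exponent alpha of Eq. (3), as set in Section 5.1
R_LB = 1.0e4           # lower bound r_lb on the filtered rate (bit/s), example value

def delay_factor(w, a=75.0, c=10.0, D=0.25):
    # Sigmoid delay-dependent factor of Eq. (5); example parameter values
    return c / (1.0 + math.exp(-a * (w - D)))

def priority(bearer, s):
    # Priority of one bearer on resource block s: Eq. (3) for non-GBR bearers,
    # Eq. (6) for GBR bearers (with the filtered rate lower bounded by R_LB),
    # multiplied by the QCI prioritization factor as in Eq. (7).
    if bearer['is_gbr']:
        base = bearer['R'][s] / max(bearer['r_avg'], R_LB) ** ALPHA
        base *= 1.0 + delay_factor(bearer['hol_delay'])
    else:
        base = bearer['R'][s] / bearer['r_avg'] ** ALPHA
    return bearer['F_qci'] * base

def assign_resource_blocks(bearers, S):
    # Eq. (8): each of the S resource blocks goes to the bearer with the
    # highest QCI-weighted priority; returns a list of bearer indices.
    return [max(range(len(bearers)), key=lambda k: priority(bearers[k], s))
            for s in range(S)]

# Hypothetical example with one GBR and one non-GBR bearer and two resource blocks
bearers = [
    {'is_gbr': True,  'R': [2.0e5, 1.0e5], 'r_avg': 2.0e5, 'hol_delay': 0.24, 'F_qci': 1.0},
    {'is_gbr': False, 'R': [3.0e5, 3.0e5], 'r_avg': 8.0e5, 'hol_delay': None, 'F_qci': 1.0},
]
print(assign_resource_blocks(bearers, S=2))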
Let us additionally define a target quality \(\text{TQ}^{\text{QCI}_{m}}\) that establishes the minimum quality level that should be experienced by a bearer mapped onto QCI_m: For GBR QCIs: \(\text{TQ}^{\text{QCI}_{m}}\) is a delay threshold for QCI_m. For non-GBR QCIs: \(\text{TQ}^{\text{QCI}_{m}}\) is a data rate threshold for QCI_m. Then, we define condition (10) that is fulfilled if bearer k satisfies $$ \left\{ \begin{array}{cc} \text{TQ}^{\text{QCI}_{m}}<Q_{k}[n] & \text{QCI}_{m}=1,2,3,4\\ Q_{k}[n]<\text{TQ}^{\text{QCI}_{m}} & \text{QCI}_{m}=5,6,7,8,9 \end{array} \right. $$ A bearer k fulfills condition (10) if its quality performance indicator Q_k[n] does not reach the target quality level. If there exists a bearer k∈K that fulfills condition (10), then QCI prioritization must be triggered. Figure 2 depicts the algorithm proposed to compute the factor \(F_{k}^{\text{QCI}_{m}} \forall k \in K\). Computation of factor \(F_{k}^{\text{QCI}_{m}} \forall k \in K\) The algorithm is based on condition (10), and it has the following steps: (1) the algorithm determines the set I = {1, …, i, …, I} of bearers that fulfill condition (10); (2) if set I is empty, QCI prioritization is not required; then, \(F_{k}^{\text{QCI}_{m}}=1\) ∀k, and the algorithm is finished; (3) if set I is not empty, QCI prioritization is triggered; then, the algorithm starts to compute factor \(F_{k}^{\text{QCI}_{m}}\) for k = 1; the algorithm sets factor \(F_{k}^{\text{QCI}_{m}}\) for bearer k mapped onto QCI_m equal to a constant factor F if the following two conditions hold: (i) bearer k fulfills condition (10) and (ii) QCI_m is the QCI with the lowest priority parameter among the QCIs of all the bearers in I; (4) the algorithm repeats step 3 until k is the last bearer in K. In summary, factor \(F_{k}^{\text{QCI}_{m}}\) is set equal to a priority-enhancing factor F if bearer k fulfills condition (10) and its QCI is the one with the lowest priority parameter among all bearers that fulfill (10). Bearers with insufficient radio channel quality are not considered in the prioritization algorithm, and their factor is set to \(F_{k}^{\text{QCI}_{m}}=1\). We further propose to provide a certain guard margin to this reactive mechanism before the quality target limit is exceeded. For example, for QCIs = 1, 2, 3, 4, if the PDB of a given QCI equals 300 ms, then we set \(\text{TQ}^{\text{QCI}_{m}}\) to a lower value (e.g., 250 ms), thereby providing a certain operation margin (50 ms) before the PDB is exceeded. See Fig. 3 for the relation between parameters D, \(\text{TQ}^{\text{QCI}_{m}}\), and PDB. Accordingly, for QCIs = 5, 6, 7, 8, 9, if UEs are considered to incur outage at a given throughput threshold (e.g., 512 kbps), we establish \(\text{TQ}^{\text{QCI}_{m}}\) (e.g., 600 kbps) including a certain guard interval. Relation between parameters D, \(\text{TQ}^{\text{QCI}_{m}}\), and PDB The complexity of the overall scheduling solution is similar to that of the Exp rule scheduler. It has a complexity of O(K·S) due to a search for the maximum of K metrics on each of the S resource blocks. The QCI prioritization algorithm has a low complexity as it only requires the evaluation of condition (10) for each of the K bearers and determining which bearers in set I have the lowest priority parameter. This section presents the performance evaluation of the proposed scheduling framework in a quasi-dynamic system-level simulator. We compare the results with the Proportional Fair and Exp rule scheduling algorithms.
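A minimal Python sketch of this prioritization step is given below, under assumed data structures (each bearer record carries its QCI, the standardized priority parameter of that QCI, the filtered quality indicator Q_k of (9), and the target TQ of its QCI). It is meant only to illustrate condition (10) and the selection of the QCI with the lowest priority parameter among the violating bearers, not to reproduce the authors' implementation; F is supplied as a plain multiplier.

GBR_QCIS = {1, 2, 3, 4}

def update_quality(bearer, rho_d=0.01, rho_r=0.01):
    # EWMA quality indicator of Eq. (9); rho_d and rho_r are example constants.
    if bearer['qci'] in GBR_QCIS:
        inst_delay = bearer['queue_bits'] / bearer['arrival_rate']
        bearer['Q'] = (1 - rho_d) * bearer['Q'] + rho_d * inst_delay
    else:
        bearer['Q'] = (1 - rho_r) * bearer['Q'] + rho_r * bearer['tx_rate']

def violates_target(bearer):
    # Condition (10): the bearer does not reach its target quality level.
    if bearer['qci'] in GBR_QCIS:
        return bearer['Q'] > bearer['TQ']   # filtered delay above the threshold
    return bearer['Q'] < bearer['TQ']       # filtered rate below the threshold

def qci_factors(bearers, F=2.5):
    # Factor F_k^{QCI_m} for every bearer; bearers with insufficient radio
    # channel quality are assumed to have been filtered out of `bearers`.
    violating = [b for b in bearers if violates_target(b)]
    if not violating:
        return {b['id']: 1.0 for b in bearers}
    best = min(b['priority'] for b in violating)   # lowest parameter = highest precedence
    return {b['id']: (F if violates_target(b) and b['priority'] == best else 1.0)
            for b in bearers}

# Hypothetical example: a QCI4 bearer exceeding its delay target and a QCI9
# bearer below its rate target; only the QCI4 bearer (lower priority parameter)
# receives the enhanced factor.
bearers = [
    {'id': 'A', 'qci': 4, 'priority': 5, 'Q': 0.27, 'TQ': 0.25},
    {'id': 'B', 'qci': 9, 'priority': 9, 'Q': 4.0e5, 'TQ': 6.0e5},
]
print(qci_factors(bearers))   # {'A': 2.5, 'B': 1.0}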
Simulation setup and parameters We consider a hexagonal network of 13 cells. We simulate users only in the central cell of the grid, whereas the remaining cells are a source of interference. Users maintain their geographical location during their lifetime, and consequently, their deterministic path loss and shadow fading do not vary. However, users suffer a fast fading process that is updated in each TTI. We use the ITU Typical Urban (TU) power delay profile as the multipath model. The Geometry Factor distribution provided by the simulation model precisely matches the results for a macro-cell outdoor scenario presented in [31]. We use the Exponential Effective SIR Metric (EESM) to model the link-to-system-level mapping. Four different services are considered: live video streaming, buffered video streaming, web browsing, and FTP. These services are mapped onto QCIs 2, 4, 8, and 9, respectively (see Table 2). The user birth process follows a Poisson process. We control the mean offered cell load with a simulation parameter. Unless otherwise stated, the mean offered cell load is set to 11.5 Mbps, which is a heavy load setting for the considered network configuration. Live video streaming and buffered video streaming services are modeled as CBR sources with bit rates of 240 and 440 kbps, respectively. We consider simplified web browsing and FTP models: each user downloads just one web page (of size 250 kB) or FTP file (of size 1 MB) in every session, and when the download is completed, the session is finished. Five different cases of traffic mix shares are considered (see Table 3). Users with Geometry Factor ≤ −3 dB are assumed to have insufficient radio channel quality, and therefore, they are not considered in the prioritization algorithm (see Section 5.3). The remaining parameters of the simulated LTE model are included in Table 3. Table 3 3G LTE network model Simulation results Sensitivity to prioritization factor F We commence the results section by analyzing the sensitivity of the prioritization mechanism between QCIs to the factor F. As described in Section 5.3, factor \(F_{k}^{\text{QCI}_{m}}\) is set equal to the priority-enhancing factor F if bearer k fulfills condition (10) and its QCI is the one with the lowest priority parameter among all bearers that fulfill (10). Figure 4 depicts key performance indicators for the proposed QoS scheduler and for the traffic mix case Equal Load. For QCI2 and QCI4, the figure represents the percentage of UEs with sufficient channel quality for which at least 2 % of the packets suffer a delay exceeding the QCI's PDB. For QCI8 and QCI9, Fig. 4 represents the percentage of UEs with sufficient channel quality that have a throughput below 512 kbps. Performance sensitivity to prioritization factor F When the factor F increases, the relative prioritization between QCIs is more aggressive. Therefore, the percentage of users of QCI9 in outage increases (see Fig. 4). On the other hand, a low factor F is unable to prioritize between QCIs. This can be observed for QCI8, in which the percentage of users that suffer a throughput below 512 kbps increases when F decreases below 3–4 dB. Additionally, when F is higher than 4–5 dB, the percentage of users of QCI2 and QCI4 that incur outage increases. A very high F penalizes the ability of the scheduling algorithm to serve users at their fading peaks. For these reasons, we select a factor F = 3–4 dB for the design.
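How a factor expressed in decibels enters the linear priority metric of (7) is not spelled out above; a common convention, assumed here purely for illustration, is the power-ratio mapping F_linear = 10^(F_dB/10), under which the selected range roughly doubles the metric of the boosted bearers:

# Assumed dB-to-linear mapping for the priority-enhancing factor F
def db_to_linear(f_db):
    return 10.0 ** (f_db / 10.0)

for f_db in (3.0, 4.0, 5.0):
    print(f"F = {f_db:.0f} dB -> linear multiplier {db_to_linear(f_db):.2f}")
# F = 3 dB -> ~2.00, F = 4 dB -> ~2.51, F = 5 dB -> ~3.16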
Evaluation with different traffic mixes Table 4 compares key performance indicators between the Proportional Fair, the Exp rule scheduler, and the proposed QoS scheduler for all considered traffic mix cases (H.L. stands for Heavy Load). The implementation of the Exp scheduler assumes that the priority of GBR flows is computed using (4), whereas the priority of non-GBR flows is computed using Proportional Fair. The parameters used in (4) are extracted from [3]. Table 4 Performance indicators of simulation results The results show that more than 99 % of the users of QCI2 fulfill the PDB for the considered traffic mix cases and for the three schedulers due to the low input rate of the considered live streaming service. However, for QCI4 the percentage of users that do not fulfill the PDB varies in the different traffic mix cases from approximately 4.5 to 7.1 % with Proportional Fair. With the Exp rule and the QoS scheduler, this outage is almost eliminated. Regarding QCI8, the proposed QoS scheduler considerably reduces the percentage of users that suffer a throughput below 512 kbps compared to both the Proportional Fair and especially the Exp rule scheduler. On the other hand, for QCI9, the proposed QoS scheduler increases the percentage of users in outage for all traffic mix cases. As QCI9 has the highest priority parameter, when the target quality \(TQ^{\text{QCI}_{m}}\) cannot be fulfilled for all users, the factor \(F_{k}^{\text{QCI}_{m}}\) of the QoS scheduler prioritizes users mapped onto QCI2, QCI4, and QCI8. This occurs at the expense of users mapped onto QCI9. Additionally, Table 4 shows that the proposed QoS scheduler increases the average and the 95th percentile of the user throughput for QCI8 and QCI9. Let us define a metric \(Q^{\text{QCI}_{m}}[n]\) that measures the performance of the user with the worst quality among all users mapped onto QCI_m that have sufficient radio channel quality: $$ Q^{\text{QCI}_{m}}[n]= \underset{\forall k\in \text{QCI}_{m}}{\text{max}}\, Q_{k}[n] \qquad \text{QCI}_{m}=1,2,3,4. $$ Figure 5 plots a realization of the evolution of \(Q^{\text{QCI}_{m}}[n]\) for QCI4 in the Equal Load case for the Proportional Fair and the proposed QoS scheduling algorithms. For QCI4, \(Q^{\text{QCI}_{m}}\) measures the filtered packet delay of the bearer with the worst quality (i.e., the maximum Q_k[n]) among all bearers of QCI_m. It can be observed how the QoS scheduling algorithm is able to keep the filtered packet delay of all users of QCI4 below their PDB. Time evolution of maximum packet delay (\(Q^{\text{QCI}_{m}}\)) for QCI4. a Proportional Fair and b QoS scheduler Evaluation under different offered loads Here we analyze the performance of the proposed scheduling framework for different offered loads. For the evaluation, we consider the traffic mix case Equal Load. For QCI2 and QCI4, Fig. 6 depicts the percentage of users with sufficient channel quality for which at least 2 % of the packets suffer a delay exceeding the QCI's PDB. For QCI8 and QCI9, Fig. 7 represents the percentage of users with sufficient channel quality that suffer a throughput below 512 kbps. The results show that under a wide range of load conditions the proposed scheduler is able to eliminate or significantly reduce the outages of QCI2, QCI4, and QCI8. This occurs at the expense of increasing the outage of QCI9. The results also show that the Exp rule is robust with respect to the outage of GBR users, but it considerably degrades the performance of users in QCI8 and QCI9.
This is not in line with the 3GPP QoS rules, which state that users of QCI8 should be prioritized over users of QCI9. Percentage of UEs for which at least 2 % of the packets suffer a delay exceeding the QCI's PDB. QCI2 and QCI4 Percentage of UEs with throughput below 512 kbps. QCI8 and QCI9 In this paper, we present a scheduling framework with the objective of globally addressing the QoS requirements as defined by 3GPP specifications. For this, we have analyzed the implications of the 3GPP QoS architecture on the downlink scheduling at the eNodeBs of an LTE network. For GBR bearers, the scheduler has to guarantee a packet delay below the QCI's PDB for all incoming traffic up to the limit of the GBR parameter. For non-GBR bearers, 3GPP specifications admit that congestion-related packet drops may occur and only the packets that have not been dropped due to congestion should not experience a delay exceeding the QCI's PDB, although this delay limit is not as strict as for GBR bearers. Additionally, if the PDB cannot be fulfilled for all bearers, the scheduler should prioritize between bearers of different QCIs. Based on the analysis of the 3GPP specifications, we have proposed a scheduling design to address the aforementioned QoS requirements. For GBR bearers, we have proposed a scheduling discipline that incorporates a delay-dependent factor based on a sigmoid function that emphatically increases the bearers' priority when the Head of Line delay approaches the PDB and that can be combined with the relative prioritization between QCIs. For non-GBR bearers, we have proposed to use a classical channel-aware scheduling policy that aims at maximizing the sum of a concave (α-fair) utility function. Additionally, we have designed a mechanism that integrates the scheduling disciplines and triggers the relative prioritization between QCIs when the target quality cannot be fulfilled for all bearers. The simulation results have shown how the proposed scheduler is able to fulfill the PDB for GBR bearers and provides a spectrally efficient performance for non-GBR bearers (average user throughput gain in the range 15–25 % over Proportional Fair). The results have also shown the ability of the proposed scheduler to prioritize bearers according to the QCI's priority if the target quality cannot be met for all bearers. As future work, we propose a theoretical evaluation of the proposed solution. TS 23.401 V8.12.0. 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; General Packet Radio Service (GPRS) Enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access (Release 8) (2010). http://www.3gpp.org/ftp/Specs/archive/23_series/23.401/23401-8c0.zip. TS 23.203 V8.11.0. 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Policy and Charging Control architecture (Rel. 8) (2010). http://www.3gpp.org/ftp/Specs/archive/23_series/23.203/23203-8b0.zip. B Sadiq, R Madan, A Sampath, Downlink scheduling for multiclass traffic in LTE. EURASIP Journal on Wireless Communications and Networking. 2009:, 1–18 (2009). F Capozzi, G Piro, LA Grieco, G Boggia, P Camarda, Downlink packet scheduling in LTE cellular networks: key design issues and a survey. Commun. Surv. Tutor. IEEE. 15(2), 678–700 (2013). S Dardouri, R Bouallegue, Comparative study of downlink packet scheduling for LTE networks. Wireless Personal Commun.82(3), 1405–1418 (2015). P Ameigeiras, J Wigard, P Mogensen, in Vehicular Technology Conference, 2004. VTC2004-Fall.
2004 IEEE 60th, 2. Performance of the M-LWDF scheduling algorithm for streaming services in HSDPA, (2004), pp. 999–1003. doi:10.1109/VETECF.2004.1400171. K Sandrasegaran, HAM Ramli, R Basukala, in Wireless Communications and Networking Conference (WCNC), 2010 IEEE. Delay-prioritized scheduling (DPS) for real time traffic in 3GPP LTE system, (2010), pp. 1–6. X Wu, X Han, X Lin, in Communications (ICC), 2015 IEEE International Conference On. QoS oriented heterogeneous traffic scheduling in LTE downlink, (2015), pp. 3088–3093. Q Ai, P Wang, F Liu, Y Wang, F Yang, J Xu, QoS-guaranteed cross-layer resource allocation algorithm for multiclass services in downlink LTE system (Wireless Communications and Signal Processing (WCSP), 2010 International Conference on, Suzhou, 2010). doi:10.1109/WCSP.2010.5633846. M Iturralde, T Ali Yahiya, A Wei, A-L Beylot, in Vehicular Technology Conference (VTC Fall), 2011 IEEE. Performance study of multimedia services using virtual token mechanism for resource allocation in LTE networks, (2011), pp. 1–5. MM Nasralla, MG Martini, in Personal Indoor and Mobile Radio Communications (PIMRC), 2013 IEEE 24th International Symposium On. A downlink scheduling approach for balancing QoS in LTE wireless networks, (2013), pp. 1571–1575. G Piro, LA Grieco, G Boggia, R Fortuna, P Camarda, Two-level downlink scheduling for real-time multimedia services in LTE networks. Multimedia IEEE Trans.13(5), 1052–1065 (2011). G Mongha, K Pedersen, I Kovacs, P Mogensen, in Proceedings of the IEEE Vehicular Technology Conference, VTC Spring 2008. QoS oriented time and frequency domain packet schedulers for the UTRAN long term evolution, (2008). Y Zaki, T Weerawardane, C Gorg, A Timm-Giel, in Proceedings of the IEEE Vehicular Technology Conference, VTC Spring 2011. Multi-QoS-aware fair scheduling for LTE, (2011). J Góra, QoS-aware resource management for LTE-Advanced relay-enhanced network. EURASIP J. Wireless Commun. Netw.2014:, 178 (2014). P Ameigeiras, Y Wang, J Navarro-Ortiz, P Mogensen, JM Lopez-Soler, Traffic models impact on OFDMA systems design. EURASIP J. Wireless Commun. Netw.2012:, 61 (2012). M Olson, S Sultana, S Rommer, L Frid, C Mulligan, SAE and the evolved packet core (Elsevier, Burlington, 2009). NR Figueira, J Pasquale, An upper bound delay for the virtual-clock service discipline. IEEE/ACM Trans. Netw.3(4), 399–408 (1995). J Jang, KB Lee, Transmit power adaptation for multiuser OFDM systems. IEEE J. Selected Areas Commun.21(2), 171–178 (2003). LMC Hoo, B Halder, J Tellado, JM Cioffi, Multiuser transmit optimization for multicarrier broadcast channels: asymptotic FDMA capacity region and algorithms. IEEE Trans. Commun.52(6), 922–930 (2004). YJ Zhang, KB Letaief, in IEEE Transactions on Wireless Communications, 3. Multiuser adaptive subcarrier-and-bit allocation with adaptive cell selection for OFDM systems, (2004), pp. 1566–1575. doi:10.1109/TWC.2004.833501. H Yin, H Liu, An efficient multiuser loading algorithm for OFDM-based broadband wireless systems. Proceedings of the IEEE Global Communications Conference 2000 (GLOBECOM'00). 1:, 103–107 (2000). K Seong, M Mohseni, JM Cioffi, in Proceedings of the IEEE International Symposium on Information Theory (ISIT '06). Optimal resource allocation for OFDMA downlink systems, (2006), pp. 1394–1398. G Song, Y Li, Utility-based resource allocation and scheduling in OFDM-based wireless broadband networks. IEEE Commun. Mag.43(12), 127–134 (2005).
P Svedman, SK Wilson, LJ Cimini, B Ottersten, Opportunistic beamforming and scheduling for OFDMA systems. IEEE Trans. Commun.55(5), 941–952 (2007). P Ameigeiras, JJ Ramos-Munoz, J Navarro-Ortiz, P Mogensen, JM Lopez-Soler, QoE oriented cross-layer design of a resource allocation algorithm in beyond 3G systems. Comput. Commun. J.33:, 571–582 (2010). RKP Mok, EWW Chan, RKC Chang, in IFIP/IEEE International Symposium on Integrated Network Management (IM). Measuring the quality of experience of HTTP video streaming, (2011), pp. 485–492. P Ameigeiras, JJ Ramos-Munoz, J Navarro-Ortiz, JM Lopez-Soler, Analysis and modeling of YouTube traffic. Transactions on Emerging Telecommunications Technologies, John Wiley & Sons. 23(4), 360–377 (2012). JJ Ramos-Munoz, J Prados-Garzon, P Ameigeiras, J Navarro-Ortiz, JM Lopez-Soler, Characteristics of mobile YouTube traffic. IEEE Wireless Commun. Mag.21(1), 18–25 (2014). W Kuo, W Liao, Utility-based radio resource allocation for QoS traffic in wireless networks. IEEE Trans. Wireless Commun.7(7), 2714–2722 (2008). N Wei, et al., in Proceedings of the IEEE Vehicular Technology Conference, VTC 2006-Fall. Baseline E-UTRA Downlink spectral efficiency evaluation, (2006), pp. 1–5. This work is partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (project TIN2013-46223-P). Additionally, the authors would like to express their gratitude to the Supercomputing and Bioinnovation Center (SCBI) of the University of Malaga (Spain) for their support and resources. Department of Signal Theory, Telematics and Communications, University of Granada, Periodista Daniel Saucedo Aranda s/n, Granada, 18071, Spain: Pablo Ameigeiras, Jorge Navarro-Ortiz, Pilar Andres-Maldonado & Juan M. Lopez-Soler. Telefonica R&D - GCTO, C/ Zurbarán 12, Madrid, 28010, Spain: Javier Lorca. Telefonica R&D - GCTO, Ronda de la Comunicación s/n, Madrid, 28050, Spain: Quiliano Perez-Tarrero & Raquel Garcia-Perez. Correspondence to Pablo Ameigeiras. The authors declare that they have applied for a patent relating to the content of the manuscript. Ameigeiras, P., Navarro-Ortiz, J., Andres-Maldonado, P. et al. 3GPP QoS-based scheduling framework for LTE. J Wireless Com Network 2016, 78 (2016) doi:10.1186/s13638-016-0565-9 Non-GBR QoS Class Identifier
Village differences in rural household energy consumption within the Loess hilly region of China Guozhu Li1, Jinxin Sun1 and Ailin Dai1 Received: 23 February 2016 Accepted: 28 October 2016 There are obvious differences in rural household energy consumption, which vary according to the location of the households. At the village level, geographic factors significantly influence household energy choices and consumption. Therefore, it is vital to research differences in rural energy use among different types of villages to strengthen the targeted implementation of rural energy policy and to correctly adjust measures to local conditions. For this study, typical villages were selected, and the related data were obtained by using questionnaire surveys and household interviews. We investigated the differences among villages regarding rural household energy consumption for mountainous, semi-mountainous, and plains areas in the Loess hilly region of Gansu Province. The results indicate obvious differences in rural household energy consumption among the different types of villages, although all of them share the distinctive feature of utilizing a combination of energy sources; overall, the level of rural household energy consumption is relatively low. In the mountainous areas, households mainly depend on straw and use coal, animal manure, solar energy, wood, and bio-gas as auxiliary energy sources. In the semi-mountainous areas, households mainly depend on coal and use straw, wood, and solar energy as auxiliary energy sources. In the plains areas, households mainly depend on coal and use grass, straw, solar energy, and wood as auxiliary energy sources. Energy used for cooking and heating, both of which are required for basic survival, accounted for most of the energy consumption. In the hilly, mountainous areas, households relied on kangs (an integrated system for cooking, sleeping, household heating, and ventilation) for heat in the winter. In the semi-mountainous areas, households used both kangs and stoves for heat. In the plains along the river district, households primarily depended on stoves for heat. The characteristics of energy combination provide evidence that farmers tend to make full use of their own resources to achieve maximum utility. Eco-environmental and economic problems should both be considered and consequently resolved together for optimal rural energy development. Energy is the most basic material demand for the existence and development of human beings. Energy consumption is used as a criterion for measuring the level of economic and social development for a certain region [1]. Due to the energy crisis and other environmental, economic, political, market, and social issues, researchers have sought to develop sustainable and renewable energy sources to reduce energy consumption, protect the environment, and promote regional development [2]. Household energy consumption accounts for a substantial proportion of the total energy consumption worldwide. In certain European, North American, and South American countries, household energy use accounts for approximately 30 % of total energy consumption [3, 4]. In China, the rapid development of the economy and society in past decades has resulted in an increasingly high energy demand, for which household energy consumption has accounted for a significant proportion [5–8].
It has been reported that residential energy consumption was approximately 11 % of China's total energy consumption in 2012 [9]. In the vast rural areas of developing countries, there are extensive and profound connections which link regional economic and social development and environmental protection with the supply and demand of energy. Because of the growing concern regarding rural household energy consumption, several research areas have emerged. The first area of research focuses on changes in the household energy consumption structure. Multiple factors, including local energy resources and economic, social, cultural, and natural geographical factors, influence the rural household energy consumption structure [10, 11]. Optimizing the energy structure and adjusting energy management policy are important aspects of promoting the improvement of rural areas in developing countries [12–14]. The second area of research focuses on the utilization and exploitation of renewable energy. Evaluations of the exploitation potential of fuel sources in different regions, particularly renewable energies such as solar energy, wind energy, biomass, and geothermal heat, have shown great potential; however, energy utilization is restricted because households must pay for the energy they use, and the shortage of energy sources continues to be a long-standing problem [15–17]. The improvement of resource use technology and the transformation of resource consumption patterns are not the only approaches available to resolve energy shortages; however, they play important roles in improving the efficiency of resource use and reducing environmental pollution. These approaches have been used in the rural areas of Asia and Africa with beneficial results [18–20]. Because rural areas hold a large population, household energy consumption in these areas is significant, and traditional methods of energy consumption used by thousands of households cause tremendous destruction to the environment of China. China comprises a vast territory: its natural conditions are complicated, economic development levels are widely different, and regional differences in rural household energy consumption are apparent. Solving the rural energy problem in China requires the combination of strategic planning by the national energy system and full awareness of regional differences in rural energy consumption; such a solution should be based on the energy requirements of actual regions and include effective, scientific planning and design of rural energy development strategies [21]. Similar to other developing countries, China has a renewable energy program for citizens living in rural areas that are often very remote. Within the framework of this program, numerous renewable energy technologies have been developed in the past and are currently being developed to address rural energy shortages [22, 23]. The two most important renewable energy technologies used in the rural areas of China are bio-gas digesters and solar stoves. Bio-gas production is an important aspect of the energy strategy in China [24]. In the study area of Gansu Province in Northwest China, the rural energy construction program was initiated in the 1970s and thus has a relatively long history. In 2000, the national government strengthened the program, and Gansu's rural renewable energy construction program profited from this opportunity for development.
At the end of 2011, 308,000 households owned rural bio-gas digesters, and 788,000 solar stoves were distributed in Gansu Province [25]. In the Loess hilly region, many residents are poverty-stricken, the environment is fragile, household energy shortages exist, and excessive biomass consumption has become an important factor contributing to ecological degradation [26]. Because the terrain is complex and villages are located in different terrain areas, there are differences in planting structure, energy resource endowment, and economic development levels. Therefore, it is vital to research the differences in rural energy use between these different types of villages to strengthen pertinent rural energy policy implementation and correctly adjust the measures to local conditions. Prior studies have analyzed issues such as greenhouse gas emissions, energy poverty, and health risk; however, insufficient attention has been paid to the differences in rural household energy consumption. On a micro-scale, this study examines the differences in rural household energy consumption for different types of villages in the Loess hilly region in Gansu Province. We obtained relevant data via questionnaire survey and analyzed the current situation in rural family life and the respective energy consumption structures and village differences. We then developed a model of rural energy construction for the different types of villages and provided a basis for policy design relating to regional development and environmental management. Description of study area The Loess hilly region in the center of Gansu Province includes broken terrains and ravines. It suffers from the most serious soil erosion within the Loess Plateau region; thus, its ecological environment is fragile. In this area, crisscrossing gullies and sparse vegetation on loose soil are mostly unprotected against heavy rain, which leads to severe erosion and delivers soil and water into the gullies. The most severe loss of water and soil is in the Loess Plateau where the annual average temperature is between 5.9 and 10.4 °C. Precipitation is approximately 400 mm, and loose soil is easily cultivated. There is a long agricultural history in the Loess Plateau, and archeological studies on the Dadiwan Ruins of Qin'an County have demonstrated that crop planting occurred in the region more than 7000 years ago [27]. When farmland is barren and dry and yields are low, cultivating more land becomes the primary method of increasing the food supply. When household energy is extremely scarce, farmers use local materials such as large amounts of crop straw, trees, and weeds for cooking and heating. This leads to a significant challenge in deciding whether to utilize these materials for fuel, feed, or fertilizer [28]. The demand for food leads to reclaiming steep-slope lands, and the demand for fuel damages vegetation. Due to an increase in population and less land for families to cultivate, a low level of agricultural production and subsequent farmer poverty are common in low-income areas in China. Such a high demand places great pressure on resources and the environment and seriously restricts sustainable development for this region. Household energy consumption is a process that involves key interaction between the environment and the economy, as demonstrated in Fig. 1. Rural energy consumption's impact on the environment and the economy Since 1999, the nation has implemented Western Development Strategies.
The goal of the strategy is to return the region from farmland to forest and grassland in order to implement natural forest protection. The agriculture ministry has implemented the Ecology Household Project (EHP) in this region since 2003. The EHP was founded on the concept of renewable energy resources. The objective of EHP is to transform the agricultural production and lifestyle of traditional rural households through the application of technologies and engineering that are carefully matched to different types of households. In addition, farmers' incomes have increased. Coal, electricity, and liquid gas consumption have also increased, which has prompted an upgrade to the local energy structure [29]. In poverty-stricken areas with fragile environments, solving basic energy requirements is of practical significance. This includes vegetation protection and expanding this protection incrementally [30]. Effectively utilizing existing biomass resources and improving energy use technology and energy efficiency are effective methods to solve certain local ecological and economic problems. To quantitatively compare the differences among household energy consumption for different villages, it is necessary to obtain reliable data and design an appropriate method. For this survey, the study area was divided into three different terrains including mountainous areas, semi-mountainous areas, and plains areas. Hilly mountain villages were located in the mountains at high altitudes and experienced a significant temperature fluctuation between day and night. The agricultural planting structure was dominated by traditional agriculture. Production was not high, and there was extensive cultivation. These areas were affected by a drought-prone climate, which resulted in unstable agricultural production. Furthermore, these villages were located a great distance from towns, which made transport inconvenient. Lastly, these villages had a low economic level. Semi-mountainous areas were typically located on the mountainside, with an altitude lower than that of the mountainous areas. The agricultural planting structure was also dominated by traditional agriculture, and farmers planted a small number of cash crops. Transport was relatively inconvenient because the villages were a great distance from towns. Plains area villages had the best local economic conditions. The fluvial outwash land was fertile, and the areas had water security. The agriculture industry gave priority to economic crops and planted fruit and vegetables primarily for surrounding urban areas and other markets. In the plains areas, farmers were wealthier than those in the other two areas. In June 2012, we conducted a pre-test survey to affirm the validity of the sample and reconstructed the wording of our questions to ensure the questionnaire (Additional file 1) was clear and user-friendly. In July and December 2012, 13 representative villages in Qin'an and Tongwei Counties were surveyed. Tazipo, Qiyao, Jiahe, Pindao, Liangxian, Xiedao, Dougou, and Dunwan are mountainous hilly villages. The Heishitou, Qizui, and Dapin villages are located in semi-mountainous areas. Sizui and Houtan were located in the plains area (Fig. 2). The basic conditions of the 13 villages are provided in Table 1.
Location of the villages investigated in this study Basic conditions of the three types of villages Altitude (m) Number in household Number of households investigated Farmland per household (ha) Average yearly household income (yuan) Mountainous areas Tazipo Qiyao Pindao Liangxian Xiedao Dougou Dunwan Semi-mountainous areas Heishitou Qizui Dapin Plains areas Sizui Houtan We visited 390 peasant families and effectively surveyed 371 farmers, which accounted for 24.65 % of the total number of households in the 13 villages. The families surveyed included both high-income households (annual household income above 40,000 yuan) and low-income households (annual household income below 5000 yuan). Peasant families included three-generation families and nuclear families. The survey data contained detailed geographic information at the village level and included the natural characteristics of each village surveyed because the geographic location significantly influenced household energy choices and consumption in rural areas. Collected data included the distance between villages, the distance to the nearest town, the latitude of the village, and the topographical features of the villages. Detailed information was collected regarding household energy consumption, particularly the installation and use of bio-gas digesters and solar stoves. Each type of fuel that was used for 1-day cooking and heating was evaluated. The thermal efficiencies of bio-gas and solar heating of water were measured. In the survey, we gathered information regarding household characteristics including family labor, basic demographic and educational information for all household members, housing conditions, and family assets. The data from the survey also contained information regarding household production activities, such as livestock production, land ownership, and off-farm employment. During the survey process, we discussed the representativeness of the sample data with officials from rural energy management offices from both Qin'an and Tongwei Counties. Calculation method Statistical data collected from the field investigations were used to calculate annual household energy consumption. The total energy consumption of households can be calculated using the following formula: $$ \mathrm{Te}=\sum_{j=1}^{m}\sum_{i=1}^{n} x_{ij} \qquad (i=1,2,\dots,n;\ j=1,2,\dots,m) $$ Te—total household energy consumption n—number of types of energy resources m—number of types of energy end uses x_ij—the amount of the ith type of resource used for the jth type of purpose Energy resources were converted into standard coal equivalents to compare the different energy structures. The conversion factors from physical units to coal equivalents [31] are provided in Table 2. The conversion coefficient for energy resources Crop residues (kg) Grass (kg) Firewood (kg) Dung (kg) Coal (kg) Electricity (kWh) Bio-gas (m3) Solar (MJ) LPG (kg) Conversion coefficient Village differences in household energy consumption Overall structure of household energy consumption Presently, rural household energy use primarily includes cooking and heating (using either a stove or a kang bed stove).
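Before turning to the results, the calculation method above can be illustrated with a minimal Python sketch: the physical quantities of each fuel used for each purpose (the x_ij) are converted to kilograms of coal equivalent (KgCE) and summed. The conversion coefficients shown are commonly cited standard values and are assumptions made for this example, not the values of Table 2, and the household data are hypothetical.

# Assumed standard-coal conversion coefficients (KgCE per physical unit);
# illustrative values only, not taken from Table 2.
KGCE_PER_UNIT = {
    'crop_straw_kg': 0.50,
    'firewood_kg': 0.571,
    'coal_kg': 0.7143,
    'electricity_kWh': 0.1229,
    'biogas_m3': 0.714,
    'lpg_kg': 1.7143,
}

def household_total_kgce(consumption):
    # Total household energy Te in KgCE. `consumption` maps each end use
    # (cooking, heating kang, ...) to the physical quantities of each fuel
    # used for that purpose, i.e. the x_ij of the formula above.
    return sum(KGCE_PER_UNIT[fuel] * amount
               for uses in consumption.values()
               for fuel, amount in uses.items())

# Hypothetical household: straw and coal for cooking, coal for the stove,
# straw for the kang, plus a small amount of electricity for lighting.
example = {
    'cooking': {'crop_straw_kg': 800.0, 'coal_kg': 300.0},
    'heating_stove': {'coal_kg': 500.0},
    'heating_kang': {'crop_straw_kg': 600.0},
    'lighting': {'electricity_kWh': 150.0},
}
print(f"Te = {household_total_kgce(example):.1f} KgCE")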
Heating water, slow-boiling tea, lighting, and electrical home appliances (e.g., televisions, music equipment, video recorders, washing machines) all consume small amounts of energy. Existing energy sources include crop straw, coal, grass, wood, animal dung, electricity, solar energy, and LPG (liquefied petroleum gas). According to the questionnaire data on the consumption of all types of energy for each end use, we derived the household energy consumption structure for the three types of villages (Table 3). The household energy totals for mountainous areas, semi-mountainous areas, and plains areas were 2030.52, 1921.54, and 2166.48 KgCE, respectively. Structure of rural household energy consumption for the three types of villages (KgCE) Crop straw Bio-gas Mountainous village Boiling tea Heating kang Semi-mountainous village Plains village Village differences in energy portfolio characteristics According to the survey data, rural household energy consumption indicated distinct energy portfolio characteristics. The mountainous area households mainly used straw, coal, dung, solar energy, and wood, and used bio-gas only as an auxiliary energy source. The semi-mountainous area households mainly used coal, grass, straw, and solar energy, and used wood only as an auxiliary energy source. The plains area households mainly used coal, grass, and straw, and used solar energy only as an auxiliary energy source (Fig. 3). Structure types of energy consumption in the three types of villages The data in Table 3 indicate that the mountainous area households consumed the greatest amount of straw, approximately 833.18 KgCE, and the plains area households consumed the least amount of straw, only 162.48 KgCE. Semi-mountainous area households consumed approximately 338.69 KgCE of straw. Straw is generally a self-supplied energy source for a peasant household, and the amount available depends on the farmers' planting structures and production. Large amounts of farmland are available for each farmer, and the farmers in hilly mountainous areas grow traditional agricultural crops to obtain more straw. In the plains areas, farmers plant fruit trees, vegetables, and other economic crops. Only a few farmers grow traditional crops, and little straw is produced. The coal consumption per household in the plains areas was 1354.6 KgCE, and in the semi-mountainous areas, consumption was 1007.41 KgCE. In the mountainous areas, consumption was 655.23 KgCE, less than half of that in the plains areas. Commercial energy consumption is affected by the farmers' economic situation and the amount of self-produced energy. Plains area farmers use less noncommercial energy, and most farmers bought coal as a supplement. In addition, we determined that many poor farmers in the plains areas continue to use grass as their primary energy resource; among these households, coal use was seldom reported during the survey. The plains area households consumed the most grass, followed by the semi-mountainous areas and mountainous areas. In our survey, the three types of areas consumed 439.62, 386.58, and 240.40 KgCE of grass, respectively. The kang was the most basic heating method for mountainous areas during the winter. Fuels consumed included straw, grass, dung, and coal dust. Farmers harvested a small amount of stalks in the plains areas and in general did not raise large animals that would produce manure for fuel; therefore, grass was commonly used to heat kangs.
Electricity was a widely used clean energy in the Loess hilly region of Gansu, but overall clean energy consumption was rare. The plains areas consumed the most electricity at 26.63 KgCE; the semi-mountainous areas consumed 22.33 KgCE, and the mountainous area households consumed the least at only 16.68 KgCE (see Table 3). According to the surveys, when consuming energy, farmers select the type of energy based on cost, availability, convenience, and cleanliness. Energy cost was the primary factor considered when consumers selected which fuel to use, because their choices were limited by their ability to pay. Ordered from the highest to the lowest, the types of household energy used were the following: straw and grass, household bio-gas, coal, electricity, and liquefied gas. Three other factors (availability, convenience, and cleanliness) were also considered when the farmers chose fuels. Real household energy consumption patterns were determined by the farmers' selection of energy sources under existing conditions. There continued to be an energy shortfall after peasants consumed all of the biomass fuel; therefore, solar stoves, coal, and LPG were needed to meet their basic energy needs. The characteristics of the combined energy sources provided evidence that farmers used their own resources to maximize utility.

Village differences in energy consumption structure

Regarding household energy, heating and cooking consumed the most energy in the Loess hilly region of Gansu. In the mountainous, semi-mountainous, and plains areas, heating and cooking accounted for 98.98, 99.11 and 98.94 % of the total life-sustaining energy consumption, respectively (Fig. 4).

Energy consumption in the three types of villages

Kangs and stoves were the primary heating methods in the rural areas and consumed the most energy. In the mountainous areas, the heating energy consumption per household was 1173.19 KgCE in the winter, approximately three fifths of all consumed energy, and the kang accounted for most of the heating energy use. In the semi-mountainous areas, the heating energy consumption per household was 1034.9 KgCE; the kang and the stove were used equally for heating, each accounting for approximately half of the usage. The per-household heating energy consumption was 1248.38 KgCE in the plains areas, where the stove was primarily used for heating and accounted for 60 % of overall usage. Overall, heating consumed more fuel in the plains areas, where the stove was the primary heating method; coal consumption produces more effective heat energy and the heating effect is superior. The consumption of cooking energy in peasant households in the mountainous, semi-mountainous, and plains areas was 833.68, 860.66, and 893.1 KgCE, respectively. In the mountainous areas, straw was used primarily and coal was a secondary energy source. In the semi-mountainous areas, coal was primarily used and straw was a secondary energy source. Coal comprised up to 65 % of the cooking energy in the plains areas. For lighting and electrical appliances, the usage proportion was less than 1 %. Residents of villages in the Loess hilly region drank canned tea, which consumed little energy. The data indicated that the hilly areas used firewood for preparing tea, the plains areas used electricity, and the semi-mountainous areas used firewood and electricity equally. Household energy use structures, to some degree, reflected the local residents' living standards.
Overall, in the Loess hilly region in Gansu, cooking and heating, which were the basic survival needs, were the main energy usage. Entertainment, television, audio, and other household appliances occupied only a small proportion of energy consumption. Discussion of rural household energy construction Rural energy construction must take advantage of local resources. In the Loess region of the Gansu Province, where rural households are impoverished and water loss and soil erosion are serious matters, the issue of household energy shortages must be solved to improve the environment. Bio-gas digesters, energy-saving kangs, and solar stoves should be more widely utilized. This study revealed that a beneficial cycle has been established within the ecosystem that closely linked bio-gas production, crop cultivation, and household animal husbandry due to the effective use of nutrient materials, biological energy, and solar energy in this region (Fig. 5). Household heating played an important role in many aspects of energy demand. For a long period of time, Chinese kangs and stoves have been used in rural areas for household heating purposes. These two appliances provide heat for rural residents during extremely cold weather; however, indoor thermal comfort remains poor throughout winter. Innovating traditional kangs and building energy-saving kangs are primary methods to enhance quality-of-life and to protect the environment, both of which have important strategic significance for sustainable energy use in rural areas. Energy flow in the rural energy consumption system People in mountainous areas and semi-mountainous areas are subject to shortages of water, fuel, and fertilizer, which hinder local economic development. Rural energy construction should be based on terracing and rainwater collection. The primary energy construction components include bio-gas digesters, solar cookers, energy-saving kangs, sunlight greenhouses, and water cellars. A water cellar is a type of rainwater collection technology invented by farmers in Gansu Province in China for survival in a harsh, dry environment where rainfall only occurs in July, August, and September. A water cellar is used to store rainwater runoff within a cemented cellar during the wet season for residents and livestock to use during the dry season. A water cellar may store 30–40 m3 of water and can sustain a family of five for up to 5 months. For villages in the plains with superior economic conditions, the primary energy production was vegetable and fruit cultivation. Bio-gas acted as a link, driving livestock husbandry, reducing overall agricultural production costs, and producing pollution-free vegetables, thus improving farmers' incomes and promoting rural economic development. The primary energy construction elements included greenhouses, bio-gas digesters, energy-saving kangs, and solar cookers. By improving economic conditions for families, enhancing resource availability, and changing consumption practices, we can constantly upgrade rural energy consumption structures and gradually diversify it from a single traditional energy consumption structure toward high-quality energy consumption. This transition allows us to take effective measures to promote and guide this shift, which is of great significance for ecological protection and improving consumption structures. In this study, we analyzed current rural household energy consumption structures. 
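As a rough check on the water-cellar figures quoted earlier in this section (30 to 40 m3 sustaining a family of five for up to 5 months), the implied daily allowance is simple to work out; a small sketch, assuming 30-day months and ignoring the livestock share:

```python
# Rough check of the water-cellar storage figures quoted in the text.
CAPACITY_M3 = (30, 40)       # stated storage range
HOUSEHOLD_SIZE = 5           # people (livestock use is not separated out here)
DAYS = 5 * 30                # about five months, assumed 30-day months

for capacity in CAPACITY_M3:
    litres_per_person_day = capacity * 1000 / (HOUSEHOLD_SIZE * DAYS)
    print(f"{capacity} m3 -> {litres_per_person_day:.0f} L per person per day")
```

The result (roughly 40 to 53 litres per person per day, shared with livestock) is consistent with a cellar covering only basic household needs through the dry season.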
This study differs from prior studies because we focused on differences among different types of villages, using data collected by questionnaires from 371 rural households in the Loess hilly region of Gansu Province, China. According to the study, the household energy totals for mountainous, semi-mountainous, and plains areas were 2030.52, 1921.54, and 2166.48 KgCE, respectively. Total energy consumption in these rural areas was relatively low: the energy sources that peasants obtained only met their basic living demands and were insufficient to improve their living standards. This article revealed differences in energy portfolio characteristics among the different village regions. The mountainous areas mainly used straw, coal, dung, solar energy, and wood, with bio-gas only as an auxiliary energy source. The semi-mountainous areas mainly used coal, grass, straw, and solar energy, with wood only as an auxiliary energy source. The plains areas mainly used coal, grass, and straw, with solar energy as an auxiliary energy source. People in the mountainous areas depend on straw the most; those in the plains areas depend on straw the least. People in the mountainous areas used the kang primarily for heating in the winter, those in the semi-mountainous areas used both the kang and the stove for heating, and those in the plains areas primarily used the stove for heating. The characteristics of the energy combinations provide evidence that farmers make full use of their various resources to maximize utility. Economic conditions, resource availability, and consumption practices are the primary influencing factors that determine rural energy consumption levels and structural change. The improvement of farmers' economic conditions eases the limits on their ability to purchase energy and plays a positive role in reducing biomass energy consumption. Currently, the energy structure is undergoing a historic transformation. Rural energy construction must take advantage of local resources, matching demand with local potential so as to support rural economic and social development. It is important to consider energy conservation, emissions reduction, and ecological protection; however, we must also consider the actual capacity of the farmers. Applications of new technologies and appliances should be technically feasible and economically rational. This study contributes to the current knowledge regarding household energy consumption at the micro-scale, the development of more effective intervention strategies, long-term energy conservation, and sustainable development. The limitations of this study lie in two aspects. First, this paper examines village differences in rural household energy consumption, taking the Loess hilly region of China as an example; however, further study of the underlying mechanisms is still needed. Second, topographic conditions affect agricultural activities and income structure and thereby influence household energy consumption. Future studies may therefore expand the sample size and improve the methods of analysis, exploring in depth the optimal rural energy development under different topographic conditions.

EHP: Ecology Household Project; KgCE: kilograms of coal equivalent

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions. This research was supported by the National Social Science Fund of China (No. B13CSH068) and the Humanities and Social Science Research Project of Chinese Ministry of Education (No. 10YJCZH070).
GL designed the study, conducted the survey, and drafted the article. JS and AD proofread and edited the article and designed the figures. All authors read and approved the final manuscript.

Additional file 1: Questionnaire of rural household energy consumption. (DOCX 26 kb)

College of Tourism and Geographical Sciences, Jilin Normal University, Siping, 136000, China

Sayin C, Mencet M, Ozkan B (2005) Assessing of energy policies based on Turkish agriculture: current status and some implications. Energy Policy 33:2361–2373
Abbas M, Ahmad J, Edmundas K, Fausto C, Zainab K (2015) Sustainable and renewable energy: an overview of the application of multiple criteria decision making techniques and approaches. Sustainability 7:13947–13984
EIA (2009) Annual energy review 2009. US Energy Information Administration, Washington
EEA (2008) Energy and environment report. Copenhagen: European Environment Agency (EEA), EEA Report No 6/2008
Ouyang J, Hokao K (2009) Energy-saving potential by improving occupants' behavior in urban residential sector in Hangzhou City, China. Energy Build 41:711–720
Wang Z, Zhang B, Yin J, Zhang Y (2011) Determinants and policy implications for household electricity-saving behaviour: evidence from Beijing, China. Energy Policy 39:3550–3557
Song M, Song Y, An Q, Yu H (2013) Review of environmental efficiency and its influencing factors in China: 1998–2009. Renew Sustain Energy Rev 20:8–14
Zhou K, Yang S, Shen C, Ding S, Sun C (2015) Energy conservation and emission reduction of China's electric power industry. Renew Sustain Energy Rev 45:10–19
National Bureau of Statistics of China (NBSC) (2014) China energy statistical year-book. China Statistics Press, Beijing
Zhai F (2003) China's rural energy development policy adjustment problem study. J Nat Resour 18:81–86
Wang X, Feng Z, Jiang K (1999) On household energy consumption for rural development: a study on Yangzhong County of China. Energy 24:493–500
Catania P (1999) Chinese rural energy system and management. Appl Energy 64:229–240
Ghiorgis W (2002) Renewable energy for rural development in Ethiopia: the case for new energy policies and institutional reform. Energy Policy 30:1095–1105
Zhong Y, Cai W, Wu G, Ren H (2009) Incentive mechanism design for the residential building energy efficiency improvement of heating zones in North China. Energy Policy 37:2119–2123
Yan L, Min Q, Cheng S (2005) Energy consumption and bio-energy development in rural areas of China. Resour Sci 27:8–13
Taele B, Gopinathana K, Mokhuts'oane L (2007) The potential of renewable energy technologies for rural development in Lesotho. Renew Energy 32:609–622
Frederick N, Reccab O, Ochieng M (2006) The potential of solar chimney for application in rural areas of developing countries. Fuel 85:2561–2566
Sateikis I, Lynikiene S, Kavolelis B (2006) Analysis of feasibility on heating single family houses in rural areas by using sun and wind energy. Energy Buildings 38:695–700
Madubansi M, Shackleton C (2006) Changing energy profiles and consumption patterns following electrification in five rural villages, South Africa. Energy Policy 34:4081–4092
Guo X, Niu S, Li G, Wang H (2006) Estimate on the eco-economic benefits of rural energy sources construction and de-farming and reforestation. China Popul Resour Environ 16:98–102
Li G, Nie H, Yang Y (2010) The regional differences and influencing factors of China's rural life energy consumption. J Finance Econ Shanxi Univ 32:68–73
Valmiki M, Li P, Heyer J, Morgan M, Albinali A, Alhamidi K (2011) A novel application of a Fresnel lens for a solar stove and solar heating. Renew Energy 36:1614–1620
Chen L, Zhao L, Ren C, Wang F (2012) The progress and prospects of rural biogas production in China. Energy Policy 51:58–63
Nandwani S (1996) Solar cookers cheap technology with high ecological benefits. Ecol Econ 17:73–81
Niu H, He Y, Umberto D, Zhang P, Qin H, Wang S (2014) Rural household energy consumption and its implications for eco-environments in NW China: a case study. Renew Energy 65:137–145
Li G, Niu S (2008) Economic cost analysis of rural life energy consumption environment in the loess hilly region. J Nat Resour 23:15–24
Mo D, Li F, Li S, Kong Z (1996) A preliminary study on the pale environment of the middle Holocene in the Hulu River area in Gansu Province and its effects on human activity. Acta Geograph Sin 51:59–67
Li G, Dong D (2010) Environmental economic benefit analysis of rural household biogas construction in Wei River upstream of the loess hilly region. Renew Energy 28:115–123
Niu S, Zhang X, Zhao C, Niu Y (2012) Variations in energy consumption and survival status between rural and urban households: a case study of the Western Loess Plateau, China. Energy Policy 49:515–527
Qu W, Tu Q (2013) Which factors are effective for farmers' biogas use?—Evidence from a large-scale survey in China. Energy Policy 63:26–33
National Bureau of Statistics (2013) China Energy Statistical Yearbook 2012. China Statistics Press, Beijing
CommonCrawl
Plugging equation into itself; works only for finite-solution equations?

In this old post, substituting a modified - but completely equivalent - form of the equation ${\sqrt {x+1}+\sqrt{x+2}=1}$ back into itself yields its solution, as opposed to doing it for a linear equation in two variables, where the result is always a tautology. The question asks how this difference arises. The only reason I can think of for why this happens is this: equations with finitely many solutions can sometimes be manipulated in specific ways to produce roots, which is not the case with infinite-solution equations. For example, the quadratic formula for solving the general quadratic is just a manipulation we did by completing the square, but it does the trick and gives us the answer. The same goes for cubic equations (Cardano's formula). But for cases like $3x+2y=1$, you can't solve it in that sense because it doesn't have a practically obtainable set of solutions. The conjugate multiplication the OP did in the linked question is just another manipulation as in the first case, which yielded the solutions, and predictably, nothing happens with the two-variable equation. I'm asking separately whether this view is correct, because none of the answers there seem to use it, and also because:

1. It doesn't seem to work for all equations with finitely many solutions, like $x^2+y^2=0$. So what exactly is the flaw with this line of reasoning?
2. Only some manipulations seem to yield solutions (if I had substituted $\sqrt {x+1}=1-\sqrt{x+2}$, nothing would have happened). If there is a reason for it, what is it, and what exactly makes particular manipulations fruitful?

Edit: I wanted to ask about solving single equations, sorry about the confusion.

algebra-precalculus polynomials substitution
harry

I agree with Marc van Leeuwen whose answer is this. – mathlove

@mathlove: is it that isolating a variable and substituting it back always yields a tautology (as with the linear equation), and it's because it wasn't a variable substitution, but just some manipulation that the OP did on the other equation, that a solution was obtained? – harry

(1) Solving $\sqrt{x+1}+\sqrt{x+2}=1$ gives $x=-1$, and substituting $x=-1$ into the equation gives $0=0$. (2) From $\sqrt{x+1}+\sqrt{x+2}=1$, one obtains $\sqrt{x+2}-\sqrt{x+1}=1$. Subtracting the latter from the former, one gets $\sqrt{x+1}+\sqrt{x+2}=\sqrt{x+2}-\sqrt{x+1}$, i.e. $2\sqrt{x+1}=0$. I would not call this "substituting" since "you combined the equation with a modified form of itself to obtain a new equation that is implied by the original one" as Marc van Leeuwen said.

@mathlove: okay, it wasn't substitution. Would my earlier comment be correct in that case? Because even if it's a modified form that we're combining the original with, it doesn't seem rigorous enough that 'modification' should determine whether we obtain a tautology or solutions.

If you have an equation $f(x)=g(x)$, and get a modified form $h(x)=g(x)$ where $f(x)\not=h(x)$, and combine the first equation with the second one to have $f(x)=h(x)$, then $f(x)=h(x)$ is an equation, and so you'll finally obtain solutions.

$\require{cancel}$ All solutions of $x^2+y^2=0$ are complex. Substitution for $ax+by=c$ ends up as $x=x$, but it does work for simultaneous equations. Double squaring works for this problem by eliminating square roots.
\begin{align*} \sqrt {x+1}+\sqrt{x+2}&=1\\ \\ \big(\sqrt {x+1} +\sqrt{x+2}\space \big)^2 &=2 x + 2 \sqrt{x + 1} \sqrt{x + 2} + 3=1^2\\ \big( 2 \sqrt{x + 1} \sqrt{x + 2}\space \big)^2 &=(1-3-2x)^2\\ \bcancel{4 x^2} + 12 x + 8 &= \bcancel{4 x^2} + 8 x + 4\\ \\4x=-4\implies x&=-1 \end{align*}

– poetasis

I'm sorry, I had meant single equations, not systems. I'll fix it, could you change your answer accordingly? Also, isn't (0, 0) a solution, in (1)?

@harry I meant for example that $x=x=0=0$. For substitution in single equations, the result is always $0=0$. I just mentioned systems of equations for substitution because that's the only place it works, to my knowledge. – poetasis

(1) No, it works also for infinite-solution equations in $\mathbb{R}$, e.g., $$ \lvert\sqrt{x+1}+\sqrt{x}-1\rvert+\lvert y+z\rvert=0. $$ (2) The real reason is that the definition of $\sqrt{\dots}$ (say in the nonnegative reals $\mathbb{R}^+$ so it is unambiguous) means that $\sqrt{x+2}+\sqrt{x+1}=1$ is not just one equation but three: \begin{align} a+b&=1\tag{1}\\ a^2&=x+2\tag{2}\\ b^2&=x+1\tag{3} \end{align} with the constraints $a,b\in\mathbb{R}^+$, and we are really manipulating these subequations instead, namely $((2)-(3))\div (1)$ to get $a-b=1$. So this is really not "manipulating one equation and substituting it back into itself" when you really think about it. (Alternatively, what you have done is applying an automorphism to an equation, but that is definitely too advanced for a question tagged algebra-precalculus.)

I'm sorry, I had meant single equations, not systems. I'll fix it, could you change your answer accordingly?

@harry Over $\mathbb{R}$ we have tools like $\sum a_i^2=0$ iff $a_i=0$ for all $i$, or using absolute value for the same thing, so a system of equations is no different from a single equation.

Got it. But how exactly does substituting back work in the example used in your (1)? Also, in (2), aren't the extraneous roots $a=-\sqrt{x+2}$ and $b=-\sqrt{x+1}$ created? (I understand you've set the square root operation as being a function from $R^+\to R^+$, but this problem still persists.)

Related:
Substituting an equation into itself, why such erratic behavior?
Find the roots of the quadratics function
If $m = 6x + 5$, what equation is equivalent to $(6x + 5)^2 - 10=-18x - 15$ in terms of $m$?
Is it possible to create a system of two equations with 3 variables and only one solution?
Cubic equation...
How can I write $\frac{1}{x+2} - \frac{4}{x} - 1 = 0$ as $x^{2} + 5x + 8 = 0$?
Is this solution for a equation above 2nd degree correct?
What's wrong with manipulating this algebraic equation? and why does a manipulated system of equations have a different solution than the original?
Inspiration behind substitution of main variable in Cardano's solution
Is there a formal name for equations and inequalities containing parameters (known variables, coefficients)?
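As a quick sanity check on the algebra in the answers above, the original equation and the conjugate manipulation can be verified with a computer algebra system; a minimal sketch using sympy (not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
lhs = sp.sqrt(x + 1) + sp.sqrt(x + 2)

# Solve the original equation sqrt(x+1) + sqrt(x+2) = 1 directly.
print(sp.solve(sp.Eq(lhs, 1), x))          # [-1]

# The "modified form": since (x+2) - (x+1) = 1, dividing by the original
# equation gives sqrt(x+2) - sqrt(x+1) = 1. Combining the two equations
# forces 2*sqrt(x+1) = 0, which pins down the single root.
combined = sp.Eq(lhs, sp.sqrt(x + 2) - sp.sqrt(x + 1))
print(sp.solve(combined, x))               # [-1]

# Substituting x = -1 back into the left-hand side recovers 1, as expected.
print(lhs.subs(x, -1))                     # 1
```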
CommonCrawl
BMC Biotechnology

Characterization of a cold-active esterase from Serratia sp. and improvement of thermostability by directed evolution

Huang Jiang1,2, Shaowei Zhang1,2, Haofeng Gao1 & Nan Hu1

BMC Biotechnology volume 16, Article number: 7 (2016)

In recent years, cold-active esterases have received increased attention due to their attractive properties for some industrial applications, such as high catalytic activity at low temperatures. An esterase-encoding gene (estS, 909 bp) from Serratia sp. was identified, cloned and expressed in Escherichia coli BL21 (DE3). The estS gene encoded a protein (EstS) of 302 amino acids with a predicted molecular weight of 32.5 kDa. It showed the highest activity at 10 °C and pH 8.5. EstS was cold active and retained ~92 % of its original activity at 0 °C. Thermal inactivation analysis showed that the T1/2 value of EstS was 50 min at 50 °C (residual activity 41.23 % after 1 h incubation). EstS is also quite stable under high salt conditions and displayed good catalytic activity in the presence of 4 M NaCl. To improve the thermo-stability of EstS, variants of the estS gene were created by error-prone PCR. A mutant, 1-D5 (A43V, R116W, D147N), that showed higher thermo-stability than its wild-type predecessor was selected. 1-D5 showed an enhanced T1/2 of 70 min at 50 °C and retained 63.29 % of activity after incubation at 50 °C for 60 min, which were about 22 % higher than the wild type (WT). CD spectra showed that the secondary structures of WT and 1-D5 are more or less similar, but an increase in β-sheets was recorded, which enhanced the thermostability of the mutant protein. EstS was a novel cold-active and salt-tolerant esterase, and the half-life of mutant 1-D5 was enhanced by 1.4 times compared with WT. The features of EstS are interesting and can be exploited for commercial applications. The results have also provided useful information about the structure and function of the Est protein.

Esterases (EC 3.1.1.1, carboxyl ester hydrolases), lipases (EC 3.1.1.3, triacylglycerol hydrolases) and phospholipases, commonly referred to as lipolytic enzymes, principally catalyze the hydrolysis and synthesis of acyl glycerides and other fatty acid esters [1]. Lipolytic enzymes produced by psychrophilic microorganisms in cold environments can be active and stable at low temperatures compared with mesophilic enzymes. Esterases have gained immense importance in the pharmaceutical, polymer, food, flavor, oleochemical, biofuel and detergent industries [2]. The high catalytic activity of these lipolytic enzymes at low temperatures makes them more useful for commercial applications [3]. Recently, many cold-active lipolytic enzymes from psychrophiles and psychrotrophs have been discovered and characterized [4–6]. In esterases, the catalytic triad is commonly formed by Ser, His and Asp residues, arranged in the order Ser-Asp-His, and the nucleophilic serine residue is usually embedded in a conserved pentapeptide motif (G-X-S-X-G) [7]. In the past decade, rapid progress has been achieved in the production of recombinant esterases by directed evolution, mutagenesis, structural analysis and protein engineering [8]. Therefore, attempts have been made to enhance the thermo-stability of lipolytic enzymes by directed evolution [9, 10]. Directed evolution is generally used to generate desired variants and to investigate structure-function relationships without detailed structural information [11, 12].
Directed evolution creates molecular diversity by various methods such as error-prone PCR, site-specific saturation mutagenesis and DNA shuffling [13–15]. Recently, an esterase of pig liver origin mutated at F407I was used to resolve the racemic mixture of clopidogrel [16]. Our previous studies also confirmed that a cold active esterase (Est11), produced from Psychrobacter pacificensis, was salt tolerant, highly active at low temperatures, and its biochemical characteristics made it important for commercial applications [17]. Serratia is a genus of Gram-negative, rod-shaped bacteria and is also a well-known source for chitinase production [18]. Only few reports of esterase from Serratia sp. are available in literature. In this study, a gene encoding a cold-active esterase, termed EstS, was cloned from the marine bacterium Serratia sp. and expressed in E. coli. The recombinant enzyme was purified to homogeneity and characterized. Furthermore, the effect of altered amino acid on the thermo-stability of esterase was studied by 3D structural model of esterase and circular dichroism (CD) analysis. Gene cloning and sequence analysis The esterase gene, estS, was successfully cloned from Serratia sp. genomic DNA. The gene was 909 bp long and encoded a protein (EstS) of 302 amino acids with a theoretical molecular weight of 32.5 kDa. The SignalP 4.1 Server predicted no signal peptide for EstS. BLASTP revealed that the translated protein sequences of EstS showed a high sequence identity to esterase [GenBank: WP_020827011.1] from Serratia liquefaciens ATCC 27592 (identity, 98 %), esterase [GenBank: WP_044551387.1] from Serratia liquefaciens FK01 (92 %), and esterase [GenBank: WP_037415500.1] from Serratia grimesii (82 %). A multiple sequence alignment of EstS was performed with thermophilic carboxylesterase Est2 [PDB: 1EVQ_A] from Alicyclobacillus acidocaldarius (identity: 41 %), esterase PestE [PDB: 2YH2_A] from Pyrobaculum calidifontis (41 %), carboxylesterase Este1 [PDB: 2C7B_A] from a metagenomic library (40 %), esterase Lpest1 [PDB: 4C88_A] from Lactobacillus plantarum (29 %), and all of them were retrieved using BLASTP in the NCBI and PDB database (Fig. 1). The classical catalytic triad consisting of Ser 157, Asp 252 and His 282 was identified and the active site Ser 157 residue was located within the conserved pentapeptide motif (Gly-X-Ser-X-Gly). Multiple alignments of EstS and other four esterases. The four esterases are Est2 [PDB: 1EVQ_A] from Alicyclobacillus Acidocaldarius, PestE [PDB: 2YH2_A] from Pyrobaculum calidifontis, Este1 [PDB: 2C7B_A] from a metagenomic library and Lpest1 [PDB: 4C88_A] from Lactobacillus Plantarum. The identical and conserved residues are shaded. The conserved G–X–S–X–G motif and the catalytic triad (Ser, Asp, and His) were indicated by red box and black triangle, respectively Screening of random mutant library Screening of random mutant library was based on retention of esterase activity after incubation at high temperature. A clone, designated as 1-D5, which displayed higher thermo-stability than WT enzyme, was selected from over 8000 clones. Sequencing of gene revealed that 1-D5 showed three alterations in amino acid residues (A43V, R116W, D147N). Expression and purification The protein WT and mutant fused with GST tag (58 kDa) were efficiently induced and overexpressed as a soluble, catalytically active form in host strain at 15 °C. 
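Relating back to the sequence analysis above, the conserved pentapeptide around the nucleophilic serine can be located with a simple pattern search; a minimal sketch, using a made-up fragment because the full EstS sequence is not reproduced in the text:

```python
import re

# Hypothetical fragment used only to illustrate the search; substitute the real
# EstS sequence to locate its catalytic serine (Ser157 in the paper).
sequence = "MKLAEVLLLGDSAGGNLAHAVTLRAAKEGGFDSAGGHSAG"

# G-X-S-X-G: glycine, any residue, serine, any residue, glycine.
motif = re.compile(r"G.S.G")

for match in motif.finditer(sequence):
    ser_position = match.start() + 3  # 1-based position of the nucleophilic serine
    print(f"motif {match.group()} at {match.start() + 1}-{match.end()}, Ser at {ser_position}")
```

In EstS itself, this motif is the Gly-X-Ser-X-Gly pentapeptide containing the catalytic Ser157 identified above.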
The purified WT and mutant (~32 kDa) were detected by SDS-PAGE as a single band, which was consistent with the value predicted from the deduced amino-acid sequence (Fig. 2). SDS-PAGE analysis of purified EstS protein. M: Protein molecular weight marker; 1: Uninduced cell lysate of E. coli BL21 (DE3) harboring pGEX-6P-1; 2: IPTG-induced of cell lysate of E. coli BL21 (DE3) harboring pGEX-6P-1; 3: Uninduced cell lysate of E. coli BL21 (DE3) harboring pGEX-6P-estS; 4: IPTG-induced of cell lysate of E. coli BL21 (DE3) harboring pGEX-6P-estS; 5: Purified EstS. The protein GST-EstS is indicated by arrow Substrate specificities The substrate specificity of WT was determined against various aliphatic acyl-chain p-NP esters (C2-C16). EstS showed the maximum hydrolytic activity towards p-NP acetate (C2), but no activity toward p-NP palmitate (Fig. 3). The results indicated that purified protein was an esterase rather than a lipase due to its preference for short acyl-chain p-NP esters. Substrate specificity of the purified EstS. The esterase activity of EstS was tested with various chain lengths of p-NP esters (C2, C4, C6, C8, C12 and C16) in 50 mM Tris – HCl, pH 8.5, at 30 °C. The activity against p-NP acetate (C2) was taken as 100 %. All measurements were performed in triplicate Biochemical characterization of WT and mutant The optimum activity of WT and mutant was measured over a temperature range of 0–80 °C and a pH range of 5–10 WT showed the maximum activity around 10 °C and retained nearly 92 % activity at 0 °C. The properties of higher hydrolytic activity at a low temperature indicated that EstS was a cold-active enzyme. However, no change in optimum temperature of mutant esterase was observed when compared with WT (Fig. 4a). Thermo-stability analysis showed that T1/2 of WT esterase was about 50 min at 50 °C with 41.23 % of its original activity after 1 h incubation. Also, complete loss of WT activity was reported after 20 min incubation at 55 °C (Fig. 4b). In contrast, the mutant enzyme showed enhanced T1/2 of 70 min at 50 °C and retained 63.3 % of its initial activity after incubation at 50 °C for 60 min, which were about 22 % higher than WT. The pH activity profile of WT and mutant was examined over the pH range of 5–10 under optimized assay conditions (Fig. 4c). The optimal pH of WT and mutant esterase was found to be 8.5. WT was stable over a wide pH range of 5.5–9.5, but almost inactive at pH 5 when compared with 1-D5 (Fig. 4d). Effect of temperature and pH on enzyme activity and stability of WT and mutant. a The effect of temperature on enzyme activity. The temperature-activity profile was measured at a temperature range of 0 to 80 °C in 50 mM Tris–HCl buffer (pH 8.5). Activity value obtained at 10 °C was defined as 100 %. b Temperature stability. The WT and mutant enzyme was incubated at 45 (○ WT; ● 1-D5), 50 (□ WT; ■ 1-D5) and 55 °C (▼ WT; ▲ 1-D5) for various time intervals and the residual activity was measured. The specific activity without incubation was taken as 100 %. c The effect of pH on enzyme activity. The pH-activity profile was determined in phosphate–citrate buffer (pH 5.0–7.0) and 50 mM Tris–HCl buffer (pH 7.0–10.0) at 10 °C. The activity at pH 8.5 was defined as 100 %. d pH stability. The activity was determined by pre-incubating enzyme solutions in different pHs buffers at 4 °C for 24 h and the residual activity was measured under standard condition. 
The residual activity after treatment with pH 7.0 buffer was shown as 100 % The effects of various additives on the EstS activity were examined. EstS was slightly activated by 1 mM Mg2+ (remaining activity, 121 %), 1 and 5 mM Mn2+ (120 %, 114 % respectively), whereas it was fairly inhibited by 1, 5 mM Zn2+ (83 %, 71 %) and Cu2+ (87 %, 52 %) and 5 mM PMSF (89 %). The mutant 1-D5 showed a similar behavior in the presence of metal ions and relative activity was slightly increased by Mn2+ and strongly decreased by the presence of Zn2+ and PMSF. However, the activity of EstS was not affected by the presence of EDTA, Ca2+, Ba2+ and Sr2+ (Table 1). The activity of EstS was also strongly reduced by higher concentrations of isopropanol, methanol and ethylene glycol (20, 30 %) and was almost completely inhibited by N-butyl alcohol and acetonitrile. Our data also showed that ethanol (10–30 %) increased the activity of the enzyme WT by more than 10 % (Table 2). However, the mutant 1-D5 displayed highest activity in the presence of ethylene glycol (20 % v/v) and a slight decrease in the relative activity was observed in the presence of DMSO and short chain alcohols. Non-ionic detergents such as Triton X-100, Tween 20, Tween 80 and CHAPS showed no significant effect on the enzyme activity of EstS WT and mutant while the anionic detergent SDS almost inactivated it (Table 3). Table 1 Effects of various reagents on the activity of EstS and mutant Table 2 Effect of organic solvents on the activity of Est WT and 1-D5 Table 3 Effect of detergents on the activity of WT and 1D-5 esterase The effect of NaCl on the EstS enzyme activity and stability was further investigated. EstS was not significantly affected and remained robust even at a concentration of 4 M NaCl (Fig. 5). EstS was NaCl- tolerant and retained more than 80 % of its original activity after 24 h incubation at 4 °C in the solution of high salinity (2.4 M NaCl) as depicted in Fig. 5. Effects of NaCl on activity and stability of EstS. The enzyme activity (●) was assayed in 50 mM Tris–HCl buffer (pH 8.5) containing 0–4 M NaCl and the residual activity (■) of EstS was measured after incubating with 0–4 M NaCl (pH 8.5) at 4 °C for 24 h Kinetic measurements The kinetic parameters of WT and mutant toward p-NP acetate were investigated (Table 4). The mutant 1-D5 displayed an 18 % increase in K m and an 8 % increase in K cat , leading to approximately a 10 % decline in catalytic efficiency K cat / K m (Table 4). Table 4 The kinetic parameters of the wild-type EstS and its mutant The homology models were constructed using swiss-model server with the crystal structure of thermophilic carboxylesterase Est2 [PDB: 1EVQ_A] from Alicyclobacillus acidocaldarius as the template (41 % identity to EstS). Both the predicted models of the WT and mutant enzyme exhibit the typical α/β hydrolase fold, which was characteristic of lipolytic enzymes (Fig. 6). The electrostatic potential of EstS was calculated and described (Fig. 7). The distribution of charges revealed that EstS had high negative charges on the surface. Three-dimensional model of 1-D5. The catalytic sites and substitution sites were displayed with stick-ball model The surface electrostatic potential of EstS. The most negative and most positive electrostatic potentials are indicated by purple and red, respectively. 
The right image is the 180° rotated view of the left one Circular dichroism and secondary structure prediction The program PSIPRED predicted that the residue Arg116 residue was located in a conservative region. A quantitative analysis of the protein secondary structure for WT and mutant was carried out using SELCON3 program. The data showed that the CD spectra of Est WT and 1-D5 was more or less similar and an increase in the percentage of β-sheets was reported by CD analysis (Fig. 8). CD spectra of the EstS and 1-D5 in the far-UV spectral region (195–250 nm). CD spectra of Est WT and 1-D5 was more or less similar and an increase in the percentage of β-sheets was reported by CD analysis In this study, we have identified and characterized an esterase (EstS) from a marine bacterium Serratia sp. EstS preferred short-chain p-nitrophenyl esters as substrate and unable to hydrolyze long-chain p-nitrophenyl esters (C12 and C16). The specificity towards short chain acyl esters indicated that purified protein (EstS) was an esterase. EstS demonstrated the T1/2 of approximately 50 min at 50 °C and retained 41.23 % activity after 1 h incubation. In addition, the activity of EstS WT was increased by low concentration (1 mM) of Mg2+ and Mn2+, partly inhibited by Cu2+ and Zn2+, and completely inhibited by the addition of acetonitrile, n-butanol and SDS. EstS retain its activity and stability between pH 5.5 and pH 9.5 after 24 h at 4 °C. Furthermore, we also report the engineering of EstS by directed evolution. A thermo-stable mutant 1-D5 was selected from the random mutant library constructed by error-prone PCR. 1-D5 showed change in three amino acids (A43V, R116W, D147N). No change in optimum temperature and pH of 1-D5 was observed in comparison to EstS. But the T1/2 at 50 °C of 1-D5 was about 70 min and it retained 63.3 % of activity at 50 °C for 1 h, which were about 22 % higher than WT. Interestingly, EstS displayed a significant adaptation towards low temperature showing the optimal activity at 10 °C and retained 92 % residual activity at 0 °C (Fig. 4a). However, EstS was considerably unstable at temperatures above 55 °C. These unique characteristics indicate that EstS is a cold-active enzyme (low optimal temperature, poor thermo-stability). The value of optimal temperature is certainly lower than other reported cold-active esterases⁄lipases, such as: EstB from Alcanivorax dieselolei B-5(T) which showed optimal activity at 20 °C and retained 95 % of its original activity at 0 °C [19]; Est10 from P. pacificensis displayed optimal activity at 25 °C and retained 55 % of its original activity at 0 °C [20]; rEst97 from deep-sea sediment had shown optimal activity at 35 °C and about 12 % relative activity at 0.5 °C [21]; lipase hiLip1 from uncultured microorganism with the optimal activity at 35 °C and 44 % activity at 10 °C [22]. However, the EstS is slightly more stable at high temperature(s) than all these cold-active lipolytic enzymes, which were rapidly inactivated at 55 °C. The adaptation of cold-active enzyme to low temperatures can be attributed to the conformational flexibility conferred by some structural features: more Gly residues (especially around the active site); less Pro and Arg residues; more Ser and Met [22–24]. Comparatively, EstS has higher percentage of small amino acids Ala (16.56 %) and Gly (8.94 %) than its thermophilic counterpart Est2 (Ala 11.94 %, Gly 7.10 %) from Alicyclobacillus acidocaldarius and PestE (Ala 11.94 %, Gly 7.35 %) from Pyrobaculum calidifontis [25, 26]. 
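The residue-percentage comparison running through this paragraph (Ala and Gly above, Pro and Met below) can be reproduced from any sequence with a few lines of code; a minimal sketch, with a placeholder string standing in for the real 302-residue EstS sequence:

```python
from collections import Counter

def composition_percent(sequence, residues=("A", "G", "P", "M")):
    """Return the percentage of selected residues in a protein sequence."""
    counts = Counter(sequence)
    total = len(sequence)
    return {aa: round(100 * counts[aa] / total, 2) for aa in residues}

# Placeholder sequence; substituting the real EstS, Est2, or PestE sequences would
# reproduce the Ala/Gly/Pro/Met percentages quoted in the text.
example = "MAGAGPSMLAAKGGAMPLAG"
print(composition_percent(example))
```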
Besides, EstS has less Pro (6.95 %) than Est2 and more Met (2.98 %) than Est2 (0.65 %) and PestE (1.92 %), all this is a factor probably contributing to its adaptation to low temperatures. Additionally, another noteworthy property of EstS was its strong tolerance to NaCl. EstS was active from 0 to 4 M NaCl and retained nearly 94 % activity even at a salt concentration of 4 M NaCl (Fig. 5a). Unlike most halophlic enzymes which are inactive or unstable under low salt concentrations, EstS was active even without NaCl [27, 28]. This unique characteristic indicated EstS was halo-tolerant rather than halophilic. Furthermore, the presence of NaCl was unable to improve the activity of EstS as compared with other halo-tolerant and/or halophilic lipolytic enzymes, such as: the esterase EstPc from Psychrobacter cryohalolentis K5, which showed 179 % activity at 1.75 M NaCl [29]; and esterase PE10 from Pelagibacterium halotolerans B2, which exhibited the maximum activity in the presence of 3 M NaCl [30]. Generally, halophilic proteins have a large number of acidic amino acids on the surface, whose negative charge acts to form protective hydrated ion network that keeps the protein stable in high salt concentrations [31, 32]. In the present study, EstS has a higher percentage of acidic amino acid (Asp + Glu: 12.25 %) than the basic amino acid (Arg + Lys: 8.61 %). EstS showed high negative charges on the surface, which is consistent with the distribution of the electrostatic potential of its model (Fig. 7), which clearly indicated that the halo-tolerance of EstS was depends upon structure and amino acid composition. The overall results from this study, suggests that EstS is a novel cold active and halo-tolerant esterase and it may prove useful for immense biotechnological applications. The major factors responsible for thermostability of proteins includes ionic interactions, hydrogen bonds, hydrophobic interactions and disulfide bonds [33]. In this work, a mutant 1-D5 was more thermo-stable than WT and showed three amino acid changes (A43V, R116W, D147N; Fig. 6). It was predicted that Ala43 of EstS is located in the loop near first β sheet (β1) of enzyme and does not directly with the active center. The Arg116 of EstS, which replaced by Trp in 1-D5, is located in the loop between fourth β sheet (β4) and second α helix (α2) and the Asp147 of EstS is located in the loop near fifth β sheet (β5) and the catalytic residue Ser157 is on the other side of β5. Both position 116 and 147 are on the protein surface. The mutation R116W changed the polar amino acid residue to hydrophobic residues while the mutations A43V and D147N tend to increase the hydrophobicity of EstS [34]. This increase in the hydrophobicity lead to higher thermo-stability of 1-D5 than WT. CD results demonstrated that the mutation of EstS did not affect the secondary structure of enzyme but improve the activity and stability. A novel cold-active and salt-tolerant esterase was purified and characterized from marine bacterium Serratia sp. EstS showed remarkable catalytic activity at low temperature, extreme salt tolerance and good pH stability. All the characteristics collectively make it a potential candidate for industrial applications. Furthermore, a more thermo-stable esterase was obtained by error-prone PCR, and the experimental results may provide useful information for further study. Strains, vectors and reagents E. coli DH5α (TaKaRa, Japan) and BL21 (DE3) (Novagen, USA) were used as cloning and expression hosts, respectively. 
The vector pGEX-6P-1 (GE Healthcare, USA) was used for gene cloning and protein expression. Serratia sp. and E. coli were grown in LB medium (tryptone 1 %, yeast extract 0.5 % and NaCl 1 % w/v) at 28 and 37 °C, respectively. p-Nitrophenyl esters, namely p-NP acetate (C2), p-NP butyrate (C4), p-NP hexanoate (C6), p-NP caprylate (C8), p-NP laurate (C12) and p-NP palmitate (C16), were all purchased from Sigma (St. Louis, MO, USA).

Gene cloning

The cloning of the estS gene of Serratia sp. was performed by PCR amplification with the estS-F and estS-R primers. The forward primer estS-F 5'-CCGGAATTCATGCCGCTTGATCCTCA-3' (with the EcoRI restriction site underlined) and the reverse primer estS-R 5'-CCGCTCGAGTCAGCCAACCTTGTCGA-3' (with the XhoI restriction site underlined) were designed based on the sequence of the esterase gene [GenBank: AGQ31273.1] of Serratia liquefaciens ATCC 27592. The whole genome of S. liquefaciens ATCC 27592 [GenBank: CP006252.1] was sequenced and published in GenBank by Nicholson et al. [35]. The genomic DNA of Serratia sp. was used as the template, and PCR was performed with an initial denaturation at 94 °C for 3 min; followed by 30 cycles of denaturation at 94 °C for 0.5 min, annealing at 60 °C for 0.5 min, and extension at 72 °C for 1 min; and a final extension at 72 °C for 10 min. The resulting product was digested with EcoRI and XhoI, ligated into the expression vector pGEX-6P-1 digested with the same enzymes, and then transformed into competent E. coli DH5α cells. The recombinant plasmid pGEX-6p-estS was sequenced by GenScript (Nanjing, China) to make sure that the gene was correctly inserted. The deduced amino acid sequence of estS was analyzed with the BLASTP program, and the SignalP 4.1 Server (http://www.cbs.dtu.dk/services/SignalP/) was used to predict the signal peptide. Moreover, multiple sequence alignment was carried out using Clustal X 2.0.

Construction of error-prone library

The random mutant library was constructed by error-prone PCR [36]. The 50 μl reaction mixture contained 50 ng of the template plasmid pGEX-6p-estS, 0.2 μM each of the forward primer estS-F and the reverse primer estS-R, 0.2 mM dATP, 0.2 mM dGTP, 0.4 mM dCTP, 0.4 mM dTTP, 5 mM Mg2+, 0.3 mM Mn2+ and 2.5 U of Taq polymerase. The PCR was carried out under conditions similar to those used for the cloning of estS. The amplified product was digested with EcoRI and XhoI and ligated into the pGEX-6P-1 vector. The recombinant plasmids were transformed into E. coli DH5α cells. The resultant clones were spread onto LB agar plates and incubated at 37 °C.

Screening of mutants

The mutants were grown for 12 h on LB agar plates, and colonies were picked with sterile toothpicks and grown in 96-deep-well plates containing 600 μl LB medium supplemented with ampicillin (100 μg/ml), along with the WT clone (cells harboring pGEX-6p-estS). Each well was then supplemented with 150 μl fresh LB-Amp medium containing 1 mM IPTG and T7 phage. The cells were induced and lysed at 28 °C for 6 h, followed by incubation at 50 °C for 40 min. Subsequently, the plates were cooled at 4 °C for 10 min, and 20 μl of lysate was transferred to 96-well plates to assay residual enzyme activity at room temperature. The mutants that showed higher residual enzyme activity were selected for further experiments. Finally, the DNA of the mutants was isolated, sequenced and compared with the WT to locate the changed amino acids.

Expression and purification of WT and mutant

The recombinant plasmids pGEX-6p-esterase (WT and mutant) were transformed into E.
coli BL21 (DE3) and the cells were grown for 12 h at 37 °C in LB medium containing ampicillin (100 μg/ml). When the optical density of the culture reached 0.6 at OD600, the cells were induced by adding isopropyl-β-D-thiogalactopyranoside (IPTG; 0.2 mM). After induction for 16 h at 15 °C, the cells were harvested by centrifugation (8000 rpm) at 4 °C for 10 min, followed by washing, resuspension in PBS buffer (0.8 % NaCl, 0.02 % KCl, 0.142 % Na2HPO4, 0.027 % KH2PO4; pH 7.4). The disruption of cells was carried out by French pressure cell. The cell lysate was collected by centrifugation (12,000 rpm) at 4 °C for 40 min. The supernatant was purified by Glutathione-Sepharose column (GE Healthcare, USA) as described previously [37]. Finally, the target protein, wild-type and mutant esterase was released from the GST-tag attached to the column by the 3C protease solution (10 U/μl, PreScission, Pharmacia). The purified protein was analyzed by SDS-PAGE in 12 % polyacrylamide gels. Protein concentration was determined by the Bradford method [38], using bovine serum albumin as standard. Enzyme assay The esterase activity was determined by measuring the amount of p-nitrophenol released in the standard reaction mixture with the OD values at 405 nm monitored by Thermo Scientific Multiscan Spectrum. The standard reaction mixture contained 10 μl of enzyme, 2 μl of p-NP ester (20 mM) in ethanol and 188 μl of Tris–HCl buffer (50 mM, pH 8.5). The reaction mixture in which enzyme is replaced with PBS was considered as control. All experiments were carried out in triplicate and the data obtained were analyzed using GraphPad Prism 5.0 and Excel 2010 software. The calculated values were expressed as mean ± standard deviation (SD) with the statistical significance at p < 0.05 and standard deviation was calculated by standard deviation function (STDEV) in Excel. One unit of enzyme activity was defined as the amount of enzyme needed to release l μmol of p-nitrophenol per minute under the above reaction conditions. The experiments were performed in triplicate and average values were calculated with standard deviations. Substrate specificity The substrate specificity was investigated with p-NP esters of different chain lengths (acetate, C2; butyrate, C4; hexanoate, C6; caprylate, C8; laurate, C12; palmitate, C16). The hydrolytic reactions were performed in triplicate under standard assay conditions with each substrate. Biochemical characterization of esterase The optimum temperature of esterase was determined by incubating the reaction mixtures at a temperature range of 0 to 80 °C with p-NP acetate (C2) as substrate. The thermal stability was determined by measuring the residual enzyme activity after exposing the enzyme solution separately to three different temperatures (45, 50 and 55 °C) for varying time intervals. The pH optimum was investigated in following buffers: phosphate–citrate buffer (pH 5.0–7.0) and Tris–HCl buffer (pH 7.0–10.0). The pH stability was evaluated by incubating the enzyme in various pH buffer solutions for 24 h at 4 °C. The remaining enzyme activity was assayed under the standard conditions. The effects of metal ions (Mg2+ , Sr2+, Ba2+, Zn2+, Mn2+, Cu2+, Ca2+) and reagents (EDTA, PMSF) on esterase activity were examined at a final concentration of 1 and 5 mM in Tris–HCl buffer (pH 8.5). 
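Relating back to the enzyme assay described earlier in this section, the conversion from an A405 change to activity units (1 U = 1 μmol p-nitrophenol per minute) can be sketched as follows. The 200 μl reaction volume is taken from the text, but the extinction coefficient, path length, and assay time used here are assumed typical values, not parameters reported by the authors.

```python
# Sketch: convert an A405 increase into enzyme units (1 U = 1 umol p-nitrophenol / min).
# ASSUMED values (not from the paper): extinction coefficient of p-nitrophenol at 405 nm
# under alkaline conditions, the optical path length of the well, and the assay time.
EPSILON_405 = 18000       # L mol^-1 cm^-1, assumed typical value at alkaline pH
PATH_LENGTH_CM = 0.56     # cm, assumed for a 200 uL microplate well
REACTION_VOLUME_L = 200e-6  # 200 uL reaction mixture, as described in the text
TIME_MIN = 10.0           # assumed incubation time

def activity_units(delta_a405):
    """Micromoles of p-nitrophenol released per minute in the reaction (units of activity)."""
    conc_mol_per_l = delta_a405 / (EPSILON_405 * PATH_LENGTH_CM)   # Beer-Lambert law
    micromol_released = conc_mol_per_l * REACTION_VOLUME_L * 1e6
    return micromol_released / TIME_MIN

print(f"{activity_units(delta_a405=0.45):.4f} U")
```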
The effects of organic solvents and detergents were evaluated by diluting the enzyme solution in different final concentrations (10–30 %) of organic solvents and detergents, including iso-propanol, acetone, methanol, DMSO, ethanol, n-butyl alcohol, ethylene glycol, acetonitrile, Triton X-100, Tween-20, Tween-80, CHAPS and SDS. Samples containing only the same amount of reagent were used as controls. The enzyme activity without additives in the reaction mixture was considered as 100 %. The effect of NaCl on enzyme activity was determined with 0–4 M NaCl dissolved in Tris–HCl buffer (50 mM, pH 8.5). The effect of NaCl on enzyme stability was evaluated by treating enzyme solutions in the above-mentioned NaCl solutions at 4 °C for 24 h.

The kinetic parameters (kcat, Vmax, and Km) were determined by measuring the reaction rate of WT and mutant at different substrate (p-NP acetate) concentrations (0.01–0.3 mM) at 10 °C for 10 min. The kinetic parameters Vmax and Km were determined by Lineweaver-Burk plot of the Michaelis–Menten equation with GraphPad Prism software (GraphPad, San Diego, CA). The kcat parameter was calculated using the equation kcat = Vmax/[E].

To gain insights into the structure of the WT and mutant esterase, a homology model was generated automatically with the SWISS-MODEL server (http://swissmodel.expasy.org/) [39]. Based on the model, the electrostatic potential on the surface of the esterase was visualized with the software SYBYL, and the amino acid changes were analyzed with MOE 2009.

Circular dichroism analysis

The secondary structure of Est WT and 1-D5 was predicted using the program PSIPRED [40]. Circular dichroism (CD) spectra of Est WT and 1-D5 were recorded with a Jasco-810 CD spectrometer (Jasco Corp., Japan). The data were collected at room temperature from 195 to 250 nm using a 2 mm quartz cuvette (600 μl). The conversion to the molar CD (∆ε) in each spectrum was performed with the Jasco Standard Analysis software. Estimation of the secondary structure content from far-UV circular dichroism (CD) spectra was performed using the CDPro software package (available at http://lamar.colostate.edu/~sreeram/CDPro/main.html), which includes three executable programs (SELCON3, CDSSTR, and CONTIN/LL) [41]. In this study, the percentages of α-helix and β-sheet for each protein sample were averaged over the results calculated by the CDPro software package. The circular dichroism data were expressed in terms of the mean residue ellipticity (θmrw), which was calculated using the equation [42]:

$$ \theta_{\mathrm{mrw}}=\frac{M_w\cdot \theta_{\mathrm{obs}}\cdot 100}{N\cdot d\cdot c} $$

where θobs is the observed ellipticity in degrees, Mw is the protein molecular weight of WT or 1-D5, N is the number of residues, d is the path length of the quartz cuvette (0.2 cm), c is the protein concentration (mg/ml), and the constant 100 stems from the conversion of the molecular weight to mg/dmol.

The additional data supporting the results of this article are available online in the National Center for Biotechnology Information [NCBI] repository, [GenBank accession number: KU362566].

Arpigny J, Jaeger K. Bacterial lipolytic enzymes: classification and properties. Biochem J. 1999;343:177–83. Li XL, Zhang WH, Wang YD, Dai YJ, Zhang HT, Wang Y, et al. A high-detergent-performance, cold-adapted lipase from Pseudomonas stutzeri PS59 suitable for detergent formulation. J Mol Catalysis B Enzymatic. 2014;102:16–24. Tutino ML, Parrilli E, De Santi C, Giuliani M, Marino G, de Pascale D.
Cold-adapted esterases and lipases: a biodiversity still under-exploited. Curr Chem Biol. 2010;4(1):74–83. Suzuki T, Nakayama T, Choo DW, Hirano Y, Kurihara T, Nishino T, et al. Cloning, heterologous expression, renaturation, and characterization of a cold-adapted esterase with unique primary structure from a psychrotroph Pseudomonas sp. strain B11-1. Protein Expr Purif. 2003;30(2):171–8. Kulakova L, Galkin A, Nakayama T, Nishino T, Esaki N. Cold-active esterase from Psychrobacter sp. Ant300: gene cloning, characterization, and the effects of Gly → Pro substitution near the active site on its catalytic activity and stability. Biochim Biophys Acta Protein Proteomics. 2004;1696(1):59–65. Ryu H, Kim H, Choi W, Kim M, Park S, Han N, et al. New cold-adapted lipase from Photobacterium lipolyticum sp. nov. that is closely related to filamentous fungal lipases. Appl Microbiol Biotechnol. 2006;70(3):321–6. Bornscheuer UT. Microbial carboxyl esterases: classification, properties and application in biocatalysis. FEMS Microbiol Rev. 2002;26(1):73–81. Romano D, Bonomi F, de Mattos MC, Fonseca TD, de Oliveira MDF, Molinari F. Esterases as stereoselective biocatalysts. Biotechnol Adv. 2015;33(5):547–65. Khurana J, Singh R, Kaur J. Engineering of Bacillus lipase by directed evolution for enhanced thermal stability: effect of isoleucine to threonine mutation at protein surface. Mol Biol Rep. 2011;38(5):2919–26. Cao L, Chen R, Xie W, Liu Y. Enhancing the thermostability of feruloyl esterase EstF27 by directed evolution and the underlying structural basis. J Agric Food Chem. 2015. Khan MIH, Ito K, Kim H, Ashida H, Ishikawa T, Shibata H, et al. Molecular properties and enhancement of thermostability by random mutagenesis of glutamate dehydrogenase from Bacillus subtilis. Biosci Biotechnol Biochem. 2005;69(10):1861–70. Wang M, Si T, Zhao H. Biocatalyst development by directed evolution. Bioresour Technol. 2012;115:117–25. Kim J, Kim S, Yoon S, Hong E, Ryu Y. Improved enantioselectivity of thermostable esterase from Archaeoglobus fulgidus toward (S)-ketoprofen ethyl ester by directed evolution and characterization of mutant esterases. Appl Microbiol Biotechnol. 2015;1–9. Chronopoulou EG, Labrou NE. Site saturation mutagenesis: A powerful tool for structure based design of combinatorial mutation libraries. Curr Protoc Protein Sci. 2011; 26.26. 21-26.26. 10. Packer MS, Liu DR. Methods for the directed evolution of proteins. Nat Rev Genet. 2015. Gaber Y, Ismail M, Bisagni S, Takwa M, Hatti-Kaul R. Rational mutagenesis of pig liver esterase (PLE-1) to resolve racemic clopidogrel. J Mol Catal B: Enzym. 2015;122:156–62. Wu G, Zhang X, Wei L, Wu G, Kumar A, Mao T, et al. A cold-adapted, solvent and salt tolerant esterase from marine bacterium Psychrobacter pacificensis. Int J Biol Macromol. 2015;81:180–7. Fadhil L, Kadim A, Mahdi A. Production of chitinase by Serratia marcescens from soil and its antifungal activity. J Nat Sci Res. 2014;4(8):80–6. Zhang S, Wu G, Liu Z, Shao Z, Liu Z. Characterization of EstB, a novel cold-active and organic solvent-tolerant esterase from marine microorganism Alcanivorax dieselolei B-5 (T). Extremophiles. 2014;18(2):251–9. Wu G, Wu G, Zhan T, Shao Z, Liu Z. Characterization of a cold-adapted and salt-tolerant esterase from a psychrotrophic bacterium Psychrobacter pacificensis. Extremophiles. 2013;17(5):809–19. Fu J, Leiros H-KS, de Pascale D, Johnson KA, Blencke H-M, Landfald B. 
Functional and structural studies of a novel cold-adapted esterase from an Arctic intertidal metagenomic library. Appl Microbiol Biotechnol. 2013;97(9):3965–78. Hårdeman F, Sjöling S. Metagenomic approach for the isolation of a novel low temperature active lipase from uncultured bacteria of marine sediment. FEMS Microbiol Ecol. 2007;59(2):524–34. Siddiqui KS, Cavicchioli R. Cold-adapted enzymes. Annu Rev Biochem. 2006;75:403–33. Zhou X-X, Wang Y-B, Pan Y-J, Li W-F. Differences in amino acids composition and coupling patterns between mesophilic and thermophilic proteins. Amino Acids. 2008;34(1):25–33. De Simone G, Galdiero S, Manco G, Lang D, Rossi M, Pedone C. A snapshot of a transition state analogue of a novel thermophilic esterase belonging to the subfamily of mammalian hormone-sensitive lipase. J Mol Biol. 2000;303(5):761–71. Hotta Y, Ezaki S, Atomi H, Imanaka T. Extremely stable and versatile carboxylesterase from a hyperthermophilic archaeon. Appl Environ Microbiol. 2002;68(8):3925–31. Madern D, Ebel C, Zaccai G. Halophilic adaptation of enzymes. Extremophiles. 2000;4(2):91–8. Ventosa A, Sánchez-Porro C, Martín S, Mellado E. Halophilic archaea and bacteria as a source of extracellular hydrolytic enzymes. In: Gunde-Cimerman N, Oren A, Plemenitaš A, editors. Adaptation to life at high salt concentrations in Archaea, Bacteria, and Eukarya. Netherlands: Springer; 2005. p. 337–54. Novototskaya‐Vlasova K, Petrovskaya L, Yakimov S, Gilichinsky D. Cloning, purification, and characterization of a cold adapted esterase produced by Psychrobacter cryohalolentis K5T from Siberian cryopeg. FEMS Microbiol Ecol. 2012;82(2):367–75. Jiang X, Huo Y, Cheng H, Zhang X, Zhu X, Wu M. Cloning, expression and characterization of a halotolerant esterase from a marine bacterium Pelagibacterium halotolerans B2T. Extremophiles. 2012;16(3):427–35. Fukuchi S, Yoshimune K, Wakayama M, Moriguchi M, Nishikawa K. Unique amino acid composition of proteins in halophilic bacteria. J Mol Biol. 2003;327(2):347–57. Müller-Santos M, de Souza EM, Pedrosa FO, Mitchell DA, Longhi S, Carrière F, et al. First evidence for the salt-dependent folding and activity of an esterase from the halophilic archaea Haloarcula marismortui. Biochim Biophys Acta (BBA) J Mol Cell Biol Lipids. 2009;1791(8):719–29. Beeby M, O Connor BD, Ryttersgaard C, Boutz DR, Perry LJ, Yeates TO. The genomics of disulfide bonding and protein stabilization in thermophiles. PLoS Biol. 2005;3(9):1549. Black SD, Mould DR. Development of hydrophobicity parameters to analyze proteins which bear post-or cotranslational modifications. Anal Biochem. 1991;193(1):72–82. Nicholson WL, Leonard MT, Fajardo-Cavazos P, Panayotova N, Farmerie WG, Triplett EW, et al. Complete genome sequence of Serratia liquefaciens strain ATCC 27592. Genome Announcements. 2013;1(4):e00548–13. Cirino PC, Mayer KM, Umeno D. Generating mutant libraries using error-prone PCR. In: Arnold FH, Georgiou G, editors. Directed evolution library creation. Humana Press; 2003. p. 3–9. Cao S, Liu Z, Guo A, Li Y, Zhang C, Gaobing W, et al. Efficient production and characterization of Bacillus anthracis lethal factor and a novel inactive mutant rLFm-Y236F. Protein Expr Purif. 2008;59(1):25–30. Bradford MM. A rapid and sensitive method for the quantitation of microgram quantities of protein utilizing the principle of protein-dye binding. Anal Biochem. 1976;72(1):248–54. Arnold K, Bordoli L, Kopp J, Schwede T. The SWISS-MODEL workspace: a web-based environment for protein structure homology modelling. 
Bioinformatics. 2006;22(2):195–201. McGuffin LJ, Bryson K, Jones DT. The PSIPRED protein structure prediction server. Bioinformatics (Oxford, England). 2000;16(4):404–5. Sreerama N, Woody RW. Estimation of protein secondary structure from circular dichroism spectra: comparison of CONTIN, SELCON, and CDSSTR methods with an expanded reference set. Anal Biochem. 2000;287(2):252–60. Chaloupková R, Sýkorová J, Prokop Z, Jesenská A, Monincová M, Pavlová M, et al. Modification of activity and specificity of haloalkane dehalogenase from Sphingomonas paucimobilis UT26 by engineering of its entrance tunnel. J Biol Chem. 2003;278(52):52622–8. This work was supported by grants from the National Science Foundation of China (No. 31270162) and the Priority Academic Program Development of Jiangsu Higher Education Institutions. College of Biotechnology and Pharmaceutical Engineering, Nanjing Tech University, Nanjing, 211800, P. R. China: Huang Jiang, Shaowei Zhang, Haofeng Gao & Nan Hu. State Key Laboratory of Agricultural Microbiology, College of Life Science and Technology, Huazhong Agricultural University, Wuhan, 430070, P. R. China: Huang Jiang & Shaowei Zhang. Correspondence to Nan Hu. HJ and SWZ designed and performed the experiments and drafted this manuscript. SWZ performed the purification, characterization and circular dichroism analysis. HFG contributed to the characterization and expression experiments and revised the manuscript. NH conceived, designed and supervised the experiments and is the corresponding author. All authors have read and approved the manuscript. Huang Jiang and Shaowei Zhang contributed equally to this work. Jiang, H., Zhang, S., Gao, H. et al. Characterization of a cold-active esterase from Serratia sp. and improvement of thermostability by directed evolution. BMC Biotechnol 16, 7 (2016). https://doi.org/10.1186/s12896-016-0235-3 Keywords: Esterase, Cold-active, Salt-tolerant, Serratia sp, Thermo-stability, Error-prone PCR, Protein and enzyme technology
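Returning briefly to the kinetics protocol described in the methods above: the Lineweaver–Burk estimate of V_max and k_m is simply a straight-line fit of 1/v against 1/[S], and k_cat then follows from k_cat = V_max/[E]. The short Python sketch below restates that arithmetic with invented rate data and an assumed enzyme concentration; it is only an illustration of the calculation, not the GraphPad Prism analysis actually used in the study.

```python
import numpy as np

# Hypothetical initial-rate data for p-NP acetate hydrolysis at 10 °C.
# Substrate concentrations in mM and rates in µmol/min; these numbers are
# invented for illustration and are NOT the measurements reported above.
S = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.30])   # [S], mM
v = np.array([0.45, 0.80, 1.55, 2.20, 2.85, 3.15])   # observed rate

# Lineweaver-Burk: 1/v = (Km/Vmax) * (1/S) + 1/Vmax, i.e. a straight line
slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)

V_max = 1.0 / intercept          # intercept on the 1/v axis gives 1/Vmax
K_m = slope * V_max              # slope equals Km/Vmax

E_total = 0.001                  # assumed enzyme concentration, mM (illustrative)
k_cat = V_max / E_total          # kcat = Vmax / [E]

print(f"Vmax ≈ {V_max:.2f}, Km ≈ {K_m:.3f} mM, kcat ≈ {k_cat:.1f} per min")
```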
everything wiki fandom SpongeBob has a childlike enthusiasm for life, which carries over to his job as a fry cook at a fast food restaurant called the … Everything Sucks! is a Netflix original coming-of-age comedy-drama created by Ben York Jones and Michael Mohan that parodies teen culture of the mid-1990s. In Season One, 13 people go to Wikia Manor, cared for by the maids and the butler, Charles, believing they will be undergoing a normal series of challenges to win $1,000,000. But when one of the 13 is killed, they realize that a killer is loose. It was released through Republic Records on August 25, 2014. The album was recorded between October 2013 and May 2014. Kate Messner is a main character on Everything Sucks! Globox joins Rayman on a few occasions in a fashion similar to Clark or Globox's role in Rayman 3: Hoodlum Havoc. Answer all questions within a single comment. The state is the 21st-most extensive in area. It was conceived in the $ \frac{2}{\omega} $ th dimension, where creating this anomaly would be possible. The Badass Admins of The Everything Everything Wiki. With over six million residents, it is the 18th-most populous state of the Union. Kate is a Sophomore at Boring High and a member of the A/V Club. Globox is a major character in the Rayman series, second only to Rayman himself in his prominence. He previously had developed the game Mountain, in which players had limited interactions with a virtual mountain. Everything is the third and final major form of Chaos, having its place on the M key. Emaline Addario is a student at Boring High School and one of the main characters of Everything Sucks! My Everything is Ariana Grande's sophomore studio album. Welcome to Everything Barbie Wiki, a wiki all about everything Barbie including dolls, movies, life in the dreamhouse, barbiegirls and so much more. When people ask her, she and her father said she was ill. Everything currently has no alternates. You can help Everything Wiki by expanding it. Kate's mother killed herself when Kate was... Globox acts as a major ally in Rayman 2: The Great Escape and Rayman 3: Hoodlum Havoc, and is a playable character in Rayman M, Rayman Origins and Rayman Legends. They must figure out who The Killer is before their vacation is up… and before they all die. No spamming. Its wing animations are unique, as Everything is the only non-custom form to possess its wing animations.
Linkara is an adult male from Minnesota with a tendency to save the world from aliens and Hypertime travelers, all the while running the comic review show. "Everything" is a book written by Ael Onpet. It is not an ordinary book though; it has been deliberately made to be 0 meters in size, whilst still holding information. The game is about perspective, philosophy, and imagination: you can be 'Anything', from a bacterium to a galaxy. She is portrayed by Peyton Kennedy. "CLEM PARS !!!!" Missouri is a state in the Midwestern United States.
Aryan Ghobadi gives a maths lecture at a zoo Image: Highways Agency, CC BY 2.0 by Aryan Ghobadi. Published on 1 May 2021. As trends go, diagrammatic algebra has taken mathematics by storm. Appearing in papers on computer science, pure mathematics and theoretical physics, the concept has expanded well beyond its birthplace, the theory of Hopf algebras. Some use these diagrams to depict difficult processes in quantum mechanics; others use them to model grammar in the English language! In algebra, such diagrams provide a platform to prove difficult ring theoretic statements by simple pictures. As an algebraist, I'd like to present you with a down-to-earth introduction to the world of diagrammatic algebra, by diagrammatising a rather simple structure: namely, the set of natural numbers! At the end, I will allude to the connections between these diagrams and the exciting world of higher and monoidal categories. Now—imagine yourself in a lecture room, with many others as excited about diagrams as you (yes?!), plus a cranky audience member, who isn't a fan of category theory, in the front row: What we would like to draw today is the process of multiplication for the natural numbers. In its essence, multiplication, $\times$, takes two natural numbers, say 2 and 3, and produces another natural number… Because it takes two elements and produces just one, multiplication is formally called a binary operation: we can say it is a function $m:\mathbb{N}\times\mathbb{N}\to\mathbb{N}$, where, for example, $m(2,3)=6$. We will keep this $m$ notation for natural number multiplication to avoid confusion with the so-called product of two sets $A$ and $B$, which is the set of all possible pairs from $A$ and $B$ and is denoted by \[A \times B = \{(a,b) : a \in A, \, b \in B\}\] Now we draw (reading diagrams from top to bottom): Multiplication, $m$, can really be thought of as a 'meta-road': it's a one-way road with two entry lanes, both departing from two cities whose cars correspond to natural numbers, and one exit lane leading to natural-number-land again. We call our roads 'meta' because two cars, 2 and 3, enter the lanes at the same time, possibly colliding in the middle, passing through time and space, and a brand new car, 6, exits into the city. Do not be alarmed by this interruption! I am ready to respond. Diagrams for a monoid A monoid structure is a fancy word for some of the nice properties that the multiplication of natural numbers satisfies: associativity: \[m(x,m(y,z)) = m(m(x,y),z),\]eg \[2 \times (3\times 5) = 30 = (2\times 3) \times 5\] a unit element exists: \[m(1,x) = x = m(x,1),\]eg \[1\times x = x = x \times 1 \; \forall x \in \mathbb{N}\] Now we simply visualise these properties using our pictorial notation. Associativity translates to these compound meta-roads being the same: But why are the diagrams the same? The key ingredient is that we need to put on our topological glasses! We don't care about length or curvature in our roads. It's as if the asphalt moves freely above the sand! With our new glasses, all the following diagrams are the same and the middle lane can move freely from one side to the other: The second property we need to visualise is the unit element $1 \in \mathbb{N}$. In previous diagrams, any car from $\mathbb{N}$ can use the roads, whereas to discuss multiplication by 1, we need a unique car to use the road. So we draw a special diagram for the road where only the car corresponding to 1 can use the lane. The unit conditions require one more ingredient. 
Each city can have a boring 'identity road' $\mathrm{id}$, where nothing happens to cars taking this road. They simply leave and enter the city looking the same. With this in mind, the diagrams representing the unit condition turn into the following picture: This should not be a surprise since it is natural to think of multiplication by 1, $m(1,x)$ for any $x$, as a function from $\mathbb{N}$ to $\mathbb{N}$, which ultimately sends every number to itself. Putting our topological glasses back on, it looks as if the diagram for the identity road grew an extra hair, so we can push it back in! In our car metaphor, the left side represents a main road with an additional lane entering it, but this lane is reserved for a 'harmless' car that does not interact with any of the other cars. So, it's the same as if the main road were the identity road, where nothing happens to the cars driving on it. Here the cranky listener is using the old trick of deploying fancy words to heckle me. The word commutative just means that the order in which we multiply the numbers doesn't matter. Formally, $m$ being commutative means \[ m(a,b) = m(b,a) \quad \text{for any }a,b\in\mathbb{N}.\] For example, $2 \times 3 = 6 = 3 \times 2$. To represent this, we need our roads to pass over each other. We need to build bridges! If we can build bridges and allow lanes to pass over each other, ie diagrams where one lane crosses over another, then commutativity translates to these diagrams being equal: To truly see this property, we need to upgrade our glasses to 3D glasses to capture three-dimensional topology. If we view the string diagrams through our 3D glasses, then one could unwind the right-hand diagram by rotating it like so: To placate this restless member of the audience, I will present the punchline a bit early and use the keyword 'category' before explaining what it is. The reason we can draw a commutative monoid such as $\mathbb{N}$ as a three-dimensional diagram is because commutative monoids live in what we call braided categories such as the category of sets. Today's algebraists will tell you that a braided category is an example of a weirder structure called a 3-category, which has some 3D topology hidden in it. But this takes us into the daunting world of higher categories, and by this point my heckler is hopefully intrigued but has too much pride to ask me to elaborate. Aha! Back to our story… In the same way that looking at the connections between cities in a country is more enlightening than looking at the cities independently, in mathematics it's more useful to understand the relations between mathematical objects. For example, instead of looking at sets $\mathbb{N}, \mathbb{R}, \{1,2,3\}, \emptyset$, I really need to discuss functions between sets to understand how sets relate to each other. This now fits in a bigger framework, a category. A category has some cities, for example sets $A$, $B$ and $C$, and some roads $f:A \to B$ between the cities, with two extra rules! If roads $f:A \to B$ and $g:B \to C$ are part of my category, then so is a composition road $gf:A \to C$ which is made up from joining roads $f$ and $g$ (first taking the road $f$ to the city $B$ followed by the road $g$): Every city should have a special 'safe' road, called the identity road, like the identity function $\mathrm{id}_{\mathbb{N}}$ for $\mathbb{N}$: Categories provide a platform to draw one-dimensional diagrams and a '1D calculus', ie a way to manipulate these diagrams, as I've shown on the right there.
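For anyone who prefers code to asphalt, here is a small aside of my own (the helper names are invented for illustration, not something from the lecture): a brute-force check of the monoid laws for $(\mathbb{N},\times,1)$ on a few values, and composition of functions with identities, which foreshadows the category of sets discussed next.

```python
# A tiny sanity check of the monoid axioms for (N, x, 1)
def m(x, y):
    return x * y          # the 'meta-road': two cars in, one car out

for x in range(5):
    for y in range(5):
        for z in range(5):
            assert m(x, m(y, z)) == m(m(x, y), z)   # associativity
for x in range(5):
    assert m(1, x) == x == m(x, 1)                  # 1 is the unit

# Roads in the category of sets: functions, joined one after another
def compose(g, f):
    return lambda a: g(f(a))     # first travel along f, then along g

identity = lambda a: a           # the boring 'identity road'

double = lambda n: 2 * n
succ = lambda n: n + 1

road = compose(succ, double)     # a composite road N -> N
assert road(3) == 7
assert compose(identity, double)(3) == double(3)    # id behaves like doing nothing
```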
The category of sets has sets as cities and functions as roads. The identity road for each city $A$ is just the identity function $\mathrm{id}_A:A \to A$, where $\mathrm{id}_A(a) = a$ for all $a \in A$. Monoidal categories The missing piece for a 2D calculus is a way to write in the horizontal direction. When we visualised $m:\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ as a diagram, we said that writing two cities $A$ and $B$ next to each other meant the product of the two sets $A \times B$. In other words, writing cities in rows should have a good meaning, where 'good' means that roads between these cities can run parallel in the vertical direction. That is, in the case of sets, for every pair of functions $f:A_1 \to A_2$ and $g:B_1 \to B_2$, we have a new function $f \times g:A_1 \times B_1 \to A_2 \times B_2$. In our diagrams, we represent the road $f \times g$ by the roads $f$ and $g$ running parallel: Similar to the identity roads acting as ineffective components in the vertical direction, we require an 'empty city' $E$ which behaves indifferently in the horizontal direction: \[A\;E \; = \; A \; = \; E\;A.\] A bit more formally, for each pair of objects $A$ and $B$, the object '$A$ next to $B$' is written as $A \otimes B$. Parallel roads are written as $f \otimes g$ and $E$ is called the unit. A category with an $\otimes$ operation on pairs of cities and roads and a unit $E$ is called monoidal. It should be clear that monoidal categories provide a setting for 2-dimensional diagrams: The monoidal structure on the category of sets is given by $A \otimes B = A \times B$, $f \otimes g = f \times g$; and $E = \{*\}$ is the set with one element, so that $\{*\} \times A = \{(*,a):a \in A\}$. By now the room is probably silent and the fear that the audience has long drifted off into sweet dreams of differential equations dawns on me. But… An intelligent question! In the same way you call a set a monoid when you can multiply its elements, a category is called monoidal when you can 'multiply' its cities and roads, and instead of a unit element you have a unit city. A trendier way to say this is "monoidal categories categorify monoids". This is reflected in the fact that a monoid structure on an object of a category only makes sense when the category itself has a monoidal structure. Braided monoidal categories In a braided category, the order of cities in a row can be swapped! To swap any two cities $A$ and $B$, we need a method of travel—a road—from $A \otimes B$ to $B \otimes A$. These roads should have two entry lanes from the cities $A$ and $B$, and two exit lanes into $B$ and $A$, in that order. We'd also like these roads, which we denote by $b_{A,B}$, to resemble the 3D picture , which we saw when describing the commutative property of $\mathbb{N}$. The next rules which need to be satisfied are directly influenced by topology. Firstly, each pass over road $b_{A,B}$ should also be invertible by a road $b_{A,B}^{-1}$ resembling the move . As apparent in the diagram on the right, the composition of two such roads should be the same as the identity roads of $A$ and $B$ running parallel. The other conditions which need to hold just mean that if you take a number of cities $(A,B,C)$ and reorder them (maybe to $C,B,A$) via such passover roads, the outcome should be the same journey: Geometrically this translates to 'the order in which the roads lay above each other matters, not the order in which one passes over the other'. 
As in this picture, the road connected to $A$ lies above the road connected to $B$, which itself lies above the road connected to $C$. However, the order in which they pass over each other does not matter. A monoidal category with passover roads for any pair of cities, as described above, is called braided. In the category of sets, the passover roads for sets $A$ and $B$ are provided by \[b_{A,B}:A \times B \to B \times A, \quad b_{A,B}(a,b) = (b,a), \quad a\in A, b \in B.\] For those with some university algebra knowledge, another important example of braided monoidal categories is the category of vector spaces with the tensor product of vector spaces. This is in fact where the notation $\otimes$ comes from. The big finale… higher algebra! Let's say we want to describe a larger system than cities and roads between them. We really want to know how two roads $f,g$ between two cities $A,B$ are related to each other. Under this geographical metaphor, this would entail looking at which streets connect the two roads within the two cities: We call such a pair of streets connecting roads $f$ and $g$ a 2-road between $f$ and $g$. A 2-category carries the information of cities, roads and 2-roads (for those not entertained by my metaphors: objects, morphisms and 2-morphisms) where we draw roads and 2-roads by $\rightarrow$ and $\Rightarrow$, respectively. Similarly to how we can compose ordinary roads, we compose 2-roads $\theta: f \Rightarrow g$ and $\eta:g \Rightarrow h$ 'vertically' to produce a new 2-road $\eta\circ_v\theta: f \Rightarrow h$: We can only do this when $f$, $g$ and $h$ are all roads between the same two cities $A,B$. But in addition to this vertical composition, 2-roads also have a horizontal composition: Such compositions need to act well together, ie the order of composing horizontally or vertically should not matter: Diagrams like the above provide a platform for a 2-dimensional calculus as well and this is no coincidence. The information for a monoidal category is equivalent to the information needed for a 2-category with a single city. To better understand this, compare the pictures we have been drawing:
monoidal category ↔ 2-category with one city $*$
cities, eg $A$ ↔ roads from $*$ to $*$, eg $A$
roads, eg $m$ ↔ 2-roads, eg $m$
composition of roads ↔ vertical composition
monoidal operation $\otimes$ for cities ↔ composing roads
running parallel: $\otimes$ for roads ↔ horizontal composition
empty city ↔ identity road from $*$ to $*$
The diagram on the right shows how information transfers between the two settings. This brings us back to why we can draw a commutative monoid, such as the natural numbers, via 3D diagrams. First remember that to talk about a monoid being commutative, we needed to be able to swap elements. So we really need a braided monoidal category. In a similar fashion to how monoidal categories are 2-categories in disguise, a braided category is a 3-category with one city and one road, and provides a 3D calculus, where our commutative monoid $\mathbb{N}$ can live! So maybe now while these cheers fill the air, my heckler walks out of the lecture room and slams the door. I smile with pride, knowing that 'category theory won today'. No mathematicians were harmed during the making of this article. All audience members were fictitious and no real mathematicians were forced to attend my lecture. Aryan Ghobadi Aryan is a PhD student in mathematics at Queen Mary University of London, working with categories in quantum algebra.
He is often the cranky audience member in the front row. sites.google.com/view/aghobadimath
Dynamics in Siberia - 2019 28 February 2019, 12:00–12:50, Novosibirsk, Sobolev Institute of Mathematics SB RAS, conference hall Plenary talks On the topology of manifolds admitting cascades attractor-repeller of the same dimension V. Z. Grines Abstract: Let $f:M^n\rightarrow M^n$ be an orientation-preserving diffeomorphism of a smooth closed orientable manifold $M^n$ satisfying S. Smale's axiom $A$ (the non-wandering set $\mathrm{NW}(f)$ is hyperbolic and the set of periodic points is dense in $\mathrm{NW}(f)$). According to S. Smale's spectral theorem, the nonwandering set $\mathrm{NW}(f)$ can be decomposed into a finite union of disjoint closed invariant sets (called basic sets), each of which contains a dense orbit. It is well known that if the non-wandering set of the diffeomorphism $f$ consists of exactly two fixed points, a source and a sink, then the manifold $M^n$ is diffeomorphic to the $n$-dimensional sphere $\mathbb S^n$. If the dimension of a basic set of the diffeomorphism $f$ coincides with the dimension of the ambient manifold, then $f$ is an Anosov diffeomorphism, and the basic set is simultaneously an attractor and a repeller and coincides with the manifold $M^n$. J. Franks and S. Newhouse showed that, in the case when the dimension of the stable or unstable manifold of a periodic point of an Anosov diffeomorphism is 1, the manifold $M^n$ is diffeomorphic to the torus of dimension $n$ (see [1, 2]). The report describes the results obtained in the works of V. S. Grines, E. V. Zhuzhoma, Yu. A. Levchenko, V. Medvedev and O. Pochinka (see [3–5]), from which a topological classification follows for manifolds $M^n$ admitting diffeomorphisms $f$ whose nonwandering sets consist of an attractor and a repeller of the same dimension. In addition, we give sufficient conditions under which the nonwandering set of the diffeomorphism $f$ cannot consist of two basic sets of the same dimension. The report was prepared with the financial support of the Russian Science Foundation (project 17-11-01041). [1] J. Franks, Anosov diffeomorphisms, In: Global Analysis, Proc. Symp. in Pure Math., 14, 61–93 (1970). [2] S. Newhouse, On codimension one Anosov diffeomorphisms, Am. J. Math., 92, No. 3, 761–770 (1970). [3] V. Grines, Yu. Levchenko, V. S. Medvedev, and O. Pochinka, The topological classification of structurally stable 3-diffeomorphisms with two-dimensional basic sets, Nonlinearity, 28, 4081–4102 (2015). [4] V. Grines, T. Medvedev, O. Pochinka, Dynamical systems on 2- and 3-manifolds. Switzerland: Springer International Publishing, 2016. [5] V. Z. Grines, Ye. V. Zhuzhoma, O. V. Pochinka, Rough diffeomorphisms with basic sets of codimension one, Journal of Mathematical Sciences, Vol. 225, No. 2, August 2017.
Spotting Profitability With Return on Capital Employed By Ben McClure Think of the return on capital employed (ROCE) as the Clark Kent of financial ratios. ROCE is a good way to measure a company's overall performance. One of several different profitability ratios used for this purpose, ROCE can show how efficiently a company uses its capital by examining the net profit it earns in relation to the capital it uses. Most investors don't take a second look at a company's ROCE, but savvy investors know that, like Kent's alter ego, ROCE has a lot of muscle. ROCE can help investors see through growth forecasts, and it can often serve as a reliable measure of corporate performance. The ratio can be a superhero when it comes to calculating the efficiency and profitability of a company's capital investments. Return on Capital Employed (ROCE) Defined Put simply, ROCE reflects a company's ability to earn a return on all of the capital it employs. ROCE is calculated by determining what percentage of a company's utilized capital it made in pre-tax profits, before borrowing costs. The ratio looks like this: \[ \text{ROCE} = \frac{\text{EBIT}}{\text{Capital Employed}} \] The numerator, or the return, which is typically expressed as earnings before interest and taxes (EBIT), includes the profit before tax, exceptional items, interest, and dividends payable. These items are located on the income statement. The denominator, or the capital employed, is the sum of all ordinary and preferred-share capital reserves, all debt and finance lease obligations, as well as minority interests and provisions. If the capital employed figure is not available or cannot be found directly, it can also be calculated by subtracting current liabilities from total assets; these items are found on the balance sheet. For starters, ROCE is a useful measurement for comparing the relative profitability of companies. But ROCE is also an efficiency measure of sorts; it doesn't just gauge profitability as profit margin ratios do. ROCE measures profitability after factoring in the amount of capital used. This metric has become very popular in the oil and gas sector as a way of evaluating a company's profitability. It can also be used with other methods, such as return on equity (ROE). It should not be used for companies that have a large cash reserve that remains unused. To understand the significance of factoring in employed capital, let's look at an example. Say Company A makes a profit of $100 on sales of $1,000. Company B makes $150 on $1,000 of sales. In terms of pure profitability, B, having a 15% profit margin, is far ahead of A, which has a 10% margin. Let's say A employs $500 of capital and B employs $1,000. Company A has a ROCE of 20% [100/500] while B has a ROCE of only 15% [150/1,000]. The ROCE measurements show us that Company A makes better use of its capital. In other words, it is able to squeeze more earnings out of every dollar of capital it employs. A high ROCE value indicates that a larger chunk of profits can be invested back into the company for the benefit of shareholders. The reinvested capital is employed again at a higher rate of return, which helps produce higher earnings-per-share growth.
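For completeness, the Company A versus Company B comparison above can be restated in a few lines of code. This is just a sketch of the arithmetic; the function names and the balance-sheet figures are ours, and in practice EBIT and capital employed would be taken from the income statement and balance sheet respectively.

```python
# Quick check of the Company A vs Company B example above.
def roce(ebit: float, capital_employed: float) -> float:
    """Return on capital employed as a fraction (EBIT / capital employed)."""
    return ebit / capital_employed

def capital_employed(total_assets: float, current_liabilities: float) -> float:
    """Capital employed approximated as total assets minus current liabilities."""
    return total_assets - current_liabilities

company_a = roce(ebit=100, capital_employed=500)    # 0.20 -> 20%
company_b = roce(ebit=150, capital_employed=1_000)  # 0.15 -> 15%
print(f"Company A ROCE: {company_a:.0%}, Company B ROCE: {company_b:.0%}")

# If capital employed is not reported directly, approximate it from the balance sheet
# (hypothetical figures for illustration only):
assumed_ce = capital_employed(total_assets=1_200, current_liabilities=200)
print(f"Capital employed from the balance sheet: {assumed_ce}")
```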
A high ROCE is, therefore, a sign of a successful growth company. A company's ROCE should always be compared to the current cost of borrowing. If an investor puts $10,000 into a bank for a year at a stable 1.7% interest, the $170 received in interest represents a return on the capital. To justify putting the $10,000 into a business instead, the investor must expect a return that is significantly higher than 1.7%. To deliver a higher return, a public company must raise more money in a cost-effective way, which puts it in a good position to see its share price increase — ROCE measures a company's ability to do this. There are no firm benchmarks, but as a very general rule of thumb, ROCE should be at least double the interest rates. A return any lower than this suggests a company is making poor use of its capital resources. Consistency is a key factor in performance. In other words, investors should resist investing on the basis of only one year's return on capital employed. Take a look at how ROCE behaves over several years and follow the trend closely. A company that earns a higher return on every dollar invested in the business year after year is bound to have a higher market valuation than a company that burns up capital to generate profits. Be on the lookout for sudden changes—a decline in ROCE could signal the loss of competitive advantage. Because ROCE measures profitability in relation to invested capital, ROCE is important for capital-intensive companies or firms that require large upfront investments to start producing goods. Examples of capital-intensive companies are those in telecommunications, power utilities, heavy industries, and even food service. ROCE has emerged as the undisputed measure of profitability for oil and gas companies which also operate in a capital-intensive industry. There is often a strong correlation between ROCE and an oil company's share price performance. While ROCE is a good measure of profitability, it may not provide an accurate reflection of performance for companies that have large cash reserves. These reserves could be funds raised from a recent equity issue. Cash reserves are counted as part of the capital employed even though these reserves may not yet be employed. As such, this inclusion of the cash reserves can actually overstate capital and reduce ROCE. Consider a firm that has turned a profit of $15 on $100 capital employed—or 15% ROCE. Of the $100 capital employed, let's say $40 was cash it recently raised and has yet to invest into operations. If we ignore this latent cash in hand, the capital is actually around $60. The company's ROCE, then, is a much more impressive 25%. Furthermore, there are times when ROCE may understate the amount of capital employed. Conservatism dictates that intangible assets—such as trademarks, brands and research and development—are not counted as part of the capital employed. Intangibles are too hard to value with reliability, so they are left out. Nevertheless, they still represent the capital employed. Even though it can be a good measure of profitability, there are several different reasons why investors may not want to use ROCE as a way to guide their investment decisions. First of all, the figures that are used to calculate ROCE come from the balance sheet, which is a set of historic data. So it does not necessarily provide an accurate forward-looking picture. 
Secondly, this method often focuses on achievements that happen in the short term, so it may not be a good measure of the longer-term successes a company may experience. Finally, ROCE cannot be adjusted to account for different risk factors from different investments a company has made. Like all performance metrics, ROCE has its difficulties and limitations, but it is a powerful tool that deserves attention. Think of it as a tool for spotting companies that can squeeze a high return out of the capital they put into their businesses. ROCE is especially important for capital-intensive companies. Top performers are the firms that deliver above-average returns over a period of several years and ROCE can help you to spot them.
A competitive analysis of EU ports by fixing spatial and economic dimensions Claudio Quintano, Paolo Mazzocchi (ORCID: orcid.org/0000-0002-6632-314X) & Antonella Rocca The purpose of this paper is to evaluate the efficiencies of ten of the leading European ports. The research is motivated by the important issue of selecting the indicators to include in the comparative analysis. Concerning the theoretical model, the authors' efforts are especially directed towards the usage of stochastic frontier analysis (SFA) and data envelopment analysis (DEA). These techniques have been widely adopted for benchmarking and performance evaluation by involving indicators based on data from National Accounts. If one of these indicators, such as labour force consistency, is not available at a specific level of aggregation, detailed assumptions are needed to address this complication. The present study proposes an additive model in order to provide an estimation of ports' economic activities by fixing the port activity boundaries and the spatial perimeter of the firms investigated. Several NUTS (Nomenclature of Territorial Units for Statistics) levels and NACE (EU Statistical Classification of Economic Activities) codes are fixed to offer a useful, comparable labour indicator. Empirical results reveal that each port area presents a combination of NACE categories that significantly impacts its efficiency, which can reach very high performance values under both the SFA and DEA techniques. Since managers can choose which sectors to improve, which particular improvement strategies to support and which specific services to add, their decisions affect this performance evaluation, and their performance can be verified through the approaches proposed. Port authorities and port operators manage the new context of supply and logistics chains. Growing globalization has increased the strategic relevance of ports, and the attention to port efficiency has consequently grown. The traditionally strong competition among ports affects port performance at intra-port and inter-port levels (Castelein et al. 2019). This competitiveness has encouraged management to address performance evaluation methods and benchmarking models (Figueiredo De Oliveira and Cariou 2015; European Commission 2016; Wiegmans and Witte 2017; Ferreira et al. 2018; Ha et al. 2019). The performance evaluation approaches also dedicate increasing attention to sustainability criteria (IAPH - International Association of Ports and Harbours 2007; Baynes et al. 2011; Chang and Wang 2012; Lam and Notteboom 2014; Laxe et al. 2016; Roos and Kliemann-Neto 2017; Chang et al. 2018). Despite the existing remarkable literature on port performance, the subject is still quite debated. One main problem is the complexity of the port structure, since various characteristics determine maritime performance, such as the number of activities linked to it, the development of intermodal transportation and undesirable outputs (OECD 2016; Madeira Jr et al. 2012; Bulut and Durur 2018; Munim and Schramm 2018; Shobayo and Van Hassel 2019). An additional issue is that there is no reliable database of collective information on international port dimensions (Cheon et al. 2010).
Concerning the selection of the measurements that can be used in the competitive analysis, several authors referred to the empirical criterion that considers the availability of inputs and outputs, while other authors suggested considering the measurements commonly adopted in previous studies (Cullinane et al. 2006). In the current work special prominence has been dedicated to the collection of data of a specific indicator, the labour consistency. In fact, the availability of labour data sources—in addition to capital and port land—represents a relevant topic in port benchmarking models (Dowd and Leschine 1990). In order to enhance this dimension, since these data are very difficult to collect, two perspectives exist in past literature. On the one hand, Tongzon (2001), Estache et al. (2002), Barros (2003), Min and Park (2005), Cullinane et al. (2006) and Turnbull (2012) proposed solutions targeted to include a proxy of the number of port' employees. On the other hand, Demirel et al. (2012) suggested the involvement of input indicators strictly connected to labour force consistency. Since both perspectives share the effort aimed at avoiding the exclusion of the labour indicator, the present paper contributes to the debate attempting to address the availability issue by means of the usage of spatial and economic patterns. The authors argue (1) that the geographical concentration of the maritime firms and (2) that an inventory of the NACE (European Statistical Classification of Economic Activities) classes related to the maritime sector can be assumed to fix homogenous and comparable indicators connected to ports. In the authors' opinion, the involvement of firms which operate in well-defined territorial districts and in specific port activities could be a good way to analyse the port performances in future research. Specifically, this paper aims to analyse the efficiency of ten of the leading European container ports focusing on the labour force estimation, and considering as a case study the port of Antwerp compared to the port of Rotterdam. The model results can be considered as implications for policymakers. As for the theoretical model, both parametric stochastic frontier analysis (SFA) and non-parametric data envelopment analysis (DEA) were undertaken to achieve the performance investigation. Liu (1995) was among the first researchers to utilize SFA in the port sector and Barros (2005) and Cullinane et al. (2006) significantly contributed to this approach. DEA has also been widely adopted for the benchmarking and environmental performance in transportation (Roll and Hayuth 1993; Cullinane et al. 2004; Barros 2006). Among others, Ensslin et al. (2018) provided an overview of the most common port efficiency techniques. As for the remaining content of this article, the following section discusses the model assumptions. Section three briefly reviews the theoretical background and data. Section four combines the results and discussion. Section five refers to the case study. Section six considers the concluding remarks. Model assumptions In general, a performance quantitative method requires some specifications: the sample size must be appropriate, several conditions must be preserved and the results must be validated. Concerning the availability of one or more indicators, it does not represent a problem since a specific database contains the corresponding figures at a specific level of aggregation. 
In different conditions, the estimation of one indicator for comparative analysis requires additional assumptions. As aforementioned, in the port sector a large number of factors—such as the port features connected to the structural dimension and/or company attributes, manpower, advanced technology and port institutional reforms—need to be considered (Cheon et al. 2010; Van Den Bos and Wiegmans 2018). Nevertheless, in the current research the efficiency measurements were calculated using a limited number of variables, one input and one output in addition to the labour dimension. These indicators—discussed more in depth later—have been obtained from the following databases: Eurostat, Bureau van Dijk, World Port Source and Harbours Review. The present paper focuses on 2016, and it was selected because it has the most comprehensive data availability. As argued in the introduction, authors assume that the issue of the availability of the labour force consistency can be addressed considering an additive model that fixes (1) the port activity boundaries (economic activities) strictly depending on maritime activities and (2) the spatial perimeter (territorial districts) of the firms investigated. Ports' behaviour of providing services to several economic sectors has been discussed in recent literature by, among others, Van Der Lugt and De Langen (2005), De Langen and Haezendonck (2012) and Alijohani and Thompson (2016). The NACE codes refer to the System of National Accounts, and the present research proposes the usage of four-digit NACE codes (classes) of port sector quoted in Table 1. Table 1 Four-digit NACE (rev 2) classes and descriptions of the economic activities considered for each EU port The firms were selected by fixing 'active' companies throughout a Boolean search strategy via the Bureau van Dijk database. Nevertheless, this selection entails several weaknesses. First of all, the Bureau van Dijk database contains key establishment information, including firm name, type of activity (NACE code), number of employees and address. This classification is based on the activity declared by the establishment upon creation. Therefore, the assigned code could not exactly reflect the economic activity and/or there could be changes in the NACE classification over several years. Furthermore, the number of firms could be underestimated since some firms involved in maritime activities could have a main (primary or secondary) activity that is different from the classifications considered in the present research. Another difficulty is that some firms provide auxiliary services for maritime transportation and a distinction may be necessary. As a consequence, in addition to the NACE codes, Surís-Regueiro et al. (2013) suggested analysing the contribution of maritime activities to GDP (Gross Domestic Product) by using specific weights, which referred to economic activities that are fully or partially involved in the maritime economy. Bruno et al. (1999) proposed the entropic average as a useful indicator to investigate highly asymmetrical distributions. Interesting findings also derived from Oum and Park (2004), Fernández-Macho et al. (2016) and Heitz et al. (2018). Using a different standpoint, Baynes et al. (2011) referred to the input-output (I/O) approach as proposed by Leontief (1936). Territorial districts If one assumes that a firm's location near a port increases its probability of depending upon the port to exist, then comparison of the spatial dimension appears to be a sustainable perspective. 
The approach prioritizes the proximity as a key element in defining an appropriate cluster of activities and it has been analysed by, among others, Rivera et al. (2014). These authors defined the clusters in logistics and transportation by considering the geographic concentration of firms providing logistics services. NUTS2 (Nomenclature of Territorial Units for Statistics) classifications represent territorial districts allowing for harmonized and comparable socio-economic analyses. Therefore, the usage of the NUTS2 level appears to be suitable to ensure a high degree of homogeneity of the geographical division. Eurostat (2009, 2017) referred to the NUTS2 codes to analyse different maritime policies and several tourism flows across the EU. According to this perspective, in the current paper labour force consistency has been estimated by fixing the firms located in the NUTS2 regions mentioned in Table 2, and those involved in the NACE codes quoted in the above-mentioned Table 1. Data from a sample of 11,849 active firms has been considered. Table 2 also shows the number of firms involved in each NUTS2 level. Table 2 NUTS2 levels considered for the ports analysed in the present research, and number of firms involved Theoretical background and data The literature differentiates between two fundamental methodologies for measuring efficiency through the functional form: the non-parametric linear programming technique—DEA—and the parametric model—SFA. As discussed in the introduction, both the SFA and DEA approaches have been commonly considered in the port performance literature. Barros et al. (2011, b), Odeck and Bråthen (2012) and Lampe and Hilgers (2015) presented an extensive description, assumptions and differences between the SFA and DEA perspectives. Selected studies connected to the usage of DEA and SFA in previous port literature can be found in Table 3. This table also summarises the approaches proposed by each research paper and the indicators used in each work. Table 3 Input and output variables used in previous port studies In this paper the variables were selected after reviewing the existing literature quoted in Table 3; the first input dimension is the number of employees (Roll and Hayuth 1993; Coto-Millan et al. 2000; Notteboom et al. 2000; Estache et al. 2002; Barros 2003, 2006, 2012; Min and Park 2005; Rios and Maçada 2006; Barros and Peypoch 2007; Panayides et al. 2011; Gong et al. 2019). De Langen and Pallis (2006), Turnbull and Wass (2007) and Murphy et al. (2016) highlighted that, even though the capital-intensive paradigm is increasing in the port sector, the labour remains an important dimension in port competition. The economic efficiency of the labour market significantly influences the productivity, thus inefficient work procedures can cause inefficiencies in port operations. Notteboom (2010, 2012) emphasized that several features impact the port labour cost and competitiveness, for instance direct and indirect (or hidden) costs (such as strikes, absenteeism, inactivity for accidents/sickness), technological innovations, introduction of new cargo handling equipment, etc. Port labour environment also changes as a consequence of port reform, new port security regulation, labour port schemes, etc. In the present work authors assume that the managers' efforts are addressed to maximize the goods handled in each port involved in the analysis (the correlation matrix of the input and output variables presents a positive relationship among the indicators). 
This perspective represents one of two different schools of thought on labour assumptions. In fact, on the one hand, if one assumes this positive correlation, port policy measures can be targeted to increase the port throughput to expand the labour component (Ferrari et al. 2010; Bottasso et al. 2013). On the other hand, different authors, such as Grobar (2008) and Deng et al. (2013), noted that recent advancements in transportation technology have modified the role of ports in local economic development. For instance, in the container sector, the transportation activity has made the process of goods movement much more capital intensive, thus decreasing the local employment benefits of having a port. If this standpoint prevails, the implications on the port throughput are less clear. The second input measurement refers to the terminal quay length of each port. This dimension has strategic importance in terms of time waiting as a performance indicator (Notteboom et al. 2000; Cullinane et al. 2006; Rios and Maçada 2006; Almawsheki and Shah 2015; Barros 2003, 2012; Panayides et al. 2011; Demirel et al. 2012; Nguyen et al. 2016; Suárez-Alemán et al. 2016). This dimension appears to be as a more neutral measurement than the container quay length, since ports can have a different division between diverse output products in their trade (for instance, Rotterdam is traditionally focused on bulk). In regard to the output, current research considers the total gross weight of goods handled in each port (bulk and containers) which is expressed in thousands of tons. According to Table 3, also this dimension represents a widely accepted indicator of port output. Eurostat (2020) highlighted that Rotterdam was the largest European port for all types of cargo in 2019, with almost 110 million tons for each quarter. The second port in the same year was Antwerp which handled close to half of the tonnage recorded by Rotterdam, while the third port was Hamburg. Considering only the container cargo segment, the ranking is similar and Rotterdam, Antwerp and Hamburg remained the three main European ports in 2019, followed by the two Spanish ports of Algeciras and Valencia. In contrast, slight differences in ranking appear observing diverse types of bulk. Table 4 provides the descriptive statistics of the input and output measurements included in the model. Table 4 Descriptive statistics of the indicators involved in the DEA and SFA approaches DEA represents a widely utilised method to obtain a multi-variate frontier estimation and to measure the efficiency of multiple homogeneous DMUs (decisions making units) with the same set of inputs and outputs. The original idea behind the DEA model can be traced back to Farrell (1957), while the model was significantly advanced by Charnes et al. (1978) and Banker et al. (1984). This technique does not require a specific functional relationship among inputs and outputs. Both the input and output orientations can be used, and several technologies are available: constant returns to scale (CRS, or CCR), variable returns to scale (VRS, or BCC), and non-increasing returns to scale (NIRS). Following the definition proposed by Cook and Zhu (2005), eq. (1) summarizes the two-stage input DEA approach. 
$$ \begin{aligned} \min\;\; & \theta_0-\varepsilon \left(\sum_{i=1}^m s_i^{-}+\sum_{r=1}^s s_r^{+}\right) \\ \text{subject to}\;\; & \sum_{j=1}^n \lambda_j x_{ij}+s_i^{-}=\theta_0 x_{i0} \qquad (i=1,\dots,m) \\ & \sum_{j=1}^n \lambda_j y_{rj}-s_r^{+}=y_{r0} \qquad (r=1,\dots,s) \\ & \lambda_j,\; s_r^{+},\; s_i^{-}\ge 0 \end{aligned} \tag{1} $$ In this equation θ denotes the efficiency score for each DMU; $s_i^{-}$ and $s_r^{+}$ represent input and output slacks; the non-Archimedean ε allows the minimization involving the slacks; $x_i$ is the i-th input of m inputs; $y_r$ indicates the r-th output of s outputs; $\lambda_j$ is a non-negative scalar. In addition to the radial approach, non-radial efficiency measurements have also been considered in DEA models. Zhou et al. (2007) highlighted that non-radial DEA seems to be more effective in measuring environmental performance, since this approach has a high discriminating power in evaluating the DMUs' efficiencies. Cook and Seiford (2009) presented a detailed review of DEA techniques, while Sahoo et al. (2016) and Liu et al. (2016) proposed innovative DEA approaches. The SFA approach was developed by Aigner et al. (1977) and Meeusen and Van den Broeck (1977). Battese and Coelli (1992, 1995) significantly expanded the basic model. In contrast to DEA, SFA requires the specification of a parametric function. The most popular parametrization of the model refers to the Cobb-Douglas (log) function, which can be exhibited in the form of a multiplicative specification as shown in eq. (2): $$ y=f\left(x;\beta \right)\exp\left(v-u\right) \tag{2} $$ In this equation, y is a scalar output, while x is a vector of the inputs. β is a vector of the technology parameters. The composed error refers to the decomposition of the error term ε into the two components $\varepsilon = v-u$. $v\sim N\left(0,\sigma_v^2\right)$ is the first error component, which concerns the effects of statistical noise, and it is unrestricted in sign. u is the second error component, and it considers the effects of technical inefficiency (u ≥ 0). u is assumed to have a distribution such as the exponential or half normal, $u\sim N_{+}\left(0,\sigma_u^2\right)$, to ensure that it produces only non-negative values. The model assumes that the corresponding log-likelihood function needs to be maximized, using the maximum likelihood method (Kumbhakar and Lovell 2003). The stochastic version of output-oriented technical efficiency proposed by Coelli et al. (2005) is shown in eq. (3): $$ TE=\frac{y}{f(x)e^{v}}=\frac{f(x)e^{-u}e^{v}}{f(x)e^{v}}=\exp\left(-u\right) \tag{3} $$ where TE indicates the technical efficiency of production obtained as a ratio between the observed output (y) and the corresponding stochastic frontier output; $e^{-u}$ denotes the inefficiency; $e^{v}$ represents the noise. Different functional forms can be used instead of the traditional Cobb-Douglas (log) function. In this article the performance evaluation derives from the translog SFA as proposed by Christensen et al. (1973). Translog SFA represents a less restrictive approach compared to the standard Cobb-Douglas function. In the prevalent literature, the SFA technique is less frequently used than DEA. One main reason is that multiple input and output measurements limit the usage of the SFA technique in the standard version.
In fact, when the SFA model needs to consider multiple outputs instead of a single one, the standard econometric approach requires that input and output prices are available. An extensive consideration of DEA and SFA techniques is beyond the scope of this paper while the actual purpose of this paper is to include the radial DEA and SFA techniques to (1) verify if the model provides consistent results and (2) evaluate a specific labour port perspective. See Orea and Wall (2016) for a comprehensive discussion on these topics. Table 5 shows the efficiency scores of the ten European ports, according to the following techniques: the VRS output orientation (DEA_VRS_OUT); the VRS input orientation (DEA_VRS_IN); the CRS (DEA_CRS); and translog SFA (SFA_TR_LG). Table 5 DEA and SFA efficiency scores for each port Spearman correlations have been calculated by considering the DEA and SFA efficiency scores to verify whether the ports' ranks are (approximately) the same. The results are provided in Table 6. As can be seen in the table, all the ranking correlations among the diverse techniques are positive, and the Spearman correlation between SFA_TR_LG and DEA_CRS appears to be relatively high. According to the results of the statistical analysis, even though the efficiency scores slightly differ among the different techniques, the efficiency scores do not present conflicting results. Table 6 Spearman rank correlations among the efficiency scores obtained by different techniques Figures 1a-b-c show the (positive) relationship between SFA_TR_LG and the DEA efficiency estimates. Relationship between the parametric and non-parametric efficiency scores Table 5 indicates that in the DEA-VRS approach three ports (Le Havre, Marseille and Rotterdam) are on the efficient frontier using both the input and output orientations. This result is consistent with the distinction between the input and output DEA orientations which reflect the different ways of reaching the efficient production frontier. Two ports, Le Havre and Marseille, also show the best score in the DEA-CRS approach, while Rotterdam decreases its performance marginally. Rotterdam and Amsterdam present the best score in the translog SFA technique. Several other ports operate at a high level of efficiency, for instance Antwerp and Algeciras. Bremerhaven appears to be inefficient in both SFA and DEA approaches. Compared to the other ports, Marseille has very low values of the terminal quay length, while Le Havre presents the minimum value of number of employees and Rotterdam has the maximum value of total gross weight of goods handled. In the DEA approach, since the efficient ports determine the technology set, the consequences are (1) that at least one port has its efficiency equal to 1 and (2) the number of inputs and outputs used in the model determines the number of efficient ports. In contrast, in the SFA approach, ports have efficiency equal to 1 only when u = 0. Therefore, there are firms with a DEA efficiency equal to 1 but have much lower SFA efficiency scores. The port policy actions aimed at inefficient ports should be considered by referring to these efficient ports to improve operational performance. Specifically, these policies could refer to the efficient 'peer' ports, since the DEA 'principle of dominance' assumes that an inefficient DMU is dominated by one (or more) peer(s) that presents the best practices. 
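To make the mechanics of the comparison above concrete, the following sketch shows how the radial, input-oriented envelopment problem of eq. (1) can be solved with an off-the-shelf linear-programming routine, and how the resulting scores can be rank-correlated with a set of SFA scores, as in Table 6. It is only an illustration: the second DEA stage (maximising the slacks at the optimal θ) is omitted, and all data values are placeholders rather than the indicators of Tables 3–5.

import numpy as np
from scipy.optimize import linprog
from scipy.stats import spearmanr

def dea_input_efficiency(X, Y, j0, vrs=False):
    # X: (m x n) inputs, Y: (s x n) outputs, j0: index of the DMU under evaluation
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])               # sum_j lambda_j x_ij - theta x_i0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])        # -sum_j lambda_j y_rj <= -y_r0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None   # VRS convexity row
    b_eq = np.array([1.0]) if vrs else None
    bounds = [(None, None)] + [(0, None)] * n        # theta free, lambdas non-negative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[0]                                  # radial efficiency score theta

# Placeholder data: 2 inputs (employees, terminal quay length) and 1 output (tonnes)
X = np.array([[120.0, 80.0, 200.0, 150.0, 95.0],
              [10.0,   6.0,  18.0,  12.0,  7.0]])
Y = np.array([[900.0, 700.0, 1400.0, 1000.0, 650.0]])
dea_crs = [dea_input_efficiency(X, Y, j) for j in range(X.shape[1])]
sfa_scores = [0.93, 0.88, 0.97, 0.84, 0.80]          # hypothetical SFA efficiencies
rho, _ = spearmanr(dea_crs, sfa_scores)              # rank correlation, as in Table 6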
Several relevant features are missing from the present analysis: the peer weights (or benchmarks), the mathematical derivation of the slacks for the efficient ports (in the radial model), hypothesis tests of the different CRS/VRS technologies, and extensive explorations of the causes of the variation in efficiency (and of the validation of the proposed approaches). Similarly, in the SFA approach, several control variables could be considered because they can have an impact on the estimated efficiency values. In fact, in addition to estimating efficiency, SFA can analyse the factors determining variations in the level of efficiency. These features are beyond the aim of this paper. Concerning the indicator assumptions needed to perform the competitive SFA and DEA, the fact that DEA and SFA appear to provide consistent results does not validate the assumptions of the model, since both models can be consistently wrong and could report the same erroneous results. As a consequence, the selection of the dimensions, the choice of the NACE codes, the signs of the values and the results of past studies require an appropriate corresponding analysis and further investigation. With regard to the SFA maximum-likelihood estimates, Table 7 reports the corresponding results. Table 7 Standard and translog SFA estimates These maximum likelihood values can also be reported in equation form to estimate the translog production frontier, as shown in eq. (4).
$$ \ln (TGW)=80.75+3.09\,\ln (NOE)-17.46\,\ln (CTL)-1.22\,\ln \left(NOE^{2}\right)-0.19\,\ln \left(CTL^{2}\right)+2.16\,\ln (NOE)\ln (CTL)+\left(v-u\right) $$
The results indicate that the β coefficients differ in sign and size between the standard and the translog SFA. The number of employees affects output positively and the coefficient is statistically significant. This positive correlation with the port's output is consistent with the authors' assumption. Nevertheless, the sign of the 'square of number of employees' term is negative, which indicates that output increases with employment, but at a decreasing rate. The negative and statistically significant coefficient of 'terminal quay length' suggests that the longer the terminal quay length, the smaller the output. This finding is coherent with the use of a dimension (the terminal quay length) that is a more neutral measurement than the container quay length, since port activity is no longer limited to container handling alone. Concerning the parameters of the technical efficiency model, the signs of the determinants need to be analysed as well, to verify whether they result in an increase or a decrease in the inefficiency of the ports. The conceptual framework presented in this paper is empirically analysed by considering the port of Antwerp under the different scenarios. The port of Antwerp represents the most extensive port area in the world, and several recent research contributions have focused on this port in empirical analyses (Haezendonck and Langenus 2019; Leloup 2019). Among others, Esser et al. (2019) discussed the importance of this port as a job generator for the province of Antwerp. This port experienced exceptional economic growth in the last decade, and it has been selected as a case study because the DEA and SFA techniques offer significantly different performance estimations among the diverse scenarios.
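As a small complement to eq. (4), the sketch below evaluates the deterministic part of the estimated translog frontier and converts an inefficiency value into a technical-efficiency score via TE = exp(−u), following eq. (3). The quadratic terms are implemented as squared logarithms, (ln NOE)² and (ln CTL)², which is the standard translog form and is assumed here to be the intended reading of eq. (4); the input values and the inefficiency term u are placeholders, not estimates from the paper.

import numpy as np

def translog_ln_tgw(noe, ctl):
    # Deterministic part of eq. (4); coefficients as reported in Table 7
    ln_n, ln_c = np.log(noe), np.log(ctl)
    return (80.75 + 3.09 * ln_n - 17.46 * ln_c
            - 1.22 * ln_n ** 2 - 0.19 * ln_c ** 2
            + 2.16 * ln_n * ln_c)

u = 0.15                                    # hypothetical inefficiency value (u >= 0)
te = np.exp(-u)                             # technical efficiency, eq. (3)
ln_frontier = translog_ln_tgw(noe=25_000, ctl=40_000)   # illustrative input values
expected_tgw = np.exp(ln_frontier) * te     # expected output given that inefficiency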
It is important to underline that the set of ten ports analysed in the paper presents heterogeneous features which should be taken into account, even though this investigation is not discussed in the present paper. For instance, Table 8 provides the (significantly different) distribution of firms by NACE codes and container ports, while Table 9 shows the distribution of firms by NACE codes and firm sizes. Table 8 Distribution of firms by NACE codes and ports Table 9 Port of Antwerp: Distribution of firms by NACE codes and firm size Furthermore, present work does not consider additional features connected—for instance—to the diverse company's financial characteristics, the standardized legal form, the full-time (or part-time) prevalent jobs structure, etc. In addition to Antwerp, the present case study considers the port of Rotterdam to compare the results. The remaining eight ports are excluded from the analysis even though each of them presents specific relevant characteristics. One might think to the port of Marseille that experienced (1) a recent port reform (Lacoste and Douet 2013) and (2) increasing investments to realize an efficient integration of this port with the hinterland (Cariou et al. 2014). Therefore, further research is required on different ports and contextual factors that could potentially affect the results. Assuming that management and port authorities are able to influence port performance, the economic significance of the current model refers to the specific policies that can be used to stimulate ports' behaviour towards diverse topics. Current case study suggests the involvement of different combinations of NACE codes for each scenario, that result in significantly different clusters of firms (and employees) analysed. Ten different scenarios have been proposed considering DEA_CRS and SFA_TR_LG scores, since these approaches presented the highest Spearman correlation. The first scenario refers to the whole set of NACE codes quoted in Table 1. Differently, the second scenario includes only the 3011 code, while the third scenario adds to this code the number of workers belonging to 3012 code, and so on. Table 10 shows the step-by-step procedure and the different efficiency scores, while Fig. 2 visually indicates the relationship among these scores for each scenario. Table 10 DEA and SFA efficiency scores: (CRS) DEA and SFA results Relationship between the parametric and non parametric efficiency scores Empirical results reveal that the port of Antwerp presents different efficiency values for each scenario, and it reaches very high performance values through both the SFA and DEA techniques. This finding confirms that the combination of the NACE categories significantly impacts the performance evaluation. Managers can choose which sectors to improve, which particular improvement strategies to support, which specific service to add, and so on. One example could refer to the services for passengers and/or the concessions of the ferry routes even though they often require political decisions (Wergeland 2016), and/or the measures to support operational costs caused by accidents and dangerous port occurrences (Antão et al. 2016). In general, it is very important to mitigate the potential inaccuracies of involving specific labour categories in the port handling sector. Nevertheless, several NACE codes involved in the analysis could be inadequate. 
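A schematic illustration of the step-by-step scenario construction just described is given below: scenario 1 uses the whole set of NACE codes, scenario 2 only code 3011, scenario 3 adds 3012, and so on, each time recomputing the labour input (number of employees) that enters the efficiency model. The employee counts and the list and ordering of codes are placeholders (the actual list is the one in Table 1, which is not reproduced here), and dea_input_efficiency refers to the earlier sketch, not to the authors' code.

# Hypothetical per-NACE employee counts for one port
employees_by_nace = {
    "3011": 5200, "3012": 300, "3315": 2100,
    "5010": 900, "5020": 7400, "5030": 150,
    "5222": 6100, "5224": 8800,
}
ordered_codes = ["3011", "3012", "3315", "5010", "5020", "5030", "5222", "5224"]

scenarios = {"scenario_1": sum(employees_by_nace.values())}   # all NACE codes
running = 0
for k, code in enumerate(ordered_codes, start=2):             # cumulative scenarios
    running += employees_by_nace[code]
    scenarios[f"scenario_{k}"] = running
# Each scenario value would replace the 'number of employees' input before the DEA and
# SFA models are re-estimated, yielding one efficiency score per scenario as in Table 10.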
For instance, among the codes included in the analysis, categories such as ship repairs (NACE codes 3011, 3012, 3315) and passenger-related services (NACE codes 5010 and 5030) have limited relevance to container terminals. In the authors' opinion, the usage of a broader selection of port activities appears to be appropriate in the present analysis, although additional proxies, and/or a different NACE selection, can be considered in further investigation. The indicator assumptions proposed to perform the performance analysis represent the major concern for the benchmarking study. This article investigates the performances of ten European ports and assumes that the widely debated labour indicator can be estimated by fixing the firms involved via the NACE codes and NUTS2 regions. Empirical results show that, on the one hand, this approach could be useful in avoiding the exclusion of this measurement due to difficulties in collecting labour data. On the other hand, since the NACE selection impacts the benchmarking, it is important to address the issue connected to the usage of the labour force via a coherent and consistent model. A supplementary finding derives from the results of the SFA and DEA techniques, which do not present conflicting results. The Spearman coefficients show positive ranking correlations (which are relatively high when considering the translog SFA and DEA CRS). The outcomes of the empirical study confirm that policy actions can refer to these techniques to verify the potential impact of specific measures. In particular, the result for the labour indicator provides evidence for the relevance of the assumptions connected to it, showing significant differences in the performance evaluation. Accordingly, since the number of workers can be used to verify the efficacy of employment policies, especially when the implementation of new policies concerns definite (NACE) labour categories, management can design policy actions through the model proposed in the current work. Furthermore, the involvement of the NUTS2 territorial districts should be relevant to define policy measures according to the peculiarities (heterogeneity) of a specific region, as well as the impact of national law on the reorganisation process of each port. In fact, the role of the government has strategic importance with regard to interventions aimed at a specific business, and could incorporate components connected to ports handling multiple NUTS2 regions. Even though the current empirical work analyses a limited set of indicators, the outcomes highlight the importance of their selection, and confirm the critical role of the DEA and SFA approaches as tools to support management decisions, since they allow the consistency of the different efficiency estimations to be verified. One of the main limitations of the current research concerns the output, since port activity is no longer limited to just cargo handling. Further investigation is needed on additional criteria that can be considered when the NACE/NUTS levels appear not to be fully satisfactory. For instance, features connected to ports which are close to regional borders and handle multiple NUTS2 districts should be debated. Several contextual factors must also be considered in the benchmark analysis to detect whether they affect port efficiency.
CRS or CCR: Constant returns to scale CTL: Terminal quay length DEA: DMU: Decisions making unit NACE: European statistical classification of economic activ NIRS: Non-increasing returns to scale NUTS: Nomenclature of territorial units for statistics SFA: Stochastic frontier analysis TE: Technical efficiency TGW: Total gross weight of goods handled in each port VRS or BCC: Variable returns to scale Aigner D, Lovell KCA, Schmidt P (1977) Formulation and estimation of stochastic frontier production function models. J Econ 6(1):21–37 Alijohani K, Thompson R (2016) Impacts of logistics sprawl on the urban environment and logistics: taxonomy and review of literature. J Transp Geogr 57:257–263 Almawsheki ES, Shah MZ (2015) Technical efficiency analysis of container terminals in the middle eastern region. Asian J Ship Logist 31(4):477–486 Antão P, Calderón M, Puig M, Wooldridge C, Darbra RM (2016) Identification of occupational health, safety, security (OHSS) and environmental performance indicators in port areas. Saf Sci 85:266–275 Banker RD, Charnes A, Cooper W (1984) Some models for estimating technical and scale inefficiencies in data envelopment analysis. Manag Sci 30(9):1078–1092 Barros CP (2003) The measurement of efficiency of portuguese sea port authorities with DEA. Int J Transp Econ 30(3):335–354 Barros CP (2005) Decomposing growth in Portuguese seaports: a frontier cost approach. Marit Econ Logist 7(4):297–315 Barros CP (2006) A benchmark analysis of italian seaports using DEA. Marit Econ Logist 8(4):347–365 Barros CP (2012) Productivity assessment of African seaports. Afr Dev Rev 24(1):67–78 Barros CP, Haralambides H, Hussain M, Peypoch N (2011) Seaport efficiency and productivity growth. In: Cullinane KPB (ed) International handbook of maritime economics. Edward Elgar, Cheltenham, pp 363–382 Barros CP, Peypoch N (2007) Comparing productivity change in Italian and portuguese seaports using the Luenberger indicator approach. Marit Econ Logist 9(2):138–147 Battese G, Coelli T (1992) Frontier production function, technical efficiency and panel data: with application to paddy farmer in India. J Prod Anal 3:153–169 Battese G, Coelli T (1995) A model for technical in efficiency effects in a stochastic frontier production function for panel data. Empir Econ 20:325–332 Baynes T, Lenzen M, Steinberger JK, Bai X (2011) Comparison of household consumption and regional production approaches to assess urban energy use and implications for policy. Energy Policy 39:7298–7309 Bottasso A, Conti M, Ferrari C, Merk O, Teia A (2013) The impact of port throughput on local employment: evidence from a panel of European regions. Transp Policy 27:32–38 Bruno G, Corsini V, Monducci R (1999) Dynamics of Italian industrial firms; microeconomic analysis of performance and labour demand from 1989 to 1994. In: Biffignandi S (ed) Micro- and macrodata of firms statistical analysis and international comparison - contributions in statistics. Springer Verlag, Boston, pp 543–570 Bulut E, Durur O (2018) Analytic hierarchy process (AHP) in maritime logistics: theory, application and fuzzy set integration. In: Lee PTW, Yang Z (eds) Multi-criteria decision making in maritime studies and logistics - international series in operations research & management science. Springer Verlag, New York, pp 31–78 Cariou P, Fedi L, Dagnet F (2014) The new governance structure of French seaports: an initial post-evaluation. 
Marit Policy Manag 41(5):430–443 Castelein RB, Geerlings H, Van Duin JHR (2019) The ostensible tension between competition and cooperation in ports: a case study on intra-port competition and inter-organizational relations in the Rotterdam container handling sector. J Shipp Trade. https://doi.org/10.1186/s41072-019-0046-5 Censis (2015) The fifth maritime economy report, Roma http://www.federazionedelmare.it/images/pubblicazioni/vrapportoeconomiamare_2015.pdf Accessed 23 Apr 2018 Chang CC, Wang CM (2012) Evaluating the effects of green port policy: case study of Kaohsiung harbour in Taiwan. Transp Res Part D: Transp Environ 17:185–189 Chang YT, Park HK, Lee S, Kim E (2018) Have emission control areas (ecas) harmed port efficiency in Europe? Transp Res Part D: Transp Environ 58:39–53 Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision making units. Eur J Oper Res 2(6):429–444 Cheon S, Dowall D, Song DW (2010) Evaluating impacts of institutional reforms on port efficiency changes: ownership, corporate structure, and total factor productivity changes. Transp Res Part E: Logist Transp 46(4):546–561 Christensen LR, Jorgenson DW, Lau LJ (1973) Transcendental logarithmic production frontiers. Rev Econ Stat 55:28–45 Coelli TJ, Rao DSP, O'donnell CJ, Battese GE (2005) An introduction to efficiency and productivity analysis. Springer Verlag, Boston Cook WD, Seiford LM (2009) Data envelopment analysis – thirty years on. Eur J Oper Res 192(1):1–17 Cook WD, Zhu J (2005) Modelling performance measurement – application and implementation issues in DEA. Springer Verlag, Boston Coto-Millan P, Banos-Pino J, Rodriguez-Alvarez A (2000) Economic efficiency in Spanish ports: some empirical evidence. Marit Policy Manag 27(2):169–174 Cullinane KPB, Song DW, Ji P, Wang TF (2004) An application of DEA windows analysis to container port production efficiency. Rev Netw Econ 3(2):184–206 Cullinane KPB, Wang TF, Ji P, Song DW (2006) The technical efficiency of container ports: comparing data envelopment analysis and stochastic frontier analysis. Transp Res Part A: Policy Pract 40(4):354–374 De Langen PW, Haezendonck E (2012) Ports as clusters of economic activity. In: Talley WK (ed) . The blackwell companion to maritime economics. Wiley-Blackwell, New York, pp 638–655 De Langen PW, Pallis AA (2006) Analysis of the benefits of intra-port competition. Int J Transp Econ 33(1):69–85 Demirel B, Cullinane KPB, Haralambides H (2012) Container terminal efficiency and private sector participation. In: Talley WK (ed) The Blackwell companion to maritime economics. Wiley-Blackwell, New York, pp 571–598 Deng P, Lu S, Xiao H (2013) Evaluation of the relevance measure between ports and regional economy using structural equation modeling. Transp Policy 27:123–133 Dowd TJ, Leschine TM (1990) Container terminal productivity: a perspective. Marit Policy Manag 17(2):107–112 Ensslin L, Dezem V, Dutra A, Ensslin SR, Somensi K (2018) Seaport-performance tools: an analysis of the international literature. Marit Econ Logist 20(4):587–602 Esser A, Sys C, Vanelslander T, Verhetsel A (2019) The labour market for the port of the future. A case study for the port of Antwerp. Case Studies Transp Policy 8(2):349–360 Estache A, Gonzalez M, Trujillo L (2002) Efficiency gains from port reform and the potential for yardstick competition: lessons from Mexico. 
World Dev 30(4):545–560 European Commission (2016) Commission staff working document on the implementation of the EU maritime transport strategy 2009-2018 https://ec.europa.eu/transport/sites/transport/files/swd2016_326.pdf Accessed 13 Jul 2018 Eurostat (2009) Study in the field of maritime policy - approach towards an integrated maritime policy database. Volume 1: Main part European Commission. https://webgate.ec.europa.eu/maritimeforum/system/files/eurostat_mp_study_final%20report_r1_volume_1_mainpart.pdf. Accessed 23 Aug 2018 Eurostat (2017) Eurostat regional yearbook 2017, Statistical books, Luxembourg. http://ec.europa.eu/eurostat/documents/3217494/8222062/ks-ha-17-001-en-n.pdf/eaebe7fa-0c80-45af-ab41-0f806c433763. Accessed 24 Nov 2018 Eurostat (2020) Maritime transport of goods - quarterly data – Eurostat Statistics Explained, Luxembourg. https://ec.europa.eu/eurostat/statistics-explained/index.php/Maritime_transport_of_goods_-_quarterly_data#Top_European_ports. Accessed 03 Sep 2020 Farrell MJ (1957) The measurement of productive efficiency. J Royal Stat Soc 120:253–281 Fernández-Macho J, González P, Virto J (2016) An index to assess maritime importance in the european Atlantic economy. Mar Policy 64:72–81 Ferrari C, Percoco M, Tedeschi A (2010) Ports and local development: evidence from Italy. Int J Transp Econ 37(1):9–30 Ferreira DC, Marques RC, Pedro MI (2018) Explanatory variables driving the technical efficiency of European seaports: an order-α approach dealing with imperfect knowledge. Transp Res Part E: Logist Transp 119:41–62 Figueiredo De Oliveira G, Cariou P (2015) The impact of competition on container port (in)efficiency. Transp Res Part A: Policy Pract 78:124–133 Gong X, Wu X, Luo M (2019) Company performance and environmental efficiency: a case study for shipping enterprises. Transp Policy 82(C):96–106 Grobar L (2008) The economic status of areas surrounding major u.s. container ports: evidence and policy issues. Growth Chang 39:497–516 Ha MH, Yang Z, Lam JSL (2019) Port performance in container transport logistics: a multi-stakeholder perspective. Transp Policy 73:25–40 Haezendonck E, Langenus M (2019) Integrated ports clusters and competitive advantage in an extended resource pool for the Antwerp seaport. Marit Policy Manag 46(1):74–91 Heitz A, Dablanca L, Olssonb J, Sanchez-Diaz I, Woxenius J (2018) Spatial patterns of logistics facilities in Gothenburg, Sweden. J Transp Geogr. https://doi.org/10.1016/j.jtrangeo.2018.03.005 IAPH - International Association of Ports and Harbours (2007) Resolution on clean air programs for ports. Second plenary session. 25th World ports conference, Houston Kumbhakar SC, Lovell CAK (2003) Stochastic frontier analysis. Cambridge University Press, Cambridge Lacoste R, Douet M (2013) The adaptation of the landlord port model to France's major seaports: a critical analysis of local solutions. Marit Policy Manag 40(1):27–47 Lam JSL, Notteboom T (2014) The greening of ports: a comparison of port management tools used by leading ports in Asia and Europe. Transplant Rev 34(2):169–189 Lampe HW, Hilgers D (2015) Trajectories of efficiency measurement: a bibliometric analysis of DEA and SFA. Eur J Oper Res 240(1):1–21 Laxe FG, Bermúdez FM, Palmero FM, Novo-Corti I (2016) Sustainability and the Spanish port system - analysis of the relationship between economic and environmental indicators. Mar Pollut Bull 113(1–2):232–239 Leloup P (2019) A historical perspective on crime control and private security: a Belgian case study. 
Polic Soc 29(5):551–565 Leontief W (1936) Quantitative input and output relations in the economic system of the US. Rev Econ Stat 18:105–125 Liu JS, Lu LYY, Lu WM (2016) Research fronts in data envelopment analysis. Omega 58:33–45 Liu Z (1995) The comparative performance of public and private enterprises. J Transp Econ Policy 29(3):263–274 Madeira AG Jr, Cardoso MM Jr, Belderrain MCN, Correia AR, Chwanz SH (2012) Multicriteria and multivariate analysis for port performance evaluation. Int J Prod Econ 140(1):450–456 Meeusen W, Van Den Broeck J (1977) Efficiency estimation from cobb- Douglas production functions with composed errors. Int Econ Rev 18:435–444 Min H, Park B (2005) Evaluating the inter-temporal efficiency trends of international container terminals using data envelopment analysis. Int J Integr Supply Manag 1(3):258–277 Munim ZH, Schramm HJ (2018) The impacts of port infrastructure and logistics performance on economic growth: the mediating role of seaborne trade. J Shipp Trade. https://doi.org/10.1186/s41072-018-0027-0 Murphy B, Veall MR, Zhang Y (2016) Is there evidence of ICT skill shortages in Canadian Taxfiler data? In: Green WH, Khalaf L, Sickles RC, Veall M, Voia MC (eds) . Productivity and efficiency analysis. Springer Verlag, Boston, pp 145–160 Nguyen HO, Nguyen HV, Chang YT, Chin ATH, Tongzon J (2016) Measuring port efficiency using bootstrapped DEA: the case of Vietnamese ports. Marit Policy Manag 43(5):644–659. https://doi.org/10.1080/03088839.2015.1107922 Notteboom T (2010) Dock labour and port-related employment in the European seaport system. European Seaport Organisation, University of Antwerp, Belgium Notteboom T (2012) Dock labour systems in north-west European seaports: how to meet stringent market requirements? Paper 1116 – Satta G. et al. Presented at the International Forum on Shipping, Ports and Airports (IFSPA), Hong Kong Notteboom T, Coeck C, Van De Broeck J (2000) Measuring and explaining the relative efficiency of container terminals by means of Bayesian stochastic frontier models. Int J Marit Econ 2(2):83–106 Odeck J, Bråthen S (2012) A meta-analysis of DEA and SFA studies of the technical efficiency of seaports: a comparison of fixed and random-effects regression models. Transp Res Part A: Policy Pract 46(10):12–21 OECD (2016) Cruise shipping and urban development the case of Dublin - case-specific policy analysis. The International Transport Forum, Paris https://www.itf-oecd.org/sites/default/files/cruise-shipping-urban-development-dublin.pdf. Accessed 21 Sep 2018 Orea L, Wall A (2016) Measuring eco-efficiency using the stochastic frontier analysis approach. In: Aparicio J, Lovell CAK, Pastor JT (eds) Advances in efficiency and productivity. Springer Verlag, Boston, pp 275–297 Oum TH, Park JH (2004) Multinational firms location preference for regional distribution centers: focus on the northeast Asian region. Transp Res Part E: Logist Transp Rev 40:101–121 Panayides PM, Lambertides N, Savva CS (2011) The relative efficiency of shipping companies. Transp Res Part E: Logist Transp Rev 47(5):681–694 Rios LR, Maçada ACC (2006) Analyzing the relative efficiency of container terminals of Mercosur using DEA. Marit Econ Logist 8(4):331–346 Rivera L, Sheffi Y, Welsch R (2014) Logistics agglomeration in the US. Transp Res Part A: Policy Pract 59:222–238 Roll Y, Hayuth Y (1993) Port performance comparison applying data envelopment analysis. 
Marit Policy Manag 20(2):153–161 Roos EC, Kliemann-Neto FJ (2017) Tools for evaluating environmental performance at brazilian public ports: analysis and proposal. Mar Pollut Bull 115(1–2):211–216 Sahoo BK, Khoveyni M, Esalmi R, Chaudhury P (2016) Returns to scale and most productive scale size in DEA with negative data. Eur J of Oper Res 255(2):545–558 Shobayo P, Van Hassel E (2019) Container barge congestion and handling in large seaports: a theoretical agent-based modeling approach. J Shipp Trade. https://doi.org/10.1186/s41072-019-0044-7 Suárez-Alemán A, Morales Sarriera J, Serebrisky T, Trujillo L (2016) When it comes to container port efficiency, are all developing regions equal? Transp Res Part A: Policy Pract 86:56–77 Surís-Regueiro JC, Garza-Gil MD, Varela-Lafuente MM (2013) Marine economy: a proposal for its definition in the European Union. Mar Policy 42(c):111–124 Tongzon J (2001) Efficiency measurement of selected Australian and other international ports using data envelopment analysis. Transp Res Part A: Policy Pract 35(2):113–128 Turnbull P (2012) Port labor. In: Talley WK (ed) The Blackwell companion to maritime economics. Wiley-Blackwell, New York, pp 517–548 Turnbull P, Wass V (2007) Defending dock workers—globalization and labour relations in the World's ports. J Econ Soc 46(3):582–612. https://doi.org/10.1111/j.1468-232X.2007.00481.x Van Den Bos G, Wiegmans B (2018) Short sea shipping: a statistical analysis of influencing factors on sss in European countries. J Shipp Trade. https://doi.org/10.1186/s41072-018-0032-3 Van Der Lugt LM, De Langen PW (2005) The changing role of ports as locations for logistics activities. J Int Logist Trade 3(2):59–72 Wergeland T (2016) Ferry passenger markets. In: Talley WK (ed) The Blackwell companion to maritime economics. Wiley-Blackwell, Malden, pp 161–183 Wiegmans B, Witte P (2017) Efficiency of inland waterway container terminals: stochastic frontier and data envelopment analysis to analyse the capacity design- and throughput efficiency. Transp Res Part A: Policy Pract 106:12–21 Zhou P, Poh KL, Ang BW (2007) A non-radial DEA approach to measuring environmental performance. Eur J of Oper Res 178:1–9 The authors are grateful to Professor Manolis Kavussanos and two anonymous referees for their helpful reviews and suggestions at the IAME 2019 conference where an earlier version of this paper was presented. We are also grateful to Professor Kee-Hung Lai and two different anonymous referees for their comments during the submission procedure of this article to Journal of Shipping and Trade. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Department of Management and Quantitative Studies, University of Naples 'Parthenope', Naples, Italy Claudio Quintano, Paolo Mazzocchi & Antonella Rocca Claudio Quintano Paolo Mazzocchi Antonella Rocca All authors have directly contributed to the planning, analysis, and writing of the paper. The authors have read and approved the final manuscript. Claudio Quintano is Emeritus Professor of Economic Statistics. E-mail: [email protected] Paolo Mazzocchi is Associate Professor of Economic Statistics. E-mail: [email protected] Antonella Rocca is an Assistant Professor of Economic Statistics. E-mail: [email protected] Correspondence to Paolo Mazzocchi. 
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Quintano, C., Mazzocchi, P. & Rocca, A. A competitive analysis of EU ports by fixing spatial and economic dimensions. J. shipp. trd. 5, 18 (2020). https://doi.org/10.1186/s41072-020-00075-x
Is cardinality a well defined function? I was wondering if the cardinality of a set is a well defined function, more specifically, does it have a well defined domain and range? One would say you could assign a number to every finite set, and a cardinality for an infinite set. So the range would be clear, the set of cardinal numbers. But what about the domain, here we get a few problems. This should be the set of all sets, yet this concept isn't allowed in mathematics as it leads to paradoxes like Russell's paradox. So how do we formalize the notion of 'cardinality'? It seems to behave like a function that maps sets into cardinal numbers, but you can't define it this way as that definition would be based on a paradoxical notion. Even if we only restrict ourselves to finite sets the problem pops up, as we could define the set {A} for every set, thereby showing a one-to-one correspondence between 'the set of all sets' (that doesn't exist) and the 'set of all sets with one element'. So how should one look at the concept of cardinality? You can't reasonably call it a function. Formalizing this concept without getting into paradoxes seems very hard indeed. elementary-set-theory cardinals psmears DirkbossDirkboss $\begingroup$ The collection of all cardinals isn't a set either $\endgroup$ – Alessandro Codenotti Oct 26 '16 at 12:07 The cardinality function is well-defined, but it is what known as a class function. Since every set has a cardinality, the domain of the function $A\mapsto |A|$ has to be the class of all sets, so this is indeed a proper class. And since every set has a strictly large cardinal, the class of cardinals is not a set either. Using the axioms of set theory, we can canonically determine an object, in the set theoretic universe, which will represent the cardinal $|A|$. So the function $A\mapsto|A|$ is indeed definable. It should be pointed, perhaps, that this class function is also amenable. Namely, restricting it to any set of sets will result in a function which is itself a set. Namely, a set of sets can only have a set of distinct cardinals. This is a direct consequence of the Replacement axiom. There is some inherent difficulty at first when talking about existence of proper classes, and whether or not they are well-defined objects. In the case of $\sf ZFC$ and related theories, existence means "a set", but when we say that a class exists and it is well-defined, we mean to say that there is a definition which is provably giving us the function that we want. This is the case in your question. But one can also work in class theories like $\sf KM$ (Kelley–Morse) or $\sf NBG$ (von Neumann–Godel–Bernays), and there the function assigning every set its cardinality is still a class function and not a set, but now it exists in "an internal way" as an object of the universe. Asaf Karagila♦Asaf Karagila $\begingroup$ One minor(?) point here: you're right, as long as you have the axiom of choice. When you say "every set has a cardinality," you need choice. Without choice, only the well-orderable sets have a well-defined cardinality. Even the Hartog function can't quite fix that problem. Otherwise, you are spot-on: the function taking any (well-orderable) set to its cardinality is a definable and amenable class function. As my thesis advisor used to say: "We will be assuming the axiom of choice throughout, primarily because it's true!" $\endgroup$ – user128390 Oct 26 '16 at 21:13 $\begingroup$ No. No. No. If we stop treating the ordinals like gods, everything becomes fine. 
Cardinal is an object which represents the equivalence class of sets under bijections. It does not have to be an element of that class, just to be uniquely identified, and this is doable via Scott's trick. Moreover the notion of "cardinality" is just a notion of "same size". Do you want to tell me that there are no bijections between the reals and $\mathcal P(\Bbb n)$ in the absence of choice? Nonsense! :) Finally, karagila.org/2013/… :) $\endgroup$ – Asaf Karagila♦ Oct 26 '16 at 21:19 $\begingroup$ Asaf, would it be fair to say that the class of strongly inaccessible cardinals is necessarily well-ordered? More vaguely: what's the weakest notion of "difficult to access from below" such that the class of all cardinals that are difficult to access from below is automatically well-ordered? I suppose that by "$\kappa$ inaccessible" I probably mean that $V_\kappa$ models second-order ZF, if that helps. (Unless you think that's a bad definition, in which case, that's not what I mean.) $\endgroup$ – goblin Oct 28 '16 at 9:45 $\begingroup$ @goblin: Any class of ordinals is well ordered. I don't understand your question... As for the definition of an inaccessible, it's a fine definition (see Blass, Löwe and Dimitriou about inaccessible cardinals in ZF for more about that). $\endgroup$ – Asaf Karagila♦ Oct 28 '16 at 9:58 $\begingroup$ @AsafKaragila, yes, sorry. It was a foolish question... $\endgroup$ – goblin Oct 29 '16 at 2:51 The collection of all sets does not form a set in ZF(-style) set theory, indeed. Note that the same is true for the collection of all cardinals: there is no set containing all cardinals, because then its union would be a set as well, and it would be a greater cardinal than any of its elements. So the function $X \mapsto |X|$ is not a function internally to ZFC. However, it can be made a function externally: that is, there is a formula $\phi(x,y)$, in two free variables, which holds if and only if $y$ is the cardinality of $x$. For this formula, we can prove $\phi(x,y) \land \phi(x,y') \to y = y'$, and we can prove $\forall x \exists y \phi(x,y)$. Hence, if we want to, we can introduce a function symbol $\mathrm{Card}$ to the language of set theory, such that $\mathrm{Card}(x)$ is interpreted as the unique $y$ such that $\phi(x,y)$. This is fine for the purposes for which we want to use cardinality. Note also that if you are looking at a more limited part of the universe of sets, say $V_\alpha$ for some ordinal $\alpha$, then the restriction of the meta-function $\mathrm{Card}$ to this set does form a set. Mees de VriesMees de Vries $\begingroup$ If it doesn't form a set within ZF(C) theory, then in what kind of framework does it exist? As far as I know ZFC is the basis of mathematics these days and they are considered fundamental axioms. If something lies outside it, that imply's there is some more fundamental framework that incorporates ZFC. $\endgroup$ – Dirkboss Oct 26 '16 at 13:23 $\begingroup$ I don't really have a good answer for you; I guess it might also depend on what you mean by the word "exist", and what is necessary for a function that "exists". The closest thing to an answer I have is something like this: when I tell you $|X|$ you intuitively understand what I mean (namely the unique cardinal which admits a bijection to $X$). That makes it a function. This function is not represented by a (ZFC-)set, the way most functions are, so it is not a ZFC-function. 
However, we can talk about it in the language of ZFC, which is what we "work in", and that is good enough for us. $\endgroup$ – Mees de Vries Oct 26 '16 at 13:33 $\begingroup$ @Dirkboss: This is a difficult question to answer if you're not comfortable with the distinction between theory and meta-theory, where classes exist. One can consider instead class-set theories like von Neumann--Godel--Bernays or Kelley--Morse, where classes are objects and there the function assigning cardinals does exist, but it still isn't a set. $\endgroup$ – Asaf Karagila♦ Oct 26 '16 at 14:33 $\begingroup$ @Dirkboss: Says who that ZFC is the basis of mathematics these days? It is one basis that provides successfully a foundation on which much of modern mathematics can be built, but it is by no means the only foundations.. and there are many others. Also, "can build" does not mean "is needed to build", and even most modern mathematics (especially in applied fields) hardly need ZFC. Even a set theorist I know stated that the axiom of foundation (in ZFC) is simply to ensure nice models of ZFC (if a model exists to begin with). It is not at all 'fundamental', in other words. $\endgroup$ – user21820 Oct 27 '16 at 16:27 $\begingroup$ @Dirkboss: And note that going to MK set theory does not fix the issue you are raising with some apparently well-defined collection being 'outside', because in MK too there is no class of all classes $x$ such that $x \notin x$. Asking "why should it be like that" would be a philosophy of mathematics question, and hence opinion-based. Also, you should know Godel's incompleteness theorem, which shows that every sufficiently nice formal system can't prove everything that is true about itself and nothing that is false, simply put. $\endgroup$ – user21820 Oct 27 '16 at 16:33 Being equinumerous or bijective is an equivalence relation on sets: $x\equiv y$ iff there is a bijection $f:x\text{ onto }y$. The problem is to define a total set function (a proper class of course) $F$ satisfying $x\equiv y$ iff $F(x)=F(y)$. In ZFC this is done by the cardinality function $F(x)=\text{card}(x)$, whose values are cardinals, and it satisfies $x\equiv F(x)$, that is, is a true transversal. In ZF, this is done by $F(x)=$ the set of all sets $y$ with $x\equiv y$, which have the least von Neumann's rank among all such sets, and then generally $x\not\equiv F(x)$, of course. I don't know though if anyone has succeed to prove that in ZF there is no true transversal for $\equiv$. Vladimir KanoveiVladimir Kanovei $\begingroup$ If $x$ is non-empty, then the class of all sets with $x \equiv y$ is not a set. Also, for your last sentence, I guess you mean to ask whether it is consistent with ZF for there to be no transversal; as you point out, in ZFC (which is consistent with ZF) there certainly is a transversal. $\endgroup$ – LSpice Oct 27 '16 at 1:56 $\begingroup$ Indeed, it is consistent with ZF that there is no true transversal for $\equiv$: see Asaf's answer to this question. That said, I don't think this really answers the question - the OP isn't asking how cardinality is defined, but rather what type of object the "cardinality map" is (which you answer in parantheticals - a class function - although I'd argue that calling it a "total set function" is somewhat misleading, since you mean a map on/to sets, not a map which is a set). $\endgroup$ – Noah Schweber Oct 27 '16 at 2:11 $\begingroup$ I am not able to find an answer there. 
$\endgroup$ – Vladimir Kanovei Oct 27 '16 at 2:33 $\begingroup$ Pincus proved that it is consistent to have a definable choice of representatives for the Scott cardinals. You can find the fact this is not provable in Jech's Axiom of Choice book in chapter 11. $\endgroup$ – Asaf Karagila♦ Oct 27 '16 at 8:05 $\begingroup$ Pincus' paper: ams.org/mathscinet-getitem?mr=366666 $\endgroup$ – Asaf Karagila♦ Oct 27 '16 at 8:11
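For reference, the ZF construction invoked in the last answer (Scott's trick) can be written out compactly as follows, keeping $x \equiv y$ for equinumerosity as above; this is only a restatement of what the answer sketches, not an addition to the argument:
$$ \alpha(x)=\min \{\operatorname{rank}(y) : y\equiv x\}, \qquad |x|=\{\, y : y\equiv x \ \text{and}\ \operatorname{rank}(y)=\alpha(x)\,\}, $$
so that $|x|$ is a genuine set (it is contained in $V_{\alpha(x)+1}$) and $|x|=|z|$ holds exactly when $x\equiv z$.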
Multi-functional soft-bodied jellyfish-like swimming Ziyu Ren ORCID: orcid.org/0000-0003-0824-18051 na1, Wenqi Hu1 na1, Xiaoguang Dong1 & Metin Sitti ORCID: orcid.org/0000-0001-8249-38541 The functionalities of the untethered miniature swimming robots significantly decrease as the robot size becomes smaller, due to limitations of feasible miniaturized on-board components. Here we propose an untethered jellyfish-inspired soft millirobot that could realize multiple functionalities in moderate Reynolds number by producing diverse controlled fluidic flows around its body using its magnetic composite elastomer lappets, which are actuated by an external oscillating magnetic field. We particularly investigate the interaction between the robot's soft body and incurred fluidic flows due to the robot's body motion, and utilize such physical interaction to achieve different predation-inspired object manipulation tasks. The proposed lappet kinematics can inspire other existing jellyfish-like robots to achieve similar functionalities at the same length and time scale. Moreover, the robotic platform could be used to study the impacts of the morphology and kinematics changing in ephyra jellyfish. Untethered miniature swimming robots1,2,3,4,5,6,7,8 are indispensable in biomedical and environmental monitoring and remediation applications. Although existing miniature swimming robots have shown interesting mobility, their advanced functionalities, such as object manipulation ability, significantly decrease as the robot size becomes smaller, due to limitations of their miniaturized on-board components9. To achieve the object manipulation function, microswimmers operating in the low Reynolds number (Re) regime have been proposed to incur controlled viscous fluidic flows to manipulate objects10,11,12,13,14,15,16,17,18,19,20. However, it is unclear whether such approach is applicable in the moderate Re regime, where both inertial and viscous forces play critical roles21. In nature, scyphomedusae ephyra, the juvenile of the most widely distributed jellyfish, can smartly control the fluidic flow around their body to realize diverse functionalities, such as propulsion22,23,24, predation25,26,27,28, and mixing of the surrounding fluid29, despite their simple body structure. Inspired by ephyra, we propose an untethered jellyfish-like soft millirobot, which could realize multiple functionalities by producing diverse controlled fluidic flows around its body using its lappets, which are actuated by magnetic composite elastomer and bent by remote magnetic fields. Using this experimental setup, we study five distinct swimming modes to particularly investigate the interaction between the robot's soft body and incurred fluidic flows due to the robot's body motion and utilize such physical interaction for predation-inspired object manipulation capability of the robot, in addition to the robot's swimming propulsion, which has been the only focus of previous jellyfish-like robot studies1,2,5,30. The proposed soft robot's different lappet motion kinematics are used to conduct four different robotic tasks: selectively trap and transport objects of two different sizes, burrow into granular media consisting of fine beads to either camouflage or search a target object, enhance the local mixing of two different chemicals, and generate a desired concentrated chemical path. 
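To give a feel for the Reynolds-number regime discussed above, the short sketch below evaluates Re = ρUD/μ for a millimetre-scale swimmer in water. The diameter and speed values are placeholders chosen only to show that such a swimmer naturally falls in the moderate-Re range where both inertial and viscous forces matter; they are not the measured values of the robot described below.

# Reynolds number Re = rho * U * D / mu for a millimetre-scale swimmer in water.
# rho and mu are standard properties of water at room temperature; D and U are
# placeholder values, not measurements of the robot.
rho = 1000.0          # water density, kg/m^3
mu = 1.0e-3           # water dynamic viscosity, Pa.s
D = 3.0e-3            # characteristic body diameter, m (placeholder)
for U in (2.5e-3, 10e-3, 30e-3):          # swimming speeds, m/s (placeholders)
    Re = rho * U * D / mu
    print(f"U = {U*1e3:.1f} mm/s  ->  Re = {Re:.1f}")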
The magnetic composite elastomer is chosen here because it can be actuated and controlled wirelessly and fast by remote magnetic fields, which have minimal effects on the fluidic flow under investigation. Existing robots, including other jellyfish-like robots1,2,5,30, could also complete the same tasks if they generate the same proposed lappet kinematics and local flow structures at the same length and time scale. Moreover, the proposed soft robotic platform, which has similar size and fluidic flow generating behaviors as an ephyra, could be used to study the impacts of changing their morphology and kinematics, which can happen due to pollutants, ionic changes, and temperature variation, to their survivability and habitat24,31,32,33,34. Design and swimming behavior of the jellyfish-inspired swimming soft millirobot Scyphomedusae ephyra (diameter: 1–10 mm) is characterized by its incomplete bell (Fig. 1a) and lappet paddling-based propulsion23,25,26,28. Inspired by such organism, our robot has a magnetic composite elastomer core (Supplementary Fig. 1), which can beat up and down its eight lappets in a non-reciprocal manner like an ephyra (Supplementary Fig. 4) under the control of an external magnetic field (B). As shown in Supplementary Fig. 2, each lappet has two compliant joints. Only the lappet distal joint is allowed to bend in the contraction phase, while both the distal and proximal joints of the lappet can bend in the recovery phase. This design induces a large wetted area during the contraction phase to acquire high thrust while decreasing the wetted area significantly during the recovery phase to reduce the drag force, just like the kinematics of a jellyfish ephyra (Supplementary Note 1). An air bubble of 0.3 µL is introduced on top of the robot body by a pipette to reduce the robot's effective density to around 1.02 g⋅cm−3. Design and swimming behavior of the jellyfish-inspired swimming soft millirobot. a Comparison in morphology. The newly budded scyphomedusae ephyra (Aurelia aurita) possesses deep clefts between two adjacent lappets. The design of the jellyfish-inspired soft millirobot captures such feature. The photo of the real animal is taken in a pet store and the animal's care is in accordance with the institutional guidelines. b Kinematics and flow structures achieved by Mode A. The motion sequence, the velocity and vorticity fields, and the wake structures visualized by the fluorescein dye are all in one cycle. The three experiments are from three different trials using robots with the same design and kinematics. c Comparison of the robot and animal in two kinematic metrics: bell fineness and lappet velocity. The biological data is reproduced with permission from Feitl et al.28; permission is conveyed through Copyright Clearance Center, Inc. d Video snapshots of capturing a neutrally buoyant bead using the fluid flow around the robot's lappets With this soft robot design, we first design the external B to make the robot mimic the swimming mode of an ephyra studied by Feitl et al.28 based on two common metrics quantifying the ephyra swimming kinematics: bell fineness and lappet velocity (Supplementary Note 5). The resulting biomimetic kinematics, referred as Mode A, are shown in Fig. 1b, c and Supplementary Movie 1. Keeping the beating frequency (2.5 Hz) and Reynolds number of the robot body (ReB = 7–95) similar to an ephyra25,27,28 (Supplementary Note 6), the robot can capture the typical flow structures of its biological counterpart23,30. 
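The role of the trapped air bubble mentioned above can be illustrated with a back-of-the-envelope calculation: the bubble adds volume but essentially no mass, so it lowers the robot's effective density towards that of water. In the sketch below, the elastomer volume and density are hypothetical placeholders (they are not reported in this excerpt); only the 0.3 µL bubble volume and the ~1.02 g⋅cm−3 target come from the text.

# Effective density of the robot body with a trapped air bubble (mass of air neglected).
body_volume_uL = 4.0        # hypothetical elastomer body volume, uL (1 uL = 1 mm^3)
body_density = 1.10         # hypothetical composite elastomer density, g/cm^3
bubble_volume_uL = 0.3      # air bubble volume stated in the text, uL

mass_mg = body_density * body_volume_uL                      # g/cm^3 * mm^3 = mg
rho_eff = mass_mg / (body_volume_uL + bubble_volume_uL)      # mg/mm^3 = g/cm^3
print(f"effective density ~ {rho_eff:.3f} g/cm^3")           # ~1.02 with these numbers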
As visualized by the particle image velocimetry (PIV) technique in the second row of Fig. 1b, the starting vortex forms at the beginning of the cycle (0 s) and dissipates quickly during the contraction (0–0.16 s). The stopping vortex forms a bit later than the starting vortex (0.08–0.16 s) and sheds during the recovery (0.24–0.32 s). This behavior contrasts to the well-formed starting-stopping vortex pair in an adult scyphomedusae23,35,36. During swimming, a portion of the surrounding water also propagates along with the robot due to the pressure field around the body37, causing an upward drift flow below (indicated at 0.24 s). The flow structures are further visualized by a fluorescein dye, and the dye tree structure is observed to grow during swimming due to the induced drift and boundary layer shedding (the third row in Fig. 1b), similar to that created by an ephyra29. With such flow structures, the robot can trap objects from outside to the inside of the sub-umbrella region (Supplementary Fig. 3b) during propulsion (Fig. 1d), similar to the predation behavior of an ephyra25. Five basic swimming modes and their propulsion performances Apart from Mode A, we prescribe other four swimming modes (Modes B1, B2, B3, and C) with distinct fluidic flow generating behaviors and swimming performances by changing the lappet beating kinematics of the robot (Fig. 2; Supplementary Movie 2). First, we decrease the duration of the contraction phase (tC) and recovery phase (tR) to generate Mode B1 and Mode B2, respectively, while maintaining the lappet beating amplitude of Mode A. This change increases the angular velocity of the lappets in each mode (ωC and ωR are defined in the inset of Fig. 2b). As expected, the displacement per cycle increases when the robot beats faster in contraction and decreases when the robot beats faster in recovery (Fig. 2a). However, such difference does not reveal in the average velocity (v̅robot, mm⋅s−1). In Fig. 2c, Mode B1 sees a significant increase in v̅robot while v̅robot for Mode B2 does not change significantly. This is because a shorter tR reduces the duration of a beating cycle (tC + tR) and makes the robot beat more frequently, compensating the loss in distance per cycle. Further reducing tR, however, would eventually reduce v̅robot to zero or even negative as the downward displacement in recovery would be equal to or even greater than the upward displacement in contraction. This trend can be explained by a dynamic model (Supplementary Figs. 10d and 10e and Supplementary Note 10). In addition to velocity, we also evaluate the propulsion efficiency by the reciprocal of the cost of transport (1/COT, Supplementary Note 7). As shown in Fig. 2c, Mode B1 results in a significant increase in 1/COT as the higher ReB associated with the higher v̅robot benefits the robot more from the inertia. On the contrary, Mode B2 dispenses more energy during the recovery, which decelerates the robot, reducing the 1/COT. Five basic swimming modes being investigated and their impacts on swimming propulsion. a Kinematics of each swimming mode. The video snapshots of the start and end frames in one cycle are shown in the first row. One more frame is included for the glide phase of Mode B3. The red dashed lines indicate the final position of the robot in Mode A. The overlapped body profiles in one cycle are shown in the second row. b The difference in kinematics verified by angular velocity change. The beating angles and the angular velocities are defined in the inset. 
c The comparison in propulsion performance of each mode, showing the average velocity (v̅robot, mm⋅s-1) and efficiency represented by the reciprocal of cost of transport (1/COT). In b and c, the error bars represent the standard error of the mean, and N is the number of trials Besides Mode B1 and Mode B2, the flexibility of this soft robot platform also allows combining swimming modes from other biological species into that of the ephyra. Therefore, we prescribe Mode B3 having an extra glide phase with duration tG after the contraction. Such a combination of stroke and glide phases is widely used by aquatic animals to save energy38. With such change, Mode B3 shows statistically significant increase in 1/COT (Fig. 2c), as the glide phase increases the displacement per cycle (Fig. 2a), while the work done within one cycle remains the same as Mode A. This result agrees with the fact that the inertia still plays an important role within the moderate ReB39. However, v̅robot does not increase significantly in Mode B3 as the peak speed, which is achieved at the end of the contraction, does not significantly change compared with Mode A. Moreover, further increasing tG would eventually slow down the robot due to the drag of the fluid and gravity (Supplementary Fig. 10f and Supplementary Note 10). At last, we also prescribe Mode C with a smaller beating amplitude by decreasing the recovery angle θR while maintaining the contraction angle θC (defined in the inset of Fig. 2b), as previous literature shows that θR and the lappet beating amplitude (θR−θC) of the ephyra gradually decrease during its growth27. This mode can help us understand how the robot performs with the kinematics of a larger-size ephyra than the one referred by Mode A28. Fig. 2c shows Mode C worsens v̅robot23 while 1/COT does not have a significant change. According to the dynamic model, the beating amplitude of Mode A is very close to the optimal value that maximize the swimming velocity, and further increasing or decreasing the beating amplitude of Mode A both slow down the robot (Supplementary Fig. 10c and Supplementary Note 10). Object collection capability of the five basic swimming modes Changing the swimming kinematics of the robot also affects its object manipulation capability. First, the object collecting performance of the robot is quantified in Fig. 3 by the exchange rate of the water volume into the sub-umbrella region (Qexchange, mm3⋅s−1). Qexchange is defined in Fig. 3b and implies how fast the water can be exchanged into the sub-umbrella region during one cycle, from the bottom boundary of the sub-umbrella. We assume the same amount of water volume is sucked in during the recovery and expelled out during the contraction (Supplementary Note 9). Therefore, we only integrate the feeding flow (vdrift−vrobot) initiating from the beginning (Fig. 3a) to the end of the recovery. In nature, ephyra relies on this feeding flow to carry the preys into its sub-umbrella region for further capture and digestion25. The object collecting performance of the five basic swimming modes. a The flow field at the beginning of the recovery phase. The feeding flow is integrated from this time instant until the end of the recovery phase to estimate the volume flow sucked in during the recovery. The stronger stopping vortex suggests a stronger feeding flow. b The definition of Qexchange. 
The equation estimates the volume of the feeding flow Volfeeding exchanged into the sub-umbrella region by applying an axisymmetric assumption and integrating the 2D velocity obtained from PIV along the reference line. c Qexchange of the five basic swimming modes. A higher value indicates that the robot can collect objects faster. The error bars represent the standard error of the mean. N is the number of trials.

As Qexchange is proportional to (vdrift−vrobot)/T, it can be increased either by increasing (vdrift−vrobot), that is, by making the robot move more slowly or the upward drift flow move faster, or by decreasing T to increase the frequency of engulfment within a given time period. The experimental results support this prediction (Fig. 3c, Supplementary Fig. 9b). First, Qexchange increases significantly in both Mode B1, where vdrift rises, and Mode B2, where vrobot decreases. Second, the glide phase in Mode B3 contributes negatively to Qexchange, as the extra tG increases T while (vdrift−vrobot) remains similar to Mode A. Third, Mode C lowers Qexchange as vdrift decreases and vrobot increases during the recovery, which results in a smaller (vdrift−vrobot).

Object retaining capability of the five basic swimming modes
In addition to the object collecting performance, Fig. 4 shows the object retaining performance of the robot, investigated by tracing the trajectories of the trapped beads (Supplementary Movie 3). Neutrally buoyant beads are used here to exclude the effect of gravity. The results are quantified by the number of cycles and the distance (mm) over which a bead is retained by the robot. After being trapped inside the sub-umbrella region, a neutrally buoyant bead has two ways to escape (Fig. 4c). In Mechanism-1, the bead can escape with a probability, PC, during the contraction phase. It acquires the momentum for escaping from the downward flow generated by the contraction, which can be further enhanced by the beating of the lappets. In Mechanism-2, the bead can escape with a probability, PR, during the recovery phase. It acquires the momentum for escaping from the stopping vortex and escapes from the distal tip of the lappet. Both PC and PR determine the retaining cycles and can be tuned with tC, tR and θR (θC does not change among the five basic modes) in our experiment. As accurately calculating PC and PR would require infinitely many experimental trials, we can only obtain the estimates P'C and P'R (Fig. 5g; see "Methods: Estimating the escaping probabilities"). As summarized in Figs. 5 and 6, a weaker contraction (larger tC) and a tighter sub-umbrella region (smaller θR) make the robot retain the beads for more cycles, while changing tR has opposing effects on P'C and P'R. A detailed discussion is provided below.

The object retaining performance of the five basic swimming modes. a Typical translation trajectories of neutrally buoyant beads. In the left frame, the bead has just entered the sub-umbrella region. In the right frame, the bead has completely escaped the control of the robot. b The retaining metrics quantified by neutrally buoyant beads. N is the number of counted beads. c Two escaping mechanisms. The trajectory sequence for Mechanism-1 is from Mode B1; the trajectory sequence for Mechanism-2 is from Mode B2. These sequences are chosen because each mechanism is most typical of the corresponding mode.

The influence of changing the kinematic parameters on object manipulation capability. a–d Typical trajectories of the neutrally buoyant beads achieved by different swimming modes. The changes in escaping probabilities are marked on the left.
e Recapture rate of beads being transported. A higher value indicates a lower chance of escaping through Mechanism-1. f Proportion of the beads escaping through Mechanism-2. A higher value indicates a higher chance of escaping through Mechanism-2. g The escaping probabilities estimated from experiments. In e–g, N represents the number of counted beads.

The mechanisms by which changing the robot's kinematic parameters changes the escaping probabilities. Comparisons of Modes B1, B2, B3, and C with Mode A are shown in a–d, respectively. The green up arrow, the red down arrow, and the black line indicate, respectively, an increase, a decrease, and no change in a kinematic parameter value or probability. The dashed and solid curves indicate, respectively, the positions of the robot body at the previous and current phases. The green and red dots indicate, respectively, the positions of the object at the previous and current phases.

First, P'C and P'R increase with decreasing tC (Mode B1, Figs. 5a, g and 6a). When tC shortens, the downward fluidic flow during contraction becomes stronger and the chance of physical contact between the lappets and the beads increases, enhancing Mechanism-1 and increasing P'C. In addition, the strong stopping vortex (Supplementary Fig. 8) induced by the high propulsion speed circulates more beads out of the sub-umbrella region during the upcoming recovery phase, enhancing Mechanism-2 and increasing P'R. With both P'C and P'R increased, the number of object retaining cycles decreases (Fig. 4b). However, the overall retaining distance of Mode B1 does not decrease significantly, as Mode B1 gives the robot a longer displacement per cycle (Fig. 2a), compensating for the loss in retaining cycles. Second, P'R increases with decreasing tR (Mode B2, Figs. 5b, 5g, and 6b), as a shorter tR creates a stronger stopping vortex (Supplementary Fig. 8) and more beads can be circulated out from the distal tip of the lappet during recovery through Mechanism-2 (Fig. 5f), increasing P'R. In contrast, P'C decreases with decreasing tR, as reducing tR increases the recapture of the beads beaten out during the last contraction phase, weakening Mechanism-1 (Fig. 5e; see "Methods: Bead trajectory tracing experiments"). Therefore, in Mode B2, the increase in P'R compensates for the decrease in P'C. Consequently, the retaining cycles do not show a statistically significant change (Fig. 4b). Third, P'C and P'R decrease with decreasing θR (Mode C, Figs. 5d, g and 6d). Reducing θR alone decreases ωC and ωR, which consequently reduces the magnitude and scale of the induced vortices (Supplementary Fig. 8). Therefore, the beads circulate more slowly inside the sub-umbrella region and are less likely to escape through either Mechanism-1 or Mechanism-2. Moreover, a smaller θR creates a tighter sub-umbrella region during contraction, which squeezes the beads towards the robot's central axis, where a stronger upward drift flow exists (Fig. 1b). This increases the chance of the beads being recaptured by the upcoming recovery phase, decreasing P'C (Fig. 5e; see "Methods: Bead trajectory tracing experiments"). During the recovery, the tighter sub-umbrella region gives the trapped beads more chances to collide with its inner wall as they try to escape through Mechanism-2, decreasing P'R. With P'C and P'R both reduced, Mode C greatly increases the retaining cycles and distance of the trapped beads (Fig. 4b). Finally, increasing tG does not alter P'C or P'R (Mode B3, Figs.
5c, g and 6c), and hence the retaining cycles remain unchanged (Fig. 4b). There is no fluid exchange during gliding (Supplementary Fig. 9b) and, consequently, the beads keep their positions relative to the robot. Therefore, Mode B3 shows no significant change in retaining cycles compared with Mode A. In each retaining cycle, however, the beads can travel a longer distance, as tG can be tuned to give the robot a longer displacement per cycle than Mode A (Supplementary Fig. 10f), which is the case for Mode B3 (Fig. 2a). Note that the probabilistic estimate of bead transportation is based on idealized assumptions. More experiments are needed to determine the extent to which P'C and P'R remain valid, for example, under more complex conditions.

Four robotic tasks realized by newly prescribed modes
The swimming propulsion and object manipulation performances of the above five basic swimming modes are summarized in Table 1. Table 1 shows that no swimming mode performs best in all aspects. For example, Mode B1, which is better in propulsion and object collection but worse in object retention, contrasts with Mode C, which is worse in propulsion and object collection but better in object retention. Based on Table 1, we prescribe additional modes (Modes D, E, F, and G) with different lappet kinematics to enable the robot to achieve specific tasks beyond swimming, as shown in Fig. 7.

Table 1 Summary of propulsion and object manipulation performances of different basic swimming modes

Four tasks realized by newly prescribed modes extended from the five basic modes. a Selective object transportation. The robot can use Mode D1 to transport large beads (diameter: 0.99 ± 0.025 mm, density: 1.05 g⋅cc−1) while expelling small beads (diameter: 500–600 µm, density: 1.05 g⋅cc−1). It can also use Mode D2 to transport small beads and leave large beads behind. b Burrowing for camouflage and searching in fine beads (diameter: 200–300 µm, density: 1.05 g⋅cc−1) with Mode E. c Locally mixing two food dyes with different colors using Mode F. d Generating a desired concentrated chemical path. The robot can swim upwards to create a straight chemical path with Mode C. It can also be steered in 2D to create a desired S-shaped chemical path with Mode G.

As the first task, the robot can selectively transport beads of different sizes from the bottom to the top of the water tank using the proposed Mode D1 (trapping large beads while expelling small beads) and Mode D2 (trapping small beads while leaving large beads behind), as shown in Fig. 7a (Supplementary Movie 4). Such selective transportation is useful for transporting and delivering drug-type cargo6 in biomedical applications, collecting samples of a specific size1,40 for environmental monitoring, and removing microplastics in environmental cleaning41. Beads heavier than water (density: 1.05 g⋅cc−1) are used here to ensure that they initially stay at the tank bottom for a fair comparison. For such beads, gravity enables a third escape mechanism (Mechanism-3): the beads escape if, while carried by the drift flow, they cannot catch up with the moving robot. An analysis of the drag force and gravity shows that, for the same density, larger beads escape more easily through Mechanism-3 and hence require a faster average feeding flow speed (v̅feeding) to catch up with the robot. A thorough discussion of the mechanism of selective transportation and of how the kinematics are prescribed can be found in Supplementary Note 12.
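To get a feel for why the two bead sizes respond so differently to the feeding flow, a rough comparison of their Stokes response (relaxation) times and settling velocities can be made. The short Python sketch below does this; it is not the analysis of Supplementary Note 12, only an order-of-magnitude illustration. The bead density and representative diameters are taken from the text, while the water properties and the Stokes-drag assumption itself are assumptions (the printed particle Reynolds numbers show that Stokes drag is stretched at these sizes, so the absolute numbers are indicative only).

```python
# Order-of-magnitude comparison of the large and small beads used for selective
# transport, assuming Stokes drag in water near 23 degC. This is NOT the analysis
# of Supplementary Note 12; it only illustrates why larger beads need a faster
# feeding flow to keep up with the robot.
g = 9.81          # m/s^2
mu = 0.93e-3      # Pa*s, approximate water viscosity at 23 degC (assumed)
rho_f = 997.0     # kg/m^3, water density (assumed)
rho_p = 1050.0    # kg/m^3, bead density (1.05 g/cc, from the text)

beads = {"large": 0.99e-3, "small": 0.55e-3}   # representative diameters in m, from the text

for name, d in beads.items():
    tau = rho_p * d**2 / (18 * mu)                      # response (relaxation) time
    v_settle = (rho_p - rho_f) * g * d**2 / (18 * mu)   # Stokes settling velocity
    re_p = rho_f * v_settle * d / mu                    # particle Reynolds number (validity check)
    print(f"{name}: tau ~ {tau * 1e3:.0f} ms, settling ~ {v_settle * 1e3:.1f} mm/s, Re_p ~ {re_p:.0f}")
```

Even with the Stokes assumption stretched, the qualitative picture matches the text: the larger beads settle faster and respond more sluggishly to changes in the surrounding flow, so they demand a larger v̅feeding to stay with the robot, which is what Mode D1 provides.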
In summary, we prescribe Mode D1 with a larger v̅feeding to trap and transport the large beads, while the small beads are expelled owing to their shorter relaxation time. In contrast, by reducing θR relative to Mode D1, we prescribe Mode D2 with a smaller v̅feeding and better object retaining capability. The robot can therefore leave the large beads behind while retaining the small beads. Such selective trapping and transportation is repeatable and is quantified in Supplementary Figs. 18 and 20.

As the second task, the robot can also burrow by interacting with the solid granular medium on the bottom surface. Inspired by sand-dwelling animals that burrow to hunt or to escape from predators42,43, here we show that our robot can burrow into granular media using Mode E, either to camouflage itself or to search for an object buried under the fine beads (Fig. 7b, Supplementary Movie 5). To realize burrowing, the recovery phase of Mode E is prescribed to be much stronger than the contraction phase (ωC = 17.36 rad⋅s−1 < ωR = 39.31 rad⋅s−1) and to reach the largest θR that can be achieved in our system (θR = 2.63 rad). Therefore, the robot cannot propel itself upwards and stays at the bottom. The recovery, which is stronger than in any other swimming mode, also makes the beads under the body easier to expel through Mechanism-2. Some of the beads are expelled directly to the side of the robot (Supplementary Fig. 21b). Others are first expelled upwards and then beaten further away by the lappets during the following recovery phases (Supplementary Fig. 21c). For camouflage, the robot must bury itself; it is therefore positioned flat. The beads under the robot are first expelled upwards and then gradually settle down and bury the robot body. For object searching, however, the robot must expel the beads at a tilted angle to prevent them from falling back and covering the target object again. The robot is therefore tilted to expel the beads obliquely, and it eventually finds the target black bead and expels it. A detailed discussion on the burrowing process can be found in Supplementary Note 13.

As the third task, we demonstrate that the robot can enhance the local mixing of fluids using Mode F (Fig. 7c, Supplementary Movie 6), inspired by the finding that the swimming of ephyrae helps to enhance ocean mixing at moderate Re29. Many marine species, such as asteroids, sea urchins and some corals, reproduce externally by releasing gametes into the surrounding flow44,45,46, and successful fertilization relies on sperm-egg contact45. Finding effective kinematics that allow untethered miniature swimming robots to mix fluids locally may therefore help such organisms by increasing the contact chance of the gametes, boosting their reproduction. Compared with Mode A, Mode F strengthens both the contraction (ωC = 27.61 rad⋅s−1) and the recovery (ωR = 27.70 rad⋅s−1) to locally enhance mixing. With the recovery phase slightly stronger than the contraction phase, the robot can remain suspended at the bottom center of the tank. During mixing, the robot first draws dye from both sides during the recovery, then squeezes the dye from both sides into the sub-umbrella region during the contraction, and finally redistributes the mixed dye back to the environment through Mechanism-1 and Mechanism-2 (Supplementary Fig. 22). A detailed discussion on the mixing process can be found in Supplementary Note 14.
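The mixing demonstration is quantified in Supplementary Fig. 22 and Supplementary Note 14; that analysis is not reproduced here. Purely as an illustration of how local mixing of two dyes could be scored from image data, the sketch below computes a common intensity-variance mixing index on synthetic frames: as the two dyes homogenize, the variance of the concentration field, normalized by its fully segregated value, decays from 1 towards 0. Everything in the snippet (the synthetic field, the blur standing in for stirring, and the index itself) is an assumption for illustration, not the authors' method.

```python
# Illustrative mixing index: normalized variance of a scalar "dye concentration"
# field. Synthetic fields stand in for dye images and Gaussian blurring stands in
# for stirring; this is not the quantification used in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

n = 128
field = np.zeros((n, n))
field[:, n // 2:] = 1.0      # fully segregated: dye A on the left, dye B on the right
ref_var = field.var()        # variance of the completely unmixed state

print(f"unmixed: mixing index = {field.var() / ref_var:.3f}")
for sigma in (1, 3, 8, 20):  # progressively more homogenized fields
    blurred = gaussian_filter(field, sigma=sigma)
    # The index decays from 1 (segregated) towards 0 (perfectly mixed).
    print(f"sigma = {sigma:>2}: mixing index = {blurred.var() / ref_var:.3f}")
```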
As the last task, we demonstrate that the robot can generate a desired chemical path in its wake by using Mode C (Fig. 7d, Supplementary Movie 7). This function could be useful for spreading pheromones (or other specific chemicals) to desired positions, allowing the robot to intentionally interact with aquatic animals by influencing their migration, mating, and various social behaviors47. We choose Mode C for this task as it has the best object retaining performance (Table 1) and can best resist the spreading of the chemicals. When swimming upwards with Mode C from a dye bolus injected at the tank bottom, the robot creates a straighter and more concentrated chemical path than with the other basic modes (Supplementary Fig. 23b). We then show that the chemical path can also be generated along a more complex S-shaped trajectory by using the external magnetic field to steer the robot in two dimensions with Mode G (Fig. 7d). Compared with Mode C, Mode G lets the robot beat and swim faster with a smaller sub-umbrella region. A detailed discussion on creating the chemical path can be found in Supplementary Note 15.

The untethered jellyfish-like soft millirobot, which has a similar size and flow-generation behavior to an ephyra, can achieve diverse physical functions and robotic tasks by manipulating its surrounding fluid flow. The ability to use the fluid flow to achieve multiple functions and tasks is independent of the magnetic field, since the induced flow structures rely only on the interplay between the robot body and the fluid. Therefore, the above design and swimming modes may potentially be realized by current or future jellyfish-like soft robots built with other on-board or off-board actuation methods48 such as biological muscle cells30, shape memory alloys5, hydraulic actuators1, dielectric elastomers2,49, hydrogels50, and liquid crystal elastomers51. In addition, this robotic platform could be used as a scientific tool to further study the behaviors of ephyrae, thanks to advantages such as the ability to change the locomotion mode on demand and its independence from physiological factors52,53. Ephyrae hold a critical position in the ocean ecosystem due to their abundance and wide distribution54. Although many previous studies have examined the most typical swimming kinematics of ephyrae23,25,26,27,28,29, the impact of changes in ephyra morphology and kinematics on their survivability and habitat, which can occur when the environment is affected by pollutants31,32,33, ion concentration changes34, and temperature variations24, remains to be investigated; this platform could serve as such a tool in future work.

Magnetic composite elastomer core
The detailed design of the magnetic composite elastomer core is shown in Supplementary Fig. 1a. The core has a thickness of 65 µm. The circumscribed circle of the core has a diameter of 3 mm. A circular hole with a diameter of 0.5 mm is designed in the robot center to trap the bubble. To ensure that the bubble trapped on the top does not pass through the hole to the bottom, an elastic ring with a thickness of 65 µm is placed on the top center of the magnetic composite elastomer core. The magnetic composite elastomer has been reported in our previous work7. It is a composite of a soft silicone rubber (Ecoflex 00–10, Smooth-On Inc.) and neodymium-iron-boron (NdFeB) magnetic microparticles (MQP-15-7, Magnequench; average diameter: 5 µm) with a mass ratio of 1:1.
The resulting magnetic elastomer has a density of 1.86 g⋅cm−3. The mixture is cast onto a flat poly(methyl methacrylate) plate coated with a thin layer of parylene C (6 µm thick) to form a thin film with a thickness of 65 µm. The material is then cured in an oven at 60 ℃ for around 1 h. After curing, the magnetic part is cut out using a UV laser. The elastic ring for fixing the bubble position is nonmagnetic and is made of Ecoflex 00–10 loaded with aluminum powder (5413 H Super, laborladen.de) in a weight ratio of 1:2. It is fabricated in a similar way to the magnetic core and then glued to the magnetic composite elastomer core with Ecoflex 00–10. The magnetic composite elastomers mentioned above are hydrophobic, as determined by the sessile droplet method. The static water contact angle of the material used to build the magnetic composite elastomer core is 108 ± 3°, and that of the material used to build the elastic ring is 110 ± 5°. To create the magnetization profile of the core, a water droplet is pipetted onto the core (Supplementary Fig. 1c). The core automatically wraps around the droplet due to the capillary attraction of the water droplet55, forming an ellipsoidal shape (Supplementary Figs. 1d and 1e). The droplet volume is controlled to 1 µL using a pipette. We then place the core together with the water droplet in a freezer until the droplet is completely frozen, which fixes the ellipsoidal shape of the core during the magnetization process. We finally apply a strong uniform B field (1.8 T) inside a vibrating sample magnetometer (VSM, EZ7, Microsense) in the direction shown in Supplementary Fig. 1d, which yields a magnetization magnitude of 71,700 ± 1725 A·m−1 and generates the magnetization profile shown in Supplementary Fig. 1f. With this magnetization profile, the magnetic composite elastomer core deforms upwards when By > 0 and downwards when By < 0 (Supplementary Fig. 1b). When By = 0, however, the core still shows a curvature. As discussed in our previous work7, this deformation at the rest state may be caused by residual strain energy from the fabrication process. We have not observed any influence of this residual strain energy on our experimental results. The robot can be steered by rotating the direction of the applied B field. With an additional horizontal coil pair, we can steer the robot in the 2D plane by applying a magnetic torque (Supplementary Movies 1 and 7). In the last scene of Supplementary Movie 1, the robot wobbles slightly when it tilts too far from the vertical swimming direction. However, this does not affect the steering of the robot. There are two possible reasons for this behavior. First, the deformation of the magnetic composite elastomer core is not perfectly axisymmetric, causing the net magnetic moment to deviate from the central axis and inducing an undesired magnetic torque. Second, the bubble trapped on top of the body may provide a self-righting torque that always rotates the robot's central axis back to the vertical. We observe that this phenomenon is alleviated when the beating frequency increases and the beating amplitude decreases, as in Supplementary Movie 7. This dynamic process will be investigated further in the future.

Passive lappets
The detailed design of the passive lappets is shown in Supplementary Fig. 2a.
Each passive lappet is composed of five parts: a T stopper, a proximal pad, a distal pad, a proximal joint and a distal joint. The T stopper, proximal pad, and distal pad are all made of parylene C and are cut out from a parylene C layer with a thickness of 6 µm. The proximal and distal joints linking the proximal and distal pads are made of Ecoflex 00–10 and have an average thickness of 15 µm. The T stopper restricts the upward bending of the proximal joint while having little influence on the downward bending. With this mechanism, the proximal joint bends less during the contraction phase than during the recovery phase (Supplementary Fig. 2c). With the robot center pinned on a pillar (restricting only vertical translation), the frequency response of the whole lappet, including both the magnetic lappet and the passive lappet, is tested under sinusoidal By fields of three magnitudes (10, 20, and 30 mT) and different frequencies (0.5–30 Hz) (see Supplementary Fig. 2b). Five measurements are conducted for each case. In this report, we borrow the concept of cut-off frequency from linear systems theory to clarify the relation between the actuating magnetic field (magnitude and frequency) and the resulting beating amplitude. We define the cut-off frequency of the lappet as the frequency at which the beating amplitude, θR−θC, is 0.707 times that achieved at 0.5 Hz. Linear least-squares regression is used to obtain the relation between the actuation frequency and the beating amplitude and thereby determine the cut-off frequency (Supplementary Fig. 2b). Supplementary Fig. 2b shows that the cut-off frequency of the lappet can be increased by increasing the B field magnitude. The non-linearity in the frequency response (e.g., that observed at 10 mT) will be investigated in the future.

Electromagnetic coil setup and particle image velocimetry system
The schematic of the experimental setup for the quantitative characterization experiments is shown in Supplementary Fig. 3a. The two electromagnetic coils providing the uniform vertical external magnetic field are arranged in a Helmholtz configuration. The 95% homogeneous region of the coils is measured to be 45 mm along the Y direction, and the largest |B| the system can provide is 30 mT. During the experiments, the robot swims in a transparent water tank situated in the central region of the coil system. The tank used for quantifying the five basic swimming modes measures 100 × 60 × 40 mm3 (length × width × height). To minimize viscosity changes due to temperature variation and to guarantee the comparability of the results, the coil system is water cooled, and all experiments are conducted at around 23 ℃. To minimize wall and surface effects, only experimental data obtained when the robot swims at least 5 mm away from the bottom and the water-air interface are used. The fluid flow around the robot is characterized by a PIV system (Dantec Dynamics, Inc.). The water is evenly seeded with nominally 1-µm-diameter polystyrene particles loaded with fluorochrome dye (Molecular Probes, Inc., Eugene, OR, USA; measured diameter 1.1 ± 0.035 µm), which can be excited by a laser at a 535 nm wavelength and emit fluorescence at 575 nm. The laser beam (1000 Hz, 527 nm) is expanded into a light sheet and projected vertically from the bottom of the water tank. The movement of the particles is captured using a high-speed camera (M310, Phantom, Inc.). A 570 nm high-pass lens filter is used to increase the contrast between the PIV particles and the background.
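The velocity fields themselves are obtained by cross-correlating successive particle images, as described next for the commercial DynamicStudio pipeline. As a minimal illustration of that core step (and not the actual pipeline), the sketch below recovers a known pixel shift between two synthetic interrogation windows and converts it into a velocity; the image calibration value is an assumed placeholder, and only the 1000 Hz frame interval is taken from the text.

```python
# Minimal sketch of the cross-correlation idea behind PIV velocity estimation.
# This is NOT the DynamicStudio processing used in the paper; it only demonstrates
# how a displacement peak is found for one synthetic interrogation window.
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(0)

frame_a = rng.random((64, 64))                 # synthetic "particle image"
true_shift = (3, -2)                           # imposed displacement in pixels (rows, cols)
frame_b = np.roll(frame_a, true_shift, axis=(0, 1))

a = frame_a - frame_a.mean()                   # zero-mean to suppress the background
b = frame_b - frame_b.mean()
corr = correlate(b, a, mode="full")            # cross-correlation map

peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = (peak[0] - (a.shape[0] - 1), peak[1] - (a.shape[1] - 1))

dt = 1.0 / 1000.0                              # s, matching the 1000 Hz rate in the text
mm_per_pixel = 0.05                            # assumed image calibration, not from the paper
velocity_mm_s = (shift[0] * mm_per_pixel / dt, shift[1] * mm_per_pixel / dt)

print("recovered pixel shift:", shift, "expected:", true_shift)
print("velocity estimate (mm/s):", velocity_mm_s)
```

In practice the image is tiled into many such interrogation windows, each yielding one velocity vector of the field.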
The recorded image sequences are then processed in commercial software (DynamicStudio 2016a, Dantec Dynamics, Inc.) to obtain velocity fields by applying a cross-correlation algorithm. The characterization experiments are all performed in this setup using multiple robots with the same design. No fatigue of the robots has been observed throughout our experiments. Currently, the robot is controlled by applying an oscillating magnetic field along its body's central axis. Therefore, the controllable degrees of freedom (DOFs) of the robot's rigid-body translational and rotational motions depend on the configuration of the electromagnetic coil system. With a single pair of fixed electromagnets, we have one control DOF for the 1-DOF rigid-body translational motion (e.g., swimming vertically in the characterization experiments). With one more pair of fixed electromagnets, one more controllable DOF for rotational motion is obtained, and the robot can be steered in 2D (e.g., generating the S-shaped chemical path). To improve the control performance, we can add more coils to the system56,57, improve the dynamic model (Supplementary Note 10), and implement visual feedback control.

Definition of the sub-umbrella region
The sub-umbrella or bell region is the area used by the robot to trap objects, similar to an ephyra trapping its prey. The sub-umbrella region is defined in Supplementary Fig. 3b. When the robot deforms its body into a bell shape (0.08–0.24 s in Supplementary Fig. 3b), the sub-umbrella region is defined as the enclosed region between a reference line that links the lappet tips of both sides and the whole robot body30. When the robot deforms into an inverted-bell shape (0 s, 0.32–0.40 s in Supplementary Fig. 3b), the sub-umbrella region is defined as the area enclosed by the robot body and a horizontal reference line 300 µm below the bottom of the robot. This 300 µm distance is selected to accommodate at least one small bead (used in the bead trajectory tracing experiments, 212–250 µm in diameter) right below the robot's body.

Kinematics of five basic swimming modes
We use two approaches to tune the five basic swimming modes. All the control signals used are shown in Supplementary Fig. 5d. In the first approach, we tune the duration of each phase of Mode A while maintaining the beating amplitude to generate the other three basic swimming modes (Modes B1, B2, and B3). Specifically, Mode B1 has a shorter contraction (tC), 21.76% of that in Mode A, while the recovery duration (tR) is kept unchanged. Consequently, the average angular velocity of the contraction, ωC = (θR−θC)/tC = 55.85 rad⋅s−1, is larger than the ωC = 12.16 rad⋅s−1 of Mode A, and the contraction is more powerful. Mode B2 has a shorter recovery (tR), 50.13% of that in Mode A, while tC is the same. Consequently, the average angular velocity of the recovery, ωR = (θR−θC)/tR = 11.69 rad⋅s−1, is larger than the ωR = 5.89 rad⋅s−1 of Mode A, and the recovery is more powerful. Mode B3 has an extra glide phase compared with Mode A, with a glide duration tG = 0.2 s. In the second approach, we prescribe Mode C with a smaller beating amplitude by decreasing θR while keeping θC and the duration of each phase the same as in Mode A. Consequently, the average angular velocities of both the contraction and the recovery phases (ωC = 6.27 rad⋅s−1, ωR = 3.04 rad⋅s−1) are lower than those of Mode A.
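As a quick consistency check of these values, note that ωC = (θR−θC)/tC and ωR = (θR−θC)/tR, so scaling a phase duration scales the corresponding average angular velocity inversely while the amplitude is held fixed. The snippet below reproduces the Mode B1 and Mode B2 angular velocities from the Mode A values and the stated percentage changes; the small residual differences are presumably due to rounding in the reported percentages.

```python
# Consistency check using omega = (theta_R - theta_C) / t for each phase.
# Only values quoted in the text are used; absolute phase durations are not needed.
mode_A = {"omega_C": 12.16, "omega_R": 5.89}   # rad/s, Mode A (from the text)

omega_C_B1 = mode_A["omega_C"] / 0.2176        # Mode B1: t_C shortened to 21.76% of Mode A
omega_R_B2 = mode_A["omega_R"] / 0.5013        # Mode B2: t_R shortened to 50.13% of Mode A

print(f"Mode B1 omega_C ~ {omega_C_B1:.2f} rad/s (reported: 55.85)")
print(f"Mode B2 omega_R ~ {omega_R_B2:.2f} rad/s (reported: 11.69)")
```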
We do not change the lappet kinematics by simply changing the actuation frequency, f, for the following reasons. First, since the beating period T = 1/f = tC + tR + tG, changing the actuation frequency f alters tC, tR and tG at the same time. Second, changing f alone also changes the beating angles θR and θC, as well as the beating amplitude θR−θC (Supplementary Fig. 2b). Therefore, changing only f is not appropriate if the impact of each kinematic parameter (tC, tR, tG, θR, and θC) is to be investigated. See Supplementary Notes 3 and 4 for details of tuning individual kinematic parameters while keeping the others fixed. Note that θC does not change in any of the characterization modes. Under the current experimental setup, the minimum θC, the maximum θR, the maximum ωC, and the maximum ωR that can be achieved are, respectively, 0.44 rad, 2.63 rad, 55.85 rad⋅s−1, and 39.31 rad⋅s−1.

Bead trajectory tracing experiments
The bead trajectory tracing experiments shown in Fig. 4 are conducted in a transparent water tank with a size of 100 × 60 × 40 mm3 (length × width × height). Polystyrene beads with diameters of 212–250 µm and a density of 1.00 g⋅cc−1 (Cospheric, Inc.) are used to exclude the effect of gravity. This is in accordance with the natural prey of ephyrae, whose sizes range from 100 to 5000 µm26 and which are often regarded as neutrally buoyant58. In each experimental trial, the robot swims upwards from the tank bottom, and the beads scattered in the water are randomly captured by the robot. The motion of the robot and the beads is captured by a high-speed camera at a frame rate of 500 frames per second (fps). For each swimming mode, the experiment is repeated 5–10 times. See Fig. 4a, Fig. 5a–d, and Supplementary Movie 3 for the typical transportation trajectory of a bead in each mode. We manually trace the trajectories of the beads captured into the sub-umbrella region and use two metrics, the retaining cycles and the retaining distance (mm), to quantify the object retaining performance. The retaining cycles are defined as the number of cycles for which a bead is retained during the trapping process. The retaining distance is defined as the distance over which a bead is transported during the trapping process. The trapping process begins at the time instant when the bead is first captured into the sub-umbrella region and terminates when the bead completely escapes. Here, 'completely' means that the escaped bead is not recaptured into the sub-umbrella region during the rest of the trial (i.e., until the end of the whole upwards swimming process). Apart from the metrics defined above, we also quantify two indicators: the recapture rate and the proportion of escapes through Mechanism-2 (Fig. 5e, f). Recapture usually happens to a bead that is beaten out during the contraction (Mechanism-1) but remains near the drift flow of the robot; the upcoming recovery phases can then pull the bead back into the sub-umbrella region. The recapture rate is the proportion of beads that are recaptured during the transportation process, among all the beads that are transported. It indicates whether escape through Mechanism-1 is reduced. The proportion of escapes through Mechanism-2 is the fraction of beads that escape through Mechanism-2, among all the beads that escape. A higher value indicates that a trapped bead has a higher chance of escaping during the recovery.
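For concreteness, the sketch below shows how per-bead records of this kind could be reduced to the reported quantities: the mean retaining cycles and retaining distance with their standard errors, the recapture rate, and the proportion of Mechanism-2 escapes. The bead records in the snippet are made-up placeholders, not data from the paper.

```python
# Reduce manually traced per-bead records to the retaining and escape metrics
# defined above. The records below are illustrative placeholders only.
import statistics

beads = [
    # cycles retained, distance transported (mm), recaptured at least once, escape mechanism
    {"cycles": 4, "distance_mm": 6.1, "recaptured": True,  "mechanism": 2},
    {"cycles": 2, "distance_mm": 3.0, "recaptured": False, "mechanism": 1},
    {"cycles": 7, "distance_mm": 9.8, "recaptured": True,  "mechanism": 2},
    {"cycles": 3, "distance_mm": 4.2, "recaptured": False, "mechanism": 1},
]
n = len(beads)

cycles = [b["cycles"] for b in beads]
dist = [b["distance_mm"] for b in beads]
sem = lambda xs: statistics.stdev(xs) / len(xs) ** 0.5     # standard error of the mean

recapture_rate = sum(b["recaptured"] for b in beads) / n   # high value: less Mechanism-1 escape
prop_mech2 = sum(b["mechanism"] == 2 for b in beads) / n   # among all beads that escape

print(f"retaining cycles: {statistics.mean(cycles):.2f} +/- {sem(cycles):.2f} (SEM), N = {n}")
print(f"retaining distance: {statistics.mean(dist):.2f} +/- {sem(dist):.2f} mm (SEM)")
print(f"recapture rate: {recapture_rate:.2f}, Mechanism-2 escape proportion: {prop_mech2:.2f}")
```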
Since the beads are captured randomly, the number of counted beads differs between swimming modes. In Modes A, B1, B2, B3, and C, the numbers of counted beads are, respectively, 32, 30, 35, 31, and 18.

Estimating the escaping probabilities
The bead transportation process shown in our experiments (Fig. 4) is very complex. The neutrally buoyant beads are randomly seeded in the water and are randomly trapped by the robot, and it is hard to find a simple, deterministic rule that predicts the trajectory of each trapped bead. Since we know little about the whole transportation process, probabilistic models are suitable for describing the complex behavior of the system59. For a neutrally buoyant bead, we assume that each swimming cycle of the robot is an independent event with two possible outcomes. In the first outcome, the trapped bead escapes the sub-umbrella region with probability Pout. In the second outcome, the trapped bead remains within the sub-umbrella region with probability 1−Pout. If we further assume that the probabilities of these two outcomes are constant, the number of beating cycles needed to expel a trapped bead follows a geometric distribution. The expected number of retaining cycles can then be expressed as:

$$E_{\mathrm{retain}} = \lim_{k \to \infty} \sum_{n = 1}^{k} n\,(1 - P_{\mathrm{out}})^{n - 1} P_{\mathrm{out}}$$

In the first outcome, the escape probability Pout can be expressed as Pout = PC + PR, because the bead can escape either through Mechanism-1 with probability PC (during contraction) or through Mechanism-2 with probability PR (during recovery). With the above assumptions, Equation 1 can be reformulated as:

$$\begin{aligned} E_{\mathrm{retain}} &= \lim_{k \to \infty} \sum_{n = 1}^{k} n\,(1 - P_{\mathrm{out}})^{n - 1} \left( P_{\mathrm{C}} + P_{\mathrm{R}} \right) \\ &= P_{\mathrm{C}} \lim_{k \to \infty} \sum_{n = 1}^{k} n\,(1 - P_{\mathrm{out}})^{n - 1} + P_{\mathrm{R}} \lim_{k \to \infty} \sum_{n = 1}^{k} n\,(1 - P_{\mathrm{out}})^{n - 1} \\ &= E_{\mathrm{retain}-1} + E_{\mathrm{retain}-2} \end{aligned}$$

where Eretain−1 and Eretain−2 are, respectively, the expected numbers of retaining cycles for a bead expelled through Mechanism-1 and through Mechanism-2. Eretain−1 and Eretain−2 can further be derived as:

$$E_{\mathrm{retain}-1} = \lim_{k \to \infty} P_{\mathrm{C}} \left( \frac{1 - (1 - P_{\mathrm{out}})^{k}}{P_{\mathrm{out}}^{2}} - \frac{k (1 - P_{\mathrm{out}})^{k}}{P_{\mathrm{out}}} \right), \qquad E_{\mathrm{retain}-2} = \lim_{k \to \infty} P_{\mathrm{R}} \left( \frac{1 - (1 - P_{\mathrm{out}})^{k}}{P_{\mathrm{out}}^{2}} - \frac{k (1 - P_{\mathrm{out}})^{k}}{P_{\mathrm{out}}} \right)$$

Because 0 < Pout < 1 and k → ∞, we obtain \(E_{\mathrm{retain}-1} = P_{\mathrm{C}}/P_{\mathrm{out}}^{2}\) and \(E_{\mathrm{retain}-2} = P_{\mathrm{R}}/P_{\mathrm{out}}^{2}\) from Eq. 3. As it is impossible to perform infinitely many experimental trials (k → ∞), we cannot theoretically obtain PC and PR from the experimental results. However, if we take the average retaining cycles of the trapped beads from the bead tracing experiments (Fig.
4b) as reasonable estimates of Eretain−1 and Eretain−2, we can obtain the probabilities P'C and P'R for each swimming mode (Fig. 5g). Therefore, if P'C = PC and P'R = PR, then Eretain−1 and Eretain−2 are the expected values of the cycle numbers recorded in the experiment. In reality, each beating cycle does influence the following one, since the initial position and velocity of the bead in the following cycle are set by the previous beat. The probabilities are given here to provide insight into the experimental results and a guideline for designing the kinematics. Rigorous stochastic modeling of the retaining capability will be conducted in the future. Beads denser than water may also fall out of the sub-umbrella region or lag behind the robot due to gravity, in addition to escaping through Mechanisms-1 and -2. This is classified as escape Mechanism-3, which is discussed in Supplementary Note 12 and is not included here.

Materials used in the four demonstrated tasks
In the selective transportation experiments, we use two kinds of polystyrene beads with different sizes (Polysciences, Inc.). The diameters of the large beads are 965–1015 µm and those of the small beads are 500–600 µm; both kinds have the same density (1.05 g⋅cc−1). In the burrowing experiments, the fine beads have diameters of 200–300 µm and a density of 1.05 g⋅cc−1 (Polysciences, Inc.). The target objects to be found are large beads painted with black ink (diameter: 965–1015 µm, density: 1.05 g⋅cc−1, Polysciences, Inc.). In the local mixing experiments, the dyes used for the demonstrations are food dyes (Bakeryteam, GmbH). In the chemical path generation experiments, we use fluorescein sodium as the chemical to be distributed (Fisher Scientific U.K., Ltd.).

Statistical method and normalization
Because of the limited sample size, the two-sided Wilcoxon rank-sum test is applied to examine statistical significance. The test is conducted between the values of Mode A and the values of each of the other four basic swimming modes. Asterisks denote statistically significant differences: *, **, and *** denote P ≤ 0.05, P ≤ 0.01, and P ≤ 0.001, respectively. To easily visualize the difference between Mode A and the other modes, the experimental data are normalized: the values in each data set are first divided by the average value of Mode A and then reduced by 1. A positive value therefore indicates that the corresponding swimming mode outperforms Mode A, while a negative value indicates that it underperforms Mode A.

All data generated or analyzed during this study are included in the published article and its Supplementary Information, and are available from the corresponding author on reasonable request. The MATLAB codes used in this study are available from the corresponding author on reasonable request. Peer review information: Nature Communications thanks Tiefeng Li and other anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Frame, J., Lopez, N., Curet, O. & Engeberg, E. D. Thrust force characterization of free-swimming soft robotic jellyfish. Bioinspir. Biomim. 13, 064001 (2018). Tingyu, C. et al. Untethered soft robotic jellyfish. Smart Mater. Struct. 28, 015019 (2019). Chen, Y. F. et al. A biologically inspired, flapping-wing, hybrid aerial-aquatic microrobot. Sci. Robot. 2, eaao5619 (2017). Katzschmann, R. K., DelPreto, J., MacCurdy, R. & Rus, D.
Exploration of underwater life with an acoustically controlled soft robotic fish. Sci. Robot. 3, eaar3449 (2018). Villanueva, A., Smith, C. & Priya, S. A biomimetic robotic jellyfish (Robojelly) actuated by shape memory alloy composite actuators. Bioinspir. Biomim. 6, 036004 (2011). Sitti, M. et al. Biomedical Applications of Untethered Mobile Milli/Microrobots. Proc. IEEE 103, 205–224 (2015). Hu, W., Lum, G. Z., Mastrangeli, M. & Sitti, M. Small-scale soft-bodied robot with multimodal locomotion. Nature 554, 81–85 (2018). Huang, H. W., Sakar, M. S., Petruska, A. J., Pane, S. & Nelson, B. J. Soft micromachines with programmable motility and morphology. Nat. Commun. 7, 12263 (2016). Sitti, M. Mobile Microrobotics. (MIT Press, Cambridge, MA, 2017). Floyd, S., Pawashe, C. & Sitti, M. Two-dimensional contact and noncontact micromanipulation in liquid using an untethered mobile magnetic microrobot. IEEE Trans. Robot. 25, 1332–1342 (2009). Pawashe, C., Floyd, S., Diller, E. & Sitti, M. Two-dimensional autonomous microparticle manipulation strategies for magnetic microrobots in fluidic environments. IEEE Trans. Robot. 28, 467–477 (2012). Ye, Z., Diller, E. & Sitti, M. Micro-manipulation using rotational fluid flows induced by remote magnetic micro-manipulators. J. Appl. Phys. 112, 064912 (2012). Ye, Z. & Sitti, M. Dynamic trapping and two-dimensional transport of swimming microorganisms using a rotating magnetic microrobot. Lab Chip 14, 2177–2182 (2014). Hu, W., Fan, Q. & Ohta, A. T. An opto-thermocapillary cell micromanipulator. Lab Chip 13, 2285–2291 (2013). Petit, T., Zhang, L., Peyer, K. E., Kratochvil, B. E. & Nelson, B. J. Selective trapping and manipulation of microscale objects using mobile microvortices. Nano Lett. 12, 156–160 (2011). Zhang, L., Petit, T., Peyer, K. E. & Nelson, B. J. Targeted cargo delivery using a rotating nickel nanowire. Nanomed. Nanotechnol. Biol. Med. 8, 1074–1080 (2012). Tung, H. W., Peyer, K. E., Sargent, D. F. & Nelson, B. J. Noncontact manipulation using a transversely magnetized rolling robot. Appl. Phys. Lett. 103, 114101 (2013). Zhang, L., Peyer, K. E. & Nelson, B. J. Artificial bacterial flagella for micromanipulation. Lab Chip 10, 2203–2215 (2010). Zhou, Q., Petit, T., Choi, H., Nelson, B. J. & Zhang, L. Dumbbell fluidic tweezers for dynamical trapping and selective transport of microobjects. Adv. Funct. Mater. 27, 1604571 (2017). Huang, T. Y. et al. Generating mobile fluidic traps for selective three-dimensional transport of microobjects. Appl. Phys. Lett. 105, 114102 (2014). El Yacoubi, A., Xu, S. & Wang, Z. J. Computational study of the interaction of freely moving particles at intermediate Reynolds numbers. J. Fluid Mech. 705, 134–148 (2012). McHenry, M. J. & Jed, J. The ontogenetic scaling of hydrodynamics and swimming performance in jellyfish (Aurelia aurita). J. Exp. Biol. 206, 4125–4137 (2003). Blough, T., Colin, S. P., Costello, J. H. & Marques, A. C. Ontogenetic changes in the bell morphology and kinematics and swimming behavior of rowing medusae: the special case of the limnomedusa Liriope tetraphylla. Biol. Bull. 220, 6–14 (2011). Nawroth, J. C., Feitl, K. E., Colin, S. P., Costello, J. H. & Dabiri, J. O. Phenotypic plasticity in juvenile jellyfish medusae facilitates effective animal–fluid interaction. Biol. Lett. 6, 389–393 (2010). Higgins, J. III, Ford, M. & Costello, J. Transitions in morphology, nematocyst distribution, fluid motions, and prey capture during development of the scyphomedusa Cyanea capillata. Biol. Bull. 214, 29–41 (2008). 
Sullivan, B. K., Suchman, C. L. & Costello, J. H. Mechanics of prey selection by ephyrae of the scyphomedusa Aurelia aurita. Mar. Biol. 130, 213–222 (1997). Nagata, R. M., Morandini, A. C., Colin, S. P., Migotto, A. E. & Costello, J. H. Transitions in morphologies, fluid regimes, and feeding mechanisms during development of the medusa Lychnorhiza lucerna. Mar. Ecol. Prog. Ser. 557, 145–159 (2016). Feitl, K. E., Millett, A. F., Colin, S. P., Dabiri, J. O. & Costello, J. H. Functional morphology and fluid interactions during early development of the scyphomedusa Aurelia aurita. Biol. Bull. 217, 283–291 (2009). Nawroth, J. C. & Dabiri, J. O. Induced drift by a self-propelled swimmer at intermediate Reynolds numbers. Phys. Fluids 26, 091108 (2014). Nawroth, J. C. et al. A tissue-engineered jellyfish with biomimetic propulsion. Nat. Biotechnol. 30, 792–797 (2012). Faimali, M. et al. Ephyra jellyfish as a new model for ecotoxicological bioassays. Mar. Environ. Res. 93, 93–101 (2014). Costa, E. et al. Effect of neurotoxic compounds on ephyrae of Aurelia aurita jellyfish. Hydrobiologia 759, 75–84 (2015). Echols, B. S., Smith, A. J., Gardinali, P. R. & Rand, G. M. The use of ephyrae of a scyphozoan jellyfish, Aurelia aurita, in the aquatic toxicological assessment of Macondo oils from the Deepwater Horizon incident. Chemosphere 144, 1893–1900 (2016). Hoffmann, C. & Smith, D. F. Lithium and rubidium: effects on the rhythmic swimming movement of jellyfish (Aurelia aurita). Experientia 35, 1177–1178 (1979). Dabiri, J. O., Colin, S. P., Costello, J. H. & Gharib, M. Flow patterns generated by oblate medusan jellyfish: field measurements and laboratory analyses. J. Exp. Biol. 208, 1257–1265 (2005). Gemmell, B. J. et al. Passive energy recapture in jellyfish contributes to propulsive advantage over other metazoans. Proc. Natl Acad. Sci. 110, 17904–17909 (2013). Katija, K. & Dabiri, J. O. A viscosity-enhanced mechanism for biogenic ocean mixing. Nature 460, 624–626 (2009). Akoz, E. & Moored, K. W. Unsteady propulsion by an intermittent swimming gait. J. Fluid Mech. 834, 149–172 (2018). Herschlag, G. & Miller, L. Reynolds number limits for jet propulsion: a numerical study of simplified jellyfish. J. Theor. Biol. 285, 84–95 (2011). Galloway, K. C. et al. Soft robotic grippers for biological sampling on deep reefs. Soft Robot. 3, 23–33 (2016). Law, K. L. & Thompson, R. C. Oceans. Micro. Seas. Sci. 345, 144–145 (2014). Hanlon, R. T., Watson, A. C. & Barbosa, A. A "mimic octopus" in the Atlantic: flatfish mimicry and camouflage by Macrotritopus defilippi. Biol. Bull. 218, 15–24 (2010). Able, K. W., Grimes, C. B., Cooper, R. A. & Uzmann, J. R. Burrow construction and behavior of Tilefish, Lopholatilus-Chamaeleonticeps, in Hudson Submarine-Canyon. Environ. Biol. Fishes 7, 199–205 (1982). Dams, B., Blenkinsopp, C. E. & Jones, D. O. B. Behavioural modification of local hydrodynamics by asteroids enhances reproductive success. J. Exp. Mar. Biol. Ecol. 501, 16–25 (2018). Levitan, D. R., Sewell, M. A. & Chia, F. S. Kinetics of fertilization in the Sea Urchin Strongylocentrotus franciscanus: interaction of gamete dilution, age, and contact time. Biol. Bull. 181, 371–378 (1991). Harrison, P. L. et al. Mass spawning in tropical reef corals. Science 223, 1186–1189 (1984). Wyatt, T. D. How animals communicate via pheromones. Am. Sci. 103, 114–121 (2015). Hines, L., Petersen, K., Lum, G. Z. & Sitti, M. Soft actuators for small-scale robotics. Adv. Mater. 29, 1603483 (2017). Christianson, C., Goldberg, N. N., Deheyn, D. 
D., Cai, S. Q. & Tolley, M. T. Translucent soft robots driven by frameless fluid electrode dielectric elastomer actuators. Sci. Robot. 3, eaat1893 (2018). Cangialosi, A. et al. DNA sequence–directed shape change of photopatterned hydrogels via high-degree swelling. Science 357, 1126–1130 (2017). Ware, T. H., McConney, M. E., Wie, J. J., Tondiglia, V. P. & White, T. J. Voxelated liquid crystal elastomers. Science 347, 982–984 (2015). Ijspeert, A. J. Biorobotics: Using robots to emulate and investigate agile locomotion. Science 346, 196–203 (2014). Gravish, N. & Lauder, G. V. Robotics-inspired biology. J. Exp. Biol. 221, jeb138438 (2018). Purcell, J. E. & Angel, D. L. Jellyfish Blooms: New Problems and Solutions. (Springer, Dordrecht, 2015). Marchand, A., Weijs, J. H., Snoeijer, J. H. & Andreotti, B. Why is surface tension a force parallel to the interface? Am. J. Phys. 79, 999–1008 (2011). Lum, G. Z. et al. Shape-programmable magnetic soft matter. Proc. Natl Acad. Sci. 113, E6007–E6015 (2016). Kummer, M. P. et al. OctoMag: an electromagnetic system for 5-DOF wireless micromanipulation. IEEE Trans. Robot. 26, 1006–1017 (2010). Peng, J. & Dabiri, J. O. Transport of inertial particles by Lagrangian coherent structures: application to predator–prey interaction in jellyfish feeding. J. Fluid Mech. 623, 75 (2009). Simon, J. The Art of Empirical Investigation. (Routledge, New York, 2017). We thank all members of the Physical Intelligence Department at the Max Planck Institute for Intelligent Systems for their comments. We also thank J. H. Costello for helpful discussions. This work is funded by the Max Planck Society. These authors contributed equally: Ziyu Ren, Wenqi Hu. Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569, Stuttgart, Germany Ziyu Ren , Wenqi Hu , Xiaoguang Dong & Metin Sitti Search for Ziyu Ren in: Search for Wenqi Hu in: Search for Xiaoguang Dong in: Search for Metin Sitti in: M.S., Z.R., W.H. and X.D. proposed and designed the research. Z.R., W.H., and X.D. performed all experiments. Z.R., W.H. and X.D. developed the theoretical models and performed the simulations. The experimental data were analyzed by Z.R., W.H. and X.D. All authors wrote the paper and participated in discussions. Correspondence to Metin Sitti. Peer Review File Description of Additional Supplementary Files Supplementary Movie 1
Results for 'Adel Sheibani' Investigation of Freedom Stemma in the Constitution of the Islamic Republic of Iran: A Genealogy Viewpoint.Adel Sheibani & Alireza Dabirnia - forthcoming - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique:1-14.details Among the civilization challenges, recognizing the human freedom and drawing its constraints have continuously been contestable. Hence, the recognition, definition, pledge and delineation of the boundaries of freedom have a special position in the declaration of rights. In the meantime, considering the historical intersection of jurisprudential foundations with the modernist thoughts, defining the concept of freedom and delineating its boundaries will be crucial. This definition provides a groundwork for elucidating and interpreting other Articles via identifying their positions in creating solutions (...) to the problems of public administration. The current study is an effort to recognize the concept of freedom and its boundaries in the constitutional system of the Islamic Republic of Iran via employing a descriptive-analytical method. Our findings indicate that the perception of freedom in the Constitution of the Islamic Republic of Iran is convoluted and interpretable. Because of the existing ambiguities in balancing among the historical concerns, the boundaries of freedom can be redefined based on the temporal and historical circumstances by the pressure of social forces. (shrink) Arabic and Islamic Philosophy in Philosophical Traditions, Miscellaneous Constructions: A New Theoretical Approach to Language.Adele E. Goldberg - 2003 - Trends in Cognitive Sciences 7 (5):219-224.details A new theoretical approach to language has emerged in the past 10–15 years that allows linguistic observations about form–meaning pairings, known as 'construc- tions', to be stated directly. Constructionist approaches aim to account for the full range of facts about language, without assuming that a particular subset of the data is part of a privileged 'core'. Researchers in this field argue that unusual constructions shed light on more general issues, and can illuminate what is required for a complete account of (...) language. (shrink) Philosophy of Linguistics in Philosophy of Language On The Decision Problem For Two-Variable First-Order Logic, By, Pages 53 -- 69.Erich Gr\"Adel, Phokion Kolaitis & Moshe Vardi - 1997 - Bulletin of Symbolic Logic 3 (1):53-69.details Argument Structure Constructions Versus Lexical Rules or Derivational Verb Templates.Adele E. Goldberg - 2013 - Mind and Language 28 (4):435-465.details The idea that correspondences relating grammatical relations and semantics (argument structure constructions) are needed to account for simple sentence types is reviewed, clarified, updated and compared with two lexicalist alternatives. Traditional lexical rules take one verb as 'input' and create (or relate) a different verb as 'output'. More recently, invisible derivational verb templates have been proposed, which treat argument structure patterns as zero derivational affixes that combine with a root verb to yield a new verb. While the derivational template perspective (...) can address several problems that arise for traditional lexical rules, it still faces problems in accounting for idioms, which often contain specifications that are not appropriately assigned to individual verbs or derivational affixes (regarding adjuncts, modification, and inflection). 
International Breastfeeding Journal

Exclusivity of breastfeeding and body composition: learnings from the Baby-bod study

Sisitha Jayasinghe ORCID: orcid.org/0000-0001-8805-385X1, Manoja P. Herath1, Jeffrey M. Beckett1, Kiran D. K. Ahuja1, Nuala M. Byrne1 & Andrew P. Hills1

International Breastfeeding Journal volume 16, Article number: 41 (2021)

This report evaluated the breastfeeding status in a Tasmanian cohort and its effects on infant and maternal anthropometry and body composition.

An observational cohort analysis of self-reported feeding data from 175 Tasmanian mother-baby dyads (recruited via in-person contact between September 2017 and October 2019) was conducted. Only mothers who were ≥ 18 years of age, who had a singleton pregnancy and were able to speak and understand English, were included in the study. Infants outside a gestational age range between 37+0 and 41+6 weeks were excluded. Infant (using air displacement plethysmography) and maternal body composition was assessed at 0, 3 and 6 months. Analysis of variance with relevant statistical corrections was used for cross-sectional and longitudinal comparisons between non-exclusively breastfed (neBF) and exclusively breastfed (eBF) groups.

Fat-free mass was significantly higher [t = 2.27, df = 98, P = 0.03, confidence interval (CI) 0.03, 0.48] in neBF infants at 6 months (5.59 ± 0.59 vs 5.33 ± 0.50 kg) despite a higher mean fat-free mass in eBF infants at birth (2.89 ± 0.34 vs 3.01 ± 0.35 kg). Weak evidence for different fat mass index trajectories was observed for eBF and neBF infants in the first 6 months of life (ANOVA, F = 2.42, df = 1.9, P = 0.09), with an inversion in fat mass index levels between 3 and 6 months. Body mass index (BMI) trajectories were significantly different in eBF and neBF mothers through pregnancy and the first 6 months postpartum (ANOVA, F = 5.56, df = 30.14, P = 0.01). Compared with eBF mothers, neBF mothers retained significantly less weight (t = − 2.754, df = 158, P = 0.02, CI -6.64, − 1.09) at 3 months (0.68 ± 11.69 vs 4.55 ± 6.08 kg) postpartum. Prevalence of neBF was incrementally higher in mothers with obesity compared with mothers with a normal BMI, and mothers who underwent surgical or medical intervention during birth were less likely to exclusively breastfeed.

Infants with different feeding patterns may display varying growth patterns in early life, and sustained breastfeeding can contribute to greater postpartum maternal weight loss.

Prenatal and early postnatal nutrition plays a significant role in reducing the risk of childhood obesity [1, 2]. In this context, breast milk has been identified as one of the most efficient, natural and cost-effective means of optimising nutrition in early life [3, 4]. Although the exact mechanisms are yet to be elucidated, a substantial body of literature supports the preventative capacity of breastfeeding during infancy against later-life incidence of obesity and related comorbidities [5,6,7,8]. The benefit to infants from prolonged breast milk consumption is not limited to prevention of obesity; it is also associated with cognitive, immune, and healthy digestive system development [9,10,11]. Breastfeeding/lactation can also have a profound impact on maternal health. Evidence indicates that breastfeeding can reduce the incidence of numerous metabolic and physiological complications in mothers, including type 2 diabetes [12, 13], metabolic syndrome [14] and cardiovascular disease [15].
Pregnancy is commonly associated with an increase in visceral fat, insulin production and circulating lipid levels [16], and a delayed return to pre-pregnancy levels may place mothers at an elevated risk of developing deleterious metabolic conditions. According to the 'Reset Hypothesis', lactation can reverse some of these trends via mobilization of fat stores accumulated during pregnancy and be a potential vehicle for reductions in the incidence of metabolic disease [17]. Lactation also plays an important role in postpartum maternal weight management with exclusive breastfeeding (eBF) acting as an effective means of weight loss following childbirth [18, 19]. Despite the acknowledged importance of eBF as the best possible nutritional start to life for infants, 78 million babies worldwide (estimated in 2018) are not breastfed within the first hour of life [20]. The ideal timeframe for eBF lacks global consensus; however, the World Health Organization (WHO) recommends eBF for at least 6 months for all infants [21,22,23]. Initiation, duration, continuity, and cessation of breastfeeding depend on a multitude of factors including maternal obesity, use of medications (particularly ones that are prescribed at labour), perceived insufficient milk supply, and mode of birth/birth complications [24,25,26,27,28,29]. In addition, socio-economic status is considered to be a major determinant of breastfeeding pattern [30]. Previous reports have suggested that socioeconomically disadvantaged regions in Australia, including in Tasmania, have low rates of breastfeeding compared with other parts of the country [31, 32]. According to the Department of Health and Human Services, 75–85% (as opposed to the recommended 90%) of Tasmanian mothers breastfeed at the time of discharge from hospital following childbirth [33]. Further, the number of women who breastfeed drops over the first 6 months with only 44% of mothers partially breastfeeding (as opposed to the recommended 80%) at 6 months postpartum [33]. This report evaluated breastfeeding trends in a Tasmanian cohort and its effects on both infant and maternal anthropometry and body composition. Participant recruitment and data collection was undertaken as part of a larger study ('Developing better information globally on young children's body composition') at the maternity ward of the Launceston General Hospital in the north of Tasmania, Australia. Briefly, mothers admitted to the postnatal ward at Launceston General Hospital were approached by trained research staff and midwives and given a plain language description of the study. Subsequently, subject to the provision of written informed consent, all interested, eligible mothers were enrolled in the study. Only mothers who were ≥ 18 years of age, who had a singleton pregnancy and were able to speak and understand English, were included in the study. Infants outside a gestational age range between 37+ 0 and 41+ 6 weeks were excluded. Further, women who experienced significant complications at labour (according to the assessment of the attending clinician), and infants born with a congenital anomaly, were also not included. According to self-reported data from consenting mothers, approximately 90% were Caucasian (Tasmanian average 83.5%). Average maternal age was 30.3 years (range 18–48) and infants were 1.7, 87.7 and 180.4 days old (on average) at birth, 3-month and 6-month data collection points, respectively. 
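To make the screening criteria above explicit, the following is a minimal, hypothetical Python sketch of an eligibility check. It is illustrative only, does not form part of the study's actual recruitment procedures, and all function and parameter names are invented for the example.

```python
# Hypothetical eligibility screen reflecting the inclusion/exclusion criteria
# described above (not part of the study's actual procedures).

def is_eligible(maternal_age_years: float,
                singleton_pregnancy: bool,
                understands_english: bool,
                gestational_age_weeks: int,
                gestational_age_days: int,
                significant_labour_complications: bool,
                congenital_anomaly: bool) -> bool:
    """Return True if a mother-baby dyad meets the stated criteria."""
    gestational_age = gestational_age_weeks + gestational_age_days / 7
    return (maternal_age_years >= 18
            and singleton_pregnancy
            and understands_english
            and 37.0 <= gestational_age < 42.0   # 37+0 to 41+6 weeks
            and not significant_labour_complications
            and not congenital_anomaly)

# Example: a 30-year-old mother with a singleton pregnancy delivered at 39+2 weeks
print(is_eligible(30, True, True, 39, 2, False, False))  # True
```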
Exclusive breastfeeding was defined as the infant having received only breast milk from his/her mother, or expressed breast milk, and no other substantial amount of liquids or solids, with the exception of drops or syrups consisting of vitamins, mineral supplements or medicines, since birth [34, 35]. Sixty-three percent of infants were reported to have been eBF to 6 months of age. The remaining 37% were included in the neBF group, who consumed formula or a mixture of breast milk, formula, and solids. A detailed breakdown of participant numbers is shown in Fig. 1.

Fig. 1 Flow diagram of study participants; neBF: non-exclusive breastfeeding; eBF: exclusive breastfeeding up to 6 months

Body composition and anthropometric measurements

Infant body weight, fat mass and fat-free mass were assessed via air displacement plethysmography (PEA POD, COSMED, Rome, Italy). Briefly, body weight was measured (using the integrated scale) in unclothed infants and hair was flattened (with a hair cap or baby oil) prior to placing them in the automatic volume measurement capsule. Subscapular (SS) and triceps (TS) skinfolds were obtained from infants at 3 and 6 months using a calibrated skinfold caliper (Holtain Limited, Croswell, UK). Left mid-upper arm circumference (MUAC) was measured in all infants using a tape at the measured halfway mark between the acromion and olecranon processes. Prenatal body mass index (BMI) was calculated using self-reported height and weight of participating mothers. For relevant analyses, mothers were allocated into BMI categories (underweight < 18.5; healthy weight 18.5–24.9; overweight 25.0–29.9; obese ≥ 30.0 kg/m2) according to World Health Organisation BMI criteria. Maternal postpartum body weight retention, and infant length-corrected fat mass (fat mass index) and weight-for-length (WFL), were calculated as follows:

$$ \begin{aligned} \text{Maternal postpartum weight retention} &= \text{body weight at 3 or 6 months} - \text{pre-pregnancy body weight} \\ \text{Infant fat mass index} &= \text{fat mass (kg)} / \text{length (m)}^{2} \\ \text{Infant WFL} &= \text{weight (kg)} / \text{length (m)} \end{aligned} $$

Maternal body weight and height measures (apart from prenatal height and weight, which were self-reported) were completed in duplicate (to maintain reliability) at each of the visits. Body weight was recorded to the nearest gram using a digitized scale (SECA Corp., Hamburg, Germany) and height was measured to the nearest millimetre using a stadiometer (SECA Corp., Hamburg, Germany).

All statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS) software (SPSS Version 26, Inc., Chicago, IL, USA). Results are presented as means and standard deviations, unless specified otherwise. Cross-sectional comparisons between eBF and neBF mothers and infants were executed using independent sample t-tests. Longitudinal differences in WFL, fat mass index and BMI between eBF and neBF groups were evaluated using a repeated measures analysis of variance (ANOVA). In the event of violation of sphericity assumptions, Greenhouse–Geisser or Huynh–Feldt corrections were applied as appropriate.
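As a concrete illustration of the derived measures and BMI grouping described above, a minimal Python sketch follows. The study's analyses were performed in SPSS, so this is purely illustrative and the function names are hypothetical.

```python
# Illustrative implementations of the derived measures defined above
# (hypothetical helper functions; the study itself used SPSS).

def postpartum_weight_retention(weight_3_or_6_months_kg: float,
                                pre_pregnancy_weight_kg: float) -> float:
    """Maternal postpartum weight retention = postpartum weight - pre-pregnancy weight (kg)."""
    return weight_3_or_6_months_kg - pre_pregnancy_weight_kg

def fat_mass_index(fat_mass_kg: float, length_m: float) -> float:
    """Infant fat mass index = fat mass (kg) / length (m) squared."""
    return fat_mass_kg / length_m ** 2

def weight_for_length(weight_kg: float, length_m: float) -> float:
    """Infant weight-for-length (WFL) = weight (kg) / length (m)."""
    return weight_kg / length_m

def who_bmi_category(bmi_kg_m2: float) -> str:
    """WHO BMI categories used to group mothers."""
    if bmi_kg_m2 < 18.5:
        return "underweight"
    elif bmi_kg_m2 < 25.0:
        return "healthy weight"
    elif bmi_kg_m2 < 30.0:
        return "overweight"
    return "obese"

# Example: an infant with 1.5 kg fat mass at 0.65 m length has a fat mass index of ~3.6 kg/m2,
# and a mother weighing 74.5 kg at 3 months who was 70 kg pre-pregnancy has retained 4.5 kg.
```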
Relationships between skinfold measures, MUAC, maternal pre-pregnancy BMI and infant fat mass were assessed using Pearson correlation coefficients. Statistical significance was set at p < 0.05.

Infant body mass and composition

Body weight, fat mass, skinfold thickness and MUAC did not differ between eBF and neBF infants throughout the first 6 months of life (Table 1). Nevertheless, fat-free mass was significantly higher [t = 2.27, df = 98, P = 0.03, confidence interval (CI) 0.03, 0.48] in neBF infants at 6 months despite a higher mean fat-free mass in eBF infants at birth (2.89 ± 0.34 vs 3.01 ± 0.35 kg) (Table 1). Both groups of infants displayed almost identical measurements for all anthropometric and body composition parameters at 3 months (Table 1). Repeated measures analysis of variance revealed weak evidence (fat mass index lower in eBF at birth but higher at 3 months compared with neBF; Fig. 2a) for different fat mass index trajectories for eBF and neBF infants in the first 6 months of life (ANOVA, F = 2.42, df = 1.9, P = 0.09). No such effect (WFL lower in eBF throughout the 6 months; Fig. 2b) was observed for WFL (ANOVA, F = 0.32, df = 1.7, P = 0.69). Furthermore, we observed an inversion in fat mass index levels between eBF and neBF infants from 3 to 6 months, alongside a divergence of WFL trajectories (Fig. 2a and b). There were significant correlations between SS, TS, MUAC and fat mass at 3 and 6 months in both groups of infants (Table 5).

Table 1 Body weight and composition (mean ± SD) of eBF and neBF infants in the first 6 months of life

Fig. 2 Infant fat mass index (a) and WFL (b) (means ± 95% confidence interval) in eBF and neBF infants in the first 6 months of life; neBF: non-exclusive breastfeeding; eBF: exclusive breastfeeding up to 6 months

Maternal body weight, composition, breastfeeding status and birth complications

At 3 months postpartum, there was weak evidence (t = 1.92, df = 160, P = 0.06, CI -0.17, 11.07) that the mean body weight of mothers who eBF was lower than that of counterparts who were neBF, and at 6 months postpartum, eBF mothers were significantly lighter (t = 3.49, df = 166, P = 0.001, CI 4.44, 15.99) (Table 2). Repeated measures analysis of variance revealed significant differences in BMI trajectories of eBF and neBF mothers through pregnancy and the first 6 months postpartum (ANOVA, F = 5.56, df = 1.8, P = 0.01, Fig. 3). Mothers in the neBF group retained significantly less weight (t = − 2.75, df = 158, P = 0.02, CI -6.64, − 1.09) at 3 months (0.68 ± 11.69 vs 4.55 ± 6.08 kg) compared with the eBF women, but the amount of weight retained at 6 months was similar between groups (Table 2). Prevalence of neBF showed an increment from mothers with a normal BMI to mothers with obesity (Table 3). Further, mothers who underwent surgical or medical intervention during birth were less likely to eBF (Table 4). Maternal pre-pregnancy BMI was not significantly related to infant fat mass at 3 and 6 months, regardless of breastfeeding status (Table 5).

Table 2 Maternal body weight and weight retention [mean ± SD (median, inter-quartile range)] from pregnancy through to 6 months postpartum
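The group comparisons reported above can be illustrated with a brief Python sketch using toy data. The actual analyses were run in SPSS; the data below are invented, and the sphericity corrections and group-by-time contrasts used in the paper are only noted in comments rather than reproduced.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical toy data: 4 infants (2 eBF, 2 neBF) measured at 0, 3 and 6 months.
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "group": ["eBF"] * 6 + ["neBF"] * 6,
    "time":  [0, 3, 6] * 4,
    "fat_mass_index":   [1.4, 3.0, 3.2, 1.5, 3.1, 3.3, 1.6, 2.9, 3.0, 1.5, 2.8, 3.1],
    "fat_free_mass":    [3.0, 4.6, 5.3, 3.1, 4.7, 5.4, 2.9, 4.8, 5.6, 2.9, 4.9, 5.7],
    "triceps_skinfold": [4.0, 8.0, 9.0, 4.2, 8.2, 9.1, 4.1, 8.5, 9.5, 4.3, 8.4, 9.6],
    "fat_mass":         [0.4, 1.3, 1.5, 0.5, 1.4, 1.6, 0.5, 1.5, 1.7, 0.5, 1.4, 1.8],
})

# Cross-sectional comparison at 6 months: independent-samples t-test
six = df[df["time"] == 6]
t_stat, p_t = stats.ttest_ind(six.loc[six["group"] == "eBF", "fat_free_mass"],
                              six.loc[six["group"] == "neBF", "fat_free_mass"])

# Association between a skinfold measure and fat mass: Pearson correlation
r, p_r = stats.pearsonr(six["triceps_skinfold"], six["fat_mass"])

# Within-subject change over time: repeated-measures ANOVA on fat mass index.
# AnovaRM assumes sphericity; the Greenhouse-Geisser / Huynh-Feldt corrections and the
# group-by-time comparison reported in the paper would require a mixed-design approach
# (e.g., a dedicated package), so this only illustrates the repeated-measures component.
rm_res = AnovaRM(data=df, depvar="fat_mass_index", subject="id", within=["time"]).fit()

print(t_stat, p_t, r, p_r)
print(rm_res)
```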
Fig. 3 Maternal BMI trajectory (mean ± 95% confidence interval) of eBF and neBF mothers from conception through to 6 months postpartum; neBF: non-exclusive breastfeeding; eBF: exclusive breastfeeding up to 6 months

Table 3 Level of obesity and breastfeeding status in mothers

Table 4 Effects of mode of birth on breastfeeding

Table 5 Associations between infant anthropometrics (skinfolds and arm circumference), body composition (fat mass) and maternal pre-pregnancy BMI at 3 and 6 months

This study illustrates the impact of eBF on mothers and infants in a selected cohort of mother-baby dyads from Tasmania, Australia. Both fat mass index and WFL had similar trajectories in the first 90 days of life, regardless of the type of feeding. In contrast, fat mass index and WFL deviated from one another in the second 90 days of life. Specifically, fat mass index in eBF infants, which was lower in the first 3 months, began to track higher than that of the neBF infants in the second 90 days. WFL trajectories of the two groups appear to separate from the 3-month time point onwards, after displaying identical increments in the first 90 days. Mothers with a higher BMI were less likely to breastfeed exclusively. Further, sustained breastfeeding was associated with greater postpartum maternal weight loss, confirming the efficacy of lactation as a weight loss strategy.

Breastfeeding (and the associated energy expenditure through milk production) intensifies lipolysis, whereby fat tissue accumulated during pregnancy is mobilized, resulting in subsequent reductions in postpartum weight retention in mothers [36]. Medical intervention at birth also appears to be a major contributing factor that determines the pattern of breastfeeding. Although we do not have the capacity to draw definitive conclusions in the current instance, existing empirical evidence indicates that factors such as stressful labour/birth, caesarean births, psychosocial stress/pain due to childbirth and the consequent endocrine (e.g., oxytocin secretion) or mechanical (e.g., milk ejection reflex) changes associated with them can significantly delay lactogenesis and cause reductions in breastfeeding [37, 38].

Our observation (through fat mass index analysis) that exclusivity of breastfeeding results in increased fat mass accumulation in infants is consistent with Gridneva et al.'s findings regarding compositional changes in the first 12 months of life [39]. As indicated in the results, overall, fat mass index was higher in the eBF infants compared with the neBF infants. On the other hand, neBF infants displayed higher fat-free mass accretion in the same period (Fig. 2). This pattern corroborates previous findings in neBF infants, where significantly higher gains in fat-free mass have been observed compared with breastfed infants [40]. This suggests that the early-life compositional changes that contribute to body weight gain are dependent on the feeding status of the infant. We also observed significant correlations between proxy measures of subcutaneous fat (subscapular skinfold [SS] and triceps skinfold [TS]) and fat mass in both groups of infants. Our findings partially concurred with existing empirical evidence that eBF is associated with preferential accretion of subcutaneous fat [41]. This notion is particularly important as the localization of adipose tissue dictates its functionality and plays a major role in its contribution to the aetiology of obesity and metabolic disease [42].
Specifically, infant subcutaneous fat (between 0 and 24 months) has been shown to be associated with general adiposity during adolescence and with cardio-metabolic risk factors in children as young as 6 years of age [43, 44]. The fact that we did not observe any significant associations between maternal pre-pregnancy BMI and infant body composition is an anomaly, as previous research has indicated an intergenerational link between maternal and infant body composition [45,46,47]. Fat mass and fat-free mass are sensitive indicators of the intrauterine environment, where foetal tissue developmental and regulatory programming occurs [48, 49]. Methodological issues may explain at least part of the differences observed in this study. Previous Australian data indicating associations between pre-gravid BMI and neonatal body fat were based on a much larger sample size (n = 599) than the current study [50]. Despite a dearth of mechanistic evidence regarding the influence of breastfeeding on infant adiposity, some research suggests that the composition of breast milk can have a significant influence on infant growth [51,52,53]. For instance, Gridneva et al. reported that a higher total carbohydrate level in human milk is linked with greater fat-free mass, whereas higher oligosaccharide content is related to greater fat mass in the first 12 months of life [54]. Other research purports a differential effect of leptin and adiponectin concentrations in breast milk on the development of infant lean and fat mass in the first year of life [55]. Further, secretory immunoglobulins are also thought to play a major role in breast milk-mediated modulation of infant body composition [56]. According to the most recent Department of Health and Human Services data, only 44% of Tasmanian infants are at least partially breastfed at 6 months, with entrenched socioeconomic disparities being a potential contributing factor [32, 57]. In stark contrast, 63% of the infants in our healthy cohort were eBF for the first 6 months of infancy. It is possible that the high educational attainment of the current cohort of mothers (i.e., ~> 70% were university/tertiary educated; data not reported) may have contributed towards the higher eBF rates. Existing Australian evidence suggests that university-educated women are twice as likely to breastfeed their child for the first 6 months of life as non-tertiary-educated women [58]. Furthermore, given the well-documented differences in growth patterns of infants from different ethnicities [59], further examination, including in the Tasmanian context, is warranted to enable wider generalizability of the findings. We observed that exclusivity of breastfeeding decreases with increasing levels of adiposity; only 46% of mothers with obesity exclusively breastfed in the first 6 months postpartum. This finding concurs with existing empirical evidence of lower rates of breastfeeding initiation and exclusivity in women with obesity [60]. Contributing factors in obesity-mediated reductions in breastfeeding include mechanical factors such as larger breasts/areolas [61, 62], suboptimal endocrine activity [63,64,65] and inefficient lactogenesis [66]. Complications during labour may also affect breastfeeding patterns by delaying the onset of lactogenesis [67]. Interestingly, in the current study, mothers who experienced assisted vaginal birth or caesarean section had relatively lower levels of exclusive breastfeeding.
Previous reports have linked caesarean section with a reduced likelihood/prevalence of all forms of breastfeeding from discharge to 6 months postpartum [68]. Another important observation was that mothers who maintained eBF continued to lose weight postpartum, a trend that is corroborated by previous reports [69, 70]. Specifically, between 3 and 6 months postpartum, neBF mothers showed an increment in body weight compared to their breastfeeding counterparts. It is important to note that although (overall) it appears that both neBF and eBF women gained ~ 3 kg during the study, the trajectories associated with arrival at the 6-month body weights are unique to the respective groups. Deducing from suggested mechanisms in the current literature, it could be assumed that much of this weight loss in eBF mothers is likely to stem from increases in energy expenditure and the distribution/mobilization of adipose tissue depots accumulated during pregnancy [71,72,73]. An ongoing challenge in this context is to determine whether sustained postpartum weight loss is due to lactation per se or to general/overall increments in energy expenditure, as it is likely that women who were habitually active pre-pregnancy may shed the weight gained during gestation faster than their relatively inactive counterparts [74]. Regardless, weight loss following childbirth is particularly important as excess postpartum weight retention may be associated with numerous complications, including adverse metabolic conditions and entering subsequent pregnancies at successively unhealthy weight and fatness levels [75, 76].

Dietary self-reporting is thought to be marred by social desirability bias [77]. As such, we acknowledge there may have been limitations/biases in some of the parameters self-reported by mothers in the current study. Nevertheless, existing literature indicates that breastfeeding initiation and duration information derived from maternal recall can be considered valid and reliable [78]. Further, volunteer bias, a well-documented impediment to participant recruitment and retention in research studies [79], may have contributed to the relatively low sample size in this instance. Reasons beyond the research team's control, such as being time poor and relocation (to other parts of the state), were cited for the inability to continue participating in the study. In addition, due to the descriptive nature of this report, the influence of potential contributors toward endemic patterns of breastfeeding, such as parity, diet, sedentary occupation, and socio-economic status, was not considered, thus limiting the comparability of current results with other populations.

There is an urgent need for a concerted effort from all key stakeholders in maternal and infant health to promote optimal breastfeeding practices across all populations. As illustrated by the results of the current study, infants with different feeding patterns may display varying growth patterns in early life. Education and support for eBF at a population level should be accompanied by the simultaneous assessment of body composition (as opposed to widespread exclusive use of anthropometry) to obtain a more comprehensive understanding of infant growth. Accurate quantification of body composition status can provide important insights regarding qualitative and quantitative differences in specific tissue types.
Logical extensions of the current research include the utilization of objective measurement approaches (e.g., deuterium dilution dose-to-mother technique) and a comprehensive evaluation of the role of different components of breast milk in shaping the longitudinal health, including body composition, of infants and young children. The datasets generated and/or analysed during the current study are not publicly available due privacy protection and ethical obligations but are available (in deidentified form) from the corresponding author on reasonable request. eBF: Exclusive breastfeeding MUAC: Mid-upper arm circumference neBF: Non-exclusive breastfeeding SS: Subscapular skinfold TS: Triceps skinfold WFL: Weight-for-length Worldwide trends in body-mass index, underweight, overweight, and obesity from 1975 to 2016: a pooled analysis of 2416 population-based measurement studies in 128.9 million children, adolescents, and adults. Lancet 2017;390(10113):2627–42. https://doi.org/10.1016/S0140-6736(17)32129-3. Woo Baidal JA, Locks LM, Cheng ER, Blake-Lamb TL, Perkins ME, Taveras EM. Risk factors for childhood obesity in the first 1,000 days: a systematic review. Am J Prev Med. 2016;50(6):761–79. https://doi.org/10.1016/j.amepre.2015.11.012. Geddes DT, Prescott SL. Developmental origins of health and disease: the role of human milk in preventing disease in the 21(st) century. J Hum Lact. 2013;29(2):123–7. https://doi.org/10.1177/0890334412474371. Kelishadi R, Farajian S. The protective effects of breastfeeding on chronic non-communicable diseases in adulthood: a review of evidence. Adv Biomed Res. 2014;3(1):3. https://doi.org/10.4103/2277-9175.124629. Gillman MW. Commentary: breastfeeding and obesity--the 2011 scorecard. Int J Epidemiol. 2011;40(3):681–4. https://doi.org/10.1093/ije/dyr085. Wells JC, Chomtho S, Fewtrell MS. Programming of body composition by early growth and nutrition. Proc Nutr Soc. 2007;66(3):423–34. https://doi.org/10.1017/S0029665107005691. Savino F, Liguori SA, Fissore MF, Oggero R. Breast milk hormones and their protective effect on obesity. Int J Pediatr Endocrinol. 2009;2009(1):327505. https://doi.org/10.1186/1687-9856-2009-327505. Bartok CJ. Babies fed breastmilk by breast versus by bottle: a pilot study evaluating early growth patterns. Breastfeed Med. 2011;6(3):117–24. https://doi.org/10.1089/bfm.2010.0055. Kramer MS, Aboud F, Mironova E, Vanilovich I, Platt RW, Matush L, et al. Breastfeeding and child cognitive development: new evidence from a large randomized trial. Arch Gen Psychiatry. 2008;65(5):578–84. https://doi.org/10.1001/archpsyc.65.5.578. Victora CG, Bahl R, Barros AJ, Franca GV, Horton S, Krasevec J, et al. Breastfeeding in the 21st century: epidemiology, mechanisms, and lifelong effect. Lancet. 2016;387(10017):475–90. https://doi.org/10.1016/S0140-6736(15)01024-7. Le Doare K, Holder B, Bassett A, Pannaraj PS. Mother's milk: a purposeful contribution to the development of the infant microbiota and immunity. Front Immunol. 2018;9:361. https://doi.org/10.3389/fimmu.2018.00361. Stuebe AM, Rich-Edwards JW, Willett WC, Manson JE, Michels KB. Duration of lactation and incidence of type 2 diabetes. JAMA. 2005;294(20):2601–10. https://doi.org/10.1001/jama.294.20.2601. Horta BL, Loret de Mola C, Victora CG. Long-term consequences of breastfeeding on cholesterol, obesity, systolic blood pressure and type 2 diabetes: a systematic review and meta-analysis. Acta Paediatr. 2015;104(467):30–7. https://doi.org/10.1111/apa.13133. 
We thank Dr. Steve Street and Anne Hanley for co-managing the project, University of Tasmania/Launceston General Hospital research staff for assistance in the collection of data, Launceston General Hospital midwifery team for assisting recruitment of participants and the Clifford Craig Foundation for providing space for testing/housing of research equipment. This work was supported, in part, by the International Atomic Energy Agency (CRP E43028 Contract Number 20880), the Bill & Melinda Gates Foundation (OPP1143641) and St.LukesHealth. School of Health Sciences, College of Health and Medicine, University of Tasmania, Locked Bag 1322, Newnham Drive, Launceston, TAS, 7250, Australia: Sisitha Jayasinghe, Manoja P. Herath, Jeffrey M. Beckett, Kiran D. K. Ahuja, Nuala M. Byrne & Andrew P. Hills. APH, NMB and KA – Conceptualisation/design of the study, review and editing of the manuscript. SJ and APH – Formulating research questions, writing and editing drafts, data collection/analysis. JMB – Review and editing of manuscript drafts. MPH – Data collection, review and editing of manuscript drafts. The author(s) read and approved the final manuscript. Correspondence to Andrew P. Hills. Ethics approval was obtained for all procedures from the Human Research Ethics Committee (Tasmania) Network (H0016117) and written informed consent was obtained from mothers prior to enrolment in the study. There are no competing interests. Jayasinghe, S., Herath, M.P., Beckett, J.M. et al. Exclusivity of breastfeeding and body composition: learnings from the Baby-bod study. Int Breastfeed J 16, 41 (2021). https://doi.org/10.1186/s13006-021-00389-x. Accepted: 06 May 2021.
CommonCrawl
A Course of Pure Mathematics, 8. Real numbers. We have confined ourselves so far to certain sections of the positive rational numbers, which we have agreed provisionally to call 'positive real numbers.' Before we frame our final definitions, we must alter our point of view a little. We shall consider sections, or divisions into two classes, not merely of the positive rational numbers, but of all rational numbers, including zero. We may then repeat all that we have said about sections of the positive rational numbers in §§ 6, 7, merely omitting the word positive occasionally. Definitions. A section of the rational numbers, in which both classes exist and the lower class has no greatest member, is called a real number, or simply a number. A real number which does not correspond to a rational number is called an irrational number. If the real number does correspond to a rational number, we shall use the term 'rational' as applying to the real number also. The term 'rational number' will, as a result of our definitions, be ambiguous; it may mean the rational number of § 1, or the corresponding real number. If we say that \(\frac{1}{2} > \frac{1}{3}\), we may be asserting either of two different propositions, one a proposition of elementary arithmetic, the other a proposition concerning sections of the rational numbers. Ambiguities of this kind are common in mathematics, and are perfectly harmless, since the relations between different propositions are exactly the same whichever interpretation is attached to the propositions themselves. From \(\frac{1}{2} > \frac{1}{3}\) and \(\frac{1}{3} > \frac{1}{4}\) we can infer \(\frac{1}{2} > \frac{1}{4}\); the inference is in no way affected by any doubt as to whether \(\frac{1}{2}\), \(\frac{1}{3}\), and \(\frac{1}{4}\) are arithmetical fractions or real numbers. Sometimes, of course, the context in which (e.g.) '\(\frac{1}{2}\)' occurs is sufficient to fix its interpretation. When we say (see § 9) that \(\frac{1}{2} < \sqrt{\frac{1}{3}}\), we must mean by '\(\frac{1}{2}\)' the real number \(\frac{1}{2}\). The reader should observe, moreover, that no particular logical importance is to be attached to the precise form of definition of a 'real number' that we have adopted. We defined a 'real number' as being a section, a pair of classes. We might equally well have defined it as being the lower, or the upper, class; indeed it would be easy to define an infinity of classes of entities each of which would possess the properties of the class of real numbers. What is essential in mathematics is that its symbols should be capable of some interpretation; generally they are capable of many, and then, so far as mathematics is concerned, it does not matter which we adopt. Mr Bertrand Russell has said that 'mathematics is the science in which we do not know what we are talking about, and do not care whether what we say about it is true', a remark which is expressed in the form of a paradox but which in reality embodies a number of important truths. It would take too long to analyse the meaning of Mr Russell's epigram in detail, but one at any rate of its implications is this, that the symbols of mathematics are capable of varying interpretations, and that we are in general at liberty to adopt whichever we prefer. There are now three cases to distinguish.
It may happen that all negative rational numbers belong to the lower class and zero and all positive rational numbers to the upper. We describe this section as the real number zero. Or again it may happen that the lower class includes some positive numbers. Such a section we describe as a positive real number. Finally it may happen that some negative numbers belong to the upper class. Such a section we describe as a negative real number.¹ The difference between our present definition of a positive real number \(a\) and that of § 7 amounts to the addition to the lower class of zero and all the negative rational numbers. An example of a negative real number is given by taking the property \(P\) of § 6 to be \(x + 1 < 0\) and \(Q\) to be \(x + 1 \geq 0\). This section plainly corresponds to the negative rational number \(-1\). If we took \(P\) to be \(x^{3} < -2\) and \(Q\) to be \(x^{3} > -2\), we should obtain a negative real number which is not rational.
¹ There are also sections in which every number belongs to the lower or to the upper class. The reader may be tempted to ask why we do not regard these sections also as defining numbers, which we might call the real numbers positive and negative infinity. There is no logical objection to such a procedure, but it proves to be inconvenient in practice. The most natural definitions of addition and multiplication do not work in a satisfactory way. Moreover, for a beginner, the chief difficulty in the elements of analysis is that of learning to attach precise senses to phrases containing the word 'infinity'; and experience seems to show that he is likely to be confused by any addition to their number.
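A quick check of the last example, filling in the argument left implicit: no rational number has cube equal to \(-2\), for if \(p/q\) is in lowest terms and \((p/q)^{3} = -2\), then
$$p^{3} = -2q^{3},$$
so \(p\) is even, say \(p = 2m\); substituting gives \(q^{3} = -4m^{3}\), so \(q\) is even as well, contradicting the assumption of lowest terms. Hence every rational number falls into exactly one of the two classes defined by \(x^{3} < -2\) and \(x^{3} > -2\). Moreover the lower class has no greatest member, since if \(x^{3} < -2\) a slightly larger rational \(x'\) still satisfies \(x'^{3} < -2\), small changes in \(x\) producing only small changes in \(x^{3}\). The section therefore satisfies the definition of a real number given above, and, corresponding to no rational number, it is irrational.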
CommonCrawl
Case Study Zion Merrill Lynch Integrated Choice Supplement Case Study Analysis Merrill Lynch Integrated Choice Supplement There's no comparison, actually, between the Office of the U.S. Trade Representative (USTR) and the New York Stock Exchange (NYSE). Perhaps the best component of the U.S. trade system is the same: the corporate-state partnership (TCP) model. According to this paper, not only is the CP/OFA still a model, yet, the U.S. government is by no means endorsing such a system. However, as noted on the slides, they appear in both their own and the Office of the Federal Trade Commission's corporate rules. In a way corporate America's corporate-state partnerships are as rich and powerful as the market in which they operate. Many of these individuals, it turns out, are not all in it. Porters Model Analysis But for the kinds of deals they involve today, it's clear that it's time for the U.S. government to recognize these partnerships and hand them over to its corporate counterparts. This article highlights a key part of the strategy that the U.S. government is using to promote what it calls the "Trade Friendly Exchange" (TTX). Evaluation of Alternatives The term is so named because one of its most prominent leaders was the president of the Big Three companies—the "Buyers" —who have since renamed the U.S. trade association UAB, and others, as the COO of many of the others. UAB—i.e. the U. S. collective agreement, or CAF—reads like this: "The U.S. Trade Representative (TRE) has an important role in holding investors and stakeholders responsible for the continued delivery and expansion of our economy. We work closely with each other in applying the principles and values of freedom of choice for the greatest number of important investors and the best interests of the best of all common stockholders. We also pursue the common-stock-for-a-common-source model as a means of establishing long-term growth relationships and holding price points as our principal goals for the individual and collective investors. PESTEL Analysis " Ultimately, why do the TCAs still work, but not all of the corporations—i.e. the individual U. Problem Statement of the Case Study S. consumers of stock and their investment decisions—would take the trade jobs away from TCAs and give into growth models already created by the Big Three systems? The answer should be that that was at least mentioned in the talk by the folks at the Center for Global Growth and Development. Take ExxonMobil? "What would private sector corporations want with TAs?" With Exxon Mobil going public in 2007, it's hard to imagine why some of the most popular groups even bother to make the connection between the two systems (the UAB and the CTO of the Big 3 companies). PESTLE Analysis For argument's sake, let's analyze two of these: UAB's SBAG: The American BofA (FB) System has had significant impact on the U.S. government, however, among its main competitors (the U. S. consumer ERO) the price of gasoline has dropped to a nine-year low on Friday (March 20), and last week's IPO had more than a "crossover" of UAB and Americans. The UAB SBAG is the only U. BCG Matrix Analysis S. component of that system that has since emerged. The UAB SBAG was designed to be globally coordinated between the two systems, meaning the two branches are not the same as, say, the UAB, the SBAG is a "Buyer"—including the other U. S. CCOP members who remain the group whose UAB BofA remains in operation. 
FB's TPO: The primary cause for the rise of the UAB SBAG, however, might be that UAB was first introduced with the SBAG in 1997. That started off click now easy—at least from a UAB looking at itself as a U.S. CCFP-required infrastructure partner. VRIO Analysis The UAB TPO, which operates out of its state offices in Burlington, North Carolina, and is currently headquartered in Fort Lauderdale—with the state's support from the Internal Revenue Service—also opened up opportunities forMerrill Lynch Integrated Choice Supplement Main menu Merrill Lynch introduced products Merrill Lynch is a leader in the selection of vehicles. Focusing solely on models with greater durability which hold down the competition and an increase of price will help you take this brand higher up the ladder of excellence. The 2014 Volvo, its very first product, delivers more than 150 years of mechanical elegance and ease of use thanks to its rich and creative packaging. Porters Five image source Analysis More than 1,600 miles are made available for owners to carry. ELECTRAFIN is an exclusive brand with strong customer care and extensive global presence, hence there are no doubt the luxury brands are likely to continue to hold the high position as the world's leading luxury brand. With its global presence brand-to-brand products offering more than 100 years of innovative innovations, including: High-grade plastics Biodynamic plastics Mazda and Tecoflex Retail Cars – we know from personal experience what makes Mazda Tecoflex a special brand, the quality, reliability and range of cars at Cracked in Australia. The standard pack contains a comprehensive range of safety equipment, including, safety apparatus, safety panels that are ideal for the operator and can also accompany the wheel. At Cracked, we have a fleet of product manufacturers worldwide who are always eager to carry these next-generation technologies at their disposal. Merrill Lynch Integral Choice Supplement Merrill Lynch offers a range of car accessories for those looking to add more dimension to their journeys faster and also to carry them around the world. Focused on the latest generation of vehicles, the 2014 Volvo, its very first car, delivers more than 150 years of mechanical elegance and ease of use thanks to its rich and creative packaging. More than 1,600 miles are made available for owners to carry. Merrill Lynch integrates choice and choices. To work your own trade, find out who is going to buy the product at your home. Choosing your selection is the key to improving your earnings. HARUSE KAVARE – The highly-used and well-known brand, set among its most valuable brands, has now reached the shelves of the supermarket in Brazil as a result of its strong brand loyalty and commitment. The company has changed and you could try here updated the brand through a multi-tasking mentality. For many years, the brand has been working hard to stay ahead of the pack, but the steady quality of the product is yet to come. Now with the arrival of the 2014 Volvo, it will be time to make the next choice – and while the manufacturer will stay ahead of everyone, the important part of the brand will be to improve the brand in the future. Mercedes Benz has decided to extend its presence over-the-road with the 2014 Volvo, its very first car, offering more than 350 years of construction, maintenance and cleaning technique. 
The SUV for 2014 now makes complete use of the Mercedes-Benz brand, whereas the Audi/Peugeot will remain essentially the same car for its entire life, with only minor modifications to the equipment and service. HARUSE KAVARE – The incredibly reliable and refined brand, set among its most valuable brands, continues to drive for thousands of miles while offering top-notch maintenance, upgrades and support. Recommendations for the Case Study: The 2016 Volvo is one of the latest. Merrill Lynch Integrated Choice Supplement – Main Results and Summary of Results: [tables of normalized couplings and modes appear here in the source; the tabular data is garbled beyond recovery].
CommonCrawl
How do astronomers measure the size of celestial objects? What techniques and tools are available to astronomers to measure the size of celestial objects such as stars, or perhaps black holes that neither emit light nor reflect starlight?
+1 for remembering eclipsing binary systems. Apart from interferometry this *is* the only direct technique.
pela, 7 years ago: I wrote "a few" stars have had their radii measured by interferometry, but actually I don't know how many, and I wasn't able to find out before lunch time. Do you know, @Rob?
Only of order 100. I only look at low-mass stars, where the number is like ~20. There are also of order 100 eclipsing binaries with well-measured radii.
pela (Correct answer): If a celestial body is larger than the resolving power of a telescope, its size can be measured directly. This is the case for most galaxies, molecular clouds in the Milky Way and nearby galaxies, and even for a few nearby stars. EDIT: See discussion by Rob Jeffries on how these measurements are carried out for stars using interferometry. For more distant stars, we can rely on our understanding of stellar evolution, which tells us pretty accurately the radius once we know its spectrum (EDIT: or just assume blackbody radiation and use the formula given by Rob). If the star is a member of a binary system whose orbit we observe roughly edge-on, we can measure how the luminosity declines as one star occults the other, and calculate the radius. This can also be used to measure the sizes of exoplanets. And for stars, the same technique is even possible using our own Moon as an occulter. See a description here. Black holes (BHs) that don't emit light cannot be measured (at least not until we are able to detect gravitational waves), but often BHs are surrounded by a disk of accreting gas, which is heated to millions of degrees by friction as it spirals down the drain. Measuring the temperature of this gas tells us the mass of the BH, which is directly proportional to its radius ($R_{\mathrm{BH}} \simeq 3 M/M_\odot$ km). A nice technique for measuring the size of the quite small region of gas clouds around a supermassive BH, even though it is billions of lightyears away, is called reverberation mapping. Here, some of the light emitted from the BH's accretion disk travels directly in our direction, while some of it travels in other directions, illuminating the clouds around it. When we measure the light from those clouds, the signal looks like the directly observed signal, but with a delay $t$ corresponding to the extra length of the path that the light has taken. Since we know the speed of light $c$, we can calculate the extra distance as $d = ct$, i.e. the size of the system.
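For a rough sense of scale implied by the two relations above ($R_{\mathrm{BH}} \simeq 3\,M/M_\odot$ km and $d = ct$), here is a small calculation in Python; the black-hole mass and the 10-day lag are made-up example values, not measurements:

C_KM_PER_S = 299_792.458       # speed of light [km/s]
SECONDS_PER_DAY = 86_400
KM_PER_AU = 1.496e8

def black_hole_radius_km(mass_solar):
    # R_BH ~ 3 km per solar mass
    return 3.0 * mass_solar

def reverberation_distance_km(lag_days):
    # d = c * t, with t the measured delay of the echoed signal
    return C_KM_PER_S * SECONDS_PER_DAY * lag_days

# Example: a 1e8 solar-mass black hole and a 10-day light echo (hypothetical numbers)
print(black_hole_radius_km(1e8) / KM_PER_AU)        # ~2 AU
print(reverberation_distance_km(10) / KM_PER_AU)    # ~1700 AU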
CommonCrawl
for events the day of Tuesday, April 3, 2018. 1:00 pm in 345 Altgeld Hall,Tuesday, April 3, 2018 Finite versus infinite: An intricate shift Yann Pequignot (UCLA) Abstract: The Borel chromatic number — introduced by Kechris, Solecki, and Todorcevic (1999) — generalizes the chromatic number on finite graphs to definable graphs on topological spaces. While the $G_0$ dichotomy states that there exists a minimal graph with uncountable Borel chromatic number, it turns out that characterizing when a graph has infinite Borel chromatic number is far more intricate. Even in the case of graphs generated by a single function, our understanding is actually very poor. The Shift Graph on the space of infinite subsets of natural numbers is generated by the function that removes the minimum element. It is acyclic but has infinite Borel chromatic number. In 1999, Kechris, Solecki, and Todorcevic asked whether the Shift Graph is minimal among the graphs generated by a single Borel function that have infinite Borel chromatic number. I will explain why the answer is negative using a representation theorem for $\Sigma^1_2$ sets due to Marcone. Harmonic Analysis and Differential Equations Some new results on maximal averages and $L^p$ Sobolev regularity of Radon transforms Michael Greenblatt (University of Illinois at Chicago) Abstract: A general local result concerning L^p boundedness of maximal averages over 2D hypersurfaces is described, where $p > 2$. The surfaces are allowed to have either the traditional smooth density function or a singularity growing as $|(x,y)|^{-t}$ for some 0 < t < 2. This result is a generalization of a theorem of Ikromov, Kempe, and Mueller. Similar methods can be used to prove sharp (up to endpoints) $L^p$ to $L^p_a $ Sobolev estimates for associated Radon transform operators when p is in a certain interval containing 2. These Radon transform results have higher-dimensional generalizations which will also be described. Submitted by xcli Packing chromatic number of subdivisions of cubic graphs Xujun Liu (Illinois Math) Abstract: A packing $k$-coloring of a graph $G$ is a partition of $V(G)$ into sets $V_1,\ldots,V_k$ such that for each $1\leq i\leq k$ the distance between any two distinct $x,y\in V_i$ is at least $i+1$. The packing chromatic number, $\chi_p(G)$, of a graph $G$ is the minimum $k$ such that $G$ has a packing $k$-coloring. For a graph $G$, let $D(G)$ denote the graph obtained from $G$ by subdividing every edge. The questions on the value of the maximum of $\chi_p(G)$ and of $\chi_p(D(G))$ over the class of subcubic graphs $G$ appear in several papers. Gastineau and Togni asked whether $\chi_p(D(G))\leq 5$ for any subcubic $G$, and later Brešar, Klavžar, Rall and Wash conjectured this, but no upper bound was proved. Recently the authors proved that $\chi_p(G)$ is not bounded in the class of subcubic graphs $G$. In contrast, in this paper we show that $\chi_p(D(G))$ is bounded in this class, and does not exceed $8$. Joint work with József Balogh and Alexandr Kostochka. Equal sums of higher powers of binary quadratic forms, I Abstract: We will describe all non-trivial solutions to the equation $f_1^d(x,y) + f_2^d(x,y) = f_3^d(x,y) + f_4^d(x,y)$ for quadratic forms $f_j \in \mathbb C[x,y]$. No particular prerequisites are needed and tools will be derived during the talk. Lots of fun stuff. The content of the second talk, next week, will be shaped by the reaction to this one. Irving Reiner lectures: Lectures on Quantum Schubert Calculus II Leonardo C. 
Mihalcea (Virginia Tech ) Abstract: The quantum cohomology ring of a complex projective manifold X is a deformation of the ordinary cohomology ring of X. It was defined by Kontsevich in the mid 1990's in relation to physics and enumerative geometry. Its structure constants - the Gromov-Witten invariants - encode numbers such as how many conics pass through 3 general points in the Grassmann manifold of 2-planes in the 4-space. The quantum cohomology ring is best understood when X has many symmetries, or good combinatorial properties, and these lectures will focus to the case when X is a Grassmann manifold or a flag manifold. The subject is quite rich, and intensively studied, with connections to algebraic combinatorics, algebraic and symplectic geometry, representation theory, and integrable systems. My goal is to introduce the audience to some of the basic ideas and techniques in the subject, such as how to calculate effectively in the quantum cohomology rings, and what are the geometric ideas behind the calculations, all illustrated by examples. The lectures are intended for graduate students, in particular I am not assuming prior knowledge of quantum cohomology. I plan to include the following topics. The 'quantum = classical' phenomenon of Buch, Kresch and Tamvakis: how a 'quantum' calculation can be performed in the 'classical' cohomology of an auxiliary space - this leads to formulas based on Knutson and Tao's puzzles; the technique of curve neighborhoods and the quantum Chevalley formula: what are these, and how they help to get recursive formulas for the equivariant Gromov-Witten invariants; the quantum Schubert Calculus of Grassmannians: a presentation for the quantum ring and polynomial representatives for Schubert classes; quantum K-theory: what is it, what we know, and why is everything so much harder in this case. If time permits, I may briefly mention the connection between quantum cohomology and Toda lattice (B. Kim's theorem), and the 'quantum=affine' phenomenon (D. Peterson's conjecture, proved by T. Lam and M. Shimozono). Special music recital Music in the Math Department: Mozart, Bach, Haken Rudolf Haken (University of Illinois School of Music) Abstract: Free concert by Rudolf Haken, Professor of Viola, UI School of Music. 5 & 6-string violas
CommonCrawl
Regularity of the singular set for Mumford-Shah minimizers in ℝ³ near a minimal cone. Lemenant, Antoine. We prove that if (u,K) is a minimizer of the Mumford-Shah functional in an open set Ω of ℝ³, and if x∈K and r>0 are such that K is close enough to a minimal cone of type ℙ (a plane), 𝕐 (three half planes meeting at x with 120° angles) or 𝕋 (cone over the 6 edges of a regular tetrahedron centered at x) in terms of Hausdorff distance in B(x,r), then K is C^{1,α} equivalent to the minimal cone in B(x,cr), where c<1 is a universal constant. Classification: 49Q20, 49Q05. Lemenant, Antoine. Regularity of the singular set for Mumford-Shah minimizers in ℝ³ near a minimal cone. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 5, Volume 10 (2011) no. 3, pp. 561-609. http://www.numdam.org/item/ASNSP_2011_5_10_3_561_0/
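For orientation, the Mumford-Shah functional referred to in the abstract is usually written, up to the choice of normalizing constants, as
$$J(u,K) = \int_{\Omega \setminus K} |\nabla u|^{2}\,dx + \int_{\Omega \setminus K} (u-g)^{2}\,dx + \mathcal{H}^{2}(K),$$
where g is the given datum, K ⊂ Ω is the closed singular set across which u may jump, and \(\mathcal{H}^{2}\) is the two-dimensional Hausdorff measure (the exponent being n−1 for Ω ⊂ ℝⁿ, here n = 3); a minimizer is a pair (u,K) minimizing J among admissible competitors.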
CommonCrawl
Sharp EL-512 program for mortgage payment calculation. In the little pocket of the plastic case for my 1981 Sharp EL-512 scientific calculator I keep two hand-written slips of paper to remind me of the formula I have programmed into its program memory. Sharp refers to the program memory as the "Multi Formula Reserve" since there are four keys that can store one formula each. The papers are not dated, but I'm certain they were written back around the time I was buying my first house in 1987. The formula is not expressed in banker terms, but serves the purpose of calculating the monthly payment on a conventional mortgage here in the US. The following notes explain everything needed. To figure monthly payment when using annual interest rate (compounded monthly):
$$\text{Payment} = \frac{(\text{Amount of loan}) \times \tfrac{1}{12}(\text{Rate})}{1 - \cfrac{1}{\left(1 + \tfrac{1}{12}(\text{Rate})\right)^{12 \times (\text{Years})}}}$$
\( K_{7} = \text{Amount},\quad K_{8} = \dfrac{\text{Rate}}{12},\quad K_{9} = \text{Months} \)
$$\frac{K_{7} \times K_{8}}{1 - \cfrac{1}{\left(1 + K_{8}\right)^{K_{9}}}}$$
\( K_{n}\,8 \; + \; 1 \; = \; y^{x} \; K_{n}\,9 \; = \; 1/x \; +/- \; + \; 1 \; = \; 1/x \; \times \; K_{n}\,7 \; \times \; K_{n}\,8 \; = \)
\( \boxed{LRN}\ \boxed{1:} \\ \boxed{STO}\ \boxed{7} \\ \boxed{(x)}\ \boxed{\times}\ \boxed{1}\ \boxed{2}\ \boxed{=}\ \boxed{STO}\ \boxed{9} \\ \boxed{(x)}\ \boxed{\times}\ \boxed{.}\ \boxed{0}\ \boxed{1}\ \boxed{\div}\ \boxed{1}\ \boxed{2}\ \boxed{=}\ \boxed{STO}\ \boxed{8} \\ \boxed{+}\ \boxed{1}\ \boxed{=}\ \boxed{y^{x}}\ \boxed{Kn}\ \boxed{9}\ \boxed{=} \\ \boxed{1/x}\ \boxed{+/-}\ \boxed{+}\ \boxed{1}\ \boxed{=} \\ \boxed{1/x}\ \boxed{\times}\ \boxed{Kn}\ \boxed{7}\ \boxed{\times}\ \boxed{Kn}\ \boxed{8}\ \boxed{=} \\ \boxed{LRN} \)
To use the above program: Enter Amount, press \(\boxed{1:}\). Enter Years, press \(\boxed{COMP}\). Enter Rate, press \(\boxed{COMP}\).
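For checking the calculator against something modern, the same arithmetic in a few lines of Python; the loan figures in the example are arbitrary test values, not from the original slips of paper:

def monthly_payment(amount, annual_rate_percent, years):
    # K7 = amount, K8 = monthly rate, K9 = number of months, as in the program above
    k8 = annual_rate_percent / 100.0 / 12.0
    k9 = years * 12
    return amount * k8 / (1.0 - (1.0 + k8) ** (-k9))

# Example: a $100,000 loan at 9% for 30 years comes out to about $804.62/month
print(round(monthly_payment(100_000, 9, 30), 2))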
CommonCrawl
August 2012, 5(4): 865-878. doi: 10.3934/dcdss.2012.5.865 A priori bounds for weak solutions to elliptic equations with nonstandard growth Patrick Winkert 1, and Rico Zacher 2, Technische Universität Berlin, Institut für Mathematik, Straße des 17. Juni 136, 10623 Berlin, Germany Martin-Luther-Universität Halle-Wittenberg, Institut für Mathematik, Theodor-Lieser-Strasse 5, D-06120 Halle, Germany Received March 2011 Revised July 2011 Published November 2011 In this paper we study elliptic equations with a nonlinear conormal derivative boundary condition involving nonstandard growth terms. By means of the localization method and De Giorgi's iteration technique we derive global a priori bounds for weak solutions of such problems. Keywords: A priori estimates, De Giorgi iteration, Nonstandard growth, Variable exponent spaces, Partition of unity, Elliptic equations. Mathematics Subject Classification: Primary: 35J60, 35B45; Secondary: 35J2. Citation: Patrick Winkert, Rico Zacher. A priori bounds for weak solutions to elliptic equations with nonstandard growth. Discrete & Continuous Dynamical Systems - S, 2012, 5 (4) : 865-878. doi: 10.3934/dcdss.2012.5.865
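The De Giorgi iteration technique mentioned in the abstract typically rests on a lemma of roughly the following textbook form (the precise variant used in the paper may differ): if \(\varphi : [k_{0},\infty) \to [0,\infty)\) is nonincreasing and
$$\varphi(h) \le \frac{C}{(h-k)^{\alpha}}\,\varphi(k)^{\beta} \qquad \text{for all } h > k \ge k_{0},$$
with constants \(C>0\), \(\alpha>0\) and \(\beta>1\), then \(\varphi(k_{0}+d)=0\) for a finite \(d\) depending only on \(C\), \(\alpha\), \(\beta\) and \(\varphi(k_{0})\) (one may take \(d^{\alpha} = C\,\varphi(k_{0})^{\beta-1}\,2^{\alpha\beta/(\beta-1)}\)). Applied to level-set measures such as \(\varphi(k) = |\{x : u(x) > k\}|\), this is what turns the iteration into an essential supremum bound.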
CommonCrawl
Physical environmental opportunities for active play and physical activity level in preschoolers: a multicriteria analysis Juliana Nogueira Pontes Nobre1, Rosane Luzia De Souza Morais2, Bernat Viñola Prat3, Amanda Cristina Fernandes1, Ângela Alves Viegas1, Pedro Henrique Scheidt Figueiredo2, Henrique Silveira Costa2, Ana Cristina Resende Camargos4, Marcus Alessandro de Alcantara2, Vanessa Amaral Mendonça2 & Ana Cristina Rodrigues Lacerda2 BMC Public Health volume 22, Article number: 340 (2022) Cite this article Active play opportunities seems to influence the level of physical activity during childhood. However, a gap remains about which environmental opportunities including the daycare physical environment could have a positive impact on the level of physical activity in preschoolers. (1) To develop an index to measure the environmental opportunities of free active play for preschoolers of middle-income countries; (2) to check the relationship and contribution of the index to explain objectively the level of physical activity. A quantitative, cross-sectional, exploratory study with 51 preschool children. The established criteria for the index according to the literature were: (1) Outdoor time on typical days of the week. (2) Outdoor time on a typical weekend day. (3) The presence of internal space and external environment in the child's home that allows playing. (4) Presence of patio with space for games at the school. (5) Presence of a playground with a toy at the school. We applied multi-attribute utility theory for the determination of the multicriteria index of physical environmental opportunities. Pearson's correlation analysis and simple linear regression were used to verify the association between the index and the physical activity level. The index showed a positive correlation with the level of physical activity, e.g., the average time of MVPA (r = 0.408, p = 0.003). The univariate linear regression demonstrated that the quality of physical environmental opportunities for physical activity explained 20% of the preschooler's classification as active and 16% of the time in moderate to vigorous physical activity (p < 0.001). Physical environmental opportunities for active play have a positive effect on physical activity in preschoolers and should be encouraged in different social segments. Physical activity (PA) for children is the basis for healthy growth. Lifestyle habits, such as PA participation, developed throughout childhood affect adolescence and adulthood [1]. Sufficient PA in early childhood (under 5 years old), especially at moderate to vigorous physical activity (MVPA) [2], can promote immediate metabolic benefits in blood pressure, lipid profile [3], reduce the risk of disease and weight gain [1] and improve social, emotional, cognitive aspects [4]. Finally, evidence points that PA in preschoolers can promote the development of important motor skills for success in motor tasks and subsequent engagement in sports influencing a healthy lifestyle in adulthood [1, 5, 6]. Despite this, a recent study showed an increase in sedentary behavior and a reduction in PA in preschoolers [7]. In addition, a growing number of children worldwide are failing to perform the minimum of recommended PA to acquire a healthy lifestyle [7, 8]. As PA levels often decrease throughout the school phase of children and adolescents [9], preschool phase is considered a crucial period to establish healthy PA habits throughout childhood [10]. 
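As a minimal sketch of how an additive multi-attribute utility index over the five criteria listed in the Methods summary above could be assembled; the 0-to-1 scoring, the equal weights and the example values are illustrative assumptions, not the scoring rules actually used in the study:

# Hypothetical additive utility index over the five opportunity criteria (equal weights assumed)
CRITERIA = [
    "outdoor_time_weekday",       # (1) outdoor time on typical weekdays
    "outdoor_time_weekend",       # (2) outdoor time on a typical weekend day
    "home_play_space",            # (3) indoor and outdoor space at home that allows playing
    "school_patio",               # (4) patio with space for games at the school
    "school_playground_toys",     # (5) playground with toys at the school
]

def opportunity_index(scores, weights=None):
    # scores: dict mapping each criterion to a utility value in [0, 1]
    if weights is None:
        weights = {c: 1.0 / len(CRITERIA) for c in CRITERIA}
    return sum(weights[c] * scores[c] for c in CRITERIA)

example_child = {   # made-up values for illustration
    "outdoor_time_weekday": 0.5,
    "outdoor_time_weekend": 1.0,
    "home_play_space": 1.0,
    "school_patio": 1.0,
    "school_playground_toys": 0.0,
}
print(opportunity_index(example_child))   # ~0.7 with equal weights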
Although a previous study suggested that children's PA levels are mainly influenced by genetic factors [11], evidence also points to the influence of family [12, 13], economic classification [13] and personal factors on PA levels [14,15,16,17,18]. Among personal factors, sex and age were determinants, such that male preschoolers, older preschoolers [19] and those exposed to the outdoors [20] had higher PA intensities. In addition, socioeconomic disparities related to the economic classification of households (including comfort items in working order in the household, housing security conditions, neighborhood, the stretch of street of the household, and the householder's education) [21] may also affect PA levels [22]. Previous studies highlighted the home environment as an important space for the promotion of PA in preschoolers [14, 16, 17]. Evidence points to important facilitators for PA, among which are the home environment, the preschool environment, and the reciprocal interactions between the child and the physical environment [23]. The home is a behavioral environment in which children spend a great deal of time, so understanding the PA facilitators in this physical environment is necessary [14]. Briefly, studies showed that playing outdoors at home can be an important source of PA for many preschoolers [24,25,26]. Elements of the home outdoor environment, e.g., the presence and attributes of a yard, have been associated with increased PA in preschoolers [27, 28]. However, not only the presence of a yard but also how often children use it seems to influence the level of PA and reduce sedentary time [26]. Moreover, studies reinforce the association of outdoor time on weekdays and weekends with PA, especially in preschoolers [24, 29, 30]. On the other hand, the internal home environment can inhibit or stimulate PA in preschoolers: limited internal space, e.g., apartments lacking play space, inhibits active opportunities [23, 31, 32], whereas larger spaces seem to benefit children's PA [32, 33]. Studies demonstrated that preschoolers classified as highly active, compared with insufficiently active ones, are often active in indoor environments [34], reinforcing the idea that the indoor environment offers untapped potential to promote and support PA [35]. The daycare physical environment also has the potential to influence PA, general health and the development of children under care [3, 35, 36]. Thus, the daycare physical environment should be an ideal setting for promoting PA because it offers a unique opportunity for structured PA for all children, regardless of children's characteristics and parents' behaviors, attitudes, and resources [19]. In this sense, greater space per child and open play areas could increase PA in children attending daycare centers [37]. The presence of portable play equipment and a playground [20] was also associated with greater PA in preschoolers [37, 38]. Further investigation into the relative value of outdoor play-area designs, as well as the presence and quality of individual characteristics of the physical environment, e.g., free space, leisure equipment, vegetation, paths and shade, is needed to identify the physical environmental characteristics that best promote PA in the daycare physical environment [39, 40].
Thus, in certain middle-income countries it is common to find physical spaces that are restricted and inappropriate for children [41]. Regulatory and operational policy frameworks for the school environment have received little attention [22] despite evidence of the importance of physical space and parks in promoting children's PA [19, 20]. Added to this, gaps remain in understanding which opportunities facilitate preschoolers' PA levels when both the home environment and the daycare center are considered [22]. Despite evidence of the importance of environmental factors for PA in the daycare environment [20] and in the home environment [24,25,26, 28], there is currently little discussion of existing measures [22], since studies use different methodologies [20, 22, 23]. Finally, since child development is a multifactorial construct resulting from the child's reciprocal interactions with the physical environment [42], multicomponent models have been encouraged [43] to understand how strategies that increase preschoolers' PA intensity can combine multiple factors such as the home and daycare environments [42, 43]. Thus, the aims of our study were (1) to develop an index to measure the physical environmental opportunities for free active play available to preschoolers in middle-income countries, and (2) to assess the relationship and contribution of the index in explaining objectively measured PA levels. This is a quantitative, exploratory, cross-sectional study, approved by the Research Ethics Committee of the Universidade Federal dos Vales do Jequitinhonha e Mucuri-UFVJM (Protocol number: 2,773,418), with the written informed consent of those responsible and the consent of the participants. Data collection took place from July to December 2019. Of 11 public schools in a Brazilian municipality, participants came from the 9 schools that accepted. Sample size was estimated using the OpenEpi software, version 3.01, following a study with a similar design [43]. For this, a prevalence of 3% of Brazilian preschoolers meeting the PA guidelines recommended by the WHO [44] was considered, with a desired precision of 10%, a confidence level of 95%, and a design effect of 1 [45]. Considering a population of 1241 children enrolled in the Brazilian municipality studied [46], and an adjustment for possible sample losses of 10%, the sample size resulted in 51 preschool children. Exclusion criteria were prematurity and low birth weight; complications during pregnancy and childbirth; and signs of malnutrition or diseases that interfere with growth and development. Children with any condition that interfered with cognitive and motor development were also excluded. For the characterization of the participants, a questionnaire was developed to collect data on the child's birth and health. In addition, the child's school shift (full or partial), the mother's education, and the family's economic level were assessed using questionnaires suitable for preschoolers. The Brazil economic classification criterion from the Brazilian Association of Research Companies (ABEP) was used to verify the economic level of the families. This questionnaire stratifies households into economic classes ranging from A1 (highest) to D-E (lowest) [47].
The PA level was measured using an accelerometer (ActiGraph®, Model GT9X) for a period of 3 days [48], with a minimum wear time of 570 min a day [49], which is considered suitable for preschoolers [48]. Accelerometers were initialized at a sampling rate of 60 Hz and analyzed using 5-s epochs. In all analyses, consecutive periods of ≥20 min of zero counts were defined as non-wear time [50]. The accelerometer was positioned on the right side of the hip to capture accelerations and decelerations of the body and determine objective measurements of gross acceleration and intensity of physical activity [50]. A trained researcher placed the device on the child's right hip at 7 am, and the parents were instructed to remove it at 7 pm. Pediatric cut-off points validated for preschoolers classify intensity as sedentary (0 to 819 counts/min), light (820 to 3907 counts/min), moderate (3908 to 6111 counts/min), and vigorous (above 6612 counts/min) [51]. For this study, the child's mean time at these intensities was used. The classification as "active" or "insufficiently active" was established according to the WHO, which considers an active child to be one who accumulates at least 180 min/day of PA, with a minimum of 60 min/day in MVPA [52]. The accelerometer data were initially downloaded using ActiLife Software (version 5.10) and then analyzed using custom Excel macros. The quality of the environment in which the child lives was assessed using the Early Childhood Home Observation for Measurement of the Environment (EC-HOME) [53]. The EC-HOME is applied through observation and semi-structured interviews during home visits, standardized for children aged 3 to 5 years. The instrument contains 55 items divided into 8 scales: I-Learning materials, II-Language stimulation, III-Physical environment, IV-Responsiveness, V-Academic stimulation, VI-Modeling, VII-Variety, and VIII-Acceptance. Each item in each domain was scored dichotomously (0 or 1), with a maximum instrument score of 55 points (higher scores reflect a better evaluation in each domain). Of note, the sum of the raw scores of the subscales was classified into the following ranges: Upper Fourth (values between 46 and 55 points), Middle Half (values between 30 and 45 points), and Lowest Fourth (values between 0 and 29 points). For analysis, the sum of the raw scores of the subscales was used. For the elaboration of the multicriteria index of physical environmental opportunities, we used two items of subscale III of this instrument, which assess, among other aspects, the presence of a yard and an internal physical environment of the house with at least 30 m² per inhabitant. The HOME Inventory has been used worldwide to evaluate the home environment in both international [54] and transcultural studies [55], with psychometric characteristics investigated in a Brazilian preschooler sample, showing satisfactory internal consistency for the total scale (Cronbach's alpha = 0.84 for the 55 items) [56]. The outdoor time questionnaire proposed by Burdette et al. [57] evaluated the daily time of participation in outdoor play and games and in sedentary behavior (daily time watching television) at home. The parents completed the questionnaire in relation to the child's behavior on a typical day of the week and on a typical day of the weekend, considering three different periods of the day.
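To make the processing rules above concrete, the sketch below classifies epoch counts into the four intensity bands and applies the WHO active/insufficiently active rule. It is an illustrative reading of the procedure, not the pipeline actually used in the study (ActiLife plus Excel macros): the cut-points are the count ranges quoted in the text, interpreted here as counts per minute, and the function names and example values are hypothetical.

```python
# Count cut-points quoted in the text (interpreted as counts/min); assumption:
# upper bounds are inclusive and anything above the moderate range is vigorous.
SEDENTARY_MAX, LIGHT_MAX, MODERATE_MAX = 819, 3907, 6111

def classify_epoch(counts_per_min):
    """Return the intensity label for a single epoch's count value."""
    if counts_per_min <= SEDENTARY_MAX:
        return "sedentary"
    if counts_per_min <= LIGHT_MAX:
        return "light"
    if counts_per_min <= MODERATE_MAX:
        return "moderate"
    return "vigorous"

def who_classification(minutes_by_intensity):
    """Apply the WHO rule used in the text: >= 180 min/day of total PA with
    >= 60 min/day of MVPA -> 'active', otherwise 'insufficiently active'."""
    total_pa = (minutes_by_intensity["light"]
                + minutes_by_intensity["moderate"]
                + minutes_by_intensity["vigorous"])
    mvpa = minutes_by_intensity["moderate"] + minutes_by_intensity["vigorous"]
    return "active" if total_pa >= 180 and mvpa >= 60 else "insufficiently active"

# Hypothetical example: daily minutes for one child.
example_day = {"sedentary": 410, "light": 150, "moderate": 45, "vigorous": 20}
print(classify_epoch(2500))              # -> 'light'
print(who_classification(example_day))   # -> 'active'
```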
For each period, the time reported by the parents was recorded, and the total outdoor time in minutes was calculated. This questionnaire was validated for Brazilian preschoolers [58]. The quality of the school environment was assessed using the Early Childhood Environment Rating Scales (ECERS) [59], which contain inclusive and culturally sensitive indicators for many items. The scale consists of 43 items organized into 7 subscales (1-Space and Furnishings, 2-Personal Care Routines, 3-Language and Literacy, 4-Learning Activities, 5-Interactions, 6-Program Structure, 7-Parents and Staff). Each quality indicator was marked according to its presence or absence in each collective environment (classroom), with the items scored from 1 to 7. The final score of the scale is given by the mean of the seven subscales. It is an ordinal, increasing scale from 1 to 7, the interpretation of quality being 1: inadequate; 3: minimal (basic); 5: good; 7: excellent. For the elaboration of the study index, two items from subscale 1 were used, covering the physical presence and use of a playground and toys in addition to the school space. This questionnaire is a well-known international instrument translated into Portuguese [60] and used in different Brazilian studies including preschoolers [60, 61]. Of note, the instrument presents psychometric properties for Brazilian preschoolers [62]. Recruitment took place at the school entrances, where the invitation was made to the children's guardians as they came to collect the children after class. After written consent, the subsequent steps were scheduled. The first stage was carried out at the child's home, with the completion of questionnaires characterizing the child and their family, the economic classification [47], and outdoor time [57], the application of the EC-HOME [53], and guidance on the instrument (accelerometer) that the child would wear to measure the PA level. The families were instructed on the use of the accelerometer, which was delivered by a properly trained researcher and positioned on the child's right hip on every day of use. The device was placed at 7 am and removed by the family at 7 pm. The children used the device for 3 days and, if the data were not captured, the use was repeated in the following week. The second stage was carried out in the school environment, where the ECERS was applied. To ensure reliability and internal control, only one experienced researcher applied all tests, measures, and questionnaires. We used multi-attribute utility theory (MAUT), a tool used when multiple factors coexist in an evaluation process, to identify, characterize, and combine different variables [63]. Nobre and colleagues [64], in a study using MAUT, presented a similar methodology describing the phases of MAUT. Phase 1: selection of criteria. According to MAUT, the selected criteria must faithfully represent what will be assessed and are drawn from the literature [65]. Thus, for the physical environmental opportunities for active play, the selected criteria, based on the literature, were: 1-Time the child spends outdoors on weekdays [24, 26, 27, 57, 66]; 2-Time the child spends outdoors on weekend days [26, 57]; 3-Presence of internal and external space in the house available for play [23, 33]; 4-External space (patio or court) of the school that allows play [19, 23]; 5-Whether the school has a playground with toys [30, 38, 67].
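As a side illustration of how the two environment instruments are scored (sum of the 55 dichotomous EC-HOME items with the cut-offs given above; mean of the seven ECERS subscales on the 1-7 scale), a minimal sketch follows. The banding of intermediate ECERS scores between the quoted anchors is an assumption made only for this example, and all input values are hypothetical.

```python
def ec_home_classification(item_scores):
    """Sum the 55 dichotomous EC-HOME items and map the total onto the
    ranges given in the text (0-29, 30-45, 46-55)."""
    total = sum(item_scores)
    if total >= 46:
        return total, "Upper Fourth"
    if total >= 30:
        return total, "Middle Half"
    return total, "Lowest Fourth"

def ecers_quality(subscale_means):
    """Final ECERS score = mean of the seven subscales. The 1-7 anchors quoted
    in the text (1 inadequate, 3 minimal, 5 good, 7 excellent) are turned into
    bands here only for illustration."""
    score = sum(subscale_means) / len(subscale_means)
    if score < 3:
        label = "inadequate"
    elif score < 5:
        label = "minimal (basic)"
    elif score < 7:
        label = "good"
    else:
        label = "excellent"
    return score, label

# Hypothetical data: 55 item scores and 7 subscale means.
print(ec_home_classification([1] * 38 + [0] * 17))            # -> (38, 'Middle Half')
print(ecers_quality([4.5, 3.8, 5.0, 4.2, 5.5, 4.0, 4.6]))     # -> (~4.5, 'minimal (basic)')
```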
Phase 2: establishing a utility scale for scoring each criterion. After selecting the criteria, we established scores for them on the same ordinal scale. Within MAUT, some selected criteria may have different units of measure, quantified by means of attributes [65]. In our study, the selected criteria quantified responses using the attributes described in the second column of Table 1. In this phase, the responses were converted into numerical variables by means of an ordinal scale. For each answer, a positive value was attributed when the practice was considered favorable and zero when the criterion did not characterize a physical environmental opportunity for active play. Table 1 Criteria evaluated and possible responses The first criterion was "Time the child spends outdoors on weekdays (minutes)" [26, 57, 66]; the second, "Time the child spends outdoors on weekend days (minutes)" [26, 57]; the third, "House has an internal environment with a minimum of 30 m² per inhabitant and an external space that allows play" [24, 33]; and the fourth, "School has space (patio or court) that allows active play" [19, 23]. The fifth criterion, "School has a playground with toys" [30, 38, 67], scored 1 for a child attending a school that had a playground with toys encouraging gross motor coordination and 0 otherwise, according to ECERS criteria [59]. Thus, based on Phase 1, the child with the highest score in the multicriteria analysis of physical environmental opportunities for PA is one who spent 120 min or more playing outdoors on weekdays and on weekends, resided in a house with an internal space of at least 30 m² per inhabitant and with a yard or external space that allowed active play, and studied in a school that contained a patio or court allowing movement and a playground with toys. Table 1 presents the criteria with the possible scores. Phase 3: determination of the weight of each criterion of physical environmental opportunities. The weight is the number that represents the importance of each criterion. If the decision maker understands that one criterion is more relevant than another (supported by the literature or by the opinion of experts on the subject), it will have greater weight [63]. In this study, equal weights were used for the different criteria, assuming that each selected factor has the same degree of relevance among the physical environmental stimulation opportunities for PA experienced by children. Phase 4: calculation of the multicriteria index of physical environmental opportunities. The multicriteria index of physical environmental opportunities is the weighted sum of the evaluations of the different criteria. In our study, the weights for each criterion were the same (Phase 3); therefore, to calculate the multicriteria index of physical environmental opportunities, an average of the evaluations of all criteria was computed for each participating child. Equation 1 shows how this calculation was made (n = number of criteria evaluated): $$\text{Multicriteria index of physical environmental opportunities}_{\text{child } i} = \text{Evaluation of criterion } 1_{\text{child } i}\cdot \text{weight}_{\text{criterion } 1} + \dots + \text{Evaluation of criterion } n_{\text{child } i}\cdot \text{weight}_{\text{criterion } n}$$ Phase 5: validation of results. At this stage, we verified whether the multicriteria methodology meets the objective [51, 53].
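A minimal sketch of Phases 2-4 is given below: each criterion is scored 0 or 1 and the index is the equal-weighted mean of the five scores, as in Eq. 1. The 120-min threshold used for the two outdoor-time criteria is inferred from the description of the highest-scoring child and is an assumption of this example; Table 1 may define the attributes differently.

```python
def criterion_scores(child):
    """Score the five criteria of Phase 1 for one child (0 = not favorable,
    1 = favorable). The 120-min outdoor-time threshold is an assumption."""
    return [
        1 if child["outdoor_weekday_min"] >= 120 else 0,       # criterion 1
        1 if child["outdoor_weekend_min"] >= 120 else 0,       # criterion 2
        1 if child["home_space_allows_play"] else 0,           # criterion 3 (>= 30 m2/inhabitant + outdoor space)
        1 if child["school_patio_or_court"] else 0,            # criterion 4
        1 if child["school_playground_with_toys"] else 0,      # criterion 5
    ]

def multicriteria_index(child, weights=None):
    """Weighted sum of criterion evaluations (Eq. 1); with equal weights this
    reduces to the mean of the criterion scores, as in Phase 4."""
    scores = criterion_scores(child)
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical child: 90 min outdoors on weekdays, 150 min on weekends, home
# space for play, school with a patio but no playground.
child = {"outdoor_weekday_min": 90, "outdoor_weekend_min": 150,
         "home_space_allows_play": True, "school_patio_or_court": True,
         "school_playground_with_toys": False}
print(multicriteria_index(child))  # -> 0.6
```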
Our study assessed the relationship and contribution of the index in explaining objectively measured PA: sedentary time, light, moderate, and vigorous intensities, MVPA, and classification as "active" or "insufficiently active" [47]. Thus, a correlation analysis was carried out between the multicriteria index of physical environmental opportunities and the PA intensities collected by the accelerometer. Excel (version 2010) was used to formulate the multicriteria model; later, for the validation stage, the data were transferred to the Statistical Package for the Social Sciences (version 23.0) to perform Pearson's correlation analysis and simple linear regression analysis (p < 0.05). After applying the Shapiro-Wilk test to the multicriteria index of physical environmental opportunities, we found that the variable was normally distributed, and we subsequently performed Pearson correlation analysis. Then, variables that showed a correlation above 0.20 were analyzed by simple linear regression in order to verify how much of the PA intensities the multicriteria index of physical environmental opportunities could explain. Table 2 shows the participants' characteristics. Fifty-one preschoolers enrolled in 9 public Municipal Early Childhood Education Centers participated in this study, with an average age of 4.5 years (SD ± 0.60) and a slight predominance of boys (53%). Most of the children's families were headed by couples living together, and more than half of the mothers had 8 or more years of schooling (65.4%). Most families belonged to the lower middle class (class C, 63.4%) and lived in houses classified as medium-stimulation environments (78%). Of the participating children, most did not perform systematized PA in spaces such as clubs and the like; many accumulated 180 min/day of PA, and just over half of the children accumulated 60 min/day in MVPA. The majority (64.7%) studied in the partial school shift, which amounted to an average of 4 h and 30 min a day in preschool (Table 2). Although the respondents' average MVPA time met the WHO recommendations (WHO 2019), the objectively measured average time in sedentary behavior adds up to almost 7 h a day. Table 2 Characterization of participants (n = 51) and correlation with the multicriteria index of physical environmental opportunities The multicriteria index of physical environmental opportunities was calculated following the phases described in the methodology section. Figure 1 shows the validation phase, representing the correlation between the multicriteria index of physical environmental opportunities and the PA intensities. In panel 1A, children with a higher multicriteria index of physical environmental opportunities spent longer at moderate intensity; the correlation was statistically significant, positive, and moderate. In panel 1B, children with a higher multicriteria index of physical environmental opportunities spent longer at vigorous intensity, again a statistically significant, positive, and moderate correlation. In panel 1C, children with the highest multicriteria index spent longer in MVPA, also a statistically significant, positive, and moderate correlation. Correlation graphs between the multicriteria index of physical environmental opportunities and physical activity (PA) intensities
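The validation procedure just described (Shapiro-Wilk normality check, Pearson correlation, then simple linear regression for correlations above 0.20) can be reproduced on any index/intensity pair with a few lines of SciPy. This is a generic sketch using synthetic data, not the SPSS analysis of the study; all variable names and values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study variables (n = 51 children).
index = rng.uniform(0.2, 1.0, 51)                  # multicriteria index
mvpa = 30 + 40 * index + rng.normal(0, 8, 51)      # daily MVPA minutes

# Step 1: normality check on the index (Shapiro-Wilk), then Pearson correlation.
w_stat, w_p = stats.shapiro(index)
r, r_p = stats.pearsonr(index, mvpa)

# Step 2: simple linear regression, run only if |r| > 0.20 as in the text.
if abs(r) > 0.20:
    fit = stats.linregress(index, mvpa)
    print(f"r = {r:.2f} (p = {r_p:.3f}), R^2 = {fit.rvalue**2:.2f}")
```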
In Fig. 2, the boxplot shows the relationship between the multicriteria index of physical environmental opportunities and the classification of children as active or insufficiently active. Children who had higher-quality physical environmental opportunities for PA (a higher value in the multicriteria index of physical environmental opportunities) were classified as active. In this sense, the relationship was positive, significant (p = 0.001), and moderate (x2 = 0.44). Mean difference between the physical activity classification and the multicriteria index of physical environmental opportunities We also present, in Table 3, the linear regressions involving the multicriteria index of physical environmental opportunities as validation of the multicriteria index. This study developed an index to measure the physical environmental opportunities for free active play available to preschoolers in middle-income countries. A higher value in the multicriteria index of physical environmental opportunities explained 12% of the moderate intensity (p = 0.013), 13% of the vigorous intensity (p = 0.009), and 16% of MVPA (p = 0.003). In addition, having higher-quality physical environmental opportunities for PA explained 20% of the active classification (p = 0.001). Table 3 Linear regression of physical activity intensities with the outcome variable of physical activity opportunities included in the multicriteria index of physical environmental opportunities The effect size was calculated using Cohen's d [68], which considers d = 0.20, 0.50, and 0.80 as small, medium, and large effect sizes, respectively. Thus, for moderate intensity, the study showed a power of 0.86 (considering an alpha error of 0.05 and an effect size of 0.13). For vigorous intensity the power was 0.86 (alpha error of 0.05, effect size of 0.14). For MVPA intensity, the power was 0.86 (effect size of 0.19), and for the PA classification, the power was 0.86 (effect size of 0.25) [68]. Our data revealed a positive relationship of the multicriteria index of physical environmental opportunities with MVPA intensity. In addition, the physical environmental opportunities for PA explained 20% of the preschoolers' classification as active and 16% of the time in MVPA. It is noteworthy that the relationship was moderate [69] and in line with previous studies using a similar methodology for child development [64]. That study also used a multicriteria index, associated it with domains of child development, and found moderate relationships [64]. The preschool phase is a sensitive period for the experience of PA. In addition, it enables the development of motor competence [5, 70], which may facilitate engagement in sports and the maintenance of a healthy lifestyle, and creates habits that tend to last into later stages of life [1]. A previous study using direct PA measurement in Swedish preschoolers showed that structural characteristics of the preschool, e.g., a formalized PA policy and more time spent outdoors, were positively associated with children's PA [71]. Thus, formalized PA policies and outdoor time can be important for promoting children's PA during preschool hours [20]. Moreover, previous studies have advocated the expansion of outdoor recreation time combined with structured PA in the preschool environment [43, 72]. Thus, promoting PA and opportunities that encourage the natural desire for movement, beginning early in life, is beneficial [10].
Studies of PA intensities in preschoolers have focused on MVPA [73]. Of note, the basis for prioritizing MVPA is probably its reported beneficial impact on health-related physical fitness [2, 74], cognitive development [75], and motor competence [76]. In addition, a preschool space with playgrounds containing toys and equipment, as well as a patio, increases the physical environmental opportunities for active play during recreation time [20, 67, 71]. In this regard, although educational legislation does not make the presence of physical education professionals mandatory in preschool [77, 78], Brazilian preschoolers who have access to a physical education professional probably have better motor skills [77]. Given the above, we hypothesize that the presence of a playground at the preschool could be a determinant of increased PA opportunities, especially at MVPA intensity [67, 71]. Our multicriteria index of physical environmental opportunities also pointed to the importance of the home environment [7]. The family environment plays an important role in providing opportunities for physical activity [66, 79]. In particular, playing outdoors requires social support and parental supervision [80]. In addition, because parental restrictions can prevent participation in PA and outdoor play in preschoolers [81], our data reinforce the importance of external and internal space for active play at home, since most of the responsibility for promoting healthy behaviors and PA practices currently falls on families [82]. Regarding the evaluation of home quality, our data showed that more than half of the preschoolers live in medium-stimulation environments [53] and belong to class C, i.e., the stratum comprising the lower middle class. Thus, for families whose houses do not have external and internal spaces that allow active play [7], the presence of parks and outdoor leisure areas in the neighborhood and daycare environment seems crucial for children to increase their level of PA, especially those whose home environment may not be conducive to activity [30]. Considering PA time across all intensities, 96.1% of the Brazilian preschoolers accumulated 180 min/day of PA. Furthermore, the quality of environmental opportunities for active play seemed to contribute substantially to the acquisition of moderate, vigorous, and MVPA intensities, and to the preschooler becoming physically active. Surprisingly, our data showed that the majority of the Brazilian preschoolers reached the recommended minimum daily PA. In this sense, the proportion meeting daily PA time among Brazilian preschoolers was higher than among Chinese preschoolers (83.8%) [83], whose accumulated daily PA time was more likely due to time spent at light intensity [23]. We suppose that the environmental factors jointly assessed by the multicriteria index of physical environmental opportunities contribute to reaching MVPA intensity. Namely, the mean MVPA values were 70.68 ± 9.09 min for active preschoolers and 47.81 ± 8.85 min for insufficiently active preschoolers. Active preschoolers scored higher on the multicriteria index of physical environmental opportunities (0.67 ± 0.18 points) compared to those who were insufficiently active (0.49 ± 0.18 points).
Other studies investigated preschoolers using the same cut-offs for the PA classification [52], and our data showed that the percentage of Brazilian preschoolers meeting the daily guidelines (54.90%) [52] was higher than that of Canadian (13.7%) [84] and Swedish (33%) preschoolers [17]. Collectively, our results are in line with the study by Tucker and colleagues [72] and support the implementation of opportunities that increase children's access to outdoor play and ample spaces, both in the preschool setting [20, 43] and in other places such as the home environment [26, 31, 33], in order to provide PA opportunities through body movement experiences [43]. In this sense, the multicriteria analysis meets a current demand in the care of the pediatric population regarding the construction of parameters that indicate physical environmental opportunities for active play and physical activity level in preschoolers. Our study has strengths and limitations. The cross-sectional design of the present study does not allow inferring cause and effect. Aspects related to social modeling, parental encouragement, and logistical support for PA should be considered in future work. In addition, our sample seemed to be very active, which limits the generalizability of the findings. Although accelerometry is a direct measure of PA intensity among preschoolers [49] and is used in many studies [82], it cannot accurately detect PA intensity in activities with significant upper-body movement. However, the direct measurement of daily PA level avoided the risk of bias related to self-reported measures, such as memory difficulties and social desirability. An important limitation of this study is that the ECERS questionnaire was not validated for Brazilian preschoolers. However, this questionnaire [59] is a well-known international instrument translated into Portuguese [60] and used in different Brazilian studies with preschoolers [60, 61], with established psychometric properties [62]. Of note, although international groups from different countries, including Brazil, have used the Home Observation for Measurement of the Environment (HOME) Inventory, the instrument has not been subject to an analysis of cross-cultural measurement equivalence/invariance [55]. However, we note that the HOME Inventory has been used worldwide to evaluate the home environment in both international [54] and transcultural studies [55], with psychometric characteristics investigated in a Brazilian preschooler sample [56]. Moreover, we used questionnaires [57] that allowed assessment of the quality of the home [53] and school [59] environments, enabling the elaboration of the multicriteria index of physical environmental opportunities to evaluate PA opportunities. Physical environmental opportunities were determinants of the higher intensities of PA. Therefore, playing outdoors, living in a home with a yard and adequate indoor space, and studying in schools with a patio and playground seem to favor preschoolers' possibilities of experiencing MVPA. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
ABEP: Associação brasileira de estatística e pesquisa EC-HOME: Early Childhood Home Observation for Measurement of the Environment ECERS: Early Childhood Environment Rating Scales MAUT: Multi-attribute utility theory MVPA: Moderate to vigorous physical activity PA: UFVJM: Universidade Federal dos Vales do Jequitinhonha e Mucuri Robinson LE, Stodden DF, Barnett LM, et al. Motor competence and its effect on positive developmental trajectories of health. Sports Med. 2015;45(9):1273–84. https://doi.org/10.1007/s40279-015-0351-6. Utesch T, Bardid F, Büsch D, Strauss B. The relationship between motor competence and physical fitness from early childhood to early adulthood: a meta-analysis. Sports Med. 2019;49(4):541–51. https://doi.org/10.1007/s40279-019-01068-y. Timmons B, Leblanc A, Carson V, et al. Systematic review of physical activity and health in the early years (aged 0–4 years). Appl Physiol Nutr Metab. 2012;37(4):773–92. https://doi.org/10.1139/h2012-070. Tandon PS, Saelens BE, Christakis DA. Active play opportunities at child care. Pediatric. 2015;135(6):425–31. https://doi.org/10.1542/peds.2014-2750. Stodden DF, Goodway JD, Langendorfer SJ, et al. A developmental perspective on the role of motor skill competence in physical activity: an emergent relationship. Quest. 2008;60:290–306. https://doi.org/10.1080/00336297.2008.10483582. Logan SW, Webster KE, et al. Relationship between fundamental motor skill competence and physical activity during childhood and adolescence: a systematic review. Kinesiol Rev. 2015;4(4):416–26. https://doi.org/10.1123/kr.2013-0012. Razak LA, Yoong SL, Wiggers J, et al. Impact of scheduling multiple outdoor free-play periods in childcare on child moderate-to-vigorous physical activity: a cluster randomised trial. Int J Behav Nutr Phys Act. 2018;15(1):1–12. https://doi.org/10.1186/s12966-018-0665-5. Dias KI, White J, Jago R, et al. International comparison of the levels and potential correlates of objectively measured sedentary time and physical activity among three-to-four-year-old children. Int J Environ Res Public Health. 2019;16(11):1929. https://doi.org/10.3390/ijerph16111929. Cooper AR, Goodman A, Page A, et al. Objectively measured physical activity and sedentary time in youth: the international children's accelerometry database (ICAD). Int J Behav Nutr Phys Act. 2015;12(1):113. https://doi.org/10.1186/s12966-015-0274-5. Goldfield GS, Harvey A, Grattan K, et al. Physical activity promotion in the preschool years: a critical period to intervene. Int J Environ Res Public Health. 2012;9(4):1326–42. https://doi.org/10.3390/ijerph9041326. Pate RR, Pfeiffer KA, Trost SG, et al. Physical activity among children attending preschools. Pediatric. 2004;114(5):1258–63. https://doi.org/10.1542/peds.2003-1088-L. Hinkley T, Crawford D, Salmon J, et al. Preschool children and physical activity: a review of correlates. Am J Prev Med. 2008;34:435–41. Mitchell J, Skouteris H, McCabe M, et al. Physical activity in young children: a systematic review of parental influences. Early Child Dev Care. 2012;182(11):1411–37. https://doi.org/10.1080/03004430.2011.619658. Hesketh KR, Grin SJ, Van Sluijs EM. UK preschool-aged children's physical activity levels in childcare and at home: a cross-sectional exploration. Int J Behav Nutr Phys Act. 2015;12(1):1–9. https://doi.org/10.1186/s12966-015-0286-1. O'Dwyer M, Fairclough SJ, Ridgers ND, et al. Patterns of objectively measured moderate-to-vigorous physical activity in preschool children. J Phys Act Health. 2014;11(6):1233–8. 
https://doi.org/10.1123/jpah.2012-0163. Møller NC, Christensen LB, Mølgaard C, et al. Descriptive analysis of preschool physical activity and sedentary behaviors – a cross sectional study of 3-year-olds nested in the SKOT cohort. BMC Public Health. 2017;17(1):1–12. https://doi.org/10.1186/s12889-017-4521-3. Berglind D, Tynelius P. Objectively measured physical activity patterns, sedentary time and parent-reported screen-time across the day in four-year-old Swedish children. BMC Public Health. 2018;18(69). https://doi.org/10.1186/s12889-017-4600-5. O'Neill JR, Pfeiffer KA, Dowda M, et al. In-school and out-of-school physical activity in preschool children. J Phys Act Health. 2016;13(6):606–10. https://doi.org/10.1123/jpah.2015-0245. Nilsen AKO, Anderssen SA, Resaland GK, et al. Boys, older children, and highly active children benefit most from the preschool arena regarding moderate-to-vigorous physical activity: a cross-sectional study of Norwegian preschoolers. Prev Med Rep. 2019;14:100837. https://doi.org/10.1016/j.pmedr.2019.100837. Janković M, Batez M, Stupar D, et al. Physical activity of Serbian children in daycare. Child. 2021;8(2):161. https://doi.org/10.3390/children8020161. Morais RLM, Magalhães LC, Nobre JNP, et al. Quality of the home, daycare and neighborhood environment and the cognitive development of economically disadvantaged children in early childhood: a mediation analysis. Infant Behav Dev. 2021;64:101619. https://doi.org/10.1016/j.infbeh.2021.101619. Christian H, Maitland C, Enkel S, et al. Influence of the day care, home and neighbourhood environment on young children's physical activity and health: protocol for the PLAYCE observational study. BMJ Open. 2016;6(12). https://doi.org/10.1136/bmjopen-2016-014058. Hesketh KR, Lakshman R, Van Sluijs EM. Barriers and facilitators to young children's physical activity and sedentary behaviour: a systematic review and synthesis of qualitative literature. Obes Rev. 2017;18(9):987–1017. https://doi.org/10.1111/obr.12562. Burdette HL, Whitaker RC. Resurrecting free play in young children: looking beyond fitness and fatness to attention, affiliation, and affect. Arch Pediatr Adolesc Med. 2005;159:46–50. Aarts MJ, de Vries SI, Van Oers HA, et al. Outdoor play among children in relation to neighborhood characteristics: a cross-sectional neighborhood observation study. Int J Behav Nutr Phys Act. 2012;9:98–109. https://doi.org/10.1186/1479-5868-9-98. Määttä S, Ray C, Vepsäläinen H, Lehto E, et al. Parental education and pre-school children's objectively measured sedentary time: the role of co-participation in physical activity. Int J Environ Res Public Health. 2018;15(2):366. https://doi.org/10.3390/ijerph15020366. Janta B. Caring for children in Europe: how childcare, parental leave and flexible working arrangements interact in Europe. Report prepared for the European Commission, Directorate-General for Employment, Social Affairs and Inclusion: RAND Corporation, Europe; 2014. Barnett L, Hinkley T, Okely AD, et al. Child, family and environmental correlates of children's motor skill proficiency. Med Sci Sports Exerc. 2012;16:332–6. https://doi.org/10.1016/j.jsams.2012.08.011. De Craemer M, De Decker E, De Bourdeaudhuij I, et al. Physical activity and beverage consumption in preschoolers: focus groups with parents and teachers. BMC Public Health. 2013;13:278. https://doi.org/10.1186/1471-2458-13-278. French SA, Sherwood NE, Mitchell NR, Fan Y. 
Park use is associated with less sedentary time among low-income parents and their preschool child: the NET-works study. Prev Med Rep. 2017;5:7–12. https://doi.org/10.1016/j.pmedr.2016.11.003. De Decker E, De Craemer M, De Bourdeaudhuij I, et al. Influencing factors of sedentary behavior in European preschool settings: an exploration through focus groups with teachers. J Sch Health. 2013;83:654–61. Lyn R, Evers S, Davis J, et al. Barriers and supports to implementing a nutrition and physical activity intervention in child care: directors' perspectives. J Nutr Educ Behav. 2014;46(3):171–80. https://doi.org/10.1016/j.jneb.2013.11.003. Tovar A, Mena NZ, Risica P, et al. Nutrition and physical activity environments of home-based child care: what Hispanic providers have to say. Child Obes. 2015;11(5):521–9. https://doi.org/10.1089/chi.2015.0040. Howie EK, Brown WH, Dowda M, et al. Physical activity behaviours of highly active preschoolers. Pediatr Obes. 2013;8(2):142–9. https://doi.org/10.1111/j.2047-6310.2012.00099.x. Vanderloo LM. Screen-viewing among preschoolers in childcare: a systematic review. BMC Pediatr. 2014;14:205. https://doi.org/10.1186/1471-2431-14-205. Hodges EA, Smith C, Tidwell S, et al. Promoting physical activity in preschoolers to prevent obesity: a review of the literature. J Pediatr Nurs. 2013;28(1):3–19. https://doi.org/10.1016/j.pedn.2012.01.002. Trost SG, Ward DS, Senso M. Effects of child care policy and environment on physical activity. Med Sci Sports Exerc. 2010;42(3):520–5. Gubbels JS, Van Kann DH, Jansen MW. Play equipment, physical activity opportunities, and children's activity levels at childcare. J Environ Public Health. 2012. https://doi.org/10.1155/2012/326520. Cosco NG, Moore RC, Smith WR. Childcare outdoor renovation as a built environment health promotion strategy: evaluating the preventing obesity by design intervention. Am J Health Promot. 2014;28:27–32. Hnatiuk JA, Brown HE, Downing KL, et al. Interventions to increase physical activity in children 0–5 years old: a systematic review, meta-analysis and realist synthesis. Obes Rev. 2019;20(1):75–87. https://doi.org/10.1111/obr.12763. Magalhães CM. A história da atenção à criança e da infância no Brasil e o surgimento da creche e da pré-escola. Revista Linhas. 2017;18(38):81–142. Black MM, Walker SP, Fernald LC, et al. Early childhood development coming of age: science through the life course. Lancet. 2016;389:77–90. Coe DP. Means of optimizing physical activity in the preschool environment. Am J Lifestyle Med. 2020;14(1):16–23. https://doi.org/10.1177/1559827618818419. Martins CML, Lemos LFGBP, Souza Filho AN, et al. Adherence to 24-hour movement guidelines in low-income Brazilian preschoolers and associations with demographic correlates. Am J Hum Biol. 2021;33(4):e23519. https://doi.org/10.1002/ajhb.23519. Cordeiro R. Effect of design in cluster sampling to estimate the distribution of occupations among workers. Rev Saude Publica. 2001;35(1):10–5. https://doi.org/10.1590/S0034-89102001000100002. Viegas AA, et al. Associations of physical activity and cognitive function with gross motor skills in preschoolers: cross-sectional study. J Mot Behav. 2021:1–16. https://doi.org/10.1080/00222895.2021.1897508. ABEP – Associação Brasileira de Empresas de Pesquisa. Critério de classificação econômica Brasil. Disponível em: http://www.abep.org/. Acesso: 20 Nov 2019. Penpraze V, Reilly JJ, MacLean CM, et al. Monitoring of physical activity in young children: how much is enough? Pediatr Exerc Sci. 2006;18(4):483–91. 
https://doi.org/10.1123/pes.18.4.483. Matarma T, Lagström H, Hurme S, et al. Motor skills in association with physical activity, sedentary time, body fat, and day care attendance in 5-6-year-old children—the STEPS study. Scand J Med Sci Sports. 2018;28(12):2668–76. https://doi.org/10.1111/sms.13264. Migueles JH, Cadenas-Sanchez C, Ekelund U, et al. Accelerometer data collection and processing criteria to assess physical activity and other outcomes: a systematic review and practical considerations. Sports Med. 2017;47(9):1821–45. https://doi.org/10.1007/s40279-017-0716-0. Butte NF, Wong WW, Lee JS, et al. Prediction of energy expenditure and physical activity in preschoolers. Med Sci Sports Exerc. 2014;46(6):1216–26. https://doi.org/10.1249/mss.0000000000000209. World Health Organization, et al. Guidelines on physical activity, sedentary behaviour and sleep for children under 5 years of age: web annex: evidence profiles: World Health Organization; 2019. https://apps.who.int/iris/handle/10665/311664 Caldwell BM, Bradley RH. Home observation for measurement of the environment: administration manual. Little Rock: University of Ark; 2003. Jones PC, Pendergast LL, Schaefer BA, et al. Measuring home environments across cultures: invariance of the HOME scale across eight international sites from the MAL-ED study. J Sch Psychol. 2017;64:109–27. https://doi.org/10.1016/j.jsp.2017.06.001. Bradley RH. Constructing and adapting causal and formative measures of family settings: the HOME inventory as illustration. J Fam Theory Rev. 2015;7(4):381–414. https://doi.org/10.1111/jftr.12108. Dias NM, Mecca TP, Pontes JM. The family environment assessment: study of the use of the EC-HOME in a Brazilian sample. Trends Psychol. 2017;25:1897–912. https://doi.org/10.9788/TP2017.4-19. Burdette HL, et al. Parental report of outddor playtime as a measure of physical activity in preschool children. Arch Pediatr Adolesc Med. 2004;158(4):353–7. Gonçalves WSF, Byrne R, de Lira PIC, et al. Psychometric properties of instruments to measure parenting practices and children's movement behaviors in low-income families from Brazil. BMC Med Res Methodol. 2021;21(1):129. https://doi.org/10.1186/s12874-021-01320-y. Harms T. The use of environment rating scales in early childhood education. Cad Pesqui. 2013;43(148):76–97. https://doi.org/10.1590/S0100-15742013000100005. Abreu-lima I, et al. Escala de avaliação de ambientes em educação de infância. (early childhood education environment rating scale) Ed. rev. Portugal: Livpis/Legis; 2008. Harms T, Clifford RM, Cryer D. Escala de avaliação de ambientes de educação infantil (EcerS-revised). São Paulo: Mimeo; 2009. translated and adapted.: Fundação Carlos Chagas; use limited to research Mariano M, Caetano SC. Ribeiro da Silva a, et al. psychometric properties of the ECERS-R among an epidemiological sample of preschools. Early Educ Dev. 2019;30(4):511–21. https://doi.org/10.1080/10409289.2018.1554388. Keeney RL, Raiffa H. Decisions with multiple objectives: preferences and value trade-off. New York: Wiley; 1976. Nobre JNP, Vinolas PB, Santos JN, et al. Quality of interactive media use in early childhood and child development: a multicriteria analysis. Jped. 2020;96(3):310–7. https://doi.org/10.1016/j.jped.2018.11.015. Adunlin G, Diaby V, Xiao H. Application of multicriteria decision analysis in health care: a systematic review and bibliometric analysis. Health Expect. 2015;18(6):1894–905. https://doi.org/10.1111/hex.12287. Reimers AK, Boxberger K, Schmidt SC, et al. 
Social support and modelling in relation to physical activity participation and outdoor play in preschool children. Child. 2019;6(10):115. https://doi.org/10.3390/children6100115. Broekhuizen K, Scholten AM, de Vries SI. The value of (pre) school playgrounds for children's physical activity level: a systematic review. Int J Behav Nutr Phys Act. 2014;3(11):59. https://doi.org/10.1186/1479-5868-11-59. Cohen J. Statistical power analysis for the behavioral sciences: Academic; Routledge, 2013. Schober P, Boer C, Schwarte LA. Correlation coefficients: appropriate use and interpretation. Anesth Analg. 2018;126(5):1763–8. Rodrigues LP, Stodden DF, Lopes VP. Developmental pathways of change in fitness and motor competence are related to overweight and obesity status at the end of primary school. J Sci Med Sport. 2016;19(1):87–92. Chen C, Ahlqvist VH, Henriksson P, et al. Preschool environment and preschool teacher's physical activity and their association with children's activity levels at preschool. PLoS One. 2020;15(10). https://doi.org/10.1371/journal.pone.0239838. Tucker P, Vanderloo LM, Johnson AM, et al. Impact of the supporting physical activity in the childcare environment (SPACE) intervention on preschoolers' physical activity levels and sedentary time: a single-blind cluster randomized controlled trial. Int J Behav Nutr Phys Act. 2017;14(1):120. https://doi.org/10.1186/s12966-017-0579-7. Wadsworth DD, Johnson JL, Carroll AV, et al. Intervention strategies to elicit MVPA in preschoolers during outdoor play. Int J Environ Res Public Health. 2020;17(2):650. https://doi.org/10.3390/ijerph17020650. Cattuzzo MT, dos Santos HR, Ré AHN, et al. Motor competence and health related physical fitness in youth: a systematic review. J Sci Med Sport. 2016;19(2):123–9. https://doi.org/10.1016/j.jsams.2014.12.004. Carson V, Lee EY, Hewitt L, et al. Systematic review of the relationships between physical activity and health indicators in the early years (0-4 years). BMC Public Health. 2017;17(5):854. https://doi.org/10.1186/s12889-017-4860-0. Webster EK, Martin CK, Staiano AE. Fundamental motor skills, screen-time, and physical activity in preschoolers. J Sport Health Sci. 2019;8(2):114–21. https://doi.org/10.1016/j.jshs.2018.11.006. Santos G, et al. Motor competence of brazilian preschool children assessed by TGMD-2 test: a systematic review. J Phys Educ. 2020;31(1). https://doi.org/10.4025/jphyseduc.v31i1.3117 BRASIL. Ministério da Educação. Base nacional comum curricular. Brasília: MEC/SEB; 2017. Disponível em: < http://basenacionalcomum.mec.gov.br/images/BNCC_EI_EF_110518_versaofinal_site.pdf>. Acesso em 20 Oct 2020 Niermann CY, Gerards SM, Kremers SP. Conceptualizing family influences on Children's energy balance-related behaviors: levels of interacting family environmental subsystems (the LIFES framework). Int J Environ Res Public Health. 2018;15(12):2714. https://doi.org/10.3390/ijerph15122714. Clements R. An investigation of the status of outdoor play. Contemp Issues Early Child. 2004;5(1):68–80. https://doi.org/10.2304/ciec.2004.5.1.10. Carver A, Timperio A, Hesketh K, et al. Are children and adolescents less active if parents restrict their physical activity and active transport due to perceived risk? Soc Sci Med. 2010;70:1799–805. https://doi.org/10.1016/j.socscimed.2010.02.010. Lahuerta-Contell S, Molina-García J, Queralt A, Martínez-Bello VE. The role of preschool hours in achieving physical activity recommendations for preschoolers. Child. 2021;8(2):82. 
https://doi.org/10.3390/children8020082. Quan M, Zhang H, Zhang J, et al. Are preschool children active enough in Shanghai: an accelerometer-based cross-sectional study. BMJ Open. 2019;9(4). https://doi.org/10.1136/bmjopen-2018-024090. Colley RC, Garriguet D, Adamo KB, et al. Physical activity and sedentary behavior during the early years in Canada: a cross-sectional study. Int J Behav Nutr Phys Act. 2013;10(1):1–9. https://doi.org/10.1186/1479-5868-10-54. Lindsay AC, Salkeld JA, Greaney ML, Sands FD. Latino family childcare providers' beliefs, attitudes, and practices related to promotion of healthy behaviors among preschool children: a qualitative study. J Obes. 2015. https://doi.org/10.1155/2015/409742. We thank the Universidade Federal dos Vales do Jequitinhonha e Mucuri for institutional support. The CNPq, FAPEMIG, and CAPES. The authors are grateful to municipal education secretary and directors of public schools of Diamantina (MG). The CNPq (303539/2021-6), FAPEMIG (APQ-01898-18), and CAPES (Finance Code 001 for financial support and scholarships). Centro Integrado de Pós-Graduação e Pesquisa em Saúde (CIPq-Saúde), Universidade Federal dos Vales do Jequitinhonha e Mucuri (UFVJM), Diamantina, Minas Gerais, Brazil Juliana Nogueira Pontes Nobre, Amanda Cristina Fernandes & Ângela Alves Viegas Faculdade de Fisioterapia, Universidade Federal dos Vales do Jequitinhonha e Mucuri (UFVJM), Diamantina, Minas Gerais, Brazil Rosane Luzia De Souza Morais, Pedro Henrique Scheidt Figueiredo, Henrique Silveira Costa, Marcus Alessandro de Alcantara, Vanessa Amaral Mendonça & Ana Cristina Rodrigues Lacerda Instituto de Ciência e Tecnologia (ICT - UFVJM) e SaSA, Universidade Federal dos Vales do Jequitinhonha e Mucuri (UFVJM), Diamantina, Minas Gerais, Brazil Bernat Viñola Prat Faculdade de Fisioterapia, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, Brazil Ana Cristina Resende Camargos Juliana Nogueira Pontes Nobre Rosane Luzia De Souza Morais Amanda Cristina Fernandes Ângela Alves Viegas Pedro Henrique Scheidt Figueiredo Henrique Silveira Costa Marcus Alessandro de Alcantara Vanessa Amaral Mendonça Ana Cristina Rodrigues Lacerda JNPN: Conceptualization, Formal analysis, Data Curation, Methodology; RLSM: Writing Review & Editing – Original Draft; BVP: Formal analysis, Review; ACF: Conceptualization, Formal analysis, Data Curation; AAV: Methodology, Writing; PHSF: Review & Editing; HSC: Review & Editing; ACRC: Review & Editing; MAA: Review & Editing; VAM: Review & Editing; ACRL: Conceptualization, Data Curation, Writing, Review & Editing – Original Draft. All authors have read and approved the manuscript. Correspondence to Juliana Nogueira Pontes Nobre. All protocols was carried out in accordance with relevant guidelines and regulations. This study was approved by the Research Ethics Committee of the Universidade Federal dos Vales do Jequitinhonha e Mucuri (Protocol: 2.773.418), authorized by the Municipal Education Secretariat of Diamantina (MG), Brazil. We declare that all parents of the children or legal guardians signed the informed consent form in writing, authorizing participation in the study. We declare no competing interests. Nobre, J.N.P., Morais, R.L.D.S., Prat, B.V. et al. Physical environmental opportunities for active play and physical activity level in preschoolers: a multicriteria analysis. BMC Public Health 22, 340 (2022). https://doi.org/10.1186/s12889-022-12750-8
Living Reviews in Solar Physics, December 2013, 10:2 The Solar Wind as a Turbulence Laboratory Roberto Bruno & Vincenzo Carbone In this review we will focus on a topic of fundamental importance for both astrophysics and plasma physics, namely the occurrence of large-amplitude low-frequency fluctuations of the fields that describe the plasma state. This subject will be treated within the context of the expanding solar wind, and the most meaningful advances in this research field will be reported, emphasizing the results obtained in the past decade or so. As a matter of fact, Helios inner-heliosphere and Ulysses high-latitude observations, recent multi-spacecraft measurements in the solar wind (the four Cluster satellites), and new numerical approaches to the problem, based on the dynamics of complex systems, have brought important new insights which helped to better understand how turbulent fluctuations behave in the solar wind. In particular, numerical simulations within the realm of magnetohydrodynamic (MHD) turbulence theory unraveled what kind of physical mechanisms are at the basis of turbulence generation and energy transfer across the spectral domain of the fluctuations. In other words, the advances reached in these past years in the investigation of solar wind turbulence now offer a rather complete picture of the phenomenological aspect of the problem, to be tentatively presented here in a rather organic way. This article is a revised version of 10.12942/lrsp-2005-4. Supplementary material is available for this article at 10.12942/lrsp-2013-2. The whole heliosphere is permeated by the solar wind, a supersonic and super-Alfvénic plasma flow of solar origin which continuously expands into the heliosphere. This medium offers the best opportunity to study collisionless plasma phenomena directly, mainly at low frequencies where high-amplitude fluctuations have been observed. During its expansion, the solar wind develops a strong turbulent character, which evolves towards a state that resembles the well-known hydrodynamic turbulence described by Kolmogorov (1941, 1991). Because of the presence of a strong magnetic field carried by the wind, low-frequency fluctuations in the solar wind are usually described within a magnetohydrodynamic (MHD, hereafter) benchmark (Kraichnan, 1965; Biskamp, 1993; Tu and Marsch, 1995a; Biskamp, 2003; Petrosyan et al., 2010). However, due to some peculiar characteristics, solar wind turbulence contains some features hardly classified within a general theoretical framework. Turbulence in the solar heliosphere plays a relevant role in several aspects of plasma behavior in space, such as solar wind generation, high-energy particle acceleration, plasma heating, and cosmic ray propagation. In the 1970s and 80s, impressive advances were made in the knowledge of turbulent phenomena in the solar wind. However, at that time, spacecraft observations were limited by a small latitudinal excursion around the solar equator and, in practice, only a thin slice above and below the equatorial plane was accessible, i.e., a sort of 2D heliosphere. A rather exhaustive survey of the most important results based on in-situ observations in the ecliptic plane has been provided in an excellent review by Tu and Marsch (1995a), and we invite the reader to refer to that paper. That review, to our knowledge, has been the last large review in the literature related to turbulence observations in the ecliptic.
In the 1990s, with the launch of the Ulysses spacecraft, investigations were extended to the high-latitude regions of the heliosphere, allowing us to characterize and study how turbulence evolves in the polar regions. An overview of Ulysses results on polar turbulence can also be found in Horbury and Tsurutani (2001). With this new laboratory, relevant advances have been made. One of the main goals of the present work will be that of reviewing observations and theoretical efforts made to understand near-equatorial and polar turbulence in order to provide the reader with a rather complete view of the low-frequency turbulence phenomenon in the 3D heliosphere. New interesting insights in the theory of turbulence derive from the point of view which considers a turbulent flow as a complex system, a sort of benchmark for the theory of dynamical systems. The theory of chaos received a fundamental impulse precisely through the theory of turbulence developed by Ruelle and Takens (1971) who, criticizing the old theory of Landau and Lifshitz (1971), were able to put the numerical investigation by Lorenz (1963) in a mathematical framework. Gollub and Swinney (1975) set up accurate experiments on rotating fluids confirming the point of view of Ruelle and Takens (1971), who showed that a strange attractor in the phase space of the system is the best model for the birth of turbulence. This gave a strong impulse to the investigation of the phenomenology of turbulence from the point of view of dynamical systems (Bohr et al., 1998). For example, the criticism by Landau leading to the investigation of intermittency in fully developed turbulence was worked out through some phenomenological models for the energy cascade (cf. Frisch, 1995). Recently, turbulence in the solar wind has been used as a big wind tunnel to investigate scaling laws of turbulent fluctuations, multifractal models, etc. The review by Tu and Marsch (1995a) contains a brief introduction to this important topic, which was being developed at that time in relation to the solar wind (Burlaga, 1993; Carbone, 1993; Biskamp, 1993, 2003; Burlaga, 1995). The reader can convince himself that, because of the wide range of scales excited, space plasma can be seen as a very big laboratory where fully developed turbulence can be investigated not only per se, but also as far as basic theoretical aspects are concerned. Turbulence is perhaps the most beautiful unsolved problem of classical physics. The approaches used so far in understanding, describing, and modeling turbulence are very interesting even from a historical point of view, as clearly appears when reading, for example, the book by Frisch (1995). The history of turbulence in interplanetary space is, perhaps, even more interesting since its knowledge has proceeded together with the human conquest of space. Thus, whenever appropriate, we will also introduce some historical references to show the way particular problems related to turbulence have been faced over time, both theoretically and technologically. Finally, since turbulence is a phenomenon visible everywhere in nature, it will be interesting to compare some experimental and theoretical aspects among different turbulent media in order to assess specific features which might be universal, not limited only to turbulence in space plasmas.
In particular, we will compare results obtained in interplanetary space with results obtained from ordinary fluid flows on Earth, and from experiments on magnetic turbulence in laboratory plasmas designed for thermonuclear fusion. 1.1 What does turbulence stand for? The word turbulent is used in everyday experience to indicate something which is not regular. In Latin the word turba means something confusing or something which does not follow an ordered plan. A turbulent boy, in all Italian schools, is a young fellow who rebels against ordered schemes. Following the same line, the behavior of a flow which rebels against the deterministic rules of classical dynamics is called turbulent. Even the opposite, namely a laminar motion, derives from the Latin word lámina, which means stream or sheet, and gives the idea of a regular streaming motion. Anyhow, even without the aid of a laboratory experiment and a Latin dictionary, we experience turbulence every day. It is relatively easy to observe turbulence and, in some sense, we generally do not pay much attention to it (apart from when, sitting in an airplane, a nice lady asks us to fasten our seat belts during the flight because we are approaching some turbulence!). Turbulence appears everywhere when the velocity of the flow is high enough, for example, when a flow encounters an obstacle (cf., e.g., Figure 1) in the atmospheric flow, or during the circulation of blood, etc. Even charged fluids (plasma) can become turbulent. For example, laboratory plasmas are often in a turbulent state, as well as natural plasmas like the outer regions of stars. Living near a star, we have a big chance to directly investigate the turbulent motion inside the flow which originates from the Sun, namely the solar wind. This will be the main topic of the present review. Figure 1: Turbulence as observed in a river. Here we can see different turbulent wakes due to different obstacles (simple stones) emerging naturally above the water level. Turbulence that we observe in fluid flows appears as a very complicated state of motion, and at first sight it looks (apparently!) strongly irregular and chaotic, both in space and time. The only dynamical rule seems to be the impossibility to predict any future state of the motion. However, it is interesting to recognize the fact that, when we take a picture of a turbulent flow at a given time, we see the presence of a lot of different turbulent structures of all sizes which are actively present during the motion. The presence of these structures was well recognized a long time ago, as testified by the beautiful pictures of vortices observed and reproduced by the Italian genius Leonardo da Vinci, as reported in the textbook by Frisch (1995). Figure 2 shows, as an example, one picture from Leonardo which can be compared with Figure 3 taken from a typical experiment on a turbulent jet. Three examples of vortices taken from the pictures by Leonardo da Vinci (cf. Frisch, 1995). Turbulence as observed in a turbulent water jet (Van Dyke, 1982), reported in the book by Frisch (1995) (photograph by P. Dimotakis, R. Lye, and D. Papantoniu). Turbulent features can be recognized even in natural turbulent systems like, for example, the atmosphere of Jupiter (see Figure 4). A different example of turbulence in plasmas is reported in Figure 5, where we show the result of a typical high-resolution numerical simulation of 2D MHD turbulence. In this case the turbulent field shown is the current density.
These basic features of mixing between order and chaos make the investigation of the properties of turbulence terribly complicated, although extraordinarily fascinating. When we look at a flow at two different times, we can observe that the general aspect of the flow has not changed appreciably, say vortices are present all the time, but the flow in each single point of the fluid looks different. We recognize that the gross features of the flow are reproducible but details are not predictable. We have to use a statistical approach to turbulence, just as is done to describe stochastic processes, even if the problem is born within the strange dynamics of a deterministic system!

Figure 4: Turbulence in the atmosphere of Jupiter as observed by Voyager.

Figure 5: High-resolution numerical simulation of 2D MHD turbulence at resolution 2048 × 2048 (courtesy of H. Politano). Here, the authors show the current density J(x, y), at a given time, on the plane (x, y).

Turbulence enhances the transport properties of a flow. For example, urban pollution, without atmospheric turbulence, would not be spread (or eliminated) in a relatively short time. Results from numerical simulations of the concentration of a passive scalar transported by a turbulent flow are shown in Figure 6. On the other hand, in laboratory plasmas inside devices designed to achieve controlled thermonuclear fusion, anomalous transport driven by turbulent fluctuations is the main cause of the destruction of magnetic confinement. Actually, we are far from the achievement of controlled thermonuclear fusion. Turbulence, then, acquires the strange feature of something to be avoided in some cases, or to be invoked in some other cases. Turbulence became an experimental science with Osborne Reynolds who, at the end of the 19th century, observed and investigated experimentally the transition from laminar to turbulent flow. He noticed that the flow inside a pipe becomes turbulent every time a single parameter, a combination of the viscosity coefficient η, a characteristic velocity U, and a length L, increases beyond a threshold. This parameter Re = ULρ/η (ρ is the mass density of the fluid) is now called the Reynolds number. At lower Re, say Re ≤ 2300, the flow is regular (that is, the motion is laminar), but when Re increases beyond a certain threshold of the order of Re ≃ 4000, the flow becomes turbulent (a minimal numerical illustration of this criterion is given below). As Re increases, the transition from a laminar to a turbulent state occurs over a range of values of Re with different characteristics and depending on the details of the experiment. In the limit Re → ∞ the turbulence is said to be in a fully developed turbulent state. The original pictures by Reynolds are shown in Figure 7.

Figure 6: Concentration field c(x, y), at a given time, on the plane (x, y). The field has been obtained by a numerical simulation at resolution 2048 × 2048. The concentration is treated as a passive scalar, transported by a turbulent field. Low concentrations are reported in blue while high concentrations are reported in yellow (courtesy of A. Noullez).

Figure 7: The original pictures by Reynolds which show the transition to a turbulent state of a flow in a pipe, as the Reynolds number increases from top to bottom (Reynolds, 1883).
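A minimal numerical illustration of this criterion follows; the pipe size and fluid properties are assumed, water-like values chosen only for the example, and are not taken from the text.

```python
# Minimal sketch: Reynolds number Re = U L rho / eta and flow regime,
# using the indicative thresholds quoted above.
def reynolds_number(U, L, rho, eta):
    """U: characteristic speed [m/s], L: characteristic length [m],
    rho: mass density [kg/m^3], eta: dynamic viscosity [Pa s]."""
    return U * L * rho / eta

def regime(Re):
    if Re <= 2300:
        return "laminar"
    elif Re >= 4000:
        return "turbulent"
    return "transitional"

# Illustrative (assumed) values: water flowing at 1 m/s in a 5 cm pipe.
Re = reynolds_number(U=1.0, L=0.05, rho=1.0e3, eta=1.0e-3)
print(f"Re = {Re:.0f} -> {regime(Re)}")   # Re = 50000 -> turbulent
```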
1.2 Dynamics vs. statistics

In Figure 8 we report a typical sample of turbulence as observed in a fluid flow in the Earth's atmosphere. The time evolution of both the longitudinal velocity component and the temperature is shown. Measurements in the solar wind show the same typical behavior. A typical sample of turbulence as measured by the Helios 2 spacecraft is shown in Figure 9. A further sample of turbulence, namely the radial component of the magnetic field measured at the external wall of an experiment in a plasma device realized for thermonuclear fusion, is shown in Figure 10. As is well documented in these figures, the main feature of fully developed turbulence is the chaotic character of the time behavior. Said differently, this means that the behavior of the flow is unpredictable. While the details of fully developed turbulent motions are extremely sensitive to triggering disturbances, average properties are not. If this were not the case, there would be little significance in the averaging process. Predictability in turbulence can be recast at a statistical level. In other words, when we look at two different samples of turbulence, even collected within the same medium, we can see that details look very different. What is actually common is a generic stochastic behavior. This means that the global statistical behavior does not change going from one sample to the other. The idea that fully developed turbulent flows are extremely sensitive to small perturbations but have statistical properties that are insensitive to perturbations is of central importance throughout this review. Fluctuations of a certain stochastic variable ψ are defined here as the difference from the average value, δψ = ψ − 〈ψ〉, where brackets mean some averaging process. Actually, the method of taking averages in a turbulent flow requires some care. We would like to recall that there are, at least, three different kinds of averaging procedures that may be used to obtain statistically-averaged properties of turbulence. The space average is limited to flows that are statistically homogeneous or, at least, approximately homogeneous over scales larger than those of the fluctuations. The ensemble average is the most versatile, where the average is taken over an ensemble of turbulent flows prepared under nearly identical external conditions. Of course, these flows are not completely identical because of the large fluctuations present in turbulence. Each member of the ensemble is called a realization. The third kind of averaging procedure is the time average, which is useful only if the turbulence is statistically stationary over time scales much larger than the time scale of the fluctuations. In practice, because of the convenience offered by locating a probe at a fixed point in space and integrating in time, experimental results are usually obtained as time averages. The ergodic theorem (Halmos, 1956) assures that time averages coincide with ensemble averages under some standard conditions (see Appendix B). A simple numerical sketch of this averaging procedure is given below.

Figure 8: Turbulence as measured in the atmospheric boundary layer. The time evolution of the longitudinal velocity and temperature are shown in the upper and lower panels, respectively. The turbulent samples have been collected above a grass-covered forest clearing at 5 m above the ground surface and at a sampling rate of 56 Hz (Katul et al., 1997).
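The sketch below illustrates the time average and the definition of fluctuations; the signal is synthetic and merely stands in for a measured record at a fixed probe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample standing in for a measured, statistically stationary psi(t).
n = 10_000
psi = 5.0 + rng.normal(0.0, 1.0, n)

# Time average (the practical choice for a probe fixed in space) and fluctuations.
psi_mean = psi.mean()
delta_psi = psi - psi_mean          # delta_psi = psi - <psi>

print(f"<psi>         = {psi_mean:.3f}")
print(f"<delta_psi>   = {delta_psi.mean():.3e}")   # vanishes by construction
print(f"rms of fluct. = {delta_psi.std():.3f}")
```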
A different property of turbulence is that all dynamically interesting scales are excited, that is, energy is spread over all scales. This can be seen in Figure 11 where we show the magnetic field intensity within a typical solar wind stream (see top panel). In the middle and bottom panels we show fluctuations at two different detailed scales. A kind of self-similarity (say a similarity at all scales) is observed. Since fully developed turbulence involves a hierarchy of scales, a large number of interacting degrees of freedom are involved. Then, there should be an asymptotic statistical state of turbulence that is independent of the details of the flow. Hopefully, this asymptotic state depends, perhaps in a critical way, only on simple statistical properties like energy spectra, much as in equilibrium statistical mechanics, where the statistical state is determined by the energy spectrum (Huang, 1987). Of course, we cannot expect that the statistical state would determine the details of individual realizations, because realizations need not be given the same weight in different ensembles with the same low-order statistical properties.

Figure 9: A sample of fast solar wind at a distance of 0.9 AU measured by the Helios 2 spacecraft. From top to bottom: speed, number density, temperature, and magnetic field, as a function of time.

Figure 10: Turbulence as measured at the external wall of a device designed for thermonuclear fusion, namely the RFX in Padua (Italy). The radial component of the magnetic field as a function of time is shown in the figure (courtesy of V. Antoni).

Figure 11: Magnetic intensity fluctuations as observed by Helios 2 in the inner solar wind at 0.9 AU, for different blow-ups. Some self-similarity is evident here.

It should be emphasized that there are no firm mathematical arguments for the existence of an asymptotic statistical state. As we have just seen, reproducible statistical results are obtained from observations, that is, the statistical state is suggested experimentally and from physical plausibility. Apart from physical plausibility, it is embarrassing that such an important feature of fully developed turbulence, namely the existence of a statistical stability, should remain unsolved. However, such is the complex nature of turbulence.

2 Equations and Phenomenology

In this section, we present the basic equations that are used to describe charged fluid flows, and the basic phenomenology of low-frequency turbulence. Readers interested in examining this subject more closely can refer to the very wide literature on turbulence in fluid flows, as for example the recent books by, e.g., Pope (2000); McComb (1990); Frisch (1995) or many others, and to the less known literature on MHD flows (Biskamp, 1993; Boyd and Sanderson, 2003; Biskamp, 2003). In order to describe a plasma as a continuous medium it will be assumed collisional and, as a consequence, all quantities will be functions of space r and time t. Apart from the required quasi-neutrality, the basic assumption of MHD is that fields fluctuate on the same time and length scales as the plasma variables, say ωτH ≃ 1 and kLH ≃ 1 (k and ω are, respectively, the wave number and the frequency of the fields, while τH and LH are the hydrodynamic time and length scales, respectively). Since the plasma is treated as a single fluid, we have to take the slow rates of the ions. A simple analysis shows also that the electrostatic force and the displacement current can be neglected in the non-relativistic approximation. Then, the MHD equations can be derived as shown in the following sections.

2.1 The Navier-Stokes equation and the Reynolds number

The equations which describe the dynamics of real incompressible fluid flows were introduced by Claude-Louis Navier in 1823 and improved by George G. Stokes. They are nothing but the momentum equation based on Newton's second law, which relates the acceleration of a fluid particle2 to the resulting volume and body forces acting on it.
These equations had originally been introduced by Leonhard Euler; the main contribution by Navier, however, was to add a friction forcing term due to the interactions between fluid layers which move with different speed. This term turns out to be proportional to the viscosity coefficients η and ξ and to the variation of speed. Defining the velocity field u(r, t), the kinetic pressure p, and the density ρ, the equations describing a fluid flow are the continuity equation, which describes the conservation of mass
$$\frac{{\partial \rho }} {{\partial t}} + (u \cdot \nabla )\rho = - \rho \nabla \cdot u,$$ the equation for the conservation of momentum $$\rho \left[ {\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u} \right] = - \nabla p + \eta \nabla ^2 u + \left( {\xi + \frac{\eta } {3}} \right)\nabla (\nabla \cdot u),$$ and an equation for the conservation of energy $$\rho T\left[ {\frac{{\partial s}} {{\partial t}} + (u \cdot \nabla )s} \right] = \nabla (\chi \nabla T) + \frac{\eta } {2}\left( {\frac{{\partial u_i }} {{\partial x_k }} + \frac{{\partial u_k }} {{\partial x_i }} - \frac{2} {3}\delta _{ik} \nabla \cdot u} \right)^2 + \xi (\nabla \cdot u)^2 ,$$ where s is the entropy per unit mass, T is the temperature, and χ is the coefficient of thermoconduction. An equation of state closes the system of fluid equations. The above equations simplify considerably if we consider an incompressible fluid, where ρ = const., so that we obtain the Navier-Stokes (NS) equation $$\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u = - \left( {\frac{{\nabla p}} {\rho }} \right) + \nu \nabla ^2 u,$$ where the coefficient ν = η/ρ is the kinematic viscosity. The incompressibility of the flow translates into a condition on the velocity field, namely the field is divergence-free, i.e., ∇·u = 0. This condition eliminates all high-frequency sound waves and is called the incompressible limit. The non-linear term in Equation (4) represents the convective (or substantial) derivative. Of course, we can add on the right-hand side of this equation all external forces which may act on the fluid parcel. We use the velocity scale U and the length scale L to define dimensionless independent variables, namely r = r'L (from which ∇ = ∇'/L) and t = t'(L/U), and dependent variables u = u'U and p = p'U²ρ. Then, using these variables in Equation (4), we obtain $$\frac{{\partial u'}} {{\partial t'}} + (u' \cdot \nabla ')u' = - \nabla 'p' + Re^{ - 1} \nabla '^2 u'.$$ The Reynolds number Re = UL/ν is evidently the only parameter of the fluid flow. This defines a Reynolds number similarity for fluid flows, namely fluids with the same value of the Reynolds number behave in the same way. Looking at Equation (5) it can be realized that the Reynolds number represents a measure of the relative strength between the non-linear convective term and the viscous term in Equation (4). The higher Re, the more important the non-linear term is in the dynamics of the flow. Turbulence is a genuine result of the non-linear dynamics of fluid flows.

2.2 The coupling between a charged fluid and the magnetic field

Magnetic fields are ubiquitous in the Universe and are dynamically important. At high frequencies, kinetic effects are dominant, but at frequencies lower than the ion cyclotron frequency, the evolution of the plasma can be modeled using the MHD approximation. Furthermore, dissipative phenomena can be neglected at large scales although their effects will be felt because of the non-locality of non-linear interactions.
In the presence of a magnetic field, the Lorentz force j × B, where j is the electric current density, must be added to the fluid equations, namely $$\rho \left[ {\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u} \right] = - \nabla p + \eta \nabla ^2 u + \left( {\xi + \frac{\eta } {3}} \right)\nabla (\nabla \cdot u) - \frac{1} {{4\pi }}B \times (\nabla \times B),$$ and the Joule heat must be added to the equation for energy $$\rho T\left[ {\frac{{\partial s}} {{\partial t}} + (u \cdot \nabla )s} \right] = \sigma _{ik} \frac{{\partial u_i }} {{\partial x_k }} + \chi \nabla ^2 T + \frac{{c^2 }} {{16\pi ^2 \sigma }}(\nabla \times B)^2 ,$$ where σ is the conductivity of the medium, and we introduced the viscous stress tensor $$\sigma _{ik} = \eta \left( {\frac{{\partial u_i }} {{\partial x_k }} + \frac{{\partial u_k }} {{\partial x_i }} - \frac{2} {3}\delta _{ik} \nabla \cdot u} \right) + \xi \delta _{ik} \nabla \cdot u.$$ An equation for the magnetic field stems from the Maxwell equations in which the displacement current is neglected under the assumption that the velocity of the fluid under consideration is much smaller than the speed of light. Then, using $$\nabla \times B = \mu _0 j$$ and the Ohm's law for a conductor in motion with a speed u in a magnetic field $$j = \sigma (E + u \times B),$$ we obtain the induction equation which describes the time evolution of the magnetic field $$\frac{{\partial B}} {{\partial t}} = \nabla \times (u \times B) + (1/\sigma \mu _0 )\nabla ^2 B,$$ together with the constraint ∇ · B = 0 (no magnetic monopoles in the classical case). In the incompressible case, where ∇ · u = 0, MHD equations can be reduced to $$\frac{{\partial u}} {{\partial t}} + (u \cdot \nabla )u = - \nabla P_{tot} + \nu \nabla ^2 u + (b \cdot \nabla )b$$ $$\frac{{\partial b}} {{\partial t}} + (u \cdot \nabla )b = - (b \cdot \nabla )u + \eta \nabla ^2 b.$$ Here Ptot is the total kinetic P k = nkT plus magnetic pressure Pm = B2/8π, divided by the constant mass density ρ. Moreover, we introduced the velocity variables b = B/√πρ and the magnetic diffusivity η. Similar to the usual Reynolds number, a magnetic Reynolds number Rm can be defined, namely $$R_m = \frac{{c_A L_0 }} {\eta },$$ (11a) where cA = B0/√4πρ is the Alfvén speed related to the large-scale B0 magnetic field B0. This number in most circumstances in astrophysics is very large, but the ratio of the two Reynolds numbers or, in other words, the magnetic Prandtl number Pm = ν/η can differ widely. In absence of dissipative terms, for each volume V MHD equations conserve the total energy E(t) $$E(t) = \int_V {(v^2 + b^2 )d^3 r,}$$ the cross-helicity Hc(t), which represents a measure of the degree of correlations between velocity and magnetic fields $$H_c (t) = \int_V {v \cdot b d^3 r,}$$ and the magnetic helicity H(t), which represents a measure of the degree of linkage among magnetic flux tubes $$H(t) = \int_V {a \cdot b d^3 r,}$$ where b = ∇ × a. 
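A short numerical sketch of how these ideal invariants can be evaluated from data follows; the arrays are purely illustrative (random numbers standing in for velocity and magnetic fluctuations, with b already in Alfvén units), and the sketch anticipates the Elsässer variables z± = u ± b introduced in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
u = rng.normal(0.0, 1.0, (n, 3))      # velocity fluctuations (synthetic)
b = rng.normal(0.0, 1.0, (n, 3))      # magnetic fluctuations in Alfven units (synthetic)

# Ideal invariants, here as simple volume/ensemble averages:
E = np.mean(np.sum(u**2 + b**2, axis=1))      # total energy density
Hc = np.mean(np.sum(u * b, axis=1))           # cross-helicity density

# Elsasser variables and pseudo-energies E^± = <|z^±|^2> / 2.
z_plus, z_minus = u + b, u - b
E_plus = 0.5 * np.mean(np.sum(z_plus**2, axis=1))
E_minus = 0.5 * np.mean(np.sum(z_minus**2, axis=1))

# Algebraic identities: E^+ + E^- = <|u|^2 + |b|^2>, E^+ - E^- = 2 <u . b>.
print(np.isclose(E_plus + E_minus, E), np.isclose(E_plus - E_minus, 2.0 * Hc))
```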
The change of variables due to Elsässer (1950), say z± = u ± b', where we explicitly use the background uniform magnetic field b' = b + cA (at variance with the bulk velocity, the largest-scale magnetic field cannot be eliminated through a Galilean transformation), leads to the more symmetrical form of the MHD equations in the incompressible case
$$\frac{{\partial z^ \pm }} {{\partial t}} \mp (c_A \cdot \nabla )z^ \pm + (z^ \mp \cdot \nabla )z^ \pm = - \nabla P_{tot} + \nu ^ \pm \nabla ^2 z^ \pm + \nu ^ \mp \nabla ^2 z^ \mp + F^ \pm ,$$ where 2ν± = ν ± η are the dissipative coefficients, and F± are possible external forcing terms. The relations ∇ · z± = 0 complete the set of equations. On linearizing Equation (15) and neglecting both the viscous and the external forcing terms, we have $$\frac{{\partial z^ \pm }} {{\partial t}} \mp (c_A \cdot \nabla )z^ \pm \simeq 0,$$ which shows that z−(x − cAt) describes Alfvénic fluctuations propagating in the direction of B0, and z+(x + cAt) describes Alfvénic fluctuations propagating opposite to B0. Note that the MHD Equations (15) have the same structure as the Navier-Stokes equation, the main difference stemming from the fact that non-linear coupling happens only between fluctuations propagating in opposite directions. As we will see, this has a deep influence on turbulence described by the MHD equations. It is worthwhile to remark that in classical hydrodynamics, dissipative processes are defined through three coefficients, namely two viscosities and one thermoconduction coefficient. In the hydromagnetic case the number of coefficients increases considerably. Apart from a few additional electrical coefficients, we have a large-scale (background) magnetic field B0. This makes the MHD equations intrinsically anisotropic. Furthermore, the stress tensor (8) is deeply modified by the presence of a magnetic field B0, in that the kinetic viscous coefficients must depend on the magnitude and direction of the magnetic field (Braginskii, 1965). This has a strong influence on the determination of the Reynolds number.

2.3 Scaling features of the equations

The scaled Euler equations are the same as Equations (4 and 5), but without the term proportional to Re−1. The scaled variables obtained from the Euler equations are, then, the same. Thus, scaled variables exhibit scaling similarity, and the Euler equations are said to be invariant with respect to scale transformations. Said differently, this means that the NS Equations (4) show scaling properties (Frisch, 1995), that is, there exists a class of solutions which are invariant under scaling transformations. Introducing a length scale ℓ, it is straightforward to verify that the scaling transformations ℓ → λℓ' and u → λ^h u' (λ is a scaling factor and h is a scaling index) leave the inviscid NS equation invariant for any scaling exponent h, provided P → λ^{2h} P'. When the dissipative term is taken into account, a characteristic length scale exists, say the dissipative scale ℓD. From a phenomenological point of view, this is the length scale where dissipative effects start to be experienced by the flow. Of course, since the kinematic viscosity is in general very small, we expect ℓD to be very small as well. Actually, there exists a simple relationship for the scaling of ℓD with the Reynolds number, namely ℓD ~ LRe−3/4. The larger the Reynolds number, the smaller the dissipative length scale. As is easily verified, the ideal MHD equations display similar scaling features.
Namely, the scaling transformations u → λ^h u' and B → λ^β B' (β here is a new scaling index different from h) leave the inviscid MHD equations unchanged, provided P → λ^{2β} P', T → λ^{2h} T', and ρ → λ^{2(β−h)} ρ'. This means that velocity and magnetic variables have different scalings, say h ≠ β, only when the scaling for the density is taken into account. In the incompressible case, we cannot distinguish between scaling laws for velocity and magnetic variables.

2.4 The non-linear energy cascade

The basic properties of turbulence, as derived both from the Navier-Stokes equation and from phenomenological considerations, are the legacy of A. N. Kolmogorov (Frisch, 1995).3 The phenomenology is based on the old picture by Richardson, who realized that turbulence is made by a collection of eddies at all scales. Energy, injected at a length scale L, is transferred by non-linear interactions to small scales where it is dissipated at a characteristic scale ℓD, the length scale where dissipation takes place. The main idea is that at very large Reynolds numbers, the injection scale L and the dissipative scale ℓD are completely separated. In a stationary situation, the energy injection rate must be balanced by the energy dissipation rate and must also be the same as the energy transfer rate ε measured at any scale ℓ within the inertial range ℓD ≪ ℓ ≪ L. From a phenomenological point of view, the energy injection rate at the scale L is given by ε_L ~ U²/τ_L, where τ_L is a characteristic time for the energy injection process, which turns out to be τ_L ~ L/U. At the same scale L the energy dissipation rate is ε_D ~ U²/τ_D, where τ_D is the characteristic dissipation time which, from Equation (4), can be estimated to be of the order of τ_D ~ L²/ν. As a result, the ratio between the energy injection rate and the dissipation rate is $$\frac{{\varepsilon _L }} {{\varepsilon _D }} \sim \frac{{\tau _D }} {{\tau _L }} \sim Re,$$ that is, the energy injection rate at the largest scale L is Re times the energy dissipation rate. In other words, in the case of large Reynolds numbers, the fluid system is unable to dissipate the whole energy injected at the scale L. The excess energy must be dissipated at small scales where the dissipation process is much more efficient. This is the physical reason for the energy cascade. Fully developed turbulence involves a hierarchical process, in which many scales of motion are involved. To look at this phenomenon it is often useful to investigate the behavior of the Fourier coefficients of the fields. Assuming periodic boundary conditions, the α-th component of the velocity field can be Fourier decomposed as $$u_\alpha (r,t) = \sum\limits_k {u_\alpha (k,t)\exp (ik \cdot r)},$$ where k = 2πn/L and n is a vector of integers. When used in the Navier-Stokes equation, it is a simple matter to show that the non-linear term becomes the convolution sum $$\frac{{\partial u_\alpha (k,t)}} {{\partial t}} = M_{\alpha \beta \gamma } (k)\sum\limits_q {u_\gamma (k - q,t)u_\beta (q,t)},$$ where M_αβγ(k) = −ik_β(δ_αγ − k_α k_γ/k²) (for the moment we disregard the linear dissipative term). The MHD equations can be written in the same way, say by introducing the Fourier decomposition for the Elsässer variables $$z_\alpha ^ \pm (r,t) = \sum\limits_k {z_\alpha ^ \pm (k,t)\exp (ik \cdot r)} $$ and using this expression in the MHD equations we obtain an equation which describes the time evolution of each Fourier mode.
However, the divergence-less condition means that not all Fourier modes are independent, rather k · z±(k, t) = 0 means that we can project the Fourier coefficients on two directions which are mutually orthogonal and orthogonal to the direction of k, that is, $$z^ \pm (k,t) = \sum\limits_{a = 1}^2 {z_a^ \pm (k,t)e^{(a)} (k)},$$ with the constraint that k · e(a)(k) = 0. In presence of a background magnetic field we can use the well defined direction B0, so that $$\begin{array}{*{20}c} {e^{(1)} (k) = \frac{{ik \times B_0 }} {{\left| {k \times B_0 } \right|}};} & {e^{(2)} (k) = \frac{{ik}} {{\left| k \right|}} \times e^{(1)} (k)} \\ \end{array}.$$ Note that in the linear approximation where the Elsäasser variables represent the usual MHD modes, z 1 ± (k, t) represent the amplitude of the Alfvén mode while z 2 ± (k, t) represent the amplitude of the incompressible limit of the magnetosonic mode. From MHD Equations (15) we obtain the following set of equations: $$\left[ {\frac{\partial } {{\partial t}} \mp i(k \cdot c_A )} \right]z_a^ \pm (k,t) = \left( {\frac{L} {{2\pi }}} \right)^3 \sum\limits_{p + q = k}^\delta {\sum\limits_{b,c = 1}^2 {A_{abc} ( - k,p,q)z_b^ \pm (p,t)z_c^ \mp (q,t)} }.$$ The coupling coefficients, which satisfy the symmetry condition A abc (k, p, q) = −A bac (p, k, q), are defined as $$A_{abc} ( - k,p,q) = \left[ {(ik)* \cdot e^{(c)} (q)} \right]\left[ {e^{(a)*} (k) \cdot e^{(b)} (p)} \right],$$ and the sum in Equation (19) is defined as $$\sum\limits_{p + q = k}^\delta { \equiv \left( {\frac{{2\pi }} {L}} \right)^3 \sum\limits_p {\sum\limits_q {\delta _{k,p + q} } } },$$ (19b) where δk,p+q is the Kronecher's symbol. Quadratic non-linearities of the original equations correspond to a convolution term involving wave vectors k, p and q related by the triangular relation p = k−q. Fourier coefficients locally couple to generate an energy transfer from any pair of modes p and q to a mode k = p + q. The pseudo-energies E±(t) are defined as $$E^ \pm (t) = \frac{1} {2}\frac{1} {{L^3 }}\int_{L^3 } {\left| {z^ \pm (r,t)} \right|^2 d^3 r = \frac{1} {2}\sum\limits_k {\sum\limits_{a = 1}^2 { \equiv \left| {z_a^ \pm (k,t)} \right|^2 } } }$$ (19c) and, after some algebra, it can be shown that the non-linear term of Equation (19) conserves separately E±(t). This means that both the total energy E(t) = E+ + E− and the cross-helicity Ec(t) = E+−E−, say the correlation between velocity and magnetic field, are conserved in absence of dissipation and external forcing terms. In the idealized homogeneous and isotropic situation we can define the pseudo-energy tensor, which using the incompressibility condition can be written as $$U_{ab}^ \pm (k,t) \equiv \left( {\frac{L} {{2\pi }}} \right)^3 \left\langle {z_a^ \pm (k,t)z_b^ \pm (k,t)} \right\rangle = \left( {\delta _{ab} - \frac{{k_a k_b }} {{k^2 }}} \right)q^ \pm (k),$$ (19d) brackets being ensemble averages, where q±(k) is an arbitrary odd function of the wave vector k and represents the pseudo-energies spectral density. When integrated over all wave vectors under the assumption of isotropy $$\begin{array}{*{20}c} {Tr\left[ {\int {d^3 k U_{ab}^ \pm (k,t)} } \right] = 2\smallint _0^\infty } & {E^ \pm (k,t)dk} \\ \end{array},$$ (19e) where we introduce the spectral pseudo-energy E±(k, t) = 4πk2q±(k, t). 
This last quantity can be measured, and it can be shown that it satisfies the equations $$\frac{{\partial E^ \pm (k,t)}} {{\partial t}} = T^ \pm (k,t) - 2\nu k^2 E^ \pm (k,t) + F^ \pm (k,t).$$ We use ν = η in order not to worry about coupling between + and − modes in the dissipative range. Since the non-linear term conserves the total pseudo-energies we have $$\smallint _0^\infty dkT^ \pm (k,t) = 0,$$ so that, when integrated over all wave vectors, we obtain the energy balance equation for the total pseudo-energies $$\frac{{dE^ \pm (t)}} {{dt}} = \int_0^\infty {dk F^ \pm (k,t) - 2\nu } \int_0^\infty {dk k^2 E^ \pm (k,t)}.$$ This last equation simply means that the time variations of the pseudo-energies are due to the difference between the injected power and the dissipated power, so that in a stationary state $$\int_0^\infty {dk F^ \pm (k,t) - 2\nu } \int_0^\infty {dk k^2 E^ \pm (k,t) = \varepsilon ^ \pm }.$$ Looking at Equation (20), we see that the role played by the non-linear term is that of a redistribution of energy among the various wave vectors. This is the physical meaning of the non-linear energy cascade of turbulence.

2.5 The inhomogeneous case

Equations (20) refer to standard homogeneous and incompressible MHD. Of course, the solar wind is inhomogeneous and compressible, and the energy transfer equations can be made as complicated as we want by modeling all possible physical effects like, for example, the wind expansion or the inhomogeneous large-scale magnetic field. Of course, simulating all turbulent scales requires a computational effort which is beyond present capabilities. A way to overcome this limitation is to introduce some turbulence modeling of the various physical effects. For example, a set of equations for the cross-correlation functions of both Elsässer fluctuations has been developed independently by Marsch and Tu (1989), Zhou and Matthaeus (1990), Oughton and Matthaeus (1992), and Tu and Marsch (1990a), following Marsch and Mangeney (1987) (see the review by Tu and Marsch, 1996), and is based on some rather strong assumptions: i) a two-scale separation, and ii) small-scale fluctuations are represented as a kind of stochastic process (Tu and Marsch, 1996). These equations look quite complicated, and only a comparison based on order-of-magnitude estimates can be made between them and solar wind observations (Tu and Marsch, 1996). A different approach, introduced by Grappin et al. (1993), is based on the so-called "expanding-box model" (Grappin and Velli, 1996; Liewer et al., 2001; Hellinger et al., 2005). The model uses a transformation of variables to the moving solar wind frame that expands together with the size of the parcel of plasma as it propagates outward from the Sun. Although the model requires several simplifying assumptions, like for example lateral expansion only for the wave-packets and constant solar wind speed, as well as a second-order approximation for the coordinate transformation (Liewer et al., 2001) to remain tractable, it provides a qualitatively good description of the solar wind expansion, thus connecting the disparate scales of the plasma in the various parts of the heliosphere.

2.6 Dynamical system approach to turbulence

In the limit of fully developed turbulence, when dissipation goes to zero, an infinite range of scales is excited, that is, energy lies over all available wave vectors.
Dissipation takes place at a typical dissipation length scale which depends on the Reynolds number Re through ℓD ~ LRe−3/4 (for a Kolmogorov spectrum E(k) ~ k−5/3). In 3D numerical simulations the minimum number of grid points necessary to obtain information on the fields at these scales is given by N ~ (L/ℓD)³ ~ Re^{9/4}. This rough estimate shows that a considerable amount of memory is required when we want to perform numerical simulations at high Re. At present, typical values of the Reynolds number reached in 2D and 3D numerical simulations are of the order of 10^4 and 10^3, respectively. At these values the inertial range spans approximately one decade or a little more. Given the situation described above, the question of how best to describe the dynamics which results from the original equations, using only a small number of degrees of freedom, becomes a very important issue. This can be achieved by introducing turbulence models which are investigated using tools of dynamical system theory (Bohr et al., 1998). Dynamical systems, then, are solutions of minimal sets of ordinary differential equations that can mimic the gross features of the turbulent energy cascade. These studies are motivated by the famous Lorenz model (Lorenz, 1963) which, containing only three degrees of freedom, simulates the complex chaotic behavior of turbulent atmospheric flows, becoming a paradigm for the study of chaotic systems. The Lorenz model has been used as a paradigm as far as the transition to turbulence is concerned. Actually, since the solar wind is in a state of fully developed turbulence, the topic of the transition to turbulence is not so close to the main goal of this review. However, given its importance in the theory of dynamical systems, we spend a few sentences about this central topic. Up to the Lorenz chaotic model, studies on the birth of turbulence dealt with linear and, very rarely, with weakly non-linear evolution of external disturbances. The first physical model of the laminar-turbulent transition is due to Landau and it is reported in the fourth volume of the course on Theoretical Physics (Landau and Lifshitz, 1971). According to this model, as the Reynolds number is increased, the transition is due to an infinite series of Hopf bifurcations at fixed values of the Reynolds number. Each subsequent bifurcation adds a new incommensurate frequency to the flow, whose dynamics rapidly become quasi-periodic. Due to the infinite number of degrees of freedom involved, the quasi-periodic dynamics resembles that of a turbulent flow. The Landau transition scenario is, however, untenable because incommensurate frequencies cannot exist without coupling between them. Ruelle and Takens (1971) proposed a new mathematical model, according to which after a few, usually three, Hopf bifurcations the flow becomes suddenly chaotic. In the phase space this state is characterized by a very intricate attracting subset, a strange attractor. The flow corresponding to this state is highly irregular and strongly dependent on initial conditions. This characteristic feature is now known as the butterfly effect and represents the true definition of deterministic chaos. These authors indicated as an example for the occurrence of a strange attractor the strange time behavior of the Lorenz model.
The model is a paradigm for the occurrence of turbulence in a deterministic system; it reads $$\begin{array}{*{20}c} {\frac{{dx}} {{dt}} = P_r (y - x),} & {\frac{{dy}} {{dt}} = Rx - y - xz,} & {\frac{{dz}} {{dt}} = xy - bz} \\ \end{array},$$ where x(t), y(t), and z(t) represent the first three modes of a Fourier expansion of the fluid convective equations in the Boussinesq approximation, Pr is the Prandtl number, b is a geometrical parameter, and R is the ratio between the Rayleigh number and the critical Rayleigh number for convective motion. The time evolution of the variables x(t), y(t), and z(t) is reported in Figure 12. A reproduction of the Lorenz butterfly attractor, namely the projection of the variables on the plane (x, z), is shown in Figure 13. A few years later, Gollub and Swinney (1975) performed very sophisticated experiments,4 concluding that the transition to turbulence in a flow between co-rotating cylinders is described by the Ruelle and Takens (1971) model rather than by the Landau scenario. After this discovery, the strange attractor model gained a lot of popularity, thus stimulating a large number of further studies on the time evolution of non-linear dynamical systems. An enormous number of papers on chaos rapidly appeared in the literature, in virtually all fields of physics, and the transition to chaos became a new topic. Of course, further studies on chaos rapidly lost touch with turbulence studies and turbulence, as reported by Feynman et al. (1977), still remains ... the last great unsolved problem of classical physics. Furthermore, we would like to cite recent theoretical efforts made by Chian and coworkers (Chian et al., 1998, 2003) related to the onset of Alfvénic turbulence. These authors numerically solved the derivative non-linear Schrödinger equation (Mjølhus, 1976; Ghosh and Papadopoulos, 1987), which governs the spatio-temporal dynamics of non-linear Alfvén waves, and found that Alfvénic intermittent turbulence is characterized by strange attractors. Note that the physics involved in the derivative non-linear Schrödinger equation, and in particular the spatio-temporal dynamics of non-linear Alfvén waves, cannot be described by the usual incompressible MHD equations. Rather, dispersive effects are required. At variance with the usual MHD, this can be satisfied by requiring that the effect of ion inertia be taken into account. This results in a generalized Ohm's law that includes a (j × B)-term, which represents the compressible Hall correction to MHD, the so-called compressible Hall-MHD model.

Figure 12: Time evolution of the variables x(t), y(t), and z(t) in the Lorenz model (see Equation (22)). This figure has been obtained by using the parameters Pr = 10, b = 8/3, and R = 28.

Figure 13: The Lorenz butterfly attractor, namely the time behavior of the variable z(t) vs. x(t) as obtained from the Lorenz model (see Equation (22)). This figure has been obtained by using the parameters Pr = 10, b = 8/3, and R = 28.

In this context turbulence can evolve via two distinct routes: Pomeau-Manneville intermittency (Pomeau and Manneville, 1980) and crisis-induced intermittency (Ott and Sommerer, 1994). Both types of chaotic transitions involve episodic switching between different temporal behaviors. In one case (Pomeau-Manneville) the behavior of the magnetic fluctuations evolves from nearly periodic to chaotic, while in the other case the behavior intermittently assumes weakly chaotic or strongly chaotic features.
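As a small numerical aside (a minimal sketch, not taken from the review), the Lorenz system of Equation (22) can be integrated directly with the same parameter values quoted for Figures 12 and 13; plotting z against x reproduces the butterfly attractor.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz model of Equation (22) with Pr = 10, b = 8/3, R = 28.
Pr, b, R = 10.0, 8.0 / 3.0, 28.0

def lorenz(t, X):
    x, y, z = X
    return [Pr * (y - x), R * x - y - x * z, x * y - b * z]

sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0],
                dense_output=True, max_step=0.01)

t = np.linspace(0.0, 50.0, 5000)
x, y, z = sol.sol(t)           # chaotic trajectory on the butterfly attractor
print(x[-1], z[-1])
```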
2.7 Shell models for the turbulence cascade

Since numerical simulations, in some cases, cannot be used, simple dynamical systems can be introduced to investigate, for example, statistical properties of turbulent flows which can be compared with observations. These models, which try to mimic the gross features of the time evolution of the spectral Navier-Stokes or MHD equations, are often called "shell models" or "discrete cascade models". Starting from the old papers by Siggia (1977), different shell models have been introduced in the literature for 3D fluid turbulence (Biferale, 2003). MHD shell models have been introduced to describe the MHD turbulent cascade (Plunian et al., 2012), starting from the paper by Gloaguen et al. (1985). The most used shell model is usually quoted in the literature as the GOY model, and was introduced some time ago by Gledzer (1973) and by Ohkitani and Yamada (1989). Apart from the first MHD shell model (Gloaguen et al., 1985), further models, like those by Frick and Sokoloff (1998) and Giuliani and Carbone (1998), have been introduced and investigated in detail. In particular, the latter ones represent the counterpart of the hydrodynamic GOY model, that is, they coincide with the usual GOY model when the magnetic variables are set to zero. In the following, we will refer to the MHD shell model as the FSGC model. The shell model can be built up through four different steps (a minimal numerical sketch of the first steps of this construction is given below). Introduce discrete wave vectors: as a first step we divide the wave vector space into a discrete number of shells whose radii grow according to the power law k_n = k_0 λ^n, where λ > 1 is the inter-shell ratio, k_0 is the fundamental wave vector related to the largest available length scale L, and n = 1, 2, ..., N. Assign to each shell discrete scalar variables: each shell is assigned two or more complex scalar variables u_n(t) and b_n(t), or Elsässer variables Z_n^±(t) = u_n(t) ± b_n(t). These variables describe the chaotic dynamics of modes in the shell of wave vectors between k_n and k_{n+1}. It is worth noting that the discrete variable, mimicking the average behavior of Fourier modes within each shell, represents characteristic fluctuations across eddies at the scale ℓ_n ~ k_n^{−1}. That is, the fields have the same scalings as field differences, for example Z_n^± ~ |Z±(x + ℓ_n) − Z±(x)| ~ ℓ_n^h in fully developed turbulence. In this way, the possibility to describe spatial behavior within the model is ruled out. We can only get, from a dynamical shell model, time series for shell variables at a given k_n, and we lose the fact that turbulence is a typical temporal and spatial complex phenomenon. Introduce a dynamical model which describes the non-linear evolution: looking at Equation (19), a model must have quadratic non-linearities among opposite variables Z_n^±(t) and Z_n^∓(t), and must couple different shells with free coupling coefficients. Fix as much as possible the coupling coefficients: this last step is not standard. A numerical investigation of the model might require scanning the properties of the system when all coefficients are varied.
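Before the coupling coefficients are fixed in the next paragraph, the discretization in the first two steps can be illustrated with a short sketch; all parameter values below are purely illustrative (the inter-shell ratio λ = 2 anticipates the value quoted later), and the non-linear coupling term, given explicitly in what follows, is not implemented here.

```python
import numpy as np

# Step 1: logarithmically spaced shells k_n = k0 * lambda^n, n = 1..N.
N, lam, k0 = 19, 2.0, 0.05          # illustrative values
n = np.arange(1, N + 1)
k = k0 * lam ** n

# Step 2: complex shell variables u_n, b_n and Elsasser combinations Z_n^±.
rng = np.random.default_rng(0)
u = 1e-3 * (rng.normal(size=N) + 1j * rng.normal(size=N))
b = 1e-3 * (rng.normal(size=N) + 1j * rng.normal(size=N))
Z_plus, Z_minus = u + b, u - b

# Ideal invariants used in the next step to fix the coupling coefficients:
E_pm = 0.5 * np.array([np.sum(np.abs(Z_plus) ** 2),
                       np.sum(np.abs(Z_minus) ** 2)])   # pseudo-energies E^±
E = 0.5 * np.sum(np.abs(u) ** 2 + np.abs(b) ** 2)       # total energy
Hc = np.sum(2.0 * np.real(u * np.conj(b)))              # cross-helicity
print(k[:4], E_pm, E, Hc)
```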
Coupling coefficients can be fixed by imposing the conservation laws of the original equations, namely the total pseudo-energies $$E^ \pm (t) = \frac{1} {2}\sum\limits_n {\left| {Z_n^ \pm } \right|^2 },$$ which means the conservation of both the total energy and the cross-helicity: $$\begin{array}{*{20}c} {E(t) = \frac{1} {2}\sum\limits_n {\left| {u_n } \right|^2 + \left| {b_n } \right|^2 ;} } & {H_c (t) = \sum\limits_n {2\Re e(u_n b_n^* )} } \\ \end{array},$$ where Re indicates the real part of the product u_n b_n^*. As we said before, shell models cannot describe the spatial geometry of non-linear interactions in turbulence, so that we lose the possibility of distinguishing between two-dimensional and three-dimensional turbulent behavior. The distinction is, however, of primary importance, for example as far as the dynamo effect is concerned in MHD. However, there is a third invariant which we can impose, namely $$H(t) = \sum\limits_n {\left( { - 1} \right)^n \frac{{\left| {b_n } \right|^2 }} {{k_n^\alpha }}},$$ which can be dimensionally identified as the magnetic helicity when α = 1, so that the shell model so obtained is able to mimic a kind of 3D MHD turbulence (Giuliani and Carbone, 1998). After some algebra, taking into account both the dissipative and forcing terms, the FSGC model can be written as $$\frac{{dZ_n^ \pm }} {{dt}} = ik_n \Phi _n^{ \pm *} + \frac{{\nu \pm \mu }} {2}k_n^2 Z_n^ + + \frac{{\nu \mp \mu }} {2}k_n^2 Z_n^ - + F_n^ \pm ,$$ $$\begin{array}{*{20}c} {\Phi _n^ \pm = \left( {\frac{{2 - a - c}} {2}} \right)Z_{n + 2}^ \pm Z_{n + 1}^ \mp + \left( {\frac{{a + c}} {2}} \right)Z_{n + 1}^ \pm Z_{n + 2}^ \mp + } \\ { + \left( {\frac{{c - a}} {{2\lambda }}} \right)Z_{n - 1}^ \pm Z_{n + 1}^ \mp - \left( {\frac{{a + c}} {{2\lambda }}} \right)Z_{n - 1}^ \mp Z_{n + 1}^ \pm + } \\ { - \left( {\frac{{c - a}} {{2\lambda ^2 }}} \right)Z_{n - 2}^ \mp Z_{n - 1}^ \pm - \left( {\frac{{2 - a - c}} {{2\lambda ^2 }}} \right)Z_{n - 1}^ \mp Z_{n - 2}^ \pm } \\ \end{array},$$ where5 λ = 2, a = 1/2, and c = 1/3. In the following, we will consider only the case where the dissipative coefficients are the same, i.e., ν = μ.

2.8 The phenomenology of fully developed turbulence: Fluid-like case

Here we present the phenomenology of fully developed turbulence, as far as the scaling properties are concerned. In this way we are able to recover a universal form for the spectral pseudo-energy in the stationary case. In real space a common tool to investigate the statistical properties of turbulence is represented by the field increments Δz_ℓ^±(r) = [z±(r + ℓ) − z±(r)] · e, e being the unit vector along the longitudinal direction. These stochastic quantities represent fluctuations6 across eddies at the scale ℓ. The scaling invariance of the MHD equations (cf. Section 2.3), from a phenomenological point of view, implies that we expect solutions where Δz_ℓ^± ~ ℓ^h. All the statistical properties of the field depend only on the scale ℓ, on the mean pseudo-energy dissipation rates ε±, and on the viscosity ν. Also, ε± is supposed to be the common value of the injection, transfer and dissipation rates. Moreover, the dependence on the viscosity only arises at small scales, near the bottom of the inertial range. Under these assumptions the typical pseudo-energy dissipation rate per unit mass scales as ε± ~ (Δz_ℓ^±)²/t_ℓ^±. The time t_ℓ^± associated with the scale ℓ
is the typical time needed for the energy to be transferred to a smaller scale, say the eddy turnover time t_ℓ^± ~ ℓ/Δz_ℓ^∓, so that $$\varepsilon ^ \pm \sim (\Delta z_\ell ^ \pm )^2 \Delta z^ \mp /\ell .$$ When we conjecture that both Δz± fluctuations have the same scaling laws, namely Δz± ~ ℓ^h, we recover the Kolmogorov scaling for the field increments $$\Delta z_\ell ^ \pm \sim (\varepsilon ^ \pm )^{1/3} \ell ^{1/3} .$$ Usually, we refer to this scaling as the K41 model (Kolmogorov, 1941, 1991; Frisch, 1995). Note that, since from dimensional considerations the scaling of the energy transfer rate should be ε± ~ ℓ^{1−3h}, h = 1/3 is the choice that guarantees the absence of scaling for ε±. In real space, turbulence properties can be described using either the probability distribution functions (PDFs hereafter) of increments, or the longitudinal structure functions, which represent nothing but the higher-order moments of the field. Disregarding the magnetic field, in purely fluid fully developed turbulence, this is defined as S_ℓ^(p) = 〈Δu_ℓ^p〉. These quantities, in the inertial range, behave as a power law S_ℓ^(p) ~ ℓ^{ξ_p}, so that it is interesting to compute the set of scaling exponents ξ_p. Using, from a phenomenological point of view, the scaling for field increments (see Equation (26)), it is straightforward to compute the scaling laws S_ℓ^(p) ~ ℓ^{p/3}. Then ξ_p = p/3 turns out to be a linear function of the order p. When we assume the scaling law Δz_ℓ^± ~ ℓ^h, we can compute the high-order moments of the structure functions for increments of the Elsässer variables, namely 〈(Δz_ℓ^±)^p〉 ~ ℓ^{ξ_p}, thus obtaining a linear scaling ξ_p = p/3, similar to usual fluid flows. For Gaussianly distributed fields, a particular role is played by the second-order moment, because all moments can be computed from S_ℓ^(2). It is straightforward to translate the dimensional analysis results to Fourier spectra. The spectral property of the field can be recovered from S_ℓ^(2), say in the homogeneous and isotropic case $$S_\ell ^{(2)} = 4\int_0^\infty {E(k)\left( {1 - \frac{{\sin k\ell }} {{k\ell }}} \right)dk,}$$ where k ~ 1/ℓ is the wave vector, so that in the inertial range where Equation (42) is verified $$E(k) \sim \varepsilon ^{2/3} k^{ - 5/3} .$$ The Kolmogorov spectrum (see Equation (27)) is largely observed in all experimental investigations of turbulence, and is considered as the main result of the K41 phenomenology of turbulence (Frisch, 1995). However, spectral analysis does not provide a complete description of the statistical properties of the field, unless this has Gaussian properties. The same considerations can be made for the spectral pseudo-energies E±(k), which are related to the second-order structure functions 〈[Δz_ℓ^±]²〉.
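As a brief numerical illustration of these scalings (a synthetic random-phase signal with a prescribed k^{-5/3} spectrum stands in for a real velocity record; nothing here is taken from solar wind data), the sketch below computes the second-order structure function from increments and checks that its slope is close to the K41 value 2/3.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1D signal with E(k) ~ k^(-5/3): amplitudes |u(k)| ~ k^(-5/6), random phases.
N = 2**16
k = np.fft.rfftfreq(N, d=1.0) * N          # integer wavenumbers 0 .. N/2
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)
phases = np.exp(2j * np.pi * rng.random(k.size))
u = np.fft.irfft(amp * phases, n=N)

def structure_function(u, p, ells):
    """S_l^(p) = < |u(x + l) - u(x)|^p > estimated from one record."""
    return np.array([np.mean(np.abs(u[ell:] - u[:-ell]) ** p) for ell in ells])

ells = np.unique(np.logspace(0.5, 3.5, 20).astype(int))
S2 = structure_function(u, 2, ells)

# Log-log slope of S^(2): for a -5/3 spectrum one expects roughly 2/3.
slope = np.polyfit(np.log(ells), np.log(S2), 1)[0]
print(f"measured S2 slope = {slope:.2f}  (K41 prediction: 0.67)")
```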
2.9 The phenomenology of fully developed turbulence: Magnetically-dominated case

The phenomenology of the magnetically-dominated case has been investigated by Iroshnikov (1963) and Kraichnan (1965), then developed by Dobrowolny et al. (1980b) to tentatively explain the occurrence of the observed Alfvénic turbulence, and finally by Carbone (1993) and Biskamp (1993) to get scaling laws for structure functions. It is based on the Alfvén effect, that is, the decorrelation of interacting eddies, which can be explained phenomenologically as follows. Since non-linear interactions happen only between oppositely propagating fluctuations, they are slowed down (with respect to the fluid-like case) by the sweeping of the fluctuations across each other. This means that ε± ~ (Δz_ℓ^±)²/T_ℓ^±, but the characteristic time T_ℓ^± required to efficiently transfer energy from an eddy to another eddy at smaller scales cannot be the eddy-turnover time; rather, it is increased by a factor t_ℓ^±/t_A (t_A ~ ℓ/c_A < t_ℓ^± is the Alfvén time), so that T_ℓ^± ~ (t_ℓ^±)²/t_A. Then, immediately $$\varepsilon ^ \pm \sim \frac{{[\Delta z_\ell ^ \pm ]^2 [\Delta z_\ell ^ \mp ]^2 }} {{\ell c_A }} .$$ This means that both ± modes are transferred at the same rate to small scales, namely ε+ ~ ε− ~ ε, and this is the conclusion drawn by Dobrowolny et al. (1980b). In reality, this is not fully correct: the Alfvén effect implies that the energy transfer rates have the same scaling laws for the ± modes, but we cannot say anything about the amplitudes of ε+ and ε− (Carbone, 1993). Using the usual scaling law for fluctuations, it can be shown that the scaling behavior ε → λ^{1−4h} ε' holds. Then, when the energy transfer rate is constant, we find a scaling law different from that of Kolmogorov and, in particular, $$\Delta z_\ell ^ \pm \sim (\varepsilon c_A )^{1/4} \ell ^{1/4} .$$ Using this phenomenology the high-order moments of fluctuations are given by S_ℓ^(p) ~ ℓ^{p/4}. Even in this case, ξ_p = p/4 turns out to be a linear function of the order p. The pseudo-energy spectrum can be easily found to be $$E^ \pm (k) \sim (\varepsilon c_A )^{1/2} k^{ - 3/2} .$$ This is the Iroshnikov-Kraichnan spectrum. However, in a situation in which there is a balance between the linear Alfvén time scale, or wave period, and the non-linear time scale needed to transfer energy to smaller scales, the energy cascade is said to be critically balanced (Goldreich and Sridhar, 1995). In these conditions, it can be shown that the power spectrum P(k) would scale as f^{−5/3} when the angle θ_B between the mean field direction and the flow direction is 90°, while the spectrum would follow f^{−2} in the case θ_B = 0° and would also have a smaller energy content than in the other case.

2.10 Some exact relationships

So far, we have been discussing the inertial range of turbulence. What this means from a heuristic point of view is somewhat clear, but when we try to identify the inertial range from the spectral properties of turbulence, in general the best we can do is to identify the inertial range with the intermediate range of scales where a Kolmogorov spectrum is observed. The often used identity inertial range ≃ intermediate range is somewhat arbitrary. In this regard, a very important result on turbulence, due to Kolmogorov (1941, 1991), is the so-called "4/5-law" which, being obtained from the Navier-Stokes equation, is "... one of the most important results in fully developed turbulence because it is both exact and nontrivial" (cf. Frisch, 1995). As a matter of fact, Kolmogorov analytically derived the following exact relation for the third-order structure function of velocity fluctuations: $$\left\langle {(\Delta v_\parallel (r,\ell ))^3 } \right\rangle = - \frac{4} {5}\varepsilon \ell ,$$ where r is the sampling direction, ℓ is the corresponding scale, and ε is the mean energy dissipation per unit mass, assumed to be finite and nonvanishing. This important relation can be obtained in a more general framework from the MHD equations. A Yaglom relation for MHD can be obtained using the analogy of the MHD equations with a transport equation, so that we can obtain a relation similar to Yaglom's equation for the transport of a passive quantity (Monin and Yaglom, 1975).
Using the above analogy, the Yaglom's relation has been extended some time ago to MHD turbulence by Chandrasekhar (1967), and recently it has been revised by Politano et al. (1998) and Politano and Pouquet (1998) in the framework of solar wind turbulence In the following section we report an alternative and more general derivation of the Yaglom's law using structure functions (Sorriso-Valvo et al., (2007; Carbone et al., (2009c). 2.11 Yaglom's law for MHD turbulence To obtain a general law we start from the incompressible MHD equations. If we write twice the MHD equations for two different and independent points x i and x i ' = x i + ℓ i , by substraction we obtain an equation for the vector differences Δz i ± = (z i ± )' − z i ± . Using the hypothesis of independence of points x i ' and x i with respect to derivatives, namely ∂ i (z i ± )' = ∂ i 'z j ± = 0 (where ∂ i ' represents derivative with respect to x i '), we get $$\partial _t \Delta z_i^ \pm + \Delta z_\alpha ^ \mp \partial '_\alpha \Delta z_i^ \pm + z_\alpha ^ \mp (\partial '_\alpha + \partial _\alpha )\Delta z_i^ \pm = - (\partial '_i + \partial _i )\Delta P + + (\partial _\alpha ^{2'} + \partial _\alpha ^2 )[\nu ^ \pm \Delta z_i^ + + \nu ^ \mp \Delta z_i^ - ]$$ (ΔP = Ptot' − Ptot). We look for an equation for the second-order correlation tensor 〈Δz i ± Δz j ± 〉 related to pseudo-energies. Actually the more general thing should be to look for a mixed tensor, namely 〈Δz i ± Δz j ∓ 〉, taking into account not only both pseudo-energies but also the time evolution of the mixed correlations 〈z i + z j − 〉 and 〈z i − z j + 〉. However, using the DIA closure by Kraichnan, it is possible to show that these elements are in general poorly correlated (Veltri, (1980). Since we are interested in the energy cascade, we limit ourselves to the most interesting equation that describes correlations about Alfvénic fluctuations of the same sign. To obtain the equations for pseudo-energies we multiply Equations (31) by Δz j ± , then by averaging we get $$\begin{array}{*{20}c} {\partial _t \left\langle {\Delta z_i^ \pm \Delta z_j^ \pm } \right\rangle + \frac{\partial } {{\partial \ell _\alpha }}\left\langle {\Delta Z_\alpha ^ \mp \left( {\Delta Z_i^ \pm \Delta Z_j^ \pm } \right)} \right\rangle = } \\ { = - \Lambda _{ij} - \Pi _{ij} + 2\nu \frac{{\partial ^2 }} {{\partial \ell _\alpha ^2 }}\left\langle {\Delta z_i^ \pm \Delta z_j^ \pm } \right\rangle - \frac{4} {3}\frac{\partial } {{\partial \ell _\alpha }}(\varepsilon _{ij}^ \pm \ell _\alpha )} \\ \end{array},$$ where we used the hypothesis of local homogeneity and incompressibility. In Equation (32) we defined the average dissipation tensor $$\varepsilon _{ij}^ \pm = \nu \left\langle {\left( {\partial _\alpha Z_i^ \pm } \right)\left( {\partial _\alpha Z_j^ \pm } \right)} \right\rangle .$$ The first and second term on the r.h.s. 
of the Equation (32) represent respectively a tensor related to large-scales inhomogeneities $$\Lambda _{ij} = \left\langle {z_\alpha ^ \mp \left( {\partial '_\alpha + \partial _\alpha } \right)\left( {\Delta z_i^ \pm \Delta z_j^ \pm } \right)} \right\rangle$$ and the tensor related to the pressure term $$\Pi _{ij} = \left\langle {\Delta z_j^ \pm \left( {\partial '_i + \partial _i } \right)\Delta P + \Delta z_i^ \pm \left( {\partial '_j + \partial _j } \right)\Delta P} \right\rangle .$$ Furthermore, In order not to worry about couplings between Elsäasser variables in the dissipative terms, we make the usual simplifying assumption that kinematic viscosity is equal to magnetic diffusivity, that is ν± = ν∓ = ν. Equation (32) is an exact equation for anisotropic MHD equations that links the second-order complete tensor to the third-order mixed tensor via the average dissipation rate tensor. Using the hypothesis of global homogeneity the term Λ ij = 0, while assuming local isotropy Π ij = 0. The equation for the trace of the tensor can be written as $$\partial _t \left\langle {\left| {\Delta z_i^ \pm } \right|^2 } \right\rangle + \frac{\partial } {{\partial \ell _\alpha }}\left\langle {\Delta z_\alpha ^ \mp \left| {\Delta z_i^ \pm } \right|^2 } \right\rangle = 2\nu \frac{{\partial ^2 }} {{\partial \ell _\alpha }}\left\langle {\left| {\Delta z_i^ \pm } \right|^2 } \right\rangle - \frac{4} {3}\frac{\partial } {{\partial \ell _\alpha }}(\varepsilon _{ii}^ \pm \ell _\alpha ),$$ where the various quantities depends on the vector ℓ α . Moreover, by considering only the trace we ruled out the possibility to investigate anisotropies related to different orientations of vectors within the second-order moment. It is worthwhile to remark here that only the diagonal elements of the dissipation rate tensor, namely ∈ ii ± are positive defined while, in general, the off-diagonal elements ∈ ij ± are not positive. For a stationary state the Equation (36) can be written as the divergenceless condition of a quantity involving the third-order correlations and the dissipation rates $$\frac{\partial } {{\partial \ell _\alpha }}\left[ {\left\langle {\Delta z_\alpha ^ \mp \left| {\Delta z_i^ \pm } \right|^2 } \right\rangle - 2\nu \frac{\partial } {{\partial \ell _\alpha }}\left\langle {\left| {\Delta z_i^ \pm } \right|^2 } \right\rangle - \frac{4} {3}(\varepsilon _{ii}^ \pm \ell _\alpha )} \right] = 0,$$ from which we can obtain the Yaglom's relation by projecting Equation (37) along the longitudinal ℓ α = ℓe r direction. 
This operation involves the assumption that the flow is locally isotropic, that is, the fields depend locally only on the separation ℓ, so that $$\left( {\frac{2} {\ell } + \frac{\partial } {{\partial \ell }}} \right)\left[ {\left\langle {\Delta z_\ell ^ \mp \left| {\Delta z_i^ \pm } \right|^2 } \right\rangle - 2\nu \frac{\partial } {{\partial \ell }}\left\langle {\left| {\Delta z_i^ \pm } \right|^2 } \right\rangle + \frac{4} {3}\varepsilon _{ii}^ \pm \ell } \right] = 0.$$ The only solution that is compatible with the absence of singularity in the limit ℓ → 0 is $$\left\langle {\Delta z_\ell ^ \mp \left| {\Delta z_i^ \pm } \right|^2 } \right\rangle = 2\nu \frac{\partial } {{\partial \ell }}\left\langle {\left| {\Delta z_i^ \pm } \right|^2 } \right\rangle + \frac{4} {3}\varepsilon _{ii}^ \pm \ell ,$$ which reduces to the Yaglom law for MHD turbulence as obtained by Politano and Pouquet (1998) in the inertial range when ν → 0 $$Y_\ell ^ \pm \equiv \left\langle {\Delta z_\ell ^ \mp \left| {\Delta z_i^ \pm } \right|^2 } \right\rangle = \frac{4} {3}\varepsilon _{ii}^ \pm \ell .$$ Finally, in the fluid-like case where z_i^+ = z_i^− = u_i we obtain the usual Yaglom law for fluid flows $$\left\langle {\Delta v_\ell \left| {\Delta v_\ell } \right|^2 } \right\rangle = - \frac{4} {3}(\varepsilon \ell ),$$ which in the isotropic case, where 〈Δu_ℓ^3〉 = 3〈Δu_ℓ Δu_y^2〉 = 3〈Δu_ℓ Δu_z^2〉 (Monin and Yaglom, 1975), immediately reduces to the Kolmogorov law $$\left\langle {\Delta v_\ell ^3 } \right\rangle = - \frac{4} {5}\varepsilon \ell$$ (the separation ℓ has been taken along the streamwise x-direction). The relations we obtained can be used, or better, in a certain sense they might be used, as a formal definition of the inertial range. Since they are exact relationships derived from the Navier-Stokes and MHD equations under the usual hypotheses, they represent a kind of "zeroth-order" condition on experimental and theoretical analyses of the inertial range properties of turbulence. It is worthwhile to remark on the two main properties of the Yaglom laws. The first one is the fact that, as clearly appears from the Kolmogorov relation (Kolmogorov, 1941), the third-order moment of the velocity fluctuations is different from zero. This means that some non-Gaussian features must be at work or, which is the same, some hidden phase correlations. Turbulence is something more complicated than random fluctuations with a certain slope of the spectral density. The second feature is the minus sign which appears in the various relations. This is essential when the sign of the energy cascade must be inferred from the Yaglom relations, the negative asymmetry being a signature of a direct cascade towards smaller scales. Note that Equation (40) has been obtained in the limit of zero viscosity, assuming that the pseudo-energy dissipation rates ε_ii^± remain finite in this limit. In usual fluid flows the analogous hypothesis, namely that ε remains finite in the limit ν → 0, is an experimental evidence, confirmed by experiments in different conditions (Frisch, 1995). In MHD turbulent flows this remains a conjecture, confirmed only by high-resolution numerical simulations (Mininni and Pouquet, 2009).
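As a small practical illustration (a sketch only; the estimator and the synthetic Gaussian signal below are not taken from the review), one can invert the Kolmogorov relation 〈Δv_ℓ^3〉 = −(4/5) ε ℓ to estimate ε from the measured third-order moment at an inertial-range lag. Applied to a Gaussian signal, whose third-order moment vanishes, it returns a value near zero, which is exactly the non-Gaussianity point made above.

```python
import numpy as np

def epsilon_from_45_law(v, ell, dx=1.0):
    """Estimate the mean energy dissipation rate per unit mass from the
    4/5-law, <dv_parallel^3> = -(4/5) eps ell, at a single lag ell."""
    lag = int(round(ell / dx))
    dv = v[lag:] - v[:-lag]
    return -5.0 * np.mean(dv ** 3) / (4.0 * ell)

rng = np.random.default_rng(4)
v_gauss = rng.normal(size=100_000)

# For a Gaussian signal the third-order moment vanishes, so the estimate is ~0.
print(epsilon_from_45_law(v_gauss, ell=10.0))
```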
From Equation (37), by defining Δz_i^± = Δv_i ± Δb_i we immediately obtain the two equations
$$\frac{\partial }{{\partial \ell _\alpha }}\left[ {\left\langle {\Delta v_\alpha \Delta E} \right\rangle - 2\left\langle {\Delta b_\alpha \Delta C} \right\rangle - 2\nu \frac{\partial }{{\partial \ell _\alpha }}\left\langle {\Delta E} \right\rangle + \frac{4}{3}\left( {\varepsilon _E \ell _\alpha } \right)} \right] = 0$$
$$\frac{\partial }{{\partial \ell _\alpha }}\left[ { - \left\langle {\Delta b_\alpha \Delta E} \right\rangle + 2\left\langle {\Delta v_\alpha \Delta C} \right\rangle - 4\nu \frac{\partial }{{\partial \ell _\alpha }}\left\langle {\Delta C} \right\rangle + \frac{4}{3}\left( {\varepsilon _C \ell _\alpha } \right)} \right] = 0,$$
where we defined the energy fluctuations ΔE = |Δv_i|² + |Δb_i|² and the correlation fluctuations ΔC = Δv_i Δb_i. In the same way the quantities ε_E = (ε_ii^+ + ε_ii^−)/2 and ε_C = (ε_ii^+ − ε_ii^−)/2 represent the energy and correlation dissipation rates, respectively. By projecting once more on the longitudinal direction, and assuming vanishing viscosity, we obtain Yaglom's law written in terms of velocity and magnetic fluctuations
$$\left\langle {\Delta v_\ell \Delta E} \right\rangle - 2\left\langle {\Delta b_\ell \Delta C} \right\rangle = - \frac{4}{3}\varepsilon _E \ell$$
$$- \left\langle {\Delta b_\ell \Delta E} \right\rangle + 2\left\langle {\Delta v_\ell \Delta C} \right\rangle = - \frac{4}{3}\varepsilon _C \ell .$$

2.12 Density-mediated Elsässer variables and Yaglom's law

Relation (40), which is of general validity within MHD turbulence, requires local characteristics of the turbulent fluid flow which are not always satisfied in the solar wind, namely large-scale homogeneity, isotropy, and incompressibility. Density fluctuations in the solar wind have a low amplitude, so that a nearly incompressible MHD framework is usually considered (Montgomery et al., 1987; Matthaeus and Brown, 1988; Zank and Matthaeus, 1993; Matthaeus et al., 1991; Bavassano and Bruno, 1995). However, compressible fluctuations are observed, typically convected structures characterized by anticorrelation between kinetic pressure and magnetic pressure (Tu and Marsch, 1994). Properties and interactions of the basic MHD modes in the compressive case have also been considered (Goldreich and Sridhar, 1995; Cho and Lazarian, 2002). A first attempt to include density fluctuations in the framework of fluid turbulence was due to Lighthill (1955). He pointed out that, in a compressible energy cascade, the mean energy transfer rate per unit volume ε_V ~ ρu³/ℓ should be constant in a statistical sense (u being the characteristic velocity fluctuation at the scale ℓ), thus obtaining the scaling relation u ~ (ℓ/ρ)^{1/3}. Fluctuations of a density-weighted velocity field u ≡ ρ^{1/3}v should thus follow the usual Kolmogorov scaling u³ ~ ℓ. The same phenomenological arguments were introduced in MHD turbulence by Carbone et al. (2009a) by considering the pseudo-energy dissipation rates per unit volume ε_V^± = ρε_ii^± and introducing density-weighted Elsässer fields, defined as w^± ≡ ρ^{1/3}z^±. A relation equivalent to the Yaglom-type relation (40),
$$W_\ell ^ \pm \equiv \left\langle \rho \right\rangle ^{ - 1} \left\langle {\Delta w_\ell ^ \mp \left| {\Delta w_i^ \pm } \right|^2 } \right\rangle = - C\varepsilon _{ii}^ \pm \ell$$
(C is some constant assumed to be of the order of unity), should then hold for the density-weighted increments Δw^±.
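The density-weighted version of the estimator requires only two extra ingredients: multiplying the Elsässer fields by ρ^{1/3} before taking increments, and normalizing by the mean density as in Equation (47). A minimal sketch under the same caveats as before (synthetic stand-in signals, illustrative names and amplitudes):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2**16

# Stand-in signals: a slowly varying positive "proton density" and random-walk Elsasser fields.
rho = 5.0 + 0.5 * np.convolve(rng.standard_normal(n), np.ones(200) / 200, mode="same")
zp = np.cumsum(rng.standard_normal((3, n)), axis=1) * 1e-2
zm = np.cumsum(rng.standard_normal((3, n)), axis=1) * 1e-2

# Density-weighted Elsasser fields w = rho^(1/3) z, as introduced for Equation (47).
wp = rho ** (1.0 / 3.0) * zp
wm = rho ** (1.0 / 3.0) * zm

def density_weighted_moment(w_same, w_other, rho, lags):
    """W_l = <rho>^-1 < dw_l(other) |dw_i(same)|^2 >, with the time lag standing in for l."""
    res = []
    for lag in lags:
        dws = w_same[:, lag:] - w_same[:, :-lag]
        dwo = w_other[:, lag:] - w_other[:, :-lag]
        res.append(np.mean(dwo[0] * np.sum(dws**2, axis=0)) / rho.mean())
    return np.array(res)

lags = np.unique(np.logspace(0, 3, 15).astype(int))
print(np.c_[lags, density_weighted_moment(wp, wm, rho, lags)])
```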
Relation W_ℓ^± reduces to Y_ℓ^± in the case of constant density, allowing for a comparison between Yaglom's law for incompressible MHD flows and its compressible counterpart. Despite its simple phenomenological derivation, the introduction of the density fluctuations in the Yaglom-type scaling (47) should describe the turbulent cascade for compressible fluid (or magnetofluid) turbulence. Even if the modified Yaglom's law (47) is not an exact relation like (40), being obtained from phenomenological considerations, the corresponding law for the velocity field in a compressible fluid flow has been observed in numerical simulations, the value of the constant C turning out to be negative and of the order of unity (Padoan et al., 2007; Kowal and Lazarian, 2007).

2.13 Yaglom's law in the shell model for MHD turbulence

As far as the shell model is concerned, the existence of a cascade towards small scales is expressed by an exact relation, which is equivalent to Equation (41). Using Equations (24), the scale-by-scale pseudo-energy budget is given by
$$\frac{d}{{dt}}\sum\limits_n {\left| {Z_n^ \pm } \right|^2 } = \sum\limits_n {k_n \operatorname{Im} [T_n^ \pm ]} - \sum\limits_n {2\nu k_n^2 \left| {Z_n^ \pm } \right|^2 } + \sum\limits_n {2\operatorname{Re} [Z_n^ \pm F_n^{ \pm *} ].}$$
The second and third terms on the right-hand side represent, respectively, the rate of pseudo-energy dissipation and the rate of pseudo-energy injection. The first term represents the flux of pseudo-energy along the wave vectors, responsible for the redistribution of pseudo-energies on the wave vectors, and is given by
$$T_n^ \pm = (a + c)Z_n^ \pm Z_{n + 1}^ \pm Z_{n + 2}^ \mp + \left( {\frac{{2 - a - c}}{\lambda }} \right)Z_{n - 1}^ \pm Z_{n + 1}^ \pm Z_n^ \mp + (2 - a - c)Z_n^ \pm Z_{n + 2}^ \pm Z_{n + 1}^ \mp + \left( {\frac{{c - a}}{\lambda }} \right)Z_n^ \pm Z_{n + 1}^ \pm Z_{n - 1}^ \mp .$$
Using the same assumptions as before, namely: i) the forcing terms act only on the largest scales, ii) the system can reach a statistically stationary state, and iii) in the limit of fully developed turbulence, ν → 0, the mean pseudo-energy dissipation rates tend to finite positive limits ε^±, it can be found that
$$\left\langle {T_n^ \pm } \right\rangle = - \varepsilon ^ \pm k_n^{ - 1} .$$
This is an exact relation which is valid in the inertial range of turbulence. Even in this case it can be used as an operative definition of the inertial range in the shell model, that is, the inertial range of the energy cascade in the shell model is defined as the range of scales k_n where the law from Equation (49) is verified.

3 Early Observations of MHD Turbulence in the Ecliptic

Here we briefly present the history, since the first Mariner missions during the 1960s, of the main steps towards the completion of an observational picture of turbulence in interplanetary space. This retrospective look at all the advances made in this field shows that space flights allowed us to discover a very large laboratory in space. As a matter of fact, in a wind tunnel we deal with characteristic dimensions of the order of L ≤ 10 m and probes of the size of about d ≃ 1 cm. In space, L ≃ 10^8 m, while "probes" (i.e., spacecraft) are about d ≃ 5 m. Thus, space provides a much larger laboratory. Most measurements are single-point measurements, with the ESA Cluster project providing multi-point measurements only recently.
3.1 Turbulence in the ecliptic

When dealing with laboratory turbulence it is important to know all the aspects of the experimental device where turbulent processes take place in order to estimate related possible effects driven or influenced by the environment. In the solar wind, the situation is, in some respects, similar, although the plasma does not experience any confinement due to the "experimental device", which would be represented by free interplanetary space. However, it is a matter of fact that the turbulent state of the wind fluctuations and the subsequent radial evolution during the wind expansion greatly differ from fast to slow wind, and it is now well accepted that the macrostructure convected by the wind itself plays some role (see reviews by Tu and Marsch, 1995a; Goldstein et al., 1995b). Fast solar wind originates from the polar regions of the Sun, within the open magnetic field line regions identified by coronal holes. Beautiful observations by the SOHO spacecraft (see the animation of Figure 14) have localized the birthplace of the solar wind within the intergranular lane, generally where three or more granules get together. Clear outflow velocities of up to 10 km s−1 have been recorded by the SOHO/SUMER instrument (Hassler et al., 1999).

Movie still: an animation built on SOHO/EIT and SOHO/SUMER observations of the solar-wind source regions and magnetic structure of the chromospheric network. Outflow velocities at the network cell boundaries and lane junctions below the polar coronal hole reach up to 10 km s−1 and are represented by the blue colored areas (original figures from Hassler et al., 1999). (For video see appendix.)

Slow wind, on the contrary, originates from the equatorial zone of the Sun. The slow wind plasma leaks from coronal features called "helmets", which can be easily seen protruding into the Sun's atmosphere during a solar eclipse (see Figure 15). Moreover, plasma emissions due to violent and abrupt phenomena also contribute to the solar wind in these regions of the Sun. An alternative view is that both high- and low-speed winds come from coronal holes (defined as open field regions) and that the wind speed at 1 AU is determined by the rate of flux-tube expansion near the Sun, as first suggested by Levine et al. (1977) (see also Wang and Sheeley Jr, 1990; Bravo and Stewart, 1997; Arge and Pizzo, 2000; Poduval and Zhao, 2004; Whang et al., 2005), and/or by the location and strength of the coronal heating (Leer and Holzer, 1980; Hammer, 1982; Hollweg, 1986; Withbroe, 1988; Wang, 1993, 1994; Sandbaek et al., 1994; Hansteen and Leer, 1995; Cranmer et al., 2007).

Helmet streamer during a solar eclipse. Slow wind leaks into the interplanetary space along the flanks of this coronal structure. Image reproduced from MSFC.

However, this situation greatly changes during different phases of the solar activity cycle. Polar coronal holes, which during the maximum of activity are limited to small and not well defined regions around the poles, considerably widen up during solar minimum, reaching the equatorial regions (Forsyth et al., 1997; Forsyth and Breen, 2002; Balogh et al., 1999). This new configuration produces an alternation of fast and slow wind streams in the ecliptic plane, the plane where most of the spacecraft operate and record data.
During the expansion, a dynamical interaction between fast and slow wind develops, generating the so called "stream interface", a thin region ahead of the fast stream characterized by strong compressive phenomena. Figure 16 shows a typical situation in the ecliptic where fast streams and slow wind were observed by Helios 2 s/c during its primary mission to the Sun. At that time, the spacecraft moved from 1 AU (around day 17) to its closest approach to the Sun at 0.29 AU (around day 108). During this radial excursion, Helios 2 had a chance to observe the same co-rotating stream, that is plasma coming from the same solar source, at different heliocentric distances. This fortuitous circumstance, gave us the unique opportunity to study the radial evolution of turbulence under the reasonable hypothesis of time-stationarity of the source regions. Obviously, similar hypotheses decay during higher activity phase of the solar cycle since, as shown in Figure 17, the nice and regular alternation of fast co-rotating streams and slow wind is replaced by a much more irregular and spiky profile also characterized by a lower average speed. Figure 18 focuses on a region centered on day 75, recognizable in Figure 16, when the s/c was at approximately 0.7 AU from the Sun. Slow wind on the left-hand side of the plot, fast wind on the right hand side, and the stream interface in between, can be clearly seen. This is a sort of canonical situation often encountered in the ecliptic, within the inner heliosphere, during solar activity minimum. Typical solar wind parameters, like proton number density ρ p proton temperature T p , magnetic field intensity |B|, azimuthal angle Φ, and elevation angle Θ are shown in the panels below the wind speed profile. A quick look at the data reveals that fast wind is less dense but hotter than slow wind. Moreover, both proton number density and magnetic field intensity are more steady and, in addition, the bottom two panels show that magnetic field vector fluctuates in direction much more than in slow wind. This last aspect unravels the presence of strong Alfvénic fluctuations which act mainly on magnetic field and velocity vector direction, and are typically found within fast wind (Belcher and Davis Jr, (1971; Belcher and Solodyna, (1975). The region just ahead of the fast wind, namely the stream interface, where dynamical interaction between fast and slow wind develops, is characterized by compressive effects which enhance proton density, temperature and field intensity. Within slow wind, a further compressive region precedes the stream interface but it is not due to dynamical effects but identifies the heliospheric current sheet, the surface dividing the two opposite polarities of the interplanetary magnetic field. As a matter of fact, the change of polarity can be noted within the first half of day 73 when the azimuthal angle Φ rotates by about 180°. Detailed studies (Bavassano et al., (1997) based on interplanetary scintillations (IPS) and in-situ measurements have been able to find a clear correspondence between the profile of path-integrated density obtained from IPS measurements and in-situ measurements by Helios 2 when the s/c was around 0.3 AU from the Sun. High velocity streams and slow wind as seen in the ecliptic during solar minimum as function of time [yyddd]. Streams identified by labels are the same co-rotating stream observed by Helios 2, during its primary mission to the Sun in 1976, at different heliocentric distances. 
These streams, named "The Bavassano.Villante streams" after Tu and Marsch (1995a), have been of fundamental importance in understanding the radial evolution of MHD turbulence in the solar wind. High velocity streams and slow wind as seen in the ecliptic during solar maximum. Data refer to Helios 2 observations in 1979. High velocity streams and slow wind as seen in the ecliptic during solar minimum. Figure 19 shows measurements of several plasma and magnetic field parameters. The third panel from the top is the proton number density and it shows an enhancement within the slow wind just preceding the fast stream, as can be seen at the top panel. In this case the increase in density is not due to the dynamical interaction between slow and fast wind but it represents the profile of the heliospheric current sheet as sketched on the left panel of Figure 19. As a matter of fact, at these short distances from the Sun, dynamical interactions are still rather weak and this kind of compressive effects can be neglected with respect to the larger density values proper of the current sheet. 3.1.1 Spectral properties First evidences of the presence of turbulent fluctuations were showed by Coleman (1968), who, using Mariner 2 magnetic and plasma observations, investigated the statistics of interplanetary fluctuations during the period August 27 - October 31, 1962, when the spacecraft orbited from 1.0 to 0.87 AU. At variance with Coleman (1968), Barnes and Hollweg (1974) analyzed the properties of the observed low-frequency fluctuations in terms of simple waves, disregarding the presence of an energy spectrum. Here we review the gross features of turbulence as observed in space by Mariner and Helios spacecraft. By analyzing spectral densities, Coleman (1968) concluded that the solar wind flow is often turbulent, energy being distributed over an extraordinarily wide frequency range, from one cycle per solar rotation to 0.1 Hz. The frequency spectrum, in a range of intermediate frequencies [2 × 10−5 −2.3 × 10−3], was found to behave roughly as f−1.2, the difference with the expected Kraichnan f−1.5 spectral slope was tentatively attributed to the presence of high-frequency transverse fluctuations resulting from plasma garden-hose instability (Scarf et al., (1967). Waves generated by this instability contribute to the spectrum only in the range of frequencies near the proton cyclotron frequency and would weaken the frequency dependence relatively to the Kraichnan scaling. The magnetic spectrum obtained by Coleman (1968) is shown in Figure 20. Left panel: a simple sketch showing the configuration of a helmet streamer and the density profile across this structure. Right panel: Helios 2 observations of magnetic field and plasma parameters across the heliospheric current sheet. From top to bottom: wind speed, magnetic field azimuthal angle, proton number density, density fluctuations and normalized density fluctuations, proton temperature, magnetic field magnitude, total pressure, and plasma beta, respectively. Image reproduced by permission from Bavassano et al. (1997), copyright by AGU. The magnetic energy spectrum as obtained by Coleman (1968). Spectral properties of the interplanetary medium have been summarized by Russell (1972), who published a composite spectrum of the radial component of magnetic fluctuations as observed by Mariner 2, Mariner 4, and OGO 5 (see Figure 21). 
The frequency spectrum so obtained was divided into three main ranges: i) up to about 10−4 Hz the spectral slope is about 1/f; ii) at intermediate frequencies 10−4 ≤ f ≤ 10−1 Hz a spectrum which roughly behaves as f−3/2 has been found; iii) the high-frequency part of the spectrum, up to 1 Hz, behaves as 1/f². The intermediate range of frequencies shows the same spectral properties as that introduced by Kraichnan (1965) in the framework of MHD turbulence. It is worth reporting that scatter plots of the values of the spectral index of the intermediate region do not allow us to distinguish between a Kolmogorov spectrum f−5/3 and a Kraichnan spectrum f−3/2 (Veltri, 1980).

A composite figure of the magnetic spectrum obtained by Russell (1972).

Only recently, Podesta et al. (2007) addressed again the problem of the spectral exponents of the kinetic and magnetic energy spectra in the solar wind. Their results, instead of clarifying once and for all the ambiguity between f−5/3 and f−3/2 scaling, placed new questions about this unsolved problem. As a matter of fact, Podesta et al. (2007) chose different time intervals between 1995 and 2003, lasting 2 or 3 solar rotations, during which the WIND spacecraft recorded solar wind velocity and magnetic field conditions. Figure 22 shows the results obtained for the time interval that lasted about 3 solar rotations between November 2000 and February 2001, and is representative also of the other analyzed time intervals. Quite unexpectedly, these authors found that the power-law exponents of velocity and magnetic field fluctuations often have values near 3/2 and 5/3, respectively. In addition, the kinetic energy spectrum is characterized by a power-law exponent slightly greater than or equal to 3/2 due to the effects of density fluctuations. It is worth mentioning that this difference was first observed by Salem (2000) years before but, at that time, the accuracy of the data was questioned (Salem et al., 2009). Thus, to corroborate previous results, Salem et al. (2009) investigated anomalous scaling and intermittency effects of both magnetic field and solar wind velocity fluctuations in the inertial range using WIND data. These authors used a wavelet technique for a systematic elimination of intermittency effects on spectra and structure functions in order to recover the actual scaling properties in the inertial range. They found that magnetic field and velocity fluctuations exhibit a well-defined, although different, monofractal behavior, following a Kolmogorov −5/3 scaling and an Iroshnikov-Kraichnan −3/2 scaling, respectively. These results are clearly opposite to the expected scaling for kinetic and magnetic fluctuations, which should follow Kolmogorov and Kraichnan scaling, respectively (see Section 2.8). However, as remarked by Roberts (2007), Voyager observations of the velocity spectrum have demonstrated a likely asymptotic state in which the spectrum steepens towards a spectral index of −5/3, finally matching the magnetic spectrum and the theoretical expectation of Kolmogorov turbulence. Moreover, the same authors examined Ulysses spectra to determine whether the Voyager result, based on a very few sufficiently complete intervals, was correct. Preliminary results confirmed the −5/3 slope for velocity fluctuations at ~5 AU from the Sun in the ecliptic. Figure 23, taken from Roberts (2007), shows the evolution of the spectral index during the radial excursion of Ulysses.
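A minimal sketch of the kind of spectral-index estimate that underlies the −5/3 versus −3/2 debate is given below: build data with a prescribed power-law spectrum (synthetic here), compute a Welch periodogram, and fit the log-log slope over an assumed inertial range. The cadence, frequencies and fitting band are arbitrary illustrative choices, not those of any of the data sets discussed above.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)

# Stand-in "solar wind" signal: noise shaped to an f^(-5/3) power spectral density.
n, dt = 2**18, 1.0                         # samples and cadence [s] (hypothetical)
freqs = np.fft.rfftfreq(n, dt)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-5.0 / 6.0)        # amplitude ~ f^(-5/6) gives PSD ~ f^(-5/3)
phases = np.exp(2j * np.pi * rng.random(freqs.size))
signal = np.fft.irfft(amp * phases, n)

# Welch power spectral density and a power-law fit over an assumed "inertial range".
f, psd = welch(signal, fs=1.0 / dt, nperseg=2**14)
band = (f > 1e-3) & (f < 1e-1)
slope, intercept = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
print(f"fitted spectral index: {slope:.2f} (expected about -5/3 for this test signal)")
```

In practice the difficulty is not the fit itself but the choice of interval and fitting band, which is one reason why −5/3 and −3/2 are hard to distinguish observationally.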
These authors examined many intervals in order to develop a more general picture of the spectral evolution in various conditions, and of how magnetic and velocity spectra differ in these cases. The general trend shown in Figure 23 is towards −5/3 as the distance increases. Lower values are due to the highly Alfvénic fast polar wind while higher values, around 2, are mainly due to the jumps at the stream fronts, as previously shown by Roberts (2007). Thus, the discrepancy between magnetic and velocity spectral slopes is only temporary and belongs to the evolutionary phase of the spectra towards a well-developed Kolmogorov-like turbulence spectrum. Horbury et al. (2008) performed a study on the anisotropy of the energy spectrum of magnetohydrodynamic (MHD) turbulence with respect to the magnetic field orientation to test the validity of the critical balance theory (Goldreich and Sridhar, 1995) in the space plasma environment. This theory predicts that the power spectrum P(k) would scale as f−5/3 when the angle θB between the mean field direction and the flow direction is 90°. On the other hand, in case θB = 0° the scaling would follow f−2. Moreover, the latter spectrum would also have a smaller energy content. Horbury et al. (2008) used 30 days of Ulysses magnetic field observations (1995, days 100 – 130) with a resolution of 1 second. At that time, Ulysses was immersed in the steady high-speed solar wind coming from the Sun's northern polar coronal hole at 1.4 AU from the Sun. These authors studied the anisotropies of the turbulence by measuring how the spacecraft-frame spectrum of magnetic fluctuations varies with θB. They adopted a method based on wavelet analysis which was sensitive to the frequent changes of the local magnetic field direction. The lower panel of Figure 24 clearly shows that for angles larger than about 45° the spectral index smoothly fluctuates around −5/3 while, for smaller angles, it tends to a value of −2, as predicted by the critical balance type of cascade. However, although the same authors recognize that a spectral index of −2 has not been routinely observed in the fast solar wind and that the range of θB over which the spectral index deviates from −5/3 is wider than expected, they consider these findings to be robust evidence of the validity of critical balance theory in the space plasma environment.

3.1.2 Experimental evaluation of Reynolds number in the solar wind

Properties of solar wind fluctuations have been widely studied in the past, relying on the "frozen-in approximation" (Taylor, 1938). The hypothesis at the basis of Taylor's approximation is that, since large integral scales in turbulence contain most of the energy, the advection due to the smallest turbulent scale fluctuations can be disregarded and, consequently, the advection of a turbulent field past an observer in a fixed location is considered solely due to the larger scales. In experimental physics, this hypothesis allows time series measured at a single point in space to be interpreted as spatial variations in the mean flow being swept past the observer. However, the canonical way to establish the presence of spatial structures relies on the computation of two-point, single-time measurements. Only recently has the simultaneous presence of several spacecraft sampling solar wind parameters made it possible to correlate simultaneous in-situ observations at two different locations in space. Matthaeus et al. (2005) and Weygand et al.
(2007) first evaluated the two-point correlation function using simultaneous measurements of the interplanetary magnetic field from the Wind, ACE, and Cluster spacecraft. Their technique allowed them to compute for the first time fundamental turbulence parameters previously determined from single-spacecraft measurements. In particular, these authors evaluated the correlation scale λC and the Taylor microscale λT, which allow the effective magnetic Reynolds number to be determined empirically.

Top panel: Trace of power in the magnetic field as a function of the angle between the local magnetic field and the sampling direction at a spacecraft frequency of 61 mHz. The larger scatter for θB > 90° is the result of fewer data points at these angles. Bottom panel: spectral index of the trace, fitted over spacecraft frequencies from 15 to 98 mHz. Image reproduced by permission from Horbury et al. (2008), copyright by APS.

As a matter of fact, there are three standard turbulence length scales which can be identified in a typical turbulence power spectrum as shown in Figure 25: the correlation length λC, the Taylor scale λT and the Kolmogorov scale λK. The correlation or integral length scale represents the largest separation distance over which eddies are still correlated, i.e., the largest turbulent eddy size. The Taylor scale is the scale size at which viscous dissipation begins to affect the eddies; it is several times larger than the Kolmogorov scale and marks the transition from the inertial range to the dissipation range. The Kolmogorov scale is the one that characterizes the smallest dissipation-scale eddies. The Taylor scale λT and the correlation length λC, as indicated in Figure 26, can be obtained from the two-point correlation function, being, respectively, the radius of curvature of the correlation function at the origin and the scale at which turbulent fluctuations are no longer correlated. Thus, λT can be obtained from a Taylor expansion of the two-point correlation function for r → 0 (Tennekes and Lumley, 1972):
$$R(r) \approx 1 - \frac{{r^2 }}{{2\lambda _T^2 }} + \ldots$$
where r is the spacecraft separation and R(r) = ⟨b(x) · b(x + r)⟩ is the auto-correlation function computed along the x direction for the fluctuating field b(x). On the other hand, the correlation length λC can be obtained by integrating the normalized correlation function along a chosen direction of integration ξ:
$$\lambda _C \approx \int_0^\infty {\frac{{R(\xi )}}{{R(0)}}} d\xi .$$

Typical interplanetary magnetic field power spectrum at 1 AU. The low-frequency range refers to Helios 2 observations (adapted from Bruno et al., 2009) while the high-frequency range refers to WIND observations (adapted from Leamon et al., 1998). Vertical dashed lines indicate the correlative, Taylor and Kolmogorov length scales.

Typical two-point correlation function. The Taylor scale λT and the correlation length λC are the radius of curvature of the correlation function at the origin (see inset graph) and the scale at which turbulent fluctuations are no longer correlated, respectively.

At this point, following Batchelor (1970) it is possible to obtain the effective magnetic Reynolds number:
$$R_m^{eff} = \left( {\frac{{\lambda _C }}{{\lambda _T }}} \right)^2 .$$
Figure 27 shows estimates of the correlation function from ACE-Wind for separation distances 20 − 350 RE and two sets of Cluster data for separations 0.02 − 0.04 RE and 0.4 − 1.2 RE, respectively.
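The construction just described can be mimicked in a few lines of code. The sketch below estimates the normalized two-point correlation from a single synthetic time series (with lags standing in for spatial separations via the Taylor hypothesis), fits the parabolic behaviour at small lags to get λT, integrates to the first zero crossing to get λC, and forms R_m^eff = (λC/λT)². All signals, scales, and fit ranges are stand-ins chosen for illustration; the real determination used multi-spacecraft data as described above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in magnetic field component b: smoothed noise, so it decorrelates over a finite lag.
n = 200_000
kernel = np.hanning(301)
kernel /= kernel.sum()
b = np.convolve(rng.standard_normal(n), kernel, mode="same")
b -= b.mean()

def autocorrelation(x, max_lag):
    """Normalized two-point correlation R(r)/R(0) for integer lags 0..max_lag."""
    var = np.mean(x * x)
    return np.array([np.mean(x[: x.size - r] * x[r:]) / var for r in range(max_lag + 1)])

R = autocorrelation(b, 2000)
r = np.arange(R.size)                        # lag in sample units (illustrative)

# Taylor scale: parabolic fit R(r) ~ 1 - r^2 / (2 lambda_T^2) at the smallest lags.
small = slice(1, 20)
coef = np.polyfit(r[small] ** 2, R[small], 1)      # R ~ coef[0]*r^2 + coef[1]
lambda_T = np.sqrt(-1.0 / (2.0 * coef[0]))

# Correlation length: integral of the normalized correlation function up to its first zero.
first_zero = np.argmax(R <= 0) if np.any(R <= 0) else R.size
lambda_C = np.sum(R[:first_zero])                  # rectangle-rule integral with dr = 1

Rm_eff = (lambda_C / lambda_T) ** 2
print(f"lambda_T = {lambda_T:.1f}, lambda_C = {lambda_C:.1f}, Rm_eff = {Rm_eff:.1f}")
```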
Estimates of the correlation function from ACE-Wind for separation distances 20 . 350 R E and two sets of Cluster data for separations 0.02 − 0.04 R E and 0.4 − 1.2 R E , respectively. Image adapted from Matthaeus et al. (2005). Following the definitions of λ C and λ T given above, Matthaeus et al. (2005) were able to fit the first data set of Cluster, i.e., the one with shorter separations, with a parabolic fit while they used an exponential fit for ACE-Wind and the second Cluster data set. These fits provided estimates for λ C and λ T from which these authors obtained the first empirical determination of R m eff which resulted to be of the order of 2.3 × 105, as illustrated in Figure 28. 3.1.3 Evidence for non-linear interactions As we said previously, Helios 2 s/c gave us the unique opportunity to study the radial evolution of turbulent fluctuations in the solar wind within the inner heliosphere. Most of the theoretical studies which aim to understand the physical mechanism at the base of this evolution originate from these observations (Bavassano et al., (1982b; Denskat and Neubauer, (1983). In Figure 29 we consider again similar observations taken by Helios 2 during its primary mission to the Sun together with observations taken by Ulysses in the ecliptic at 1.4 and 4.8 AU in order to extend the total radial excursion. Helios 2 power density spectra were obtained from the trace of the spectral matrix of magnetic field fluctuations, and belong to the same co-rotating stream observed on day 49, at a heliocentric distance of 0.9 AU, on day 75 at 0.7 AU and, finally, on day 104 at 0.3 AU. Ulysses spectra, constructed in the same way as those of Helios 2, were taken at 1.4 and 4.8 AU during the ecliptic phase of the orbit. Observations at 4.8 AU refer to the end of 1991 (fast wind period started on day 320, slow wind period started on day 338) while observations taken at 1.4 AU refer to fast wind observed at the end of August of 2007, starting on day 241:12. Left panel: parabolic fit at small scales in order to estimate λ T Right panel: exponential fit at intermediate and large scales in order to estimate λ C . The square of the ratio of these two length scales gives an estimate of the effective magnetic Reynolds number. Image adapted from Matthaeus et al. (2005). While the spectral index of slow wind does not show any radial dependence, being characterized by a single Kolmogorov type spectral index, fast wind is characterized by two distinct spectral slopes: about −1 within low frequencies and about a Kolmogorov like spectrum at higher frequencies. These two regimes are clearly separated by a knee in the spectrum often referred to as "frequency break". As the wind expands, the frequency break moves to lower and lower frequencies so that larger and larger scales become part of the Kolmogorov-like turbulence spectrum, i.e., of what we will indicate as "inertial range" (see discussion at the end of the previous section). Thus, the power spectrum of solar wind fluctuations is not solely function of frequency f, i.e., P(f), but it also depends on heliocentric distance r, i.e., P(f) → P(f, r). Figure 30 shows the frequency location of the spectral breaks observed in the left-hand-side panel of Figure 29 as a function of heliocentric distance The radial distribution of these 5 points suggests that the frequency break moves at lower and lower frequencies during the wind expansion following a power-law of the order of R−1.5. 
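Given a handful of (distance, break frequency) pairs like those plotted in Figure 30, the quoted radial trend is simply a power-law fit in log-log space. The values below are hypothetical placeholders, chosen only to be consistent with an R^−1.5 trend, since the actual five points are not tabulated in the text.

```python
import numpy as np

# Hypothetical (heliocentric distance [AU], spectral break frequency [Hz]) pairs,
# used only to illustrate the fitting procedure.
R_au  = np.array([0.3, 0.7, 0.9, 1.4, 4.8])
f_brk = np.array([6e-4, 1.7e-4, 1.2e-4, 6e-5, 1e-5])

slope, intercept = np.polyfit(np.log10(R_au), np.log10(f_brk), 1)
print(f"f_break ~ R^{slope:.2f}")   # a value near -1.5 would match the quoted trend
```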
Previous results, obtained for long data sets spanning hundreds of days and inevitably mixing fast and slow wind, were obtained by Matthaeus and Goldstein (1986) who found the breakpoint around 10 h at 1 AU, and Klein et al. (1992) who found that the breakpoint was near 16 h at 4 AU. Obviously, the frequency location of the breakpoint provided by these early determinations is strongly affected by the fact that mixing fast and slow wind would shift the frequency break to lower frequencies with respect to solely fast wind. In any case, this frequency break is strictly related to the correlation length (Klein, (1987) and the shift to lower frequency, during the wind expansion, is consistent with the growth of the correlation length observed in the inner (Bruno and Dobrowolny, (1986) and outer heliosphere (Matthaeus and Goldstein, (1982a). Analogous behavior for the low frequency shift of the spectral break, similar to the one observed in the ecliptic, has been reported by Horbury et al. (1996a) studying the rate of turbulent evolution over the Sun's poles. These authors used Ulysses magnetic field observations between 1.5 and 4.5 AU selecting mostly undisturbed, high speed polar flows. They found a radial gradient of the order of R−1.1, clearly slower than the one reported in Figure 30 or that can be inferred from results by Bavassano et al. (1982b) confirming that the turbulence evolution in the polar wind is slower than the one in the ecliptic, as qualitatively predicted by Bruno (1992), because of the lack of large scale stream shears. However, these results will be discussed more extensively in in Section 4.1. Left panel: power density spectra of magnetic field fluctuations observed by Helios 2 between 0.3 and 1 AU within the trailing edge of the same corotating stream shown in Figure 16, during the first mission to the Sun in 1976 and by Ulysses between 1.4 and 4.8 AU during the ecliptic phase. Ulysses observations at 4.8 AU refer to the end of 1991 while observations taken at 1.4 AU refer to the end of August of 2007. While the spectral index of slow wind does not show any radial dependence, the spectral break, clearly present in fast wind and marked by a blue dot, moves to lower and lower frequency as the heliocentric distance increases. Image adapted from Bruno et al. (2009). However, the phenomenology described above only apparently resembles hydrodynamic turbulence where the large eddies, below the frequency break, govern the whole process of energy cascade along the spectrum (Tu and Marsch, (1995b). As a matter of fact, when the relaxation time increases, the largest eddies provide the energy to be transferred along the spectrum and dissipated, with a decay rate approximately equal to the transfer rate and, finally, to the dissipation rate at the smallest wavelengths where viscosity dominates. Thus, we expect that the energy containing scales would loose energy during this process but would not become part of the turbulent cascade, say of the inertial range. Scales on both sides of the frequency break would remain separated. Accurate analysis performed in the solar wind (Bavassano et al., (1982b; Marsch and Tu, (1990b; Roberts, (1992) have shown that the low frequency range of the solar wind magnetic field spectrum radially evolves following the WKB model, or geometrical optics, which predicts a radial evolution of the power associated with the fluctuations ~ r−3. Moreover, a steepening of the spectrum towards a Kolmogorov like spectral index can be observed. 
On the contrary, the same in-situ observations established that the radial decay for the higher frequencies was faster than ~r−3 and the overall spectral slope remained unchanged. This means that the energy contained in the largest eddies does not decay as it would happen in hydrodynamic turbulence and, as a consequence, the largest eddies cannot be considered equivalent to the energy containing eddies identified in hydrodynamic turbulence So, this low frequency range is not separated from the inertial range but becomes part of it as the turbulence ages. These observations cast some doubts on the applicability of hydrodynamic turbulence paradigm to interplanetary MHD turbulence A theoretical help came from adopting a local energy transfer function (Tu et al., (1984; Tu, (1987a,b, (1988), which would take into account the non-linear effects between eddies of slightly differing wave numbers, together with a WKB description which would mainly work for the large scale fluctuations. This model was able to reproduce the displacement of the frequency break with distance by combining the linear WKB law and a model of nonlinear coupling besides most of the features observed in the magnetic power spectra P(f, r) observed by Bavassano et al. (1982b). In particular, the concept of the "frequency break", just mentioned, was pointed out for the first time by Tu et al. (1984) who, developing the analytic solution for the radially evolving power spectrum P(f, r) of fluctuations, obtained a critical frequency "fc" such that for frequencies f ≪ fc, P(f, r) ∝ f−1 and for f≫ fc, P(f, r) ∝ f−1.5. Radial dependence of the frequency break observed in the ecliptic within fast wind as shown in the previous Figure 29. The radial dependence seems to be governed by a power-law of the order of R−1.5. 3.1.4 Fluctuations anisotropy Interplanetary magnetic field (IMF) and velocity fluctuations are rather anisotropic as for the first time observed by Belcher and Davis Jr (1971); Belcher and Solodyna (1975); Chang and Nishida (1973); Burlaga and Turner (1976); Solodyna and Belcher (1976); Parker (1980); Bavassano et al. (1982a); Tu et al. (1989a); and Marsch and Tu (1990a). This feature can be better observed if fluctuations are rotated into the minimum variance reference system (see Appendix D). Sonnerup and Cahill (1967) introduced the minimum variance analysis which consists in determining the eigenvectors of the matrix $$S_{ij} = \left\langle {B_i B_j } \right\rangle - \left\langle {B_i } \right\rangle \left\langle {B_j } \right\rangle ,$$ where i and j denote the components of magnetic field along the axes of a given reference system. The statistical properties of eigenvalues approximately satisfy the following statements: One of the eigenvalues of the variance matrix is always much smaller than the others, say λ1 ≪ (λ2, λ3), and the corresponding eigenvector Ṽ1 is the minimum-variance direction (see Appendix D.1 for more details). This indicates that, at least locally, the magnetic fluctuations are confined in a plane perpendicular to the minimum-variance direction. In the plane perpendicular to Ṽ1, fluctuations appear to be anisotropically distributed, say λ3 > λ2. Typical values for eigenvalues are λ3 : λ2 : λ1 = 10 : 3.5 : 1.2 (Chang and Nishida, (1973; Bavassano et al., (1982a). The direction Ṽ1 is nearly parallel to the average magnetic field B0, that is, the distribution of the angles between Ṽ1 and B0 is narrow with width of about 10° and centered around zero. 
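A minimal numerical sketch of the minimum variance analysis just described is given below: it builds the variance matrix S_ij = ⟨B_iB_j⟩ − ⟨B_i⟩⟨B_j⟩ for a synthetic field with a mean component and anisotropic fluctuations, and extracts its eigensystem with numpy. The field, its fluctuation amplitudes, and the resulting eigenvalue ratios are invented for illustration; they are not the Helios values quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic magnetic field: mean field along z plus anisotropic fluctuations that are
# mostly transverse to it (illustrative amplitudes only).
n = 50_000
B0 = np.array([0.0, 0.0, 5.0])
fluct = rng.standard_normal((n, 3)) * np.array([1.0, 0.6, 0.2])
B = B0 + fluct

def minimum_variance(B):
    """Sonnerup-Cahill variance matrix S_ij = <B_i B_j> - <B_i><B_j> and its eigensystem."""
    S = np.cov(B, rowvar=False, bias=True)      # equals <B_i B_j> - <B_i><B_j>
    eigval, eigvec = np.linalg.eigh(S)          # eigenvalues in ascending order
    return eigval, eigvec                       # eigvec[:, 0] is the minimum-variance direction

lam, vec = minimum_variance(B)
angle = np.degrees(np.arccos(abs(vec[:, 0] @ B0) / np.linalg.norm(B0)))
print("eigenvalue ratios lambda3:lambda2:lambda1 =", np.round(lam[::-1] / lam[0], 1))
print(f"angle between minimum-variance direction and B0: {angle:.1f} deg")
```

For a field like this one, the minimum-variance direction comes out nearly aligned with B0, which is the behaviour described in the third property above.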
As shown in Figure 31, in this new reference system it is readily seen that the maximum and intermediate components have much more power compared with the minimum variance component. Generally, this kind of anisotropy characterizes Alfvénic intervals and, as such, it is more commonly found within high velocity streams (Marsch and Tu, (1990a). A systematic analysis for both magnetic and velocity fluctuations was performed by Klein et al. (1991, (1993) between 0.3 and 10 AU. These studies showed that magnetic field and velocity minimum variance directions are close to each other within fast wind and mainly clustered around the local magnetic field direction. The effects of expansion are such as to separate field and velocity minimum variance directions. While magnetic field fluctuations keep their minimum variance direction loosely aligned with the mean field direction, velocity fluctuations tend to have their minimum variance direction oriented along the radial direction. The depleted alignment to the background magnetic field would suggest a smaller anisotropy of the fluctuations. As a matter of fact, Klein et al. (1991) found that the degree of anisotropy, which can be defined as the ratio between the power perpendicular to and that along the minimum variance direction, decreases with heliocentric distance in the outer heliosphere. At odds with these conclusions were the results by Bavassano et al. (1982a) who showed that the ratio λ1/λ3, calculated in the inner heliosphere within a co-rotating high velocity stream, clearly decreased with distance, indicating that the degree of magnetic anisotropy increased with distance Moreover, this radial evolution was more remarkable for fluctuations of the order of a few hours than for those around a few minutes. Results by Klein et al. (1991) in the outer heliosphere and by Bavassano et al. (1982a) in the inner heliosphere remained rather controversial until recent studies (see Section 10.2), performed by Bruno et al. (1999b), found a reason for this discrepancy. A different approach to anisotropic fluctuations in solar wind turbulence have been made by Bigazzi et al. (2006) and Sorriso-Valvo et al. (2006, (2010b). In these studies the full tensor of the mixed second-order structure functions has been used to quantitatively measure the degree of anisotropy and its effect on small-scale turbulence through a fit of the various elements of the tensor on a typical function (Sorriso-Valvo et al., (2006). Moreover three different regions of the near-Earth space have been studied, namely the solar wind, the Earth's foreshock and magnetosheath showing that, while in the undisturbed solar wind the observed strong anisotropy is mainly due to the largescale magnetic field, near the magnetosphere other sources of anisotropy influence the magnetic field fluctuations (Sorriso-Valvo et al., (2010b). Power density spectra of the three components of IMF after rotation into the minimum variance reference system. The black curve corresponds to the minimum variance component, the blue curve to the maximum variance, and the red one to the intermediate component. This case refers to fast wind observed at 0.3 AU and the minimum variance direction forms an angle of ~ 8° with respect to the ambient magnetic field direction. Thus, most of the power is associated with the two components quasi-transverse to the ambient field. 
3.1.5 Simulations of anisotropic MHD

In the presence of a DC background magnetic field B0 which, differently from the bulk velocity field, cannot be eliminated by a Galilean transformation, MHD incompressible turbulence becomes anisotropic (Shebalin et al., 1983; Montgomery, 1982; Zank and Matthaeus, 1992; Carbone and Veltri, 1990; Oughton, 1993). The main effect produced by the presence of the background field is to generate an anisotropic distribution of wave vectors as a consequence of the dependence of the characteristic time for the non-linear coupling on the angle between the wave vector and the background field. This effect can be easily understood if one considers the MHD equations. Due to the presence of a term (B0 · ∇)z±, which describes the convection of perturbations in the average magnetic field, the non-linear interactions between Alfvénic fluctuations are weakened, since convection decorrelates the interacting eddies on a time of the order (k · B0)−1. Clearly, fluctuations with wave vectors almost perpendicular to B0 are affected by such an effect much less than fluctuations with k ∥ B0. As a consequence, the former are transferred along the spectrum much faster than the latter (Shebalin et al., 1983; Grappin, 1986; Carbone and Veltri, 1990). To quantify anisotropy in the distribution of wave vectors k for a given dynamical variable Q(k, t) (namely the energy, cross-helicity, etc.), it is useful to introduce the parameter
$$\Omega _Q = \tan ^{ - 1} \sqrt {\frac{{\left\langle {k_ \bot ^2 } \right\rangle _Q }}{{\left\langle {k_\parallel ^2 } \right\rangle _Q }}}$$
(Shebalin et al., 1983; Carbone and Veltri, 1990), where the average of a given quantity g(k) is defined as
$$\left\langle {g(k)} \right\rangle _Q = \frac{{\int {d^3 k\,g(k)Q(k,t)} }}{{\int {d^3 k\,Q(k,t)} }}.$$
For a spectrum with wave vectors perpendicular to B0 we have a spectral anisotropy Ω = 90°, while for an isotropic spectrum Ω = 45°. Numerical simulations in a 2D configuration by Shebalin et al. (1983) confirmed the occurrence of anisotropy, and found that anisotropy increases with the Reynolds number. Unfortunately, in these old simulations, the Reynolds numbers used are too small to achieve a well-defined spectral anisotropy. Carbone and Veltri (1990) started from the spectral equations obtained through the Direct Interaction Approximation closure by Veltri et al. (1982), and derived a shell model analogue for anisotropic MHD turbulence. Of course the anisotropy is over-simplified in the model, in particular the Alfvén time is assumed isotropic. However, the model was useful to investigate spectral anisotropy at very high Reynolds numbers. The phenomenological anisotropic spectrum obtained from the model, for both pseudo-energies obtained through polarizations a = 1, 2 defined through Equation (18), can be written as
$$E_a^ \pm (k,t) \sim C_a^ \pm \left[ {\ell _{||}^2 k_{||}^2 + \ell _ \bot ^2 k_ \bot ^2 } \right]^{ - \mu ^ \pm } .$$
The spectral anisotropy is different within the injection, inertial, and dissipative ranges of turbulence (Carbone and Veltri, 1990). Wave vectors perpendicular to B0 are present in the spectrum, but when the process of energy transfer generates a strong anisotropy (at small times), a competing process takes place which redistributes the energy over all wave vectors. The dynamical balance between these tendencies fixes the value of the spectral anisotropy Ω ≃ 55° in the inertial range.
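The anisotropy angle Ω_Q defined above is straightforward to evaluate numerically once a spectrum Q(k) is given on a grid. The sketch below does this for two toy spectra on a 2D (k_∥, k_⊥) grid, mirroring the 2D geometry of the early simulations; the grid and the spectra are invented purely for illustration and are not the shell-model spectra discussed in the text.

```python
import numpy as np

# 2D grid of wave vectors: k_par along B0, k_perp in the direction perpendicular to it.
k_par, k_perp = np.meshgrid(np.linspace(0.01, 10.0, 200),
                            np.linspace(0.01, 10.0, 200), indexing="ij")

def omega_Q(Q):
    """Spectral anisotropy angle Omega_Q = arctan sqrt( <k_perp^2>_Q / <k_par^2>_Q )."""
    mean_perp2 = np.sum(k_perp ** 2 * Q) / np.sum(Q)
    mean_par2 = np.sum(k_par ** 2 * Q) / np.sum(Q)
    return np.degrees(np.arctan(np.sqrt(mean_perp2 / mean_par2)))

# Two toy spectra, chosen only for illustration: an isotropic one and one elongated in k_perp.
Q_iso = np.exp(-(k_par ** 2 + k_perp ** 2) / 10.0)
Q_aniso = np.exp(-(k_par ** 2 / 1.0 + k_perp ** 2 / 25.0))

print(f"isotropic spectrum:        Omega = {omega_Q(Q_iso):.1f} deg (expected 45)")
print(f"k_perp-elongated spectrum: Omega = {omega_Q(Q_aniso):.1f} deg (> 45)")
```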
On the contrary, since the redistribution of energy cannot take place, in the dissipation domain the spectrum remains strongly anisotropic, with Ω ≃ 80°. When the Reynolds number increases, the contribution of the inertial range extends, and the increases of the total anisotropy tends to saturate at about Ω ≃ 60° at Reynolds number of 105. This value corresponds to a rather low value for the ratio between parallel and perpendicular correlation lengths ℓ∥/ℓ⊥ ≥ 2, too small with respect to the observed value ℓ∥/ℓ⊥ ≥ 10. This suggests that the non-linear dynamical evolution of an initially isotropic spectrum of turbulence is perhaps not sufficient to explain the observed anisotropy. These results have been confirmed numerically (Oughton et al., (1994). 3.1.6 Spectral anisotropy in the solar wind The correlation time, as defined in Appendix A, estimates how much an element of our time series x(t) at time t1 depends on the value assumed by x(t) at time t0, being t1 = t0 + δt. This concept can be transferred from the time domain to the space domain if we adopt the Taylor hypothesis and, consequently, we can talk about spatial scales. Correlation lengths in the solar wind generally increase with heliocentric distance (Matthaeus and Goldstein, (1982b; Bruno and Dobrowolny, (1986), suggesting that large scale correlations are built up during the wind expansion. This kind of evolution is common to both fast and slow wind as shown in Figure 32, where we can observe the behavior of the B z correlation function for fast and slow wind at 0.3 and 0.9 AU. Correlation function just for the Z component of interplanetary magnetic field as observed by Helios 2 during its primary mission to the Sun. The blue color refers to data recorded at 0.9 AU while the red color refers to 0.3 AU. Solid lines refer to fast wind, dashed lines refer to slow wind. Moreover, the fast wind correlation functions decrease much faster than those related to slow wind. This behavior reflects also the fact that the stochastic character of Alfvénic fluctuations in the fast wind is very efficient in decorrelating the fluctuations of each of the magnetic field components. More detailed studies performed by Matthaeus et al. (1990) provided for the first time the twodimensional correlation function of solar wind fluctuations at 1 AU. The original dataset comprised approximately 16 months of almost continuous magnetic field 5-min averages. These results, based on ISEE 3 magnetic field data, are shown in Figure 33, also called the "The Maltese Cross". This figure has been obtained under the hypothesis of cylindrical symmetry. Real determination of the correlation function could be obtained only in the positive quadrant, and the whole plot was then made by mirroring these results on the remaining three quadrants. The iso-contour lines show contours mainly elongated along the ambient field direction or perpendicular to it. Alfvénic fluctuations with k ⊥ B0 contribute to contours elongated parallel to r⊥. Fluctuations in the two-dimensional turbulence limit (Montgomery, (1982) contribute to contours elongated parallel to r⊥. This two-dimensional turbulence is characterized for having both the wave vector k and the perturbing field δb perpendicular to the ambient field B0. Given the fact that the analysis did not select fast and slow wind, separately, it is likely that most of the slab correlations came from the fast wind while the 2D correlations came from the slow wind. As a matter of fact, Dasso et al. 
(2005), using 5 years of spacecraft observations at roughly 1 AU, showed that fast streams are dominated by fluctuations with wavevectors quasi-parallel to the local magnetic field, while slow streams are dominated by quasi-perpendicular fluctuation wavevectors. Anisotropic turbulence has been observed in laboratory plasmas and reverse pinch devices (Zweben et al., (1979). Bieber et al. (1996) formulated an observational test to distinguish the slab (Alfvénic) from the 2D component within interplanetary turbulence These authors assumed a mixture of transverse fluctuations, some of which have wave vectors perpendicular k ⊥ B0 and polarization of fluctuations δB(k⊥) perpendicular to both vectors (2D geometry with k ∥ ≃ 0), and some parallel to the mean magnetic field k ∥ B0, the polarization of fluctuations δB(k∥) being perpendicular to the direction of B0 (slab geometry with k⊥ ≃ 0). The magnetic field is then rotated into the same mean field coordinate system used by Belcher and Davis Jr (1971) and Belcher and Solodyna (1975), where the y-coordinate is perpendicular to both B0 and the radial direction, while the x-coordinate is perpendicular to B0 but with a component also in the radial direction. Using that geometry, and defining the power spectrum matrix as $$P_{ij} (k) = \frac{1} {{(2\pi )^3 }}\int {d^3 r} \left\langle {B_i (x)B_j (x + r)} \right\rangle e^{ - ik \cdot r} ,$$ it can be found that, assuming axisymmetry, a two-component model can be written in the frequency domain $$f P_{yy} (f) = rC_s \left( {\frac{{2\pi f}} {{U_w \cos \psi }}} \right)^{1 - q} + (1 - r)C_s \frac{{2q}} {{(1 + q)}}\left( {\frac{{2\pi f}} {{U_w \sin \psi }}} \right)^{1 - q} ,$$ $$f P_{xx} (f) = rC_s \left( {\frac{{2\pi f}} {{U_w \cos \psi }}} \right)^{1 - q} + (1 - r)C_s \frac{2} {{(1 + q)}}\left( {\frac{{2\pi f}} {{U_w \sin \psi }}} \right)^{1 - q} ,$$ where the anisotropic energy spectrum is the sum of both components: $$fT(f) = 2rC_s \left( {\frac{{2\pi f}} {{U_w \cos \psi }}} \right)^{1 - q} + 2(1 - r)C_s \left( {\frac{{2\pi f}} {{U_w \sin \psi }}} \right)^{1 - q} .$$ Here f is the frequency, C s is a constant defining the overall spectrum amplitude in wave vector space, U w is the bulk solar wind speed and ψ is the angle between B0 and the wind direction. Finally, r is the fraction of slab components and (1 − r) is the fraction of 2D components. Contour plot of the 2D correlation function of interplanetary magnetic field fluctuations as a function of parallel and perpendicular distance with respect to the mean magnetic field. The separation in r∥ and r⊥ is in units of 1010 cm. Image reproduced by permission from Matthaeus et al. (1990), copyright by AGU. The ratio test adopted by these authors was based on the ratio between the reduced perpendicular spectrum (fluctuations ⊥ to the mean field and solar wind flow direction) and the reduced quasi-parallel spectrum (fluctuations ⊥ to the mean field and in the plane defined by the mean field and the flow direction). This ratio, expected to be 1 for slab turbulence, resulted to be ~ 1.4 for fluctuations within the inertial range, consistent with 74% of 2D turbulence and 26% of slab. A further test, the anisotropy test, evaluated how the spectrum should vary with the angle between the mean magnetic field and the flow direction of the wind. The measured slab spectrum should decrease with the field angle while the 2D spectrum should increase, depending on how these spectra project on the flow direction. 
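The two-component model written above is easy to evaluate numerically. The sketch below codes the reduced spectra fP_yy and fP_xx exactly as given in the equations and prints their ratio, which is the quantity used in the ratio test. All parameter values (slab fraction r, spectral index q, amplitude Cs, wind speed Uw, field-to-flow angle ψ) are illustrative choices, not fits to data.

```python
import numpy as np

def reduced_spectra(f, r, q, Cs, Uw, psi):
    """Two-component (slab + 2D) model for the reduced spectra f*P_yy and f*P_xx."""
    slab = (2.0 * np.pi * f / (Uw * np.cos(psi))) ** (1.0 - q)
    twoD = (2.0 * np.pi * f / (Uw * np.sin(psi))) ** (1.0 - q)
    fPyy = r * Cs * slab + (1.0 - r) * Cs * (2.0 * q / (1.0 + q)) * twoD
    fPxx = r * Cs * slab + (1.0 - r) * Cs * (2.0 / (1.0 + q)) * twoD
    return fPyy, fPxx

# Illustrative parameters: 20% slab, 80% 2D, Kolmogorov-like q = 5/3, 400 km/s wind,
# 45-degree angle between the mean field and the flow.
f = np.logspace(-4, -1, 50)                 # spacecraft-frame frequencies [Hz]
fPyy, fPxx = reduced_spectra(f, r=0.2, q=5.0 / 3.0, Cs=1.0, Uw=400.0, psi=np.radians(45.0))
print("P_yy / P_xx ratio in this model:", np.round((fPyy / fPxx)[::10], 3))
```

With these particular numbers the ratio comes out close to 1.5, of the same order as the value of about 1.4 quoted above for a predominantly 2D mixture, which is what the ratio test exploits.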
The results from this test were consistent with 95% of 2D turbulence and 5% of slab. In other words, the slab turbulence due to Alfvénic fluctuations would be a minor component of interplanetary MHD turbulence. A third test, derived from the Mach number scaling associated with the nearly incompressible theory (Zank and Matthaeus, 1992), assigned the same fraction ~ 80% to the 2D component. However, the data base for this analysis was derived from Helios magnetic measurements, and all data were recorded near times of solar energetic particle events. Moreover, the quasi-totality of the data belonged to slow solar wind (Wanner and Wibberenz, 1993) and, as such, this analysis cannot be representative of the whole phenomenon of turbulence in the solar wind. As a matter of fact, using Ulysses observations, Smith (2003) found that in the polar wind the percentage of slab and 2D components is about the same, that is, the high-latitude slab component turns out to be unusually high as compared with ecliptic observations. Successive theoretical works by Ghosh et al. (1998a,b), in which they used compressible models in a large variety of cases, were able to obtain, in some cases, parallel and perpendicular correlations similar to those obtained in the solar wind. However, they concluded that the "Maltese" cross does not come naturally from the turbulent evolution of the fluctuations but strongly depends on the initial conditions adopted when the simulation starts. It seems that the existence of these correlations in the initial data represents an unavoidable constraint. Moreover, they also stressed the importance of time-averaging, since the interaction between slab waves and transverse pressure-balanced magnetic structures causes the slab turbulence to evolve towards a state in which a two-component correlation function emerges during the process of time averaging. The presence of two populations, i.e., a slab-like and a quasi-2D-like one, was also inferred by Dasso et al. (2003). These authors computed the reduced spectra of the normalized cross-helicity and the Alfvén ratio from the ACE dataset. These parameters, calculated for different intervals of the angle θ between the flow direction and the orientation of the mean field B0, showed a remarkable dependence on θ. The geometry used in these analyses assumes that the energy spectrum in the rest frame of the plasma is axisymmetric and invariant for rotations about the direction of B0. Even if these assumptions are good when we want to translate results coming from 2D numerical simulations to 3D geometry, they are quite in contrast with the observational fact that the eigenvalues of the variance matrix are different, namely λ3 ≠ λ2. Going back from the correlation tensor to the power spectrum is a complicated technical problem. However, Carbone et al. (1995a) derived a description of the observed anisotropy in terms of a model for the three-dimensional energy spectra of magnetic fluctuations. The divergence-free condition on the magnetic field allows one to decompose the Fourier amplitudes of magnetic fluctuations into two independent polarizations: the first one, I[1](k), corresponds, in the weak turbulence theory, to the Alfvénic mode, while the second polarization, I[2](k), corresponds to the magnetosonic mode. By using only the hypothesis that the medium is statistically homogeneous and some algebra, the authors found that the energy spectra of both polarizations can be related to the two-point correlation tensor and to the variance matrix.
Through numerical simulations of the shell model (see later in the review) it has been shown that the anisotropic energy spectrum can be described in the inertial range by a phenomenological expression
$$I^{[s]} (k) = C_s \left[ {\left( {\ell _x^{[s]} k_x } \right)^2 + \left( {\ell _y^{[s]} k_y } \right)^2 + \left( {\ell _z^{[s]} k_z } \right)^2 } \right]^{ - 1 - \mu _s /2} ,$$
where k_i are the Cartesian components of the wave vector k, and C_s, ℓ_i^[s], and μ_s (s = 1, 2 indicates both polarizations; i = x, y, z) are free parameters. In particular, C_s gives information on the energy content of both polarizations, ℓ_i^[s] represent the spectral extensions along the directions of a given system of coordinates, and μ_s are two spectral indices. A fit to the eigenvalues of the variance matrix allowed Carbone et al. (1995a) to fix the free parameters of the spectrum for both polarizations. They used data from Bavassano et al. (1982a), who reported the values of λ_i at five wave vectors calculated at three heliocentric distances, selecting periods of high correlation (Alfvénic periods) using magnetic field measured by the Helios 2 spacecraft. They found that the spectral indices of both polarizations, in the ranges 1.1 ≤ μ1 ≤ 1.3 and 1.46 ≤ μ2 ≤ 1.8, increase systematically with increasing distance from the Sun, that the polarization [2] spectra are always steeper than the corresponding polarization [1] spectra, and that polarization [1] is always more energetic than polarization [2]. As far as the characteristic lengths are concerned, it can be found that ℓ_x^[1] > ℓ_y^[1] ≫ ℓ_z^[1], indicating that wave vectors k ∥ B0 largely dominate. Concerning polarization [2], it can be found that ℓ_x^[2] ≫ ℓ_y^[2] ≃ ℓ_z^[2], indicating that the spectrum I^[2](k) is strongly flat on the plane defined by the directions of B0 and the radial direction. Within this plane, the energy distribution does not present any relevant anisotropy. Let us compare these results with those by Matthaeus et al. (1990), the comparison being significant as far as the yz plane is taken into account. The decomposition of Carbone et al. (1995a) into two independent polarizations is similar to that of Matthaeus et al. (1990): a contour plot of the trace of the correlation tensor Fourier transform T(k) = I^[1](k) + I^[2](k) on the plane (k_y; k_z) shows two populations of fluctuations, with wave vectors nearly parallel and nearly perpendicular to B0, respectively. The first population is formed by all the polarization [1] fluctuations and by the fluctuations with k ∥ B0 belonging to polarization [2]. The latter fluctuations are physically indistinguishable from the former, in that when k is nearly parallel to B0, both polarization vectors are quasi-perpendicular to B0. On the contrary, the second population is almost entirely formed by fluctuations belonging to polarization [2]. While it is clear that fluctuations with k nearly parallel to B0 are mainly polarized in the plane perpendicular to B0 (a consequence of ∇ · B = 0), fluctuations with k nearly perpendicular to B0 are polarized nearly parallel to B0. Although both models yield the occurrence of two populations, Matthaeus et al. (1990) give an interpretation of their results which is in contrast with that of Carbone et al. (1995a). Namely, Matthaeus et al. (1990) suggest that a nearly 2D incompressible turbulence characterized by wave vectors and magnetic fluctuations, both perpendicular to B0, is present in the solar wind.
However, this interpretation does not arise from data analysis, but rather from the 2D numerical simulations by Shebalin et al. (1983) and from analytical studies (Montgomery, 1982). Let us note, however, that in the former approach, which is strictly 2D, when k ⊥ B0 magnetic fluctuations are necessarily parallel to B0. In the latter one, along with incompressibility, it is assumed that the energy in the fluctuations is much less than in the DC magnetic field; both hypotheses do not apply to the solar wind case. On the contrary, results by Carbone et al. (1995a) can be directly related to the observational data. In any case, it is worth reporting that a model like that discussed here, that is a superposition of fluctuations with both slab and 2D components, has been used to describe turbulence also in the Jovian magnetosphere (Saur et al., 2002, 2003). In addition, several theoretical and observational works indicate that there is a competition between the radial axis and the mean field axis in shaping the polarization and spectral anisotropies in the solar wind. In this respect, Grappin and Velli (1996) used numerical simulations of the MHD equations which included expansion effects (Expanding Box Model) to study the formation of anisotropy in the wind and the interaction of Alfvén waves with transverse magnetic structures. These authors found that a large-scale isotropic Alfvénic eddy stretched by expansion naturally mixes with smaller-scale transverse Alfvén waves with a different anisotropy. Saur and Bieber (1999), on the other hand, employed three different tests on about three decades of solar wind observations at 1 AU in order to better understand the anisotropic nature of solar wind fluctuations. Their data analysis strongly supported the composite model of a turbulence made of slab and 2-D fluctuations. Narita et al. (2011b), using the four Cluster spacecraft, determined the three-dimensional wave-vector spectra of fluctuating magnetic fields in the solar wind within the inertial range. These authors found that the spectra are anisotropic throughout the analyzed frequency range and that the power is extended primarily in the directions perpendicular to the mean magnetic field, as might be expected of 2-D turbulence; however, the analyzed fluctuations cannot be considered axisymmetric. Finally, Turner et al. (2011) suggested that the non-axisymmetric anisotropy of the frequency spectrum observed using in-situ observations may simply arise from a sampling effect related to the fact that the s/c samples three-dimensional fluctuations as a one-dimensional series and that the energy density is not equally distributed among the different scales (i.e., spectral index > 1). 3.1.7 Magnetic helicity Magnetic helicity Hm, as defined in Appendix B.1, measures the "knottedness" of magnetic field lines (Moffatt, 1978). Moreover, Hm is a pseudo scalar and changes sign for coordinate inversion. The plus or minus sign, for circularly polarized magnetic fluctuations in a slab geometry, indicates right or left-hand polarization. Statistical information about the magnetic helicity is derived from the Fourier transform of the magnetic field auto-correlation matrix R_ij(r) = 〈B_i(x) · B_j(x+r)〉, as shown by Matthaeus and Goldstein (1982b). While the trace of the symmetric part of the spectral matrix accounts for the magnetic energy, the imaginary part of the spectral matrix accounts for the magnetic helicity (Batchelor, 1970; Montgomery, 1982; Matthaeus and Goldstein, 1982b).
However, what is really available from in-situ measurements in space experiments are data from a single spacecraft, and we can obtain values of R only for collinear sequences of r along the x direction, which corresponds to the radial direction from the Sun. In these conditions the Fourier transform of R allows us to obtain only a reduced spectral tensor along the radial direction, so that Hm(k) will depend only on the wave-number k in this direction. Although the reduced spectral tensor does not carry the complete spectral information of the fluctuations, for slab and isotropic symmetries it contains all the information of the full tensor. The expression used by Matthaeus and Goldstein (1982b) to compute the reduced Hm is given in Appendix B.2. In the following, we will drop the suffix r for the sake of simplicity. The general features of the reduced magnetic helicity spectrum in the solar wind were described for the first time by Matthaeus and Goldstein (1982b) in the outer heliosphere, and by Bruno and Dobrowolny (1986) in the inner heliosphere. A useful dimensionless way to represent both the degree and the sense of polarization is the normalized magnetic helicity σm (see Appendix B.2). This quantity can randomly vary between +1 and −1, as shown in Figure 34 from the work by Matthaeus and Goldstein (1982b), relative to Voyager's data taken at 1 AU. However, net values of ±1 are reached only for pure circularly polarized waves. Based on these results, Goldstein et al. (1991) were able to reproduce the distribution of the percentage of occurrence of values of σm(f) adopting a model in which the magnitude of the magnetic field was allowed to vary in a random way while the tip of the vector moved near the surface of a sphere. In this way they showed that the interplanetary magnetic field helicity measurements were inconsistent with the previous idea that fluctuations were randomly circularly polarized at all scales and were also magnitude preserving. σm vs. frequency and wave number relative to an interplanetary data sample recorded by Voyager 1 at approximately 1 AU. Image reproduced by permission from Matthaeus and Goldstein (1982b), copyright by AGU. However, evidence for circularly polarized MHD waves in the high frequency range was provided by Polygiannakis et al. (1994), who studied interplanetary magnetic field fluctuations from various datasets at various distances ranging from 1 to 20 AU. They also concluded that the difference between left- and right-hand polarizations is significant and continuously varying. As already noticed by Smith et al. (1983, 1984), knowing the sign of σm and the sign of the normalized cross-helicity σc it is possible to infer the sense of polarization of the fluctuations. As a matter of fact, a positive cross-helicity indicates an Alfvén mode propagating outward, while a negative cross-helicity indicates a mode propagating inward. On the other hand, we know that a positive magnetic helicity indicates a right-hand polarized mode, while a negative magnetic helicity indicates a left-hand polarized mode. Thus, since the sense of polarization depends on the propagating direction with respect to the observer, σm(f)σc(f) < 0 will indicate right circular polarization while σm(f)σc(f) > 0 will indicate left circular polarization.
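To make the procedure concrete, the following minimal Python sketch (an illustration, not the pipeline of the cited works) estimates the reduced normalized magnetic helicity from the two magnetic field components transverse to the sampling (radial) direction, using the standard relation σm(f) = 2 Im(S_yz)/(S_yy + S_zz); the function name, the single-periodogram estimate without windowing or segment averaging, and the treatment of units are assumptions of this example, and the overall sign depends on the adopted Fourier and coordinate-handedness conventions.

    import numpy as np

    def normalized_magnetic_helicity(by, bz, dt):
        """Reduced normalized magnetic helicity sigma_m(f) from the two field
        components transverse to the sampling (radial) direction.
        Single-periodogram estimate; the overall sign depends on the Fourier
        and coordinate conventions adopted."""
        by = np.asarray(by, dtype=float) - np.mean(by)
        bz = np.asarray(bz, dtype=float) - np.mean(bz)
        freqs = np.fft.rfftfreq(len(by), d=dt)
        By, Bz = np.fft.rfft(by), np.fft.rfft(bz)
        syy, szz = np.abs(By) ** 2, np.abs(Bz) ** 2   # auto-spectra of y and z
        syz = By * np.conj(Bz)                        # cross-spectrum
        sigma_m = 2.0 * np.imag(syz) / (syy + szz)
        return freqs[1:], sigma_m[1:]                 # drop the zero-frequency bin

    # Combined with the normalized cross-helicity sigma_c of the same interval,
    # sigma_m * sigma_c < 0 would indicate right-hand and > 0 left-hand
    # circular polarization, following the rule quoted above.

Combined with the sign of σc measured over the same interval, the product σm σc can then be used to infer the sense of polarization as described above.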
Thus, each time magnetic helicity and cross-helicity are available from measurements in a super-Alfvénic flow, it is possible to infer the rest frame polarization of the fluctuations from single-point measurements, assuming the validity of the slab geometry. The high variability of σm, observable in Voyager's data (see Figure 34), was equally observed in Helios 2 data in the inner heliosphere (Bruno and Dobrowolny, 1986). The authors of this last work computed the difference (MH > 0) − |MH < 0| between positive and (the absolute value of) negative magnetic helicity for different frequency bands and noticed that most of the resulting magnetic helicity was contained in the lowest frequency band. This result supported the theoretical prediction of an inverse cascade of magnetic helicity from the smallest to the largest scales during turbulence development (Pouquet et al., 1976). Numerical simulations of the incompressible MHD equations by Mininni et al. (2003a), discussed in Section 3.1.9, clearly confirm the tendency of magnetic helicity to follow an inverse cascade. The generation of magnetic field in turbulent plasmas and the subsequent inverse cascade have strong implications for the emergence of large-scale magnetic fields in stars, in the interplanetary medium and in planets (Brandenburg, 2001). This phenomenon was first demonstrated in numerical simulations based on the eddy damped quasi normal Markovian (EDQNM) closure model of three-dimensional MHD turbulence by Pouquet et al. (1976). Subsequently, other investigators confirmed such a tendency for the magnetic helicity to develop an inverse cascade (Meneguzzi et al., 1981; Cattaneo and Hughes, 1996; Brandenburg, 2001). Mininni et al. (2003a) performed the first direct numerical simulations of turbulent Hall dynamo. They showed that the Hall current can have strong effects on turbulent dynamo action, enhancing or even suppressing the generation of the large-scale magnetic energy. These authors injected a weak magnetic field at small scales in a system kept in a stationary regime of hydrodynamic turbulence and followed the exponential growth of magnetic energy due to the dynamo action. This evolution can be seen in Figure 35 in the same format described for Figure 40, shown in Section 3.1.9. Now, the forcing is applied at wave number kforce = 10 in order to give enough room for the inverse cascade to develop. The fluid is initially in a strongly turbulent regime as a result of the action of the external force at wave number kforce = 10. An initial magnetic fluctuation is introduced at t = 0 at kseed = 35. The magnetic energy starts growing exponentially fast and, when saturation is reached, the magnetic energy is larger than the kinetic energy. Notably, it is much larger at the largest scales of the system (i.e., k = 1). At these large scales, the system is very close to a magnetostatic equilibrium characterized by a force-free configuration. Still from a movie showing a numerical simulation of the incompressible MHD equations in three dimensions, assuming periodic boundary conditions (see details in Mininni et al., 2003a). The left panel shows the power spectra for kinetic energy (green), magnetic energy (red), and total energy (blue) vs. time. The right panel shows the spatially integrated kinetic, magnetic, and total energies vs. time. The vertical (orange) line indicates the current time. These results correspond to a 128^3 simulation with an external force applied at wave number kforce = 10 (movie kindly provided by D. Gómez).
(For video see appendix) 3.1.8 Alfvén correlations as incompressive turbulence In a famous paper, Belcher and Davis Jr (1971) showed that a strong correlation exists between velocity and magnetic field fluctuations, in the form $$\delta v \simeq \pm \frac{{\delta B}} {{\sqrt {4\pi \rho } }},$$ where the sign of the correlation is given by the sign of [−k · B0], k being the wave vector and B0 the background magnetic field vector. These authors showed that in about 25 d of data from Mariner 5, out of the 160 d of the whole mission, fluctuations were described by Equation (59), and the sign of the correlation always indicated an outward sense of propagation with respect to the Sun. The authors also noted that these periods mainly occur within the trailing edges of high-speed streams. Moreover, in the regions where Equation (59) is verified to a high degree, the magnetic field magnitude is almost constant (B^2 ~ const.). Alfvénic correlation in fast solar wind. Left panel: large scale Alfvénic fluctuations found by Bruno et al. (1985). Right panel: small scale Alfvénic fluctuations found for the first time by Belcher and Solodyna (1975). Image reproduced by permission, copyright by AGU. Today we know that Alfvén correlations are ubiquitous in the solar wind and that these correlations are much stronger and are found at lower and lower frequencies as we look at shorter and shorter heliocentric distances. In the right panel of Figure 36 we show results from Belcher and Solodyna (1975) obtained on the basis of 5 min averages of velocity and magnetic field recorded by Mariner 5 in 1967, during its mission to Venus. In the left panel of Figure 36 we show results from a similar analysis performed by Bruno et al. (1985) obtained on the basis of 1 h averages of velocity and magnetic field recorded by Helios 2 in 1976, when the s/c was at 0.29 AU from the Sun. These last authors found that, in their case, Alfvén correlations extended to time periods as long as 15 h in the s/c frame at 0.29 AU, and to periods a factor of two smaller near the Earth's orbit. Now, if we consider that this long period of the fluctuations at 0.29 AU was larger than the transit time from the Sun to the s/c, this result might be the first evidence for a possible solar origin of these fluctuations, probably caused by the shuffling of the foot-points of the solar surface magnetic field. Alfvénic modes are not the only low frequency plasma fluctuations allowed by the MHD equations, but they certainly are the most frequently observed fluctuations in the solar wind. The reason why other possible propagating modes, like the slow sonic mode and the fast magnetosonic mode, cannot easily be found, besides the fact that the eigenvectors associated with these modes are not directly identifiable because they necessitate prior identification of wave vectors, contrary to the simple Alfvénic eigenvectors, depends also on the fact that these compressive modes are strongly damped in the solar wind shortly after they are generated (see Section 6). On the contrary, Alfvén fluctuations, which are difficult to damp because of their incompressive nature, survive much longer and dominate solar wind turbulence. Nevertheless, there are regions where Alfvén correlations are much stronger, like the trailing edge of fast streams, and regions where these correlations are weak, like intervals of slow wind (Belcher and Davis Jr, 1971; Belcher and Solodyna, 1975).
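As a purely illustrative aside, the elementary operation behind such correlation studies can be sketched as follows: a magnetic fluctuation component is converted to Alfvén units and its correlation coefficient with the corresponding velocity component is computed. The function name, the SI-unit conversion (equivalent to the Gaussian factor (4πρ)^1/2 of Equation (59)) and the proton-only estimate of the mass density are assumptions of this sketch.

    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (SI)

    def alfvenic_correlation(dv, db, n_p):
        """Correlation coefficient between one velocity fluctuation component
        dv [km/s] and the corresponding magnetic component db [nT] expressed
        in Alfvén units, for a proton number density n_p [cm^-3]. A proton-only
        plasma is assumed; alpha particles and pressure anisotropy modify the
        conversion factor (see the F_i and F_a factors later in this section)."""
        m_p = 1.6726e-27                                   # proton mass [kg]
        rho = n_p * 1e6 * m_p                              # mass density [kg/m^3]
        db_alfven = db * 1e-9 / np.sqrt(MU0 * rho) / 1e3   # nT -> km/s
        dv = dv - np.mean(dv)
        db_alfven = db_alfven - np.mean(db_alfven)
        return np.corrcoef(dv, db_alfven)[0, 1]

A correlation coefficient close to −1, with the mean field pointing away from the Sun, would then correspond, through the sign of [−k · B0], to outward-propagating Alfvénic fluctuations.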
However, the degree of Alfvén correlation unavoidably fades away with increasing heliocentric distance, although it must be reported that there are cases when the absence of strong velocity shears and compressive phenomena favors a high Alfvén correlation up to very large distances from the Sun (Roberts et al., 1987a; see Section 5.1). Alfvénic correlation in fast and slow wind. Notice the different degree of correlation between these two types of wind. To give a quick qualitative example of Alfvénic correlations in fast and slow wind, we show in Figure 37 the speed profile for about 100 d of 1976 as observed by Helios 2, and the traces of the velocity and magnetic field Z components (see Appendix D for the orientation of the reference system), V_Z and B_Z (the latter expressed in Alfvén units, see Appendix B.1), for two different time intervals, which have been enlarged in the two inset panels. The high velocity interval shows a remarkable anti-correlation which, since the mean magnetic field B0 is oriented away from the Sun, suggests a clear presence of outward oriented Alfvénic fluctuations, given that the sign of the correlation is the sign of [−k · B0]. In contrast with the previous interval, the slow wind shows that the two traces are rather uncorrelated. For the sake of brevity, we do not show the very similar behavior of the other two components, within both fast and slow wind. The discovery of Alfvén correlations in the solar wind stimulated fundamental remarks by Kraichnan (1974) who, following previous theoretical works by Kraichnan (1965) and Iroshnikov (1963), showed that the presence of a strong correlation between velocity and magnetic fluctuations renders non-linear transfer to small scales less efficient than for the Navier-Stokes equations, leading to a turbulent behavior which is different from that described by Kolmogorov (1941). In particular, when Equation (59) is exactly satisfied, non-linear interactions in MHD turbulent flows cannot exist. This fact introduces a problem in understanding the evolution of MHD turbulence as observed in interplanetary space. Both a strong correlation between velocity and magnetic fluctuations and a well defined turbulence spectrum (Figures 29, 37) are observed, and the existence of the correlations is in contrast with the existence of a spectrum which, in turbulence, is due to a non-linear energy cascade. Dobrowolny et al. (1980b) started to solve the puzzle of the existence of Alfvénic turbulence, namely the presence of predominately outward propagation, and the fact that MHD turbulence with both Alfvén modes present will evolve towards a state where one of the modes disappears. However, a lengthy debate on whether the highly Alfvénic nature of the fluctuations is what remains of the turbulence produced at the base of the corona, or whether the solar wind itself is an evolving turbulent magnetofluid, has stimulated the scientific community for quite a long time. 3.1.9 Radial evolution of Alfvénic turbulence The degree of correlation not only depends on the type of wind we look at, i.e., fast or slow, but also on the radial distance from the Sun and on the time scale of the fluctuations. Figure 38 shows the radial evolution of σc (see Appendix B.1) as observed by the Helios and Voyager s/c (Roberts et al., 1987b). It is clear enough that σc not only tends to values around 0 as the heliocentric distance increases, but larger and larger time scales are less and less Alfvénic.
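For reference, with the usual definitions (the assignment of the outward and inward modes to z+ or z− depends on the orientation of the mean field), the normalized cross-helicity used here can be written as $$\sigma _c = \frac{{2\,\left\langle {\delta v \cdot \delta b} \right\rangle }} {{\left\langle {\left| {\delta v} \right|^2 } \right\rangle + \left\langle {\left| {\delta b} \right|^2 } \right\rangle }} = \frac{{e^ + - e^ - }} {{e^ + + e^ - }},$$ where δb is expressed in Alfvén units and e± are the energies associated with the Elsässer variables z± = δv ± δb discussed in Section 3.2.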
Values of σc ~ 0 suggest a comparable amount of "outward" and "inward" correlations. The radial evolution also affects the Alfvén ratio rA (see Appendix B.3.1), as was found by Bruno et al. (1985). However, early analyses (Belcher and Davis Jr, 1971; Solodyna and Belcher, 1976; Matthaeus and Goldstein, 1982b) had already shown that this parameter is usually less than unity. Spectral studies by Marsch and Tu (1990a), reported in Figure 39, showed that within slow wind it is the lowest frequency range that experiences the strongest decrease with distance, while the highest frequency range remains almost unaffected. Moreover, the same study showed that, within fast wind, the whole frequency range experiences a general depletion. The evolution is such that close to 1 AU the value of rA in fast wind approaches that in slow wind. Moreover, comparing these results with those by Matthaeus and Goldstein (1982b) obtained from Voyager at 2.8 AU, it seems that the evolution recorded within fast wind tends to a sort of limit value around 0.4 – 0.5. Also Roberts et al. (1990), analyzing fluctuations between 9 h and 3 d, found a similar radial trend. These authors showed that rA dramatically decreases from values around unity at the Earth's orbit towards 0.4 – 0.5 at approximately 8 AU. For larger heliocentric distances, rA seems to stabilize around this last value. The reason why rA tends to a value less than unity is still an open question, although MHD computer simulations (Matthaeus, 1986) showed that magnetic reconnection and high plasma viscosity can produce values of rA < 1 within the inertial range. Moreover, the magnetic energy excess can be explained as a competing action between the equipartition trend due to linear propagation (or Alfvén effect, Kraichnan, 1965) and a local dynamo effect due to non-linear terms (Grappin et al., 1991; see closure calculations by Grappin et al., 1983, and DNS by Müller and Grappin, 2005). However, this argument forecasts an Alfvén ratio rA ≠ 1 but does not say whether it would be larger or smaller than "1", i.e., we could also have a final excess of kinetic energy. Histograms of normalized cross-helicity σc showing its evolution between 0.3 (circles), 2 (triangles), and 20 (squares) AU for different time scales: 3 h (top panel), 9 h (middle panel), and 81 h (bottom panel). Image reproduced by permission from Roberts et al. (1987b), copyright by AGU. Values of the Alfvén ratio rA as a function of frequency and heliocentric distance, within slow (left column) and fast (right column) wind. Image reproduced by permission from Marsch and Tu (1990a), copyright by AGU. A similar imbalance between magnetic and kinetic energy has recently been found in numerical simulations by Mininni et al. (2003a), already cited in Section 3.1.7. These authors studied the effect of a weak magnetic field at small scales in a system kept in a stationary regime of hydrodynamic turbulence. In these conditions, the dynamo action causes the initial magnetic energy to grow exponentially towards a state of quasi equipartition between kinetic and magnetic energy. This simulation aimed to provide more insight into a microscopic theory of the alpha-effect, which is responsible for converting part of the toroidal magnetic field on the Sun back to poloidal field to sustain the cycle.
However, when the simulation saturates, the imbalance between kinetic and magnetic energy is reminiscent of the conditions in which the Alfvén ratio is found in interplanetary space. Results from the above study can be viewed in the animation of Figure 40. At very early times the fluid is in a strongly turbulent regime as a result of the action of the external force at wave number kforce = 3. An initial magnetic fluctuation is introduced at t = 0 at kseed = 35. The magnetic energy starts growing exponentially fast and, when the simulation reaches the saturation stage, the magnetic power spectrum exceeds the kinetic power spectrum at large wave numbers (i.e., k > kforce), as also observed in Alfvénic fluctuations of the solar wind (Bruno et al., 1985; Tu and Marsch, 1990a) as an asymptotic state of turbulence (Roberts et al., 1987a,b; Bavassano et al., 2000b). Still from a movie showing a 128^3 numerical simulation, as in Figure 35, but with an external force applied at wave number kforce = 3 (movie kindly provided by D. Gómez). (For video see appendix) However, when two-fluid effects, such as the Hall current and the electron pressure (Mininni et al., 2003b), are included in the simulation, the dynamo can work more efficiently and the final stage of the simulation is towards equipartition between kinetic and magnetic energy. On the other hand, Marsch and Tu (1993a) analyzed several intervals of interplanetary observations to look for a linear relationship between the mean electromotive force ε = 〈δV × δB〉, generated by the turbulent motions, and the mean magnetic field B0, as predicted by simple dynamo theory (Krause and Rädler, 1980). Although a sizable electromotive force was found in interplanetary fluctuations, these authors could not establish any simple linear relationship between B0 and ε. Later, Bavassano and Bruno (2000) performed a three-fluid analysis of solar wind Alfvénic fluctuations in the inner heliosphere, in order to evaluate the effect of disregarding the multi-fluid nature of the wind on the factor relating velocity and magnetic field fluctuations. It is well known that, to convert magnetic field fluctuations into Alfvén units, we divide by the factor F_p = (4π M_p N_p)^1/2. However, fluctuations in velocity tend to be smaller than fluctuations in Alfvén units. In Figure 41 we show scatter plots between the z-component of the Alfvén velocity and the proton velocity fluctuations. The z-direction has been chosen along V_p × B, where V_p is the proton bulk flow velocity and B is the mean field direction. The reason for such a choice is that this direction is the least affected by compressive phenomena deriving from the wind dynamics. These results show that, although the correlation coefficient in both cases is around −0.95, the slope of the best fit straight line passes from 1 at 0.29 AU to a slope considerably different from 1 at 0.88 AU. Scatter plot between the z-component of the Alfvén velocity and the proton velocity fluctuations at about 2 mHz. Data refer to Helios 2 observations at 0.29 AU (left panel) and 0.88 AU (right panel). Image adapted from Bavassano and Bruno (2000). Belcher and Davis Jr (1971) suggested that this phenomenon had to be ascribed to the presence of α particles and to an anisotropy in the thermal pressure.
Moreover, taking into account the multi-fluid nature of the solar wind, the dividing factor should become F = F_p F_i F_a, where F_i would take into account the presence of other species besides protons, and F_a would take into account the presence of pressure anisotropy P∥ ≠ P⊥, where ∥ and ⊥ refer to the background field direction. In particular, following Bavassano and Bruno (2000), the complete expressions for F_i and F_a are $$F_i = \left[ {1 + \sum\limits_s {(M_s N_s )/(M_p N_p )} } \right]^{1/2}$$ $$F_a = \left[ {1 - \frac{{4\pi }} {{B_0^2 }}\sum\limits_s {(P_{\parallel s} - P_{ \bot s} + M_s N_s U_s^2 )} } \right]^{ - 1/2} ,$$ where the letter "s" stands for the s-th species, U_s = V_s − V being its velocity in the center-of-mass frame of reference, V_s the velocity of the species "s" in the s/c frame, and V = (Σ_s M_s N_s V_s)/(Σ_s M_s N_s) the velocity of the center of mass. Bavassano and Bruno (2000) analyzed several time intervals within the same co-rotating high velocity stream observed at 0.3 and 0.9 AU and performed the analysis using the new factor "F" to express magnetic field fluctuations in Alfvén units, taking into account the presence of α particles and electrons, besides the protons. However, the correction turned out to be insufficient to bring the slope of the δV_Pz vs. δV_Az relationship shown in the right panel of Figure 41 back to "1". In conclusion, the radial variation of the Alfvén ratio rA towards values less than 1 is not completely due to a missed inclusion of multi-fluid effects in the conversion from magnetic field to Alfvén units. Thus, we are left with the possibility that the observed depletion of rA is due to a natural evolution of turbulence towards a state in which magnetic energy becomes dominant (Grappin et al., 1991; Roberts et al., 1992; Roberts, 1992), as observed in the animation of Figure 40 taken from numerical simulations by Mininni et al. (2003a), or to the increased presence of magnetic structures like MFDT (Tu and Marsch, 1993). 3.2 Turbulence studied via Elsässer variables The Alfvénic character of solar wind fluctuations, especially within co-rotating high velocity streams, suggests using the Elsässer variables (Appendix B.3) to separate the "outward" from the "inward" contribution to turbulence. These variables, used in theoretical studies by Dobrowolny et al. (1980a,b), Veltri et al. (1982), Marsch and Mangeney (1987), and Zhou and Matthaeus (1989), were used for the first time in interplanetary data analysis by Grappin et al. (1990) and Tu et al. (1989b). In the following, we will describe and discuss several differences between "outward" and "inward" modes, but the most important one concerns their origin. As a matter of fact, the existence of the Alfvénic critical point implies that only "outward" propagating waves of solar origin will be able to escape from the Sun. "Inward" waves, being faster than the wind bulk speed, will precipitate back to the Sun if they are generated before this point. The most important implication of this scenario is that "inward" modes observed beyond the Alfvénic point cannot have a solar origin but must have been created locally by some physical process. Obviously, for the other Alfvénic component, both solar and local origins are still possible.
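Before turning to the observations, the following minimal sketch (not taken from the cited analyses) summarizes how the quantities used below, e±, σc, σr, rE and rA, are obtained from measured fluctuations; the function name is illustrative, the input arrays are assumed to be fluctuation time series with the magnetic field already converted to Alfvén units, and the identification of z+ with the outward mode is left to the sign of the mean field.

    import numpy as np

    def elsasser_diagnostics(dv, db):
        """Bulk turbulence diagnostics from velocity fluctuations dv and magnetic
        fluctuations db, both (N, 3) arrays with db already in Alfvén units
        (e.g. km/s). Whether z+ or z- corresponds to the outward-propagating
        mode depends on the orientation of the mean magnetic field."""
        dv = dv - dv.mean(axis=0)
        db = db - db.mean(axis=0)
        zp, zm = dv + db, dv - db                    # Elsässer variables z+ and z-
        ep = 0.5 * np.mean(np.sum(zp**2, axis=1))    # energy in z+
        em = 0.5 * np.mean(np.sum(zm**2, axis=1))    # energy in z-
        ev = 0.5 * np.mean(np.sum(dv**2, axis=1))    # kinetic energy
        eb = 0.5 * np.mean(np.sum(db**2, axis=1))    # magnetic energy
        return {
            "sigma_c": (ep - em) / (ep + em),        # normalized cross-helicity
            "sigma_r": (ev - eb) / (ev + eb),        # normalized residual energy
            "r_E": em / ep,                          # Elsässer ratio
            "r_A": ev / eb,                          # Alfvén ratio
        }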
3.2.1 Ecliptic scenario Early studies by Belcher and Davis Jr (1971), performed on magnetic field and velocity fluctuations recorded by Mariner 5 during its trip to Venus in 1967, already suggested that the majority of the Alfvénic fluctuations are characterized by an "outward" sense of propagation, and that the best regions in which to observe these fluctuations are the trailing edges of high velocity streams. Moreover, the Helios spacecraft, repeatedly orbiting around the Sun between 0.3 and 1 AU, gave the first and unique opportunity to study the radial evolution of turbulence (Bavassano et al., 1982b; Denskat and Neubauer, 1983). Subsequently, when Elsässer variables were introduced in the analysis (Grappin et al., 1989), it finally became possible not only to evaluate the "inward" and "outward" Alfvénic contributions to turbulence but also to study the behavior of these modes as a function of the wind speed and radial distance from the Sun. Figure 42 (Tu et al., 1990) clearly shows the behavior of e± (see Appendix B.3) across a high speed stream observed at 0.3 AU. Within fast wind e+ is much higher than e− and its spectral slope shows a break. Lower frequencies have a flatter slope while the slope of higher frequencies is closer to Kolmogorov-like. e− has a similar break but the slope of lower frequencies follows the Kolmogorov slope, while higher frequencies form a sort of plateau. This configuration vanishes when we pass to the slow wind, where both spectra have almost equivalent power density and follow the Kolmogorov slope. This behavior, reported for the first time by Grappin et al. (1990), is commonly found within co-rotating high velocity streams, although much more clearly expressed at shorter heliocentric distances, as shown below. The spectral power associated with outward (right panel) and inward (left panel) Alfvénic fluctuations, based on Helios 2 observations in the inner heliosphere, is concisely reported in Figure 43. The e− spectrum, if we exclude the high frequency range of the spectrum relative to fast wind at 0.4 AU, shows an average power law profile with a slope of −1.64, consistent with Kolmogorov's scaling. The lack of radial evolution of the e− spectrum led Tu and Marsch (1990a) to name it "the background spectrum" of solar wind turbulence. Power density spectra e± computed from δz± fluctuations for different time intervals indicated by the arrows. Image reproduced by permission from Tu et al. (1990), copyright by AGU. Power density spectra e− and e+ computed from δz− and δz+ fluctuations. Spectra have been computed within fast (H) and slow (L) streams around 0.4 and 0.9 AU as indicated by different line styles. The thick line represents the average power spectrum obtained from about 50 e− spectra, regardless of distance and wind speed. The shaded area is the 1σ width related to the average. Image reproduced by permission from Tu and Marsch (1990b), copyright by AGU. Quite different is the behavior of the e+ spectrum. Close to the Sun and within fast wind, this spectrum appears to be flatter at low frequency and steeper at high frequency. The overall evolution is towards the "background spectrum" by the time the wind reaches 0.8 AU. In particular, Figure 43 tells us that the radial evolution of the normalized cross-helicity has to be ascribed mainly to the radial evolution of e+ rather than to both Alfvénic fluctuations (Tu and Marsch, 1990a).
In addition, Figure 44, relative to the Elsässer ratio rE, shows that the hourly frequency range, up to ~ 2 × 10^−3 Hz, is the most affected by this radial evolution. Ratio of e− over e+ within fast wind at 0.3 and 0.9 AU in the left and right panels, respectively. Image reproduced by permission from Marsch and Tu (1990a), copyright by AGU. As a matter of fact, this radial evolution can be inferred from Figure 45, where values of e− and e+ together with solar wind speed, magnetic field intensity, and magnetic field and particle density compression are shown between 0.3 and 1 AU during the primary mission of Helios 2. It clearly appears that enhancements of e− and depletions of e+ are connected to compressive events, particularly within slow wind. Within fast wind the average level of e− is rather constant during the radial excursion, while the level of e+ dramatically decreases, with a consequent increase of the Elsässer ratio (see Appendix B.3.1). Further ecliptic observations (see Figure 46) do not indicate any clear radial trend for the Elsässer ratio between 1 and 5 AU, and its value seems to fluctuate between 0.2 and 0.4. However, low values of the normalized cross-helicity can also be associated with a particular type of incompressive events, which Tu and Marsch (1991) called Magnetic Field Directional Turnings or MFDT. These events, found within slow wind, were characterized by very low values of σc, close to zero, and low values of the Alfvén ratio, around 0.2. Moreover, the spectral slopes of e+, e− and of the power associated with the magnetic field fluctuations were close to the Kolmogorov slope. These intervals were only weakly compressive, and short period fluctuations, from a few minutes to about 40 min, were nearly pressure balanced. Thus, differently from what had previously been observed by Bruno et al. (1989), who found low values of cross-helicity often accompanied by compressive events, these MFDTs were mainly incompressive. In these structures most of the fluctuating energy resides in the magnetic field rather than in the velocity, as shown in Figure 47 taken from Tu and Marsch (1991). It follows that the amplitudes of the fluctuating Alfvénic fields δz± turn out to be comparable and, consequently, the derived parameter σc → 0. Moreover, the presence of these structures would also be able to explain the fact that rA < 1. Tu and Marsch (1991) suggested that these fluctuations might derive from a special kind of magnetic structures, which obey the MHD equations, for which (B · ∇)B = 0, and field magnitude, proton density, and temperature are all constant. The same authors suggested the possibility of an interplanetary turbulence mainly made of outwardly propagating Alfvén waves and convected structures represented by MFDTs. In other words, this model assumed that the spectrum of e− would be caused by MFDTs. The different radial evolution of the power associated with these two kinds of components would determine the radial evolution observed in both σc and rA. Although the results were not quantitatively satisfactory, they did show a qualitative agreement with the observations. Upper panel: solar wind speed and solar wind speed multiplied by σc. In the lower panels the authors reported: σc, rE, e−, e+, magnetic compression, and number density compression, respectively. Image reproduced by permission from Bruno and Bavassano (1991), copyright by AGU. Ratio of e− over e+ within fast wind between 1 and 5 AU as observed by Ulysses in the ecliptic. Image reproduced by permission from Bavassano et al.
(2001), copyright by AGU. Left column: e+ and e− spectra (top) and σc (bottom) during a slow wind interval at 0.9 AU. Right column: kinetic e u and magnetic e B energy spectra (top) computed from the trace of the relative spectral tensor, and spectrum of the Alfvén ratio rA (bottom). Image reproduced by permission from Tu and Marsch (1991). These convected structures are an important ingredient of the turbulent evolution of the fluctuations and can be identified as the 2D incompressible turbulence suggested by Matthaeus et al. (1990) and Tu and Marsch (1991). As a matter of fact, a statistical analysis by Bruno et al. (2007) showed that magnetically dominated structures represent an important component of the interplanetary fluctuations within the MHD range of scales. These magnetic structures and Alfvénic fluctuations dominate at scales typical of MHD turbulence. For instance, this analysis suggested that more than 20% of all analyzed intervals at the 1 hr scale are magnetically dominated and only weakly Alfvénic. Observations in the ecliptic performed by the Helios and WIND s/c and out of the ecliptic, performed by Ulysses, showed that these advected, mostly incompressive structures are ubiquitous in the heliosphere and can be found in both fast and slow wind. It is interesting to look at the radial evolution of interplanetary fluctuations in terms of normalized cross-helicity σc and normalized residual energy σr (see Appendix B.3). These results, shown in the left panels of Figure 48, highlight the presence of a radial evolution of the fluctuations towards a double-peaked distribution during the expansion of the solar wind. The analysis has been performed on a co-rotating fast stream observed by Helios 2 at three different heliocentric distances over consecutive solar rotations (see Figure 16 and related text). Closer to the Sun, at 0.3 AU, the distribution is well centered around σr = 0 and σc = 1, suggesting that outwardly propagating Alfvénic fluctuations dominate the scenario. By the time the wind reaches 0.7 AU, the appearance of a tail towards negative values of σr and lower values of σc indicates a partial loss of the Alfvénic character in favor of fluctuations characterized by a stronger magnetic energy content. This clear tendency ends up with the appearance of a secondary peak by the time the wind reaches 0.88 AU. This new family of fluctuations forms around σr = −1 and σc = 0. The values of σr and σc which characterize this new population are typical of the MFDT structures described by Tu and Marsch (1991). Together with the appearance of these fluctuations, the main peak characterized by Alfvén-like fluctuations loses much of its original character shown at 0.3 AU. The yellow straight line that can be seen in the left panels of Figure 48 would be the linear relation between σr and σc in case fluctuations were made solely of outwardly propagating Alfvén waves and advected MFDTs (Tu and Marsch, 1991), and it would replace the canonical, quadratic relation σ_r^2 + σ_c^2 ≤ 1 represented by the yellow circle drawn in each panel. However, the yellow dashed line shown in the left panels of Figure 48 does not seem to fit the observed distributions satisfactorily. Left, from top to bottom: frequency histograms of σr vs. σc (here σC and σR) for fast wind observed by Helios 2 at 0.29, 0.65 and 0.88 AU, respectively. The color code, for each panel, is normalized to the maximum of the distribution.
The yellow circle represents the limiting value given by σ_c^2 + σ_r^2 = 1 while the yellow dashed line represents the relation σr = σc − 1, see text for details. Right, from top to bottom: frequency histograms of σr vs. σc (here σC and σR) for slow wind observed by Helios 2 at 0.32, 0.69 and 0.90 AU, respectively. The color code, for each panel, is normalized to the maximum of the distribution. Image reproduced by permission from Bruno et al. (2007), copyright EGU. Quite different is the situation within slow wind, as shown in the right panels of Figure 48. As a matter of fact, these histograms do not show any striking radial evolution like in the case of fast wind. High values of σc are statistically much less relevant than in fast wind and a well defined population characterized by σr ~ −1 and σc ~ 0, already present at 0.3 AU, becomes one of the dominant peaks of the histogram as the wind expands. This last feature is really at odds with what happens in fast wind and highlights the different nature of the fluctuations which, in this case, are magnetically dominated. The same authors obtained very similar results for fast and slow wind also from the same type of analysis performed on WIND and Ulysses data which, in addition, confirmed the incompressive character of the Alfvénic fluctuations and highlighted a low compressive character also for the populations characterized by σr ~ −1 and σc ~ 0. Concerning the origin of these structures, these authors suggest that they might not only be created locally during the non-linear evolution of the fluctuations but might also have a solar origin. The reason why they are not seen close to the Sun, within fast wind, might be due to the fact that these fluctuations, mainly non-compressive, change the direction of the magnetic field similarly to Alfvénic fluctuations but produce a much smaller effect, since the associated δb is smaller than the one corresponding to Alfvénic fluctuations. As the wind expands, the Alfvénic component undergoes non-linear interactions which produce a transfer of energy to smaller and smaller scales, while these structures, being advected, have a much longer lifetime. As the expansion goes on, the relative weight of these fluctuations grows and they start to be detected. 3.2.2 On the nature of Alfvénic fluctuations The Alfvénic nature of outward modes has been widely recognized through several frequency decades, up to periods of the order of several hours in the s/c rest frame (Bruno et al., 1985). Conversely, the nature of those fluctuations identified by δz−, called "inward Alfvén modes", is still not completely clear. There are many clues which would suggest that these fluctuations, especially in the hourly frequency range, have a non-Alfvénic nature. Several studies on this topic in the low frequency range have suggested that structures convected by the wind could well mimic non-existent inward propagating modes (see the review by Tu and Marsch, 1995a). However, other studies (Tu et al., 1989b) have also found, in the high frequency range and within fast streams, a certain anisotropy in the components which resembles the same anisotropy found for outward modes. So, these observations would suggest a close link between inward modes at high frequency and outward modes, possibly of the same nature. Power density spectra for e+ and e− during a high velocity stream observed at 0.3 AU. Best fit lines for different frequency intervals and related spectral indices are also shown.
Vertical lines fix the limits of the five different frequency intervals analyzed by Bruno et al. (1996). Image reproduced by permission, copyright by AIP. Figure 49 shows power density spectra for e+ and e− during a high velocity stream observed at 0.3 AU (similar spectra can also be found in the papers by Grappin et al., 1990, and Tu et al., 1989b). The observed spectral indices, reported on the plot, are typically found within high velocity streams encountered at short heliocentric distances. Bruno et al. (1996) analyzed the power relative to e+ and e− modes within five frequency bands, ranging from roughly 12 h to 3 min, delimited by the vertical solid lines equally spaced in log-scale. The integrated power associated with e+ and e− within the selected frequency bands is shown in Figure 50. Passing from slow to fast wind, e+ grows much more within the highest frequency bands. Moreover, there is a good correlation between the profiles of e− and e+ within the first two highest frequency bands, as already noticed by Grappin et al. (1990), who looked at the correlation between daily averages of e− and e+ in several frequency bands, even widely separated in frequency. The above results stimulated these authors to conclude that this behavior was reminiscent of the non-local coupling in k-space between opposite modes found by Grappin et al. (1982) in homogeneous MHD. Expansion effects were also taken into account by Velli et al. (1990), who modeled inward modes as that fraction of outward modes back-scattered by the inhomogeneities of the medium due to expansion effects (Velli et al., 1989). However, following this model we would often expect the two populations to be somehow related to each other, but in-situ observations do not favor this kind of prediction (Bavassano and Bruno, 1992). An alternative generation mechanism was proposed by Tu et al. (1989b), based on the parametric decay of e+ in the high frequency range (Galeev and Oraevskii, 1963). This mechanism is such that large amplitude Alfvénic waves, unstable to perturbations of random field intensity and density fluctuations, would decay into two secondary Alfvénic modes propagating in opposite directions and a sound-like wave propagating in the same direction as the pump wave. Most of the energy of the mother wave would go into the sound-like fluctuation and the backward propagating Alfvénic mode. On the other hand, the production of e− modes by parametric instability is not particularly fast if the plasma β ~ 1, as in the case of the solar wind (Goldstein, 1978; Derby, 1978), since this condition slows down the growth rate of the instability. It is also true that numerical simulations by Malara et al. (2000, 2001a, 2002) and Primavera et al. (2003) have shown that parametric decay can still be thought of as a possible mechanism of local production of turbulence within the polar wind (see Section 4). However, the strong correlation between e+ and e− profiles found only within the highest frequency bands would support this mechanism and would suggest that e− modes within these frequency bands have an Alfvénic nature. Another feature shown in Figure 50 that favors these conclusions is the fact that both δz+ and δz− keep the direction of their minimum variance axis aligned with the background magnetic field only within the fast wind, and exclusively within the highest frequency bands (a sketch of how the minimum variance direction is obtained is given below). This would not contradict the view suggested by Barnes (1981).
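For completeness, the sketch announced above shows one common way to obtain the minimum variance direction and its angle to the mean field through an eigen-decomposition of the 3 × 3 variance matrix; the function name and the choice of estimating the mean field from the same interval are assumptions of this example rather than the exact procedure of Bruno et al. (1996).

    import numpy as np

    def min_variance_angle(b):
        """Angle (degrees) between the minimum variance direction of the
        fluctuations in b (an (N, 3) array, e.g. a band-passed delta-z or
        delta-B series) and the mean field direction estimated from the same
        interval. The minimum variance direction is the eigenvector of the
        covariance matrix associated with its smallest eigenvalue."""
        db = b - b.mean(axis=0)
        cov = db.T @ db / len(b)                 # 3x3 variance matrix
        eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues in ascending order
        n_min = eigvec[:, 0]                     # minimum variance direction
        b0 = b.mean(axis=0)
        b0_hat = b0 / np.linalg.norm(b0)
        cosang = abs(np.dot(n_min, b0_hat))      # sign of n_min is arbitrary
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))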
Following the model of Barnes (1981), the majority of Alfvénic fluctuations propagating in one direction have the tip of the magnetic field vector randomly wandering on the surface of half a sphere of constant radius, centered along the ambient field B0. In this situation the minimum variance would be oriented along B0, although this would not represent the propagation direction of each wave vector, which could propagate even at large angles from this direction. This situation can be seen in the right hand panel of Figure 98 of Section 10, which refers to a typical Alfvénic interval within fast wind. Moreover, δz+ fluctuations show a persistent anisotropy throughout the fast stream, since the minimum variance axis remains quite aligned to the background field direction. This situation degrades only at the very low frequencies, where θ+, the angle between the minimum variance direction of δz+ and the direction of the ambient magnetic field, starts wandering between 0° and 90°. On the contrary, in slow wind, since Alfvénic modes have a smaller amplitude, compressive structures, due to the dynamic interaction between slow and fast wind or of solar origin, push the minimum variance direction to larger angles with respect to B0, independently of the frequency range. Left panel: wind speed profile is shown in the top panel. Power density associated with e+ (thick line) and e− (thin line), within the five frequency bands chosen, is shown in the lower panels. Right panel: wind speed profile is shown in the top panel. Values of the angle θ± between the minimum variance direction of δz+ (thick line) and δz− (thin line) and the direction of the ambient magnetic field are shown in the lower panels, relative to each frequency band. Image reproduced by permission from Bruno et al. (1996), copyright by AIP. In a way, we can say that, within the stream, both θ+ and θ− (the latter being the angle between the minimum variance direction of δz− and the direction of the ambient magnetic field) show a similar behavior as we look at lower and lower frequencies. The only difference is that θ− reaches higher values at higher frequencies than θ+. This was interpreted (Bruno et al., 1996) as due to the fact that transverse fluctuations of δz− carry much less power than those of δz+ and, consequently, are more easily influenced by perturbations represented by the background, convected structure of the wind (e.g., TD's and PBS's). As a consequence, at low frequency δz− fluctuations may represent a signature of the compressive component of the turbulence while, at high frequency, they might reflect the presence of inward propagating Alfvén modes. Thus, while for periods of several hours δz+ fluctuations can still be considered as the product of Alfvén modes propagating outward (Bruno et al., 1985), δz− fluctuations are rather due to the underlying convected structure of the wind. In other words, high frequency turbulence can be looked at mainly as a mixture of inward and outward Alfvénic fluctuations plus, presumably, sound-like perturbations (Marsch and Tu, 1993a). On the other hand, low frequency turbulence would be made of outward Alfvénic fluctuations and static convected structures representing the inhomogeneities of the background medium. 4 Observations of MHD Turbulence in the Polar Wind In 1994 – 1995, Ulysses gave us the opportunity to look at the solar wind out of the ecliptic, providing us with new exciting observations.
For the first time heliospheric instruments were sampling pure, fast solar wind, free of any dynamical interaction with slow wind. There is one figure that, within our scientific community, has become as popular as "La Gioconda" by Leonardo da Vinci within the world of art. This figure, produced at LANL (McComas et al., 1998), is shown in the upper left panel of Figure 51, which has been taken from a later paper by McComas et al. (2003), and summarizes the most important aspects of the large scale structure of the polar solar wind during the minimum of the solar activity phase, as indicated by the low value of the Wolf number reported in the lower panel. It shows the speed profile, proton number density profile and magnetic field polarity vs. heliographic latitude during the first complete Ulysses polar orbit. Fast wind fills up the north and south hemispheres of the Sun almost completely, except for a narrow latitudinal belt around the equator, where the slow wind dominates. Flow velocity, which rapidly increases from the equator towards higher latitudes, quickly reaches a plateau and the wind escapes the polar regions with a rather uniform speed. Moreover, polar wind is characterized by a lower number density and shows rather uniform magnetic polarity of opposite sign, depending on the hemisphere. Thus, the main difference between ecliptic and polar wind is that the latter completely lacks dynamical interactions with slower plasma and freely flows into interplanetary space. The presence or absence of this phenomenon, as we will see in the following pages, plays a major role in the development of MHD turbulence during the wind expansion. During solar maximum (see the upper right panel of Figure 51) the situation dramatically changes and the equatorial wind extends to higher latitudes, to the extent that there is no longer any difference between polar and equatorial wind. Large scale solar wind profile as a function of latitude during minimum (left panel) and maximum (right panel) solar cycle phases. The sunspot number is also shown in the bottom panels. Image reproduced by permission from McComas et al. (2003), copyright by AGU. 4.1 Evolving turbulence in the polar wind Ulysses observations gave us the possibility to test whether or not we could forecast the turbulent evolution in the polar regions on the basis of what we had learned in the ecliptic. We knew that, in the ecliptic, velocity shear, parametric decay, and interaction of Alfvénic modes with convected structures (see Sections 3.2.1, 5.1) all play some role in the turbulent evolution and, before Ulysses reached the polar regions of the Sun, three possibilities were envisaged: 1) Alfvénic turbulence would not have relaxed towards standard turbulence because the large scale velocity shears would have been much less relevant (Grappin et al., 1991); 2) since the magnetic field would be smaller far from the ecliptic, at large heliocentric distances, even small shears would lead to an isotropization of the fluctuations and produce a turbulent cascade faster than the one observed at low latitudes, and the subsequent evolution would take less time (Roberts et al., 1990); 3) there would still be evolution due to interaction with convected plasma and field structures, but it would be slower than in the ecliptic since the power associated with Alfvénic fluctuations would largely dominate over the inhomogeneities of the medium.
Thus, Alfvénic correlations should last longer than in the ecliptic plane, with a consequent slower evolution of the normalized cross-helicity (Bruno, 1992). A fourth possibility was added by Tu and Marsch (1995a), based on their model (Tu and Marsch, 1993). Following this model, they assumed that polar fluctuations were composed of outward Alfvénic fluctuations and MFDTs. The spectra of these components would decrease with radial distance because of a WKB evolution and convective effects of the diverging flow. As the distance increases, the field becomes more transverse with respect to the radial direction, the s/c would sample more convective structures and, as a consequence, would observe a decrease of both σc and rA. Today we know that polar Alfvénic turbulence evolves in the same way it does in the ecliptic plane, but much more slowly. Moreover, the absence of strong velocity shears and enhanced compressive phenomena suggests that some other mechanism based on the parametric decay instability might also play some role in the local production of turbulence (Bavassano et al., 2000a; Malara et al., 2001a, 2002; Primavera et al., 2003). The first results of Ulysses magnetic field and plasma measurements in the polar regions, i.e., above ±30° latitude (left panel of Figure 51), revealed the presence of Alfvénic correlations over a range of periods from less than 1 h to more than 10 h (Balogh et al., 1995; Smith et al., 1995; Goldstein et al., 1995a), in very good agreement with ecliptic observations (Bruno et al., 1985). However, it is worth noticing that Helios observations referred to very short heliocentric distances, around 0.3 AU, while the above Ulysses observations were taken up to 4 AU. As a matter of fact, these long period Alfvén waves observed in the ecliptic, in the inner solar wind, become less prominent as the wind expands due to stream-stream dynamical interaction effects (Bruno et al., 1985) and strong velocity shears (Roberts et al., 1987a). At high latitude, the relative absence of enhanced dynamical interaction between flows at different speeds and, as a consequence, the absence of strong velocity shears favors the survival of these extremely low frequency Alfvénic fluctuations over larger heliocentric excursions. Figure 52 shows the hourly correlation coefficient for the transverse components of the magnetic and velocity fields as Ulysses climbs to the south pole and during the fast latitude scan that brought the s/c from the south to the north pole of the Sun in just half a year. While the equatorial phase of the Ulysses journey is characterized by low values of the correlation coefficients, a gradual increase can be noticed starting around the middle of 1993, when the s/c starts to increase its heliographic latitude from the ecliptic plane up to 80.2° south, at the end of 1994. Not only did the degree of δb–δv correlation resemble Helios observations, but the spectra of these fluctuations also showed characteristics very similar to those observed in the ecliptic within fast wind, like the spectral index of the components, which was found to be flat at low frequency and more Kolmogorov-like at higher frequencies (Smith et al., 1995). Balogh et al. (1995) and Forsyth et al. (1996) discussed magnetic fluctuations in terms of the latitudinal and radial dependence of their variances.
Similarly to what had been found within fast wind in the ecliptic (Mariani et al., 1978; Bavassano et al., 1982b; Tu et al., 1989b; Roberts et al., 1992), the variance of the magnetic field magnitude was much less than the variance associated with the components. Moreover, transverse variances had consistently higher values than that along the radial direction and were also much more sensitive to latitude excursion, as shown in Figure 53. In addition, the level of the normalized hourly variances of the transverse components observed during the ecliptic phase, right after the compressive region ahead of co-rotating interaction regions, was maintained at the same level once the s/c entered the pure polar wind. Again, these observations showed that the fast wind observed in the ecliptic was coming from the equatorward extension of polar coronal holes. Magnetic field and velocity hourly correlation vs. heliographic latitude. Image reproduced by permission from Smith et al. (1995), copyright by AGU. Horbury et al. (1995c) and Forsyth et al. (1996) showed that the interplanetary magnetic field fluctuations observed by Ulysses continuously evolve within the fast polar wind, at least out to 4 AU. Since this evolution was observed within the polar wind, rather free of co-rotating and transient events like those characterizing low latitudes, they concluded that some other mechanism was at work and that this evolution is an intrinsic property of turbulence. Results in Figure 54 show the evolution of the spectral slope computed across three different time scale intervals. The smallest time scales show a clear evolution that keeps on going past the highest latitude on day 256, strongly suggesting that this evolution is a radial rather than a latitudinal effect. Horbury et al. (1996a) worked on determining the rate of turbulent evolution for the polar wind. They calculated the spectral index at different frequencies from the scaling of the second order structure function (see Section 7 and papers by Burlaga, 1992a,b; Marsch and Tu, 1993a; Ruzmaikin et al., 1995; and Horbury et al., 1996b), since the spectral scaling α is related to the scaling of the structure function s by the relation α = s + 1 (Monin and Yaglom, 1975). Horbury et al. (1996a), studying variations of the spectral index with frequency for polar turbulence, found that there are two frequency ranges where the spectral index is rather steady. The first range is around 10^−2 Hz with a spectral index around −5/3, while the second range is at very low frequencies with a spectral index around −1. This last range is the one where Goldstein et al. (1995a) found the best examples of Alfvénic fluctuations. Similarly, ecliptic studies found that the best Alfvénic correlations belonged to the hourly, low frequency regime (Bruno et al., 1985). Normalized magnetic field components and magnitude hourly variances plotted vs. heliographic latitude during a complete latitude survey by Ulysses. Image reproduced by permission from Forsyth et al. (1996), copyright by AGU. Spectral indices of magnetic fluctuations within three different time scale intervals as indicated in the plot. The bottom panel shows heliographic latitude and heliocentric distance of Ulysses. Image reproduced by permission from Horbury et al. (1995c), copyright by AGU. Horbury et al. (1995a) presented an analysis of the high latitude magnetic field using a fractal method.
Within the solar wind context, this method was described for the first time by Burlaga and Klein (1986) and Ruzmaikin et al. (1993), and is based on the estimate of the scaling of the length function L(τ) with the scale τ. This function is closely related to the first order structure function and, if statistically self-similar, has scaling properties L(τ) ~ τ^ℓ, where ℓ is the scaling exponent. It follows that L(τ) is an estimate of the amplitude of the fluctuations at scale τ, and the relation that binds L(τ) to the variance of the fluctuations (δB)^2 ~ τ^{s(2)} is: $$L(\tau ) \sim N(\tau )[(\delta B)^2 ]^{1/2} \propto \tau ^{s(2)/2 - 1} ,$$ where N(τ) represents the number of points at scale τ and scales like τ^−1. Since the power density spectrum fW(f) is related to (δB)^2 through the relation fW(f) ~ (δB)^2, if W(f) ~ f^−α then s(2) = α − 1 and, as a consequence, α = 2ℓ + 3 (Marsch and Tu, 1996). Thus, it becomes very easy to estimate the spectral index at a given scale or frequency without using spectral methods, simply by computing the length function. Spectral exponents for the B_z component estimated from the length function computed from Ulysses magnetic field data, when the s/c was at about 4 AU and ~ −50° latitude. Different symbols refer to different time intervals as reported in the graph. Image reproduced by permission from Horbury et al. (1995a). Results in Figure 55 show the existence of two different regimes, one with a spectral index around the Kolmogorov scaling extending from 10^1.5 to 10^3 s and, separated by a clear breakpoint at scales of 10^3 s, a progressively flatter spectral exponent for larger and larger scales. These observations were quite similar to what had been observed by Helios 2 in the ecliptic, although the turbulence state recorded by Ulysses turned out to be more evolved than the situation seen at 0.3 AU and, perhaps, more similar to the turbulence state observed around 1 AU, as shown by Marsch and Tu (1996). These authors compared the spectral exponents, estimated using the same method as Horbury et al. (1995a), from Helios 2 magnetic field observations at two different heliocentric distances: 0.3 and 1.0 AU. The comparison with Ulysses results is shown in Figure 56, where it appears rather clear that the slope of the B_z spectrum experiences a remarkable evolution during the wind expansion between 0.3 and 4 AU. Obviously, this comparison is meaningful under the reasonable hypothesis that fluctuations observed by Helios 2 at 0.3 AU are representative of out-of-the-ecliptic solar wind (Marsch and Tu, 1996). This figure also shows that the degree of spectral evolution experienced by the fluctuations when observed at 4 AU at high latitude is comparable to Helios observations at 1 AU in the ecliptic. Thus, spectral evolution at high latitude is present, although considerably slower than in the ecliptic. Spectral exponents for the B_z component estimated from the length function computed from Helios and Ulysses magnetic field data. The Ulysses length function (dotted line) is the same as shown in the paper by Horbury et al. (1995a), when the s/c was at about 4 AU and ~ −50° latitude. Image reproduced by permission from Marsch and Tu (1996), copyright by AGU. Forsyth et al. (1996) studied the radial dependence of the normalized hourly variances of the components B_R, B_T and B_N and the magnitude |B| of the magnetic field (see Appendix D to learn about the RTN reference system).
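The length-function estimate of the spectral index described above lends itself to a short numerical sketch. The code below is only an illustrative implementation of the relation α = 2ℓ + 3 applied to a synthetic signal of known spectral slope; the function name and the synthetic data are assumptions for this example, not part of the original analyses.

```python
import numpy as np

def length_function(b, dt, lags):
    """Estimate L(tau) ~ N(tau) * <|increment|> at a set of lags (in samples)."""
    L = []
    for m in lags:
        incr = b[m:] - b[:-m]            # increments at scale tau = m*dt
        n_seg = len(b) // m              # N(tau), scaling like tau^-1
        L.append(n_seg * np.mean(np.abs(incr)))
    return np.array(lags) * dt, np.array(L)

# synthetic signal with a Kolmogorov-like spectrum W(f) ~ f^(-5/3)
rng = np.random.default_rng(0)
n, dt = 2**16, 1.0
freqs = np.fft.rfftfreq(n, dt)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:]**(-5/6)              # amplitude ~ f^(-alpha/2) with alpha = 5/3
phases = rng.uniform(0, 2*np.pi, len(freqs))
b = np.fft.irfft(amp * np.exp(1j*phases), n)

taus, L = length_function(b, dt, lags=np.unique(np.logspace(0.5, 3, 25).astype(int)))
ell = np.polyfit(np.log10(taus), np.log10(L), 1)[0]   # scaling exponent l
print("estimated spectral index alpha =", 2*ell + 3)   # should come out close to 5/3
```

Returning to the radial dependence of the hourly variances studied by Forsyth et al. (1996):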
The variance along the radial direction was computed as σ_R^2 = ⟨B_R^2⟩ − ⟨B_R⟩^2 and subsequently normalized to |B|^2 to remove the field strength dependence. Variances along the other two directions, T and N, were similarly defined. Fitting the radial dependence with a power law of the form r^−α, but limiting the fit to the radial excursion between 1.5 and 3 AU, these authors obtained α = 3.39 ± 0.07 for σ_R^2, α = 3.45 ± 0.09 for σ_T^2, α = 3.37 ± 0.09 for σ_N^2, and α = 2.48 ± 0.14 for σ_B^2. Thus, for hourly variances, the power associated with the components showed a radial dependence stronger than that predicted by the WKB approximation, which would provide α = 3. These authors also showed that including data between 3 and 4 AU, corresponding to intervals characterized by compressional features mainly due to high latitude CMEs, they would obtain less steep radial gradients, much closer to a WKB type. These results suggested that compressive effects can feed energy at the smallest scales, counteracting dissipative phenomena and mimicking a WKB-like behavior of the fluctuations. However, they concluded that for lower frequencies, below the frequency break point, fluctuations do follow the WKB radial evolution. Horbury and Balogh (2001) presented a detailed comparison between Ulysses and Helios observations of the evolution of magnetic field fluctuations in high-speed solar wind. Ulysses results, between 1.4 and 4.1 AU, were presented as the wave number dependence of the radial and latitudinal power scaling. The first results of this analysis showed (Figure 3 of their work) a general decrease of the power levels with solar distance, in both magnetic field components and magnitude fluctuations. In addition, the power associated with the radial component was always less than that of the transverse components, as already found by Forsyth et al. (1996). However, Horbury and Balogh (2001), supposing a possible latitude dependence, performed a multiple linear regression of the type: $$\log _{10} w = A_p + B_p \log _{10} r + C_p \sin \theta ,$$ where w is the power density integrated in a given spectral band, r is the radial distance and θ is the heliolatitude (0° at the equator). Moreover, the same procedure was applied to spectral index estimates α of the form α = A_α + B_α log_10 r + C_α sin θ. Results obtained for B_p, C_p, B_α, C_α are shown in Figure 58. Hourly variances of the components and the magnitude of the magnetic field vs. radial distance from the Sun. The meaning of the different symbols is also indicated in the upper right corner. Image reproduced by permission from Forsyth et al. (1996), copyright by AGU. On the basis of variations of the spectral index and of the radial and latitudinal dependencies, these authors were able to identify four wave number ranges, as indicated by the circled numbers in the top panel of Figure 58. Range 1 was characterized by a radial power decrease weaker than WKB (−3), a positive latitudinal trend for the components (more power at higher latitude) and a negative one for the magnitude (less compressive events at higher latitudes). Range 2 showed a more rapid radial decrease of power for both magnitude and components and a negative latitudinal power trend, which implies less power at higher latitudes. Moreover, the spectral index of the components (bottom panel) is around 0.5 and tends to 0 at larger scales. Within range 3 the power of the components follows a WKB radial trend and the spectral index is around −1 for both magnitude and components.
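The radial-plus-latitudinal power fit used by Horbury and Balogh (2001), log10 w = A_p + B_p log10 r + C_p sin θ, amounts to an ordinary least-squares problem with two regressors. The sketch below shows one possible way to carry out such a fit; the variable names and the synthetic data are illustrative assumptions, not the authors' actual dataset or code.

```python
import numpy as np

def fit_power_scaling(w, r_au, lat_deg):
    """Least-squares fit of log10(w) = A + B*log10(r) + C*sin(latitude)."""
    theta = np.radians(lat_deg)
    X = np.column_stack([np.ones_like(r_au), np.log10(r_au), np.sin(theta)])
    coeffs, *_ = np.linalg.lstsq(X, np.log10(w), rcond=None)
    return coeffs  # A, B (radial index), C (latitudinal coefficient)

# illustrative synthetic data: power falling as r^-3 with a weak latitudinal trend
rng = np.random.default_rng(1)
r = rng.uniform(1.4, 4.1, 500)                  # heliocentric distance, AU
lat = rng.uniform(30, 80, 500)                  # heliolatitude, degrees
w = 10.0 * r**-3.0 * 10**(0.3*np.sin(np.radians(lat)))
w *= 10**rng.normal(0, 0.05, 500)               # measurement scatter

A, B, C = fit_power_scaling(w, r, lat)
print(f"radial index B = {B:.2f}, latitudinal coefficient C = {C:.2f}")
```

Coming back to range 3 of Figure 58: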
This hourly range has been identified as the most Alfvénic at low latitudes and its radial evolution has been recognized to be consistent with WKB radial index (Roberts, (1989; Marsch and Tu, (1990a). Even within this range, and also within the next one, the latitude power trend is slightly negative for both components and magnitude. Finally, range 4 is clearly indicative of turbulent cascade with a radial power trend of the components much faster than WKB expectation and becoming even stronger at higher wave numbers. Moreover, the radial spectral index reveals that steepening is at work only for the previous wave number ranges as expected since the breakpoint moves to smaller wave number during spectrum evolution. The spectral index of the components tends to −5/3 with increasing wave number while that of the magnitude is constantly flatter. The same authors gave an estimate of the radial scale-shift of the breakpoint during the wind expansion around k ∝ r1.1, in agreement with earlier estimates (Horbury et al., 1996a). Although most of these results support previous conclusions obtained for the ecliptic turbulence, the negative value of the latitudinal power trend that starts within the second range, is unexpected. As a matter of fact, moving towards more Alfvén regions like the polar regions, one would perhaps expect a positive latitudinal trend similarly to what happens in the ecliptic when moving from slow to fast wind. (a) Scale dependence of radial power, (b) latitudinal power, (c) radial spectral index, (d) latitudinal spectral index, and (e) spectral index computed at 2.5 AU. Solid circles refer to the trace of the spectral matrix of the components, open squares refer to field magnitude. Correspondence between wave number scale and time scale is based on a wind velocity of 750 km s−1. Image reproduced by permission from Horbury and Balogh (2001), copyright by AGU. Horbury and Balogh (2001) and Horbury and Tsurutani (2001) estimated that the power observed at 80° is about 30% less than that observed at 30°. These authors proposed a possible effect due to the over-expansion of the polar coronal hole at higher latitudes. In addition, within the fourth range, field magnitude fluctuations radially decrease less rapidly than the fluctuations of the components, but do not show significant latitudinal variations. Finally, the smaller spectral index reveals that the high frequency range of the field magnitude spectrum shows a flattening. The same authors investigated the anisotropy of these fluctuations as a function of radial and latitudinal excursion. Their results, reported in Figure 59, show that, at 2.5 AU, the lowest compressibility is recorded within the hourly frequency band (third and part of the fourth band), which has been recognized as the most Alfvénic frequency range. The anisotropy of the components confirms that the power associated with the transverse components is larger than that associated with the radial one, and this difference slightly tends to decrease at higher wave numbers. (a) Scale dependence of power anisotropy at 2.5 AU plotted as the log10 of the ratio of B R (solid circles), B T (triangles), B N (diamonds), and |B| (squares) to the trace of the spectral matrix; (b) the radial, and (c) latitudinal behavior of the same values, respectively. Image reproduced by permission from Horbury and Balogh (2001), copyright by AGU. As already shown by Horbury et al. 
(1995b), around the 5 min range, magnetic field fluctuations are transverse to the mean field direction the majority of the time. The minimum variance direction lies mainly within an angle of about 26° from the average background field direction and fluctuations are highly anisotropic, such that the ratio between perpendicular to parallel power is about 30. Since during the observations reported in Horbury and Balogh (2001) and Horbury and Tsurutani (2001) the mean field resulted to be radially oriented most of the time, the radial minimum variance direction at short time scales is an effect induced by larger scales behavior. Anyhow, radial and latitudinal anisotropy trends tend to disappear for higher frequencies. In the mean time, interesting enough, there is a strong radial increase of magnetic field compression (top panel of Figure 59), defined as the ratio between the power density associated with magnetic field intensity fluctuations and that associated with the fluctuations of the three components (Bavassano et al., (1982a; Bruno and Bavassano, (1991). The attempt to attribute this phenomenon to parametric decay of large amplitude Alfvén waves or dynamical interactions between adjacent flux tubes or interstellar pick-up ions was not satisfactory in all cases. Comparing high latitude with low latitude results for high speed streams, Horbury and Balogh (2001) found remarkable good agreement between observations by Ulysses at 2.5 AU and by Helios at 0.7 AU. In particular, Figure 60 shows Ulysses and Helios 1 spectra projected to 1 AU for comparison. It is interesting to notice that the spectral slope of the spectrum of the components for Helios 1 is slightly higher than that of Ulysses, suggesting a slower radial evolution of turbulence in the polar wind (Bruno, (1992; Bruno and Bavassano, (1992). However, the faster spectral evolution at low latitudes does not lead to strong differences between the spectra. Power spectra of magnetic field components (solid circles) and magnitude (open squares) from Ulysses (solid line) and Helios 1 (dashed line). Spectra have been extrapolated to 1 AU using radial trends in power scalings estimated from Ulysses between 1.4 and 4.1 AU and Helios between 0.3 and 1 AU. Image reproduced by permission from Horbury and Balogh (2001), copyright by AGU. 4.2 Polar turbulence studied via Elsässer variables Goldstein et al. (1995a) for the first time showed a spectral analysis of Ulysses observations based on Elsässer variables during two different time intervals, at 4 AU and close to −40°, and at 2 AU and around the maximum southern pass, as shown in Figure 61. Comparing the two Ulysses observations it clearly appears that the spectrum closer to the Sun is less evolved than the spectrum measured farther out, as will be confirmed by the next Figure 62, where these authors reported the normalized cross-helicity and the Alfvén ratio for the two intervals. Moreover, following these authors, the comparison between Helios spectra at 0.3 AU and Ulysses at 2 and 4 AU suggests that the radial scaling of e+ at the low frequency end of the spectrum follows the WKB prediction of 1/r decrease (Heinemann and Olbert, 1980). However, the selected time interval for Helios s/c was characterized by rather slow wind taken during the rising phase the solar cycle, two conditions which greatly differ from those referring to Ulysses data. As a consequence, comparing Helios results with Ulysses results obtained within the fast polar wind might be misleading. 
It would be better to choose Helios observations within high speed co-rotating streams, which much better resemble solar wind conditions at high latitude. Anyhow, results relative to the normalized cross-helicity σc (see Figure 62) clearly show high values of σc, around 0.8, which we normally observe in the ecliptic at much shorter heliocentric distances (Tu and Marsch, 1995a). A possible radial effect would be responsible for the depleted level of σc at 4 AU. Moreover, a strong anisotropy can also be seen for frequencies between 10^−6 and 10^−5 Hz, with the transverse σc much larger than the radial one. This anisotropy is somewhat lost during the expansion to 4 AU. The Alfvén ratio (bottom panels of Figure 62) has values around 0.5 for frequencies higher than roughly 10^−5 Hz, with little evolution between 2 and 4 AU. This is a result similar to what was originally obtained in the ecliptic at about 1 AU (Martin et al., 1973; Belcher and Solodyna, 1975; Solodyna et al., 1977; Neugebauer et al., 1984; Bruno et al., 1985; Marsch and Tu, 1990a; Roberts et al., 1990). The low frequency extension of rA⊥ together with σc⊥, where the subscript ⊥ indicates that these quantities are calculated from the transverse components only, was interpreted by the authors as due to the sampling of Alfvénic features in longitude rather than to a real presence of Alfvénic fluctuations. However, by the time Ulysses reaches 4 AU, σc⊥ has strongly decreased as expected while rA⊥ gets closer to 1, making the situation less clear. Anyhow, these results suggest that the situation at 2 AU and, even more so at 4 AU, can be considered as an evolution of what Helios 2 recorded in the ecliptic at shorter heliocentric distances. Ulysses observations at 2 AU resemble the turbulence conditions observed by Helios at 0.9 AU more than those at 0.3 AU. Trace of e+ (solid line) and e− (dash-dotted line) power spectra. The central and right panels refer to Ulysses observations at 2 and 4 AU, respectively, when Ulysses was embedded in the fast southern polar wind during 1993 – 1994. The leftmost panel refers to Helios observations during 1978 at 0.3 AU. Image reproduced by permission from Goldstein et al. (1995a), copyright by AGU. Normalized cross-helicity and Alfvén ratio at 2 and 4 AU, as observed by Ulysses at −80° and −40° latitude, respectively. Image reproduced by permission from Goldstein et al. (1995a), copyright by AGU. Bavassano et al. (2000a) studied in detail the evolution of the power e+ and e− associated with outward δz+ and inward δz− Alfvénic fluctuations, respectively. The study referred to the polar regions, during the wind expansion between 1.4 and 4.3 AU. These authors analyzed 1 h variances of δz± and found two different regimes, as shown in Figure 63. Inside 2.5 AU outward modes e+ decrease faster than inward modes e−, in agreement with previous ecliptic observations performed within the trailing edge of co-rotating fast streams (Bruno and Bavassano, 1991; Tu and Marsch, 1990b; Grappin et al., 1989). Beyond this distance, the radial gradient of e− becomes steeper and steeper while that of e+ remains approximately unchanged. This change in e− is rather fast and both species keep declining at the same rate beyond 2.5 AU. The radial dependence of e+ between r^−1.39 and r^−1.48, reported by Bavassano et al. (2000a), indicates a radial decay faster than the r^−1 predicted by the WKB approximation. This is in agreement with the analysis performed by Forsyth et al. (1996) using magnetic field observations only.
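For reference, the quantities compared in these studies follow directly from the Elsässer variables z± = v ± b (with b the magnetic field in Alfvén units): up to normalization conventions, e± = ⟨|δz±|²⟩, the normalized cross-helicity σc = (e+ − e−)/(e+ + e−), the Elsässer ratio rE = e−/e+ and the Alfvén ratio rA = e_v/e_b. The short sketch below computes them from generic velocity and magnetic field fluctuation series; the data and function names are placeholders assumed for this example, not actual Ulysses or Helios products.

```python
import numpy as np

def elsasser_diagnostics(dv, db):
    """dv, db: (N, 3) arrays of velocity and magnetic-field fluctuations,
    with db already expressed in Alfven units (km/s)."""
    zp, zm = dv + db, dv - db                 # Elsasser variables z+ and z-
    ep = np.mean(np.sum(zp**2, axis=1))       # e+
    em = np.mean(np.sum(zm**2, axis=1))       # e-
    ev = np.mean(np.sum(dv**2, axis=1))       # kinetic energy of fluctuations
    eb = np.mean(np.sum(db**2, axis=1))       # magnetic energy of fluctuations
    return {"sigma_c": (ep - em) / (ep + em),  # normalized cross-helicity
            "r_E": em / ep,                    # Elsasser ratio
            "r_A": ev / eb}                    # Alfven ratio
# note: which of z+/z- is "outward" depends on the magnetic sector polarity

# illustrative example: mostly correlated (Alfvenic) fluctuations plus weak noise
rng = np.random.default_rng(2)
db = rng.normal(0, 20, (1000, 3))             # km/s (Alfven units)
dv = db + rng.normal(0, 8, (1000, 3))         # dv ~ +db, so |sigma_c| close to 1
print(elsasser_diagnostics(dv, db))
```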
Left panel: values of the hourly variance of δz± (i.e., e±) vs. heliocentric distance, as observed by Ulysses. Helios observations are shown for comparison and appear to be in good agreement. Right panel: Elsässer ratio (top) and Alfvén ratio (bottom) plotted vs. radial distance while Ulysses is embedded in the polar wind. Image reproduced by permission from Bavassano et al. (2000a), copyright by AGU. This different radial behavior is readily seen in the radial plot of the Elsässer ratio rE shown in the top panel of the right column of Figure 63. Before 2.5 AU this ratio grows continuously, reaching about 0.5 near 2.5 AU. Beyond this region, since the radial gradient of the inward and outward components is approximately the same, rE stabilizes around 0.5. The Alfvén ratio rA also shows a clear radial dependence that stops at about the same limiting distance of 2.5 AU. In this case, rA steadily decreases from ~ 0.4 at 1.4 AU to ~ 0.25 at 2.5 AU, slightly fluctuating around this value for larger distances. A different interpretation of these results was offered by Grappin (2002). For this author, since Ulysses has not explored the whole three-dimensional heliosphere, solar wind parameters may experience different dependencies on latitude and distance that would nevertheless produce the same variation along Ulysses' trajectory as the radial dependence claimed in Bavassano's works. Another interesting feature observed in polar turbulence is unraveled by Figure 64 from Bavassano et al. (1998, 2000b). The plot shows 2D histograms of normalized cross-helicity and normalized residual energy (see Appendix B.3.1 for definitions) for different heliospheric regions (ecliptic wind, mid-latitude wind with strong velocity gradients, polar wind). A predominance of outward fluctuations (positive values of σc) and of magnetic fluctuations (negative values of σr) seems to be a general feature. It turns out that the most Alfvénic region is the one at high latitude and shorter heliocentric distances. However, in all the panels there is always a relative peak at σc ≃ 0 and σr ≃ −1, which might well be due to magnetic structures like the MFDT found by Tu and Marsch (1991) in the ecliptic. In a subsequent paper, Bavassano et al. (2002a) tested whether the radial dependence observed in e± was to be completely ascribed to the radial expansion of the wind, or whether possible latitudinal dependencies also contributed to the turbulence evolution in the polar wind. As already discussed in the previous section, Horbury and Balogh (2001), using Ulysses data from the northern polar pass, evaluated the dependence of magnetic field power levels on solar distance and latitude using a multiple regression analysis based on Equation (60). In the Alfvénic range, the latitudinal coefficient "C" for power in field components was appreciably different from 0 (around 0.3). However, this analysis was limited to magnetic field fluctuations alone and cannot be transferred sic et simpliciter to Alfvénic turbulence. In their analysis, Bavassano et al. (2002b) used the first southern and northern polar passes and removed from their dataset all intervals with large gradients in plasma velocity, and/or plasma density, and/or magnetic field magnitude, as already done in Bavassano et al. (2000a).
As a matter of fact, the use of Elsässer variables (see Appendix B.3.1) instead of the magnetic field, and of selected data samples, leads to very small values of the latitudinal coefficient, as shown in Figure 65, where different contributions are plotted with different colors and where the top panel refers to the same dataset used by Horbury and Balogh (2001), while the bottom panel refers to a dataset comprising both south and north passes, free of strong compressive events (Bavassano et al., 2000a). Moreover, the latitudinal effect appears to be very weak also for the data sample used by Horbury and Balogh (2001), although this is the sample with the largest value of the "C" coefficient. Results from the multiple regression analysis showing the radial and latitudinal dependence of the power e+ associated with outward modes (see Appendix B.3.1). The top panel refers to the same dataset used by Horbury and Balogh (2001). The bottom panel refers to a dataset comprising both south and north passes, free of strong compressive events (Bavassano et al., 2000a). Values of e+ have been normalized to the value e+_0 assumed by this parameter at 1.4 AU, the closest approach to the Sun. The black line is the total regression, the blue line is the latitudinal contribution and the red line is the radial contribution. Image reproduced by permission from Bavassano et al. (2002a), copyright by AGU. A further argument in favor of radial vs. latitudinal dependence is represented by the comparison of the radial gradient of e+ in different regions, in the ecliptic and in the polar wind. These results, shown in Figure 66, provide the radial slopes for e+ (red squares) and e− (blue diamonds) in different regions. The first three columns (labeled EQ) summarize ecliptic results obtained with different values of an upper limit (TBN) for relative fluctuations of density and magnetic intensity. The last two columns (labeled POL) refer to the results for polar turbulence (north and south passes) outside and inside 2.6 AU, respectively. A general agreement exists between slopes in the ecliptic and in the polar wind with no significant role left for latitude, the only exception being e− in the region inside 2.6 AU. The behavior of the inward component cannot be explained by a simple power law over the range of distances explored by Ulysses. Moreover, a possible latitudinal effect has been clearly rejected by the results from a multiple regression analysis performed by Bavassano et al. (2002a), similar to that reported above for e+. e+ (red) and e− (blue) radial gradient for different latitudinal regions of the solar wind. The first three columns, labeled EQ, refer to ecliptic observations obtained with different values of the upper limit of TBN, defined as the relative fluctuations of density and magnetic intensity. The last two columns, labeled POL, refer to observations of polar turbulence outside and inside 2.6 AU, respectively. Image reproduced by permission from Bavassano et al. (2001), copyright by AGU.

5 Numerical Simulations

Numerical simulations currently represent one of the main sources of information about the non-linear evolution of fluid flows. Present-day supercomputers are now powerful enough to simulate the equations (NS or MHD) that describe turbulent flows with Reynolds numbers of the order of 10^4 in two-dimensional configurations, or 10^3 in three-dimensional ones.
Of course, we are far from achieving realistic values, but we are now able to investigate turbulence with an inertial range extending over more than one decade. Rather, the main source of difficulty in getting results from numerical simulations is the fact that they are made under some obvious constraints (boundary conditions, equations to be simulated, etc.), mainly dictated by the limited physical description we are able to adopt, compared with the extreme richness of the phenomena involved: numerical simulations, even in standard conditions, are used tout court as models for solar wind behavior. Perhaps the only exception, to our knowledge, is the attempt to describe the effects of the solar wind expansion on turbulence evolution, like, for example, in the papers by Velli et al. (1989, 1990) and Hellinger and Trávníček (2008). Even with this far too pessimistic point of view, used here solely as a few words of caution, simulations in some cases were able to reproduce some phenomena observed in the solar wind. Nevertheless, numerical simulations have been playing a key role, and will continue to do so, in our quest to understand turbulent flows. Numerical simulations allow us to obtain information that cannot be obtained in the laboratory. For example, high resolution numerical simulations provide information at every point of a grid and, at a number of times, about basic vector quantities and their derivatives. The number of degrees of freedom required to resolve the smallest scales is proportional to a power of the Reynolds number, say Re^{9/4}, although the dynamically relevant number of modes may be much smaller. One of the main challenges remaining is therefore how to handle and analyze the huge data files produced by large simulations (of the order of terabytes). A large number of papers on computer simulations of MHD turbulence have appeared in the literature. The interested reader can look at the book by Biskamp (1993) and the reviews by Pouquet (1993, 1996).

5.1 Local production of Alfvénic turbulence in the ecliptic

The discovery of the strong correlation between velocity and magnetic field fluctuations has represented the motivation for some MHD numerical simulations, aimed at confirming the conjecture by Dobrowolny et al. (1980b). The high level of correlation seems to be due to a kind of self-organization (dynamic alignment) of MHD turbulence, generated by the natural evolution of MHD towards the strongest attractive fixed point of the equations (Ting et al., 1986; Carbone and Veltri, 1987, 1992). Numerical simulations (Carbone and Veltri, 1992; Ting et al., 1986) confirmed this conjecture: MHD turbulence can spontaneously tend towards a state where the correlation increases, that is, the quantity σc = 2Hc/E, where Hc is the cross-helicity and E the total energy of the flow (see Appendix B.1), tends to be maximal. The picture of the evolution of incompressible MHD turbulence that emerges is rather neat, but solar wind turbulence displays a more complicated behavior. In particular, as we have reported above, observations seem to indicate that the solar wind evolves in the opposite way. The correlation is high near the Sun; at larger radial distances, from 1 to 10 AU, the correlation becomes progressively lower, while the level of fluctuations in mass density and magnetic field intensity increases.
What is more difficult to understand is why correlation is progressively destroyed in the solar wind, while the natural evolution of MHD is towards a state of maximal normalized cross-helicity. A possible solution can be found in the fact that solar wind is neither incompressible nor statistically homogeneous, and some efforts to tentatively take into account more sophisticated effects have been made. A mechanism, responsible for the radial evolution of turbulence, was suggested by Roberts and Goldstein (1988); Goldstein et al. (1989); and Roberts et al. (1991, (1992) and was based on velocity shear generation. The suggestion to adopt such a mechanism came from a detailed analysis made by Roberts et al. (1987a,b) of Helios and Voyager interplanetary observations of the radial evolution of the normalized cross-helicity σc at different time scales. Moreover, Voyager's observations showed that plasma regions, which had not experienced dynamical interactions with neighboring plasma, kept the Alfvénic character of the fluctuations at distances as far as 8 AU (Roberts et al., (1987b). In particular, the vicinity of Helios trajectory to the interplanetary current sheet, characterized by low velocity flow, suggested Roberts et al. (1991) to include in his simulations a narrow low speed flow surrounded by two high speed flows. The idea was to mimic the slow, equatorial solar wind between north and south fast polar wind. Magnetic field profile and velocity shear were reconstructed using the six lowest Z± Fourier modes as shown in Figure 67. An initial population of purely outward propagating Alfvénic fluctuations (z+) was added at large k and was characterized by a spectral slope of k−1. No inward modes were present in the same range. Results of Figure 67 show that the time evolution of z+ spectrum is quite rapid at the beginning, towards a steeper spectrum, and slows down successively. At the same time, z− modes are created by the generation mechanism at higher and higher k but, along a Kolmogorov-type slope k−5/3. These results, although obtained from simulations performed using 2D incompressible spectral and pseudo-spectral codes, with fairly small Reynolds number of Re ≃ 200, were similar to the spectral evolution observed in the solar wind (Marsch and Tu, (1990a). Moreover, spatial averages across the simulation box revealed a strong cross-helicity depletion right across the slow wind, representing the heliospheric current sheet. However, magnetic field inversions and even relatively small velocity shears would largely affect an initially high Alfvénic flow (Roberts et al., (1992). However, Bavassano and Bruno (1992) studied an interaction region, repeatedly observed between 0.3 and 0.9 AU, characterized by a large velocity shear and previously thought to be a good candidate for shear generation (Bavassano and Bruno, (1989). They concluded that, even in the hypothesis of a very fast growth of the instability, inward modes would not have had enough time to fill up the whole region as observed by Helios 2. The above simulations by Roberts et al. (1991) were successively implemented with a com- pressive pseudo-spectral code (Ghosh and Matthaeus, (1990) which provided evidence that, during this turbulence evolution, clear correlations between magnetic field magnitude and density fluctuations, and between z− and density fluctuations should arise. However, such a clear correlation, by-product of the non-linear evolution, was not found in solar wind data (Marsch and Tu, (1993b; Bruno et al., (1996). 
Moreover, their results did not show the flattening of the e− spectrum at higher frequency, as observed by Helios (Tu et al., 1989b). As a consequence, velocity shear alone cannot explain the whole phenomenon; other mechanisms must also play a relevant role in the evolution of interplanetary turbulence. Time evolution of the power density spectra of z+ and z− showing the turbulent evolution of the spectra due to velocity shear generation (from Roberts et al., 1991). Compressible numerical simulations have been performed by Veltri et al. (1992) and Malara et al. (1996, 2000), which invoked the interactions between small scale waves and large scale magnetic field gradients and the parametric instability as characteristic effects to reduce correlations. In a compressible, statistically inhomogeneous medium such as the heliosphere, there are many processes which tend to destroy the natural evolution toward a maximal correlation, typical of standard MHD. In such a medium an Alfvén wave is subject to the parametric decay instability (Viñas and Goldstein, 1991; Del Zanna et al., 2001; Del Zanna, 2001), which means that the mother wave decays into two modes: i) a compressive mode that dissipates energy because of the steepening effect, and ii) a backscattered Alfvénic mode with lower amplitude and frequency. Malara et al. (1996) showed that in a compressible medium the correlation between the velocity and the magnetic field fluctuations is reduced because of the generation of backward propagating Alfvénic fluctuations, and of a compressive component of turbulence, characterized by density fluctuations δρ ≠ 0 and magnetic intensity fluctuations δ|B| ≠ 0. From a technical point of view it is worthwhile to remark that, when a large scale field which varies over a narrow region is introduced (typically a tanh-like field), periodic boundary conditions should be used with some care. Roberts et al. (1991, 1992) used a double shear layer, while Malara et al. (1992) introduced an interesting numerical technique, based on gluing together two simulation boxes and on a Chebyshev expansion, in order to maintain a single shear layer, i.e., non-periodic boundary conditions, and an increased resolution where the shear layer exists. Grappin et al. (1992) observed that the solar wind expansion increases the lengths normal to the radial direction, thus producing an effect similar to a kind of inverse energy cascade. This effect might be able to compete with the turbulent cascade which transfers energy to small scales, thus stopping the non-linear interactions. In the absence of non-linear interactions, the natural tendency towards an increase of σc is stopped. These inferences have been corroborated by further studies like those by Grappin and Velli (1996) and Goldstein and Roberts (1999). A numerical model treating the evolution of e+ and e−, including parametric decay of e+, was presented by Marsch and Tu (1993a). The parametric decay source term was added in order to reproduce the decreasing cross-helicity observed during the wind expansion. As a matter of fact, the cascade process, when spectral equations for both e+ and e− are included and solved self-consistently, can only steepen the spectra at high frequency. Results from this model, shown in Figure 68, partially reproduce the observed evolution of the normalized cross-helicity.
While the radial evolution of e+ is correctly reproduced, the behavior of e− shows an over-production of inward modes between 0.6 and 0.8 AU, probably due to an overestimation of the strength of the pump wave. However, the model is applied to the situation observed by Helios at 0.3 AU, where a rather flat e− spectrum already exists. Radial evolution of e+ and e− spectra obtained from the Marsch and Tu (1993a) model, in which a parametric decay source term was added to Tu's model (Tu et al., 1984), which was, in turn, extended by including the spectrum equations for both e+ and e− and solving them self-consistently. Image reproduced by permission from Marsch and Tu (1993a), copyright by AGU.

5.2 Local production of Alfvénic turbulence at high latitude

An interesting solution to the radial behavior of the minority modes might be represented by local generation mechanisms, like parametric decay (Malara et al., 2001a; Del Zanna et al., 2001), which might saturate and be inhibited beyond 2.5 AU. The parametric instability has been studied in a variety of situations depending on the value of the plasma β (among others, Sagdeev and Galeev, 1969; Goldstein, 1978; Hoshino and Goldstein, 1989; Malara and Velli, 1996). Malara et al. (2000) and Del Zanna et al. (2001) recently studied the non-linear growth of the parametric decay of a broadband Alfvén wave, and showed that the final state strongly depends on the value of the plasma β (thermal to magnetic pressure ratio). For β < 1 the instability completely destroys the initial Alfvénic correlation. For β ~ 1 (a value close to solar wind conditions) the instability is not able to go beyond some limit in the disruption of the initial correlation between velocity and magnetic field fluctuations, and the final state is σc ~ 0.5, as observed in the solar wind (see Section 4.2). These authors solved numerically the fully compressible, non-linear MHD equations in a one-dimensional configuration using a pseudo-spectral numerical code. The simulation starts with a non-monochromatic, large amplitude Alfvén wave polarized in the yz plane, propagating in a uniform background magnetic field. The instability was then triggered by adding some noise, of the order of 10^−6, to the initial density level. During the first part of the evolution of the instability the amplitude of the unstable modes is small and, consequently, non-linear couplings are negligible. A subsequent exponential growth, predicted by the linear theory, increases the level of both e− and density compressive fluctuations. During the second part of the development of the instability, non-linear couplings are no longer negligible and their effect is first to slow down the exponential growth of the unstable modes and then to saturate the instability at a level that depends on the value of the plasma β. Spectra of e± are shown in Figure 69 for different times during the development of the instability. At the beginning the spectrum of the mother wave is peaked at k = 10, and before the instability saturation (t ≤ 35) the back-scattered e− and the density fluctuations e_ρ are peaked at k = 1 and k = 11, respectively. After saturation, as the run goes on, the spectrum of e− approaches that of e+ towards a common final state characterized by a Kolmogorov-like spectrum, with e+ slightly larger than e−. The behavior of outward and inward modes, density and magnetic magnitude variances and the normalized cross-helicity σc is summarized in the left column of Figure 70.
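During the linear phase just described, the energy of the unstable daughter modes grows as e−(t) ∝ exp(2γt), so the growth rate γ can be read off as half the slope of ln e− versus time before saturation. The following sketch, on synthetic data, illustrates this simple diagnostic; the growth rate value and the saturation level are arbitrary assumptions made for the example, not results from the simulations discussed here.

```python
import numpy as np

def growth_rate(time, e_minus, t_linear):
    """Half the slope of ln(e-) vs time, restricted to the linear (exponential) phase."""
    mask = time < t_linear
    slope = np.polyfit(time[mask], np.log(e_minus[mask]), 1)[0]
    return 0.5 * slope            # energy grows as exp(2*gamma*t)

# toy time history: exponential growth followed by saturation
t = np.linspace(0, 60, 600)
gamma_true, e_sat = 0.25, 1.0e-2
e_minus = e_sat * np.exp(2*gamma_true*(t - 35))
e_minus = np.minimum(e_minus, e_sat) + 1e-12       # cap at the saturation level
print("estimated growth rate:", round(growth_rate(t, e_minus, t_linear=30.0), 3))
```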
The evolution of σc, when the instability reaches saturation, can be qualitatively compared with Ulysses observations (courtesy of B. Bavassano) in the right panel of the same figure, which shows a similar trend. Obviously, in making this comparison one has to take into account that this model has strong limitations, like the presence of a peak in e+ not observed in real polar turbulence. Another limitation, partly due to the dissipation that has to be included in the model, is that the spectra obtained at the end of the instability growth are steeper than those observed in the solar wind. Finally, a further limitation is represented by the fact that this code is 1D. However, although for an incompressible 1D simulation we do not expect to have turbulence development, in this case, since parametric decay is based on compressive phenomena, an energy transfer along the spectrum might be at work. In addition, Umeki and Terasawa (1992), studying the non-linear evolution of a large-amplitude incoherent Alfvén wave via 1D magnetohydrodynamic simulations, reported that while in a low-beta plasma (β ≈ 0.2) the growth of backscattered Alfvén waves, which are opposite in helicity and propagation direction to the original Alfvén waves, could be clearly detected, in a high-beta plasma (β ≈ 2) there was no production of backscattered Alfvén waves. Consequently, although the numerical results obtained by Malara et al. (2001b) are very encouraging, the high-beta plasma (β ≈ 2), characteristic of the fast polar wind at solar minimum, argues against a relevant role of the parametric instability in developing solar wind turbulence as observed by Ulysses. However, these simulations do remain an important step forward towards the understanding of turbulent evolution in the polar wind, until other mechanisms are found to be active enough to account for the observations shown in Figure 63. Spectra of e+ (thick line), e− (dashed line), and e_ρ (thin line) are shown for 6 different times during the development of the instability. For t ≥ 50 a typical Kolmogorov slope appears. These results refer to β = 1. Image reproduced by permission from Malara et al. (2001b), copyright by EGU. Top left panel: time evolution of e+ (solid line) and e− (dashed line). Middle left panel: density (solid line) and magnetic magnitude (dashed line) variances. Bottom left panel: normalized cross-helicity σc. Right panel: Ulysses observations of the σc radial evolution within the polar wind (left column is from Malara et al., 2001b; the right panel is a courtesy of B. Bavassano).

6 Compressive Turbulence

The interplanetary medium is slightly compressive: magnetic field intensity and proton number density experience fluctuations over all scales, and the compression depends on both the scale and the nature of the wind. As a matter of fact, slow wind is generally more compressive than fast wind, as shown in Figure 71 where, following Bavassano et al. (1982a) and Bruno and Bavassano (1991), we report the ratio between the power density associated with magnetic field intensity fluctuations and that associated with the fluctuations of the three components. In addition, as already shown by Bavassano et al. (1982a), this parameter increases with heliocentric distance for both fast and slow wind, as shown in the bottom panel, where the ratio between the compression at 0.9 AU and that at 0.3 AU is generally greater than 1.
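The magnetic compression used in Figure 71 is simply the ratio of the power spectral density of |B| fluctuations to the trace of the spectral matrix of the components. A minimal sketch of how such a ratio can be obtained from a magnetic field time series is given below; it relies on scipy's Welch estimator and on synthetic data, so it illustrates the definition rather than reproducing the original analyses.

```python
import numpy as np
from scipy.signal import welch

def magnetic_compression(bx, by, bz, fs, nperseg=1024):
    """Ratio of the PSD of |B| fluctuations to the trace of the component PSDs."""
    bmag = np.sqrt(bx**2 + by**2 + bz**2)
    f, p_mag = welch(bmag, fs=fs, nperseg=nperseg, detrend="linear")
    trace = np.zeros_like(p_mag)
    for comp in (bx, by, bz):
        trace += welch(comp, fs=fs, nperseg=nperseg, detrend="linear")[1]
    return f[1:], p_mag[1:] / trace[1:]        # skip the zero-frequency bin

# illustrative synthetic field: large transverse (Alfvenic) fluctuations,
# weak fluctuations of the magnitude
rng = np.random.default_rng(3)
n, fs = 2**15, 1.0                              # samples, sampling frequency in Hz
bx = 5.0 + 0.2*rng.standard_normal(n)           # quasi-radial mean field, nT
by, bz = 2.0*rng.standard_normal((2, n))        # transverse fluctuations, nT
f, comp = magnetic_compression(bx, by, bz, fs)
print("median compression:", np.median(comp))   # well below 1: mostly incompressive
```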
It is also interesting to notice that within the Alfvénic fast wind, the lowest compression is observed in the middle frequency range, roughly between 10^−4 and 10^−3 Hz. On the other hand, this frequency range has already been recognized as the most Alfvénic one within the inner heliosphere (Bruno et al., 1996). As a matter of fact, it seems that high Alfvénicity is correlated with low compressibility of the medium (Bruno and Bavassano, 1991; Klein et al., 1993; Bruno and Bavassano, 1993), although compressibility is not the only cause for a low Alfvénicity (Roberts et al., 1991, 1992; Roberts, 1992). The radial dependence of the normalized number density fluctuations δn/n for the inner and outer heliosphere was studied by Grappin et al. (1990) and Roberts et al. (1987b) for the hourly frequency range, but no clear radial trend emerged from these studies. However, interestingly enough, Grappin et al. (1990) found that values of e− were closely associated with enhancements of δn/n on scales longer than 1 h. On the other hand, a spectral analysis of proton number density, magnetic field intensity, and proton temperature performed by Marsch and Tu (1990b) and Tu et al. (1991) in the inner heliosphere, separately for fast and slow wind (see Figure 72), showed that normalized spectra of the above parameters within slow wind were only marginally dependent on the radial distance. On the contrary, within fast wind, magnetic field and proton density normalized spectra showed not only a clear radial dependence but also similar levels of power for k < 4×10^−4 km^−1. For larger k these spectra show a flattening that becomes steeper for increasing distance, as was already found by Bavassano et al. (1982b) for magnetic field intensity. Normalized temperature spectra do not show any radial dependence, either in slow wind or in fast wind. The spectral index is around −5/3 for all the spectra in slow wind while, in fast wind, the spectral index is around −5/3 for k < 4×10^−4 km^−1 and slightly less steep for larger wave numbers.

6.1 On the nature of compressive turbulence

Considerable efforts, both theoretical and observational, have been made in order to disclose the nature of compressive fluctuations. It has been proposed (Montgomery et al., 1987; Matthaeus and Brown, 1988; Zank et al., 1990; Zank and Matthaeus, 1990; Matthaeus et al., 1991; Zank and Matthaeus, 1992) that most of the compressive fluctuations observed in the solar wind could be accounted for by the Nearly Incompressible (NI) model. Within the framework of this model, Montgomery et al. (1987) showed that a spectrum of small scale density fluctuations follows a k^−5/3 scaling when the spectrum of magnetic field fluctuations follows the same scaling. Moreover, it was shown (Matthaeus and Brown, 1988; Zank and Matthaeus, 1992) that if the compressible MHD equations are expanded in terms of small turbulent sonic Mach numbers, pressure balanced structures, Alfvénic and magnetosonic fluctuations naturally arise as solutions and, in particular, the RMS of small density fluctuations would scale like M^2, where M = δu/C_s is the turbulent sonic Mach number, δu the RMS of velocity fluctuations and C_s the sound speed. In addition, if heat conduction is allowed in the approximation, temperature fluctuations dominate over magnetic and density fluctuations; temperature and density are anticorrelated and would scale like M.
However, in spite of some examples supporting this theory (Matthaeus et al., 1991, reported that 13% of cases satisfied the requirements of the NI theory), wider statistical studies, conducted by Tu and Marsch (1994), Bavassano et al. (1995) and Bavassano and Bruno (1995), showed that NI theory is not applicable sic et simpliciter to the solar wind. The reason might lie in the fact that the interplanetary medium is highly inhomogeneous because of the presence of an underlying structure convected by the wind. As a matter of fact, Thieme et al. (1989) showed evidence for the presence of time intervals characterized by a clear anti-correlation between kinetic pressure and magnetic pressure while the total pressure remained fairly constant. These pressure balance structures were first observed by Burlaga and Ogilvie (1970) for a time scale of roughly one to two hours. Later on, Vellante and Lazarus (1987) reported strong evidence for anti-correlation between field intensity and proton density, and between plasma and field pressure, on time scales up to 10 h. The anti-correlation between kinetic and magnetic pressure is usually interpreted as indicative of the presence of a pressure balance structure, since slow magnetosonic modes are readily damped (Barnes, 1979). The first two rows show magnetic field compression (see text for definition) for fast (left column) and slow (right column) wind at 0.3 AU (upper row) and 0.9 AU (middle row). The bottom panels show the ratio between compression at 0.9 AU and compression at 0.3 AU. This ratio is generally greater than 1 for both fast and slow wind. From left to right: normalized spectra of number density, magnetic field intensity fluctuations (adapted from Marsch and Tu, 1990b), and proton temperature (adapted from Tu et al., 1991). Different lines refer to different heliocentric distances for both slow and fast wind. These features, observed also in their dataset, were taken by Thieme et al. (1989) as evidence of stationary spatial structures which were supposed to be remnants of coronal structures convected by the wind. Different values assumed by plasma and field parameters within each structure were interpreted as a signature characterizing that particular structure and not destroyed during the expansion. These intervals, identifiable in Figure 73 by vertical dashed lines, were characterized by pressure balance and a clear anti-correlation between magnetic field intensity and temperature. These structures were finally related to the fine ray-like structures or plumes associated with the underlying chromospheric network and interpreted as the signature of interplanetary flow tubes. The estimated dimension of these structures, back projected onto the Sun, suggested that they over-expand in the solar wind. In addition, Grappin et al. (2000) simulated the evolution of Alfvén waves propagating within such pressure equilibrium ray structures in the framework of a global Eulerian solar wind approach and found that the compressive modes in these simulations are very much reduced within the ray structures, which indeed corresponds to the observational findings (Buttighoffer et al., 1995, 1999).
From top to bottom: field intensity |B|; proton and alpha particle velocities u_p and u_α; corrected proton velocity u_pc = u_p − δu_A, where u_A is the Alfvén speed; proton and alpha number densities n_p and n_α; proton and alpha temperatures T_p and T_α; kinetic and magnetic pressures P_k and P_m, which the authors call P_gas and P_mag; total pressure P_tot and β = P_gas/P_mag (from Tu and Marsch, 1995a). The idea of filamentary structures in the solar wind dates back to Parker (1964), followed by other authors like McCracken and Ness (1966) and Siscoe et al. (1968), and has more recently been considered again in the literature with new results (see Section 10). These interplanetary flow tubes would be of different sizes, ranging from minutes to several hours, and would be separated from each other by tangential discontinuities and characterized by different values of plasma parameters and a different magnetic field orientation and intensity. This kind of scenario, because of some similarity to a bunch of tangled, smoking "spaghetti" lifted by a fork, was then named "spaghetti-model". A spectral analysis performed by Marsch and Tu (1993a) in the frequency range between 6×10^−6 and 6×10^−3 Hz showed that the nature and intensity of compressive fluctuations systematically vary with the stream structure. They concluded that compressive fluctuations are a complex superposition of magnetoacoustic fluctuations and pressure balance structures whose origin might be local, due to stream dynamical interaction, or of coronal origin, related to the flow tube structure. These results are shown in Figure 74, where the correlation coefficient between number density n and total pressure P_tot (indicated with the symbol p_T in the figure), and between kinetic pressure P_k and magnetic pressure P_m (indicated with the symbols p_k and p_b, respectively), is plotted for both Helios s/c relative to fast wind. Positive values of both correlation coefficients C(n, p_T) and C(p_k, p_b) identify magnetosonic waves, while positive values of C(n, p_T) and negative values of C(p_k, p_b) identify pressure balance structures. The purest examples of each category are located at the upper left and right corners. Correlation coefficient between number density n and total pressure p_T plotted vs. the correlation coefficient between kinetic pressure and magnetic pressure, for both Helios s/c relative to fast wind. Image reproduced by permission from Marsch and Tu (1993b). Following these observations, Tu and Marsch (1994) proposed a model in which fluctuations in temperature, density, and field directly derive from an ensemble of small amplitude pressure balanced structures and small amplitude fast perpendicular magnetosonic waves. The latter should be generated by the dynamical interaction between adjacent flow tubes due to the expansion and, eventually, would also experience a non-linear cascade process to smaller scales. This model was able to reproduce most of the correlations described by Marsch and Tu (1993a) for fast wind. Later on, Bavassano et al. (1996a) tried to characterize compressive fluctuations in terms of their polytropic index, which turned out to be a useful tool for studying small scale variations in the solar wind. These authors followed the definition of polytropic fluid given by Chandrasekhar (1967): "a polytropic change is a quasi-static change of state carried out in such a way that the specific heat remains constant (at some prescribed value) during the entire process".
For such a variation of state the adiabatic laws are still valid, provided that the adiabatic index γ is replaced by a new adiabatic index γ' = (c_p − c)/(c_v − c), where c is the specific heat of the polytropic variation, and c_p and c_v are the specific heats at constant pressure and constant volume, respectively. This similarity is lost if we adopt the definition given by Courant and Friedrichs (1976), for whom a fluid is polytropic if its internal energy is proportional to the temperature. Since no restriction applies to the specific heats, relations between temperature, density, and pressure do not have a simple form as in the Chandrasekhar approach (Zank and Matthaeus, 1991). Bavassano et al. (1996a) recovered the polytropic index from the relation between density n and temperature T changes at the selected scale, T n^(1−γ') = const., and used it to determine whether changes in density and temperature were isobaric (γ' = 0), isothermal (γ' = 1), adiabatic (γ' = γ), or isochoric (γ' = ∞). Although the role of the magnetic field was neglected, reliable conclusions could be obtained whenever the above relations between temperature and density were strikingly clear. These authors found intervals characterized by variations at constant thermal pressure P. They interpreted these intervals as a subset of total-pressure balanced structures where the equilibrium was assured by the thermal component only, perhaps tiny flow tubes like those described by Thieme et al. (1989) and Tu and Marsch (1994). Adiabatic changes were probably related to magnetosonic waves excited by contiguous flow tubes (Tu and Marsch, 1994). Proton temperature changes at almost constant density were preferentially found in fast wind, close to the Sun. These regions were characterized by remarkably stable values of B and N and by strong Alfvénic fluctuations (Bruno et al., 1985). They therefore suggested that these temperature changes could be remnants of thermal features already established at the base of the corona. In summary, the polytropic index offers a very simple way to identify basic properties of solar wind fluctuations, provided that the magnetic field does not play a major role.

6.2 Compressive turbulence in the polar wind

Compressive fluctuations in high latitude solar wind have been extensively studied by Bavassano et al. (2004), looking at the relationships between different solar wind parameters and comparing these results with the predictions of existing models. These authors indicated with N, P_m, P_k, and P_t the proton number density, the magnetic pressure, the kinetic pressure and the total pressure (P_t = P_m + P_k), respectively, and computed correlation coefficients ρ between these parameters. Figure 75 clearly shows that a pronounced positive correlation for N − P_t and a pronounced negative correlation for P_m − P_k are a constant feature of the observed compressive fluctuations. In particular, the correlation for N − P_t is especially strong within polar regions at small heliocentric distances. In mid-latitude regions the correlation weakens, while it almost disappears at low latitudes. In the case of P_m − P_k, the anticorrelation remains strong throughout the whole latitudinal excursion. For polar wind the anticorrelation appears to be less strong at small distances, just where the N − P_t correlation is highest.
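Correlation coefficients of this kind are straightforward to compute from time series of density and pressures. The sketch below shows one possible implementation of the classification criterion recalled earlier (positive C(N, P_t) with positive C(P_k, P_m) suggesting magnetosonic fluctuations, positive C(N, P_t) with negative C(P_k, P_m) suggesting pressure-balanced structures); the synthetic input data and the function name are assumptions made purely for illustration.

```python
import numpy as np

def classify_compressive(n, p_k, p_m):
    """Correlation-based classification of compressive fluctuations
    (magnetosonic waves vs. pressure-balanced structures)."""
    p_t = p_k + p_m                                   # total pressure
    c_npt = np.corrcoef(n, p_t)[0, 1]
    c_kpm = np.corrcoef(p_k, p_m)[0, 1]
    if c_npt > 0 and c_kpm > 0:
        label = "magnetosonic-like"
    elif c_npt > 0 and c_kpm < 0:
        label = "pressure-balance-like"
    else:
        label = "mixed / unclassified"
    return c_npt, c_kpm, label

# illustrative pressure-balanced interval: P_k and P_m anticorrelated,
# with a weak magnetosonic-like component keeping C(N, P_t) slightly positive
t = np.linspace(0, 20, 500)
rng = np.random.default_rng(4)
wave = 0.05*rng.standard_normal(500)              # weak compressive wave component
p_m = 1.0 + 0.2*np.sin(t) + wave                  # magnetic pressure
p_k = 1.0 - 0.2*np.sin(t) + wave                  # kinetic pressure (anticorrelated)
n = 0.5*p_k + 0.01*rng.standard_normal(500)       # density tracks kinetic pressure
print(classify_compressive(n, p_k, p_m))          # expected: pressure-balance-like
```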
The role played by density and temperature in the anticorrelation between magnetic and thermal pressures is investigated in Figure 76, where the magnetic field magnitude is directly compared with proton density and temperature. As regards the polar regions, a strong B-T anticorrelation is clearly apparent at all distances (right panel). For B-N, an anticorrelation tends to emerge as the solar distance increases. This means that the magnetic-thermal pressure anticorrelation is mostly due to an anticorrelation of the magnetic field fluctuations with respect to temperature fluctuations, rather than density fluctuations (see, e.g., Bavassano et al., 1996a,b). Outside polar regions the situation appears in part reversed, with a stronger role for the B-N anticorrelation. In Figure 77 scatter plots of total pressure vs. density fluctuations are used to test a model by Tu and Marsch (1994), based on the hypothesis that the compressive fluctuations observed in the solar wind are mainly due to a mixture of pressure-balanced structures (PBS) and fast magnetosonic waves (W). Waves can only contribute to total pressure fluctuations, while both waves and pressure-balanced structures may contribute to density fluctuations. A tunable parameter in the model is the relative PBS/W contribution to density fluctuations, α. Straight lines in Figure 77 indicate the model predictions for different values of α. It is easily seen that for all polar wind samples the great majority of experimental data fall in the α > 1 region. Thus, pressure-balanced structures appear to play a major role with respect to magnetosonic waves. This is a feature already observed by Helios in the ecliptic wind (Tu and Marsch, 1994), although in a less pronounced way. Different panels of Figure 77 refer to different heliocentric distances within the polar wind. Namely, going from P1 to P4 is equivalent to moving from 1.4 to 4 AU. A comparison between these panels indicates that the observed distribution tends to shift towards higher values of α (i.e., pressure-balanced structures become increasingly important), which probably is a radial distance effect. Histograms of ρ(N − P_t) and ρ(P_m − P_k) per solar rotation. The color bar on the left side indicates polar (red), mid-latitude (blue), and low latitude (green) phases. Moreover, universal time UT, heliocentric distance, and heliographic latitude are also indicated on the left side of the plot. Occurrence frequency is indicated by the color bar shown on the right hand side of the figure. Image reproduced by permission from Bavassano et al. (2004), copyright EGU. Finally, the dependence of the relative density fluctuations on the turbulent Mach number M (the ratio between the velocity fluctuation amplitude and the sound speed) is shown in Figure 78. The aim is to look for the presence, in the observed fluctuations, of nearly incompressible MHD behaviors. In the framework of the NI theory (Zank and Matthaeus, 1991, 1993) two different scalings for the relative density fluctuations are possible, as M or as M^2, depending on the role that thermal conduction effects may play in the plasma under study (namely a heat-fluctuation-dominated or a heat-fluctuation-modified behavior, respectively). These scalings are shown in Figure 78 as solid (for M) and dashed (for M^2) lines. It is clearly seen that for all the polar wind samples no clear trend emerges in the data. Thus, NI-MHD effects do not seem to play a relevant role in driving the polar wind fluctuations.
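A scaling test of this kind can be reduced to fitting the slope of log(δρ/ρ) against log(M): a slope near 1 would support the heat-fluctuation-dominated NI scaling, a slope near 2 the heat-fluctuation-modified one, and no well-defined slope neither. The snippet below is a minimal, hypothetical illustration of such a fit on synthetic points, not a reproduction of the Ulysses analysis.

```python
import numpy as np

def ni_scaling_exponent(mach, drho_over_rho):
    """Slope of log(relative density fluctuation) vs log(turbulent Mach number)."""
    slope, intercept = np.polyfit(np.log10(mach), np.log10(drho_over_rho), 1)
    return slope

# synthetic sample built with a quadratic (M^2) scaling plus scatter
rng = np.random.default_rng(5)
M = 10**rng.uniform(-1.5, -0.3, 300)                   # turbulent Mach numbers
drho = 0.3 * M**2 * 10**rng.normal(0, 0.1, 300)        # delta rho / rho
print("fitted exponent:", round(ni_scaling_exponent(M, drho), 2))   # close to 2
```

As noted above, the Ulysses polar samples show no such well-defined exponent.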
This lack of NI-MHD signatures confirms previous results obtained in the ecliptic by Helios in the inner heliosphere (Bavassano et al., 1995; Bavassano and Bruno, 1995) and by the Voyagers in the outer heliosphere (Matthaeus et al., 1991). It is worthy of note that, apart from the lack of NI trends, the experimental data from the Ulysses, Voyager, and Helios missions in all cases exhibit quite similar distributions. In other words, for different heliospheric regions, solar wind regimes, and solar activity conditions, the behavior of the compressive fluctuations in terms of relative density fluctuations and turbulent Mach numbers seems to be an almost invariant feature.

Figure 76: Solar rotation histograms of B−N and B−T in the same format as Figure 75. Image reproduced by permission from Bavassano et al. (2004), copyright EGU.

Figure 77: Scatter plots of the relative amplitudes of total pressure vs. density fluctuations for polar wind samples P1 to P4. Straight lines indicate the Tu and Marsch (1994) model predictions for different values of α, the relative PBS/W contribution to density fluctuations. Image reproduced by permission from Bavassano et al. (2004), copyright EGU.

Figure 78: Relative amplitude of density fluctuations vs. turbulent Mach number for the polar wind. Solid and dashed lines indicate the M and M² scalings, respectively. Image reproduced by permission from Bavassano et al. (2004), copyright EGU.

The above observations fully support the view that compressive fluctuations in the high latitude solar wind are a mixture of MHD modes and pressure-balanced structures. It should be recalled that previous studies (McComas et al., 1995, 1996; Reisenfeld et al., 1999) indicated a relevant presence of pressure-balanced structures at hourly scales. Moreover, nearly incompressible effects (see Section 6.1) do not seem to play any relevant role. Thus, polar observations do not show major differences when compared with ecliptic observations in fast wind, the only possible difference being a larger role of pressure-balanced structures.

6.3 The effect of compressive phenomena on Alfvénic correlations

A lack of δV − δB correlation does not strictly indicate a lack of Alfvénic fluctuations, since a superposition of outward and inward oriented fluctuations of the same amplitude would produce a very low correlation as well. In addition, the rather complicated scenario at the base of the corona, where both kinetic and magnetic phenomena contribute to the birth of the wind, suggests that the imprint of such a structured corona is carried away by the wind during its expansion. We would then expect solar wind fluctuations to be due not solely to the ubiquitous Alfvénic and other propagating MHD modes, but also to an underlying structure convected by the wind and not necessarily characterized by Alfvén-like correlations. Moreover, dynamical interactions between fast and slow wind, built up during the expansion, contribute to increase the compressibility of the medium. It has been suggested that disturbances of the mean magnetic field intensity and plasma density act destructively on the δV − δB correlation. Bruno and Bavassano (1993) analyzed the loss of the Alfvénic character of interplanetary fluctuations in the inner heliosphere within the low frequency part of the Alfvénic range, i.e., between 2 and 10 h.
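A rough way to quantify the Alfvénic character discussed here is to compute, over a moving window, the normalized cross-helicity σc and the component-wise δV − δB correlation coefficients. A minimal sketch, under the assumption that the magnetic fluctuations have already been converted to Alfvén units (arrays and names are illustrative):

import numpy as np

def alfvenicity(dv, db):
    # dv, db: (N, 3) fluctuation arrays, db already in Alfven units (km/s).
    e_plus  = np.mean(np.sum((dv + db) ** 2, axis=1))
    e_minus = np.mean(np.sum((dv - db) ** 2, axis=1))
    sigma_c = (e_plus - e_minus) / (e_plus + e_minus)     # normalized cross-helicity
    corr = [np.corrcoef(dv[:, i], db[:, i])[0, 1] for i in range(3)]
    return sigma_c, corr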
Figure 79, from the work of Bruno and Bavassano (1993), shows the wind speed profile, σc, the correlation coefficients, phase and coherence for the three components (see Appendix B.2.1), the angle between the magnetic field and velocity minimum variance directions, and the heliocentric distance. Magnetic field sectors were rectified (see Appendix B.3) and magnetic field and velocity components were rotated into the magnetic field minimum variance reference system (see Appendix D). Although the three components behave in a similar way, the most Alfvénic ones are the two components Y and Z transverse to the minimum variance component X. As a matter of fact, for an Alfvén mode we would expect a high δV − δB correlation, a phase close to zero for outward waves, and a high coherence. Moreover, it is rather clear that the most Alfvénic intervals are located within the trailing edges of high velocity streams. However, as the radial distance increases, the Alfvénic character of the fluctuations decreases and the angle Θ_bu increases. The same authors found that high values of Θ_bu are associated with low values of σc and correspond to the most compressive intervals. They concluded that the depletion of the Alfvénic character of the fluctuations, within the hourly frequency range, might be driven by the interaction with static structures or magnetosonic perturbations able to modify the homogeneity of the background medium on spatial scales comparable to the wavelength of the Alfvénic fluctuations. A subsequent paper by Klein et al. (1993) showed that the δV − δB decoupling increases with the plasma β, suggesting that in regions where the local magnetic field is less relevant, compressive events play a major role in this phenomenon.

Figure 79: Wind speed profile V and |σc|V are shown in the top panel. The lower three panels refer to the correlation coefficient, phase angle and coherence for the three components of the δV and δB fluctuations, respectively. The next panel indicates the angle between the magnetic field and velocity fluctuations minimum variance directions, and the bottom panel the heliocentric distance (from Bruno and Bavassano, 1993).

7 A Natural Wind Tunnel

The solar wind has been used as a wind tunnel by Burlaga who, at the beginning of the 1990s, started to investigate anomalous fluctuations (Burlaga, 1991a,b,c, 1995) as observed by measurements in the outer heliosphere by the Voyager spacecraft. In 1991, Marsch, in a review on solar wind turbulence given at the Solar Wind Seven conference, underlined the importance of investigating scaling laws in the solar wind, and we like to report his sentence: "The recent work by Burlaga (1991a,b) opens in my mind a very promising avenue to analyze and understand solar wind turbulence from a new theoretical vantage point. ...This approach may also be useful for MHD turbulence. Possible connections between intermittent turbulence and deterministic chaos have recently been investigated ...We are still waiting for applications of these modern concepts of chaos theory to solar wind MHD fluctuations." (cf. Marsch, 1992, p. 503). A few years later Carbone (1993) and, independently, Biskamp (1993) faced the question of anomalous scaling from a theoretical point of view. More than ten years later the investigation of the statistical mechanics of MHD turbulence on one side, and of low-frequency solar wind turbulence on the other, has produced a large number of papers, and is now mature enough to be tentatively presented in a more organic way.
7.1 Scaling exponents of structure functions

The phenomenology of turbulence developed by Kolmogorov (1941) deals with some statistical hypotheses for fluctuations. The famous footnote remark by Landau (Landau and Lifshitz, 1971) pointed out a defect of the Kolmogorov theory, namely the fact that the theory does not take proper account of spatial fluctuations of the local dissipation rate (Frisch, 1995). This led different authors to investigate the features related to scaling laws of fluctuations and, in particular, the departure from Kolmogorov's linear scaling of the structure functions (cf. Section 2.8). An up-to-date comprehensive review of these theoretical efforts can be found in the book by Frisch (1995). Here we are interested in understanding what we can learn from solar wind turbulence about the basic features of scaling laws for fluctuations. We use velocity and magnetic field time series, and we investigate the scaling behavior of the high-order moments of stochastic variables defined as variations of fields separated by a time interval τ. First of all, it is worthwhile to remark that scaling laws and, in particular, the exact relation (41) which defines the inertial range in fluid flows, are valid for longitudinal (streamwise) fluctuations. In common fluid flows the Kolmogorov linear scaling law is compared with the moments of longitudinal velocity differences. In the same way, for solar wind turbulence we investigate the scaling behavior of Δu_τ = u(t+τ) − u(t), where u(t) represents the component of the velocity field along the radial direction. As far as the magnetic differences Δb_τ = B(t+τ) − B(t) are concerned, we are free to make different choices and, in some sense, this is more interesting from an experimental point of view. We can use the reference system where B(t) represents the magnetic field projected along the radial direction, or the system where B(t) represents the magnetic field along the local background magnetic field, or where B(t) represents the field along the minimum variance direction. As a different case we can simply investigate the scaling behavior of the fluctuations of the magnetic field intensity. Let us consider the p-th moment of the absolute values of the velocity fluctuations R_p(τ) = 〈|Δu_τ|^p〉 and of the magnetic fluctuations S_p(τ) = 〈|Δb_τ|^p〉, also called the p-th order structure functions in the literature (brackets denoting time averages). Here we use magnetic fluctuations across structures at intervals τ calculated from the magnetic field intensity. Typical structure functions of magnetic field fluctuations, for two different values of p, for both a slow wind and a fast wind at 0.9 AU, are shown in Figure 80. The magnetic field we used is that measured by the Helios 2 spacecraft. Structure functions calculated for the velocity fields have roughly the same shape. Looking at these figures the typical scaling features of turbulence can be observed. Starting from low values at small scales, the structure functions increase towards a region where S_p → const. at the largest scales. This means that at these scales the field fluctuations are uncorrelated.
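Operationally, the structure functions defined above are sample averages of absolute increments at a set of time lags. A minimal sketch, assuming a hypothetical one-dimensional series x (e.g., the radial velocity component or the magnetic field intensity); the function name is illustrative:

import numpy as np

def structure_functions(x, lags, orders):
    # x: 1D time series; lags: time lags in number of samples; orders: moments p.
    S = np.empty((len(lags), len(orders)))
    for i, lag in enumerate(lags):
        dx = np.abs(x[lag:] - x[:-lag])          # |x(t + tau) - x(t)|
        for j, p in enumerate(orders):
            S[i, j] = np.mean(dx ** p)           # p-th order structure function
    return S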
A kind of "inertial range", that is a region of intermediate scales τ where a power law can be recognized for both $$\begin{array}{*{20}c} {R_p (\tau ) = \left\langle {\left| {\Delta u_\tau } \right|^p } \right\rangle \sim \tau ^{\zeta _p } } \\ {S_p (\tau ) = \left\langle {\left| {\Delta b_\tau } \right|^p } \right\rangle \sim \tau ^{\xi _p } } \\ \end{array} $$ is more or less visible only for the slow wind. In this range correlations exists, and we can obtain the scaling exponents ζ p and ξ p through a simple linear fit. Structure functions for the magnetic field intensity S n (r) for two different orders, n = 3 and n = 5, for both slow wind and fast wind, as a function of the time scale r. Data come from Helios 2 spacecraft at 0.9 AU. Since as we have seen, Yaglom's law is observed only in some few samples, the inertial range in the whole solar wind is not well defined. A look at Figure 80 clearly shows that we are in a situation similar to a low-Reynolds number fluid flow. In order to compare scaling exponents of the solar wind turbulent fluctuations with other experiments, it is perhaps better to try to recover exponents using the Extended Self-Similarity (ESS), introduced some time ago by Benzi et al. (1993), and used here as a tool to determine relative scaling exponents. In the fluid-like case, the third-order structure function can be regarded as a generalized scaling using the inverse of Equation (42) or of Equation (41) (Politano et al., (1998). Then, we can plot the p-th order structure function vs. the third-order one to recover at least relative scaling exponents ζ p /ζ3 and ζ p /ξ3 (61). Quite surprisingly (see Figure 81), we find that the range where a power law can be recovered extends well beyond the inertial range, covering almost all the experimental range. In the fluid case the scaling exponents which can be obtained through ESS at low or moderate Reynolds numbers, coincide with the scaling exponents obtained for high Reynolds, where the inertial range is very well defined Benzi et al. (1993). This is due to the fact that, since by definition ζ3 = 1 in the inertial range (Frisch, (1995), whatever its extension might be. In our case scaling exponents obtained through ESS can be used as a surrogate, since we cannot be sure that an inertial range exists. Structure functions S n (r) for two different orders, n = 3 and n = 5, for both slow wind and high wind, as a function of the fourth-order structure function S4(r). Data come from Helios 2 spacecraft at 0.9 AU. It is worthwhile to remark (as shown in Figure 81) that we can introduce a general scaling relation between the q-th order velocity structure function and the q-th order structure function, with a relative scaling exponent α p (q). It has been found that this relation becomes an exact relation $$S_q (r) = [S_p (r)]^{\alpha _p (q)} ,$$ when the velocity structure functions are normalized to the average velocity within each period used to calculate the structure function (Carbone et al., (1996a). 
This normalization property is very interesting because it implies (Carbone et al., 1996a) that the relation above is satisfied by the following probability distribution function, if we assume that odd moments are much smaller than the even ones:

$$PDF(\Delta u_\tau) = \int_{-\infty}^{\infty} dk\, e^{ik\Delta u_\tau} \sum_{q=0}^{\infty} \frac{(ik)^{2q}}{2\pi (2q)!} [S_p(\tau)]^{\alpha_p(2q)}.$$

That is, for each scale τ the knowledge of the relative scaling exponents α_p(q) completely determines the probability distribution of velocity differences as a function of a single parameter S_p(τ). Relative scaling exponents, calculated by using data coming from Helios 2 at 0.9 AU, are reported in Table 1. As can be seen, two main features can be noted.

Table 1: Scaling exponents for velocity ζ_p and magnetic ξ_p variables calculated through ESS. Errors represent the standard deviations of the linear fitting. The data used come from a turbulent sample of slow wind at 0.9 AU from the Helios 2 spacecraft. As a comparison we show the normalized scaling exponents of structure functions calculated in a wind tunnel on Earth (Ruíz-Chavarría et al., 1995) for velocity and temperature, the temperature being a passive scalar in that experiment. (Columns: ζ_p, ξ_p, u(t) (fluid), T(t) (fluid).)

First, there is a significant departure from the Kolmogorov linear scaling, that is, the real scaling exponents are anomalous and seem to be non-linear functions of p: ζ_p/ζ_3 > p/3 for p < 3, while ζ_p/ζ_3 < p/3 for p > 3. The same behavior can be observed for ξ_p/ξ_3. In Table 1 we also report the scaling exponents obtained in usual fluid flows for velocity and temperature, the latter being a passive scalar. Scaling exponents for the velocity field are similar to the scaling exponents obtained in turbulent flows on Earth, showing a kind of universality in the anomaly. This effect is commonly attributed to the phenomenon of intermittency in fully developed turbulence (Frisch, 1995). Second, turbulence in the solar wind is intermittent, just like its fluid counterpart on Earth. The degree of intermittency is measured through the distance between the curve ζ_p/ζ_3 and the linear scaling p/3. It can be seen that the magnetic field is more intermittent than the velocity field. The same difference is observed between the velocity field and a passive scalar (in our case the temperature) in ordinary fluid flows (Ruíz-Chavarría et al., 1995). That is, the magnetic field, as far as intermittency properties are concerned, has the same scaling laws as a passive field. Of course this does not mean that the magnetic field plays the same role as a passive field; statistical properties are in general different from dynamical properties.

In Table 1 we show scaling exponents up to the sixth order. Actually, a question concerns the validation of high-order moment estimates, that is, the maximum value of the order p which can be determined with a finite number of points in our dataset. As the value of p increases, we need an increasing number of points for an optimal determination of the structure function (Tennekes and Wyngaard, 1972). Anomalous scaling laws are generated by rare and intense events due to singularities in the gradients: the higher their intensity, the more rare these events are. Of course, when the data set has a finite extent, the probability of getting singularities stronger than a certain value approaches zero. In that case, scaling exponents ζ_p of order higher than a certain value become linear functions of p.
Actually, the structure function S_p(τ) depends on the probability distribution function PDF(Δu_τ) through

$$S_p(\tau) = \int \Delta u_\tau^{\,p}\, PDF(\Delta u_\tau)\, d\Delta u_\tau,$$

and the function S_p is determined only when the integral converges. As p increases, the function F_p(Δu_τ) = Δu_τ^p PDF(Δu_τ) becomes more and more disturbed, with some spikes, so that the integral becomes more and more undefined, as can be seen for example in Figure 1 of the paper by Dudok de Wit (2004). A simple calculation (Dudok de Wit, 2004) for the maximum order p_m which can reliably be estimated with a given number N of points in the dataset gives the empirical criterion p_m ≃ log N. Structure functions of order p > p_m cannot be determined accurately.

Table 2: Normalized scaling exponents ξ_p/ξ_3 for radial magnetic fluctuations in a laboratory plasma, as measured at different distances a/R (R ≃ 0.45 cm being the minor radius of the torus in the experiment) from the external wall. Errors represent the standard deviations of the linear fitting. Scaling exponents have been obtained using the ESS.

Only a few large structures are enough to generate the anomalous scaling laws. In fact, as shown by Salem et al. (2009), by suppressing through wavelet analysis just a few percent of the large structures on all scales, the scaling exponents become linear functions of p, namely p/4 and p/3 for the kinetic and magnetic fields, respectively. As far as a comparison between different plasmas is concerned, the scaling exponents of magnetic structure functions, obtained from laboratory plasma experiments of a Reversed-Field Pinch at different distances from the external wall (Carbone et al., 2000), are shown in Table 2. In laboratory plasmas it is difficult to measure all the components of the vector field at the same time, thus here we show only the scaling exponents obtained using magnetic field differences B_r(t+τ) − B_r(t) calculated from the radial component in a toroidal device where the z-axis is directed along the axis of the torus. As can be seen, intermittency in magnetic turbulence is not as strong as it appears to be in the solar wind, and the degree of intermittency increases when going toward the external wall. This last feature appears to be similar to what is currently observed in channel flows, where intermittency also increases when going towards the external wall (Pope, 2000).

Scaling exponents of structure functions for Alfvén variables, velocity, and magnetic variables have also been calculated for high resolution 2D incompressible MHD numerical simulations (Politano et al., 1998). In this case, we are freed from the constraint of the Taylor hypothesis when calculating the fluctuations at a given scale. From 2D simulations we recover the fields u(r, t) and b(r, t) at some fixed times. We calculate the longitudinal fluctuations directly in space at a fixed time, namely Δu_ℓ = [u(r+ℓ, t) − u(r, t)] · ℓ/ℓ (and similarly for the other fields, namely the magnetic field or the Elsässer fields). Finally, averaging both in space and time, we calculate the scaling exponents through the structure functions. These scaling exponents are reported in Table 3. Note that, even in numerical simulations, intermittency for magnetic variables is stronger than for the velocity field.
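As a quick check of the empirical criterion p_m ≃ log N quoted above, the highest reliably determined order can be estimated from the sample size alone; the snippet below assumes the natural logarithm, which is an interpretation on our part:

import numpy as np

def max_reliable_order(n_points):
    # Empirical criterion p_m ~ log N (natural logarithm assumed here).
    return int(np.floor(np.log(n_points)))

print(max_reliable_order(4500))   # ~8: moments much above this order are unreliable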
7.2 Probability distribution functions and self-similarity of fluctuations

The presence of scaling laws for fluctuations is a signature of the presence of self-similarity in the phenomenon. A given observable u(ℓ), which depends on a scaling variable ℓ, is invariant with respect to the scaling relation ℓ → λℓ when there exists a parameter μ(λ) such that u(ℓ) = μ(λ)u(λℓ). The solution of this relation is a power law u(ℓ) = Cℓ^h, where the scaling exponent is h = −log_λ μ. Since, as we have just seen, turbulence is characterized by scaling laws, this must be a signature of self-similarity for fluctuations. Let us see what this means. Let us consider fluctuations at two different scales, namely Δz_ℓ^± and Δz_λℓ^±. Their ratio Δz_λℓ^±/Δz_ℓ^± depends only on the value of h, and this should imply that fluctuations are self-similar. This means that the PDFs are related through

$$P(\Delta z_{\lambda\ell}^{\pm}) = PDF(\lambda^{h} \Delta z_{\lambda\ell}^{\pm}).$$

Table 3: Normalized scaling exponents ξ_p/ξ_3 for Alfvénic (Z^+, Z^−), velocity, and magnetic fluctuations obtained from data of high resolution 2D MHD numerical simulations. Scaling exponents have been calculated from spatial fluctuations; different times, in the statistically stationary state, have been used to improve the statistics. The scaling exponents have been calculated by ESS using Equation (41) as characteristic scale rather than the third-order structure function (cf. Politano et al., 1998, for details).

Let us consider the normalized variables

$$y_\ell^{\pm} = \frac{\Delta z_\ell^{\pm}}{\langle (\Delta z_\ell^{\pm})^2 \rangle^{1/2}}.$$

When h is unique, that is, in a purely self-similar situation, the PDFs are related through P(y_ℓ^±) = PDF(y_λℓ^±), i.e., by changing scale the PDFs coincide. The PDFs relative to the normalized magnetic fluctuations δb_τ = Δb_τ/〈Δb_τ^2〉^{1/2}, at three different scales τ, are shown in Figure 82. It appears evident that the global self-similarity in real turbulence is broken. PDFs do not coincide at different scales; rather, their shape seems to depend on the scale τ. In particular, at large scales PDFs seem to be almost Gaussian, but they become more and more stretched as τ decreases. At the smallest scale PDFs are stretched exponentials. This scaling dependence of PDFs is a different way of saying that the scaling exponents of fluctuations are anomalous, and can be taken as a different definition of intermittency. Note that the wings of the PDFs are higher than those of a Gaussian function. This implies that intense fluctuations have a probability of occurrence higher than they would have if they were Gaussianly distributed. Said differently, intense stochastic fluctuations are less rare than we would expect from the point of view of Gaussian statistics. These fluctuations play a key role in the statistics of turbulence. The same statistical behavior can be found in different experiments related to the study of the atmosphere (see Figure 83) and of laboratory plasmas (see Figure 84).

Figure 82: Left panel: normalized PDFs for the magnetic fluctuations observed in solar wind turbulence by using Helios data. Right panel: distribution function of waiting times Δt between structures at the smallest scale. The parameter β is the scaling exponent of the scaling relation PDF(Δt) ~ Δt^−β for the distribution function of waiting times.

Figure 83: Left panel: normalized PDFs of velocity fluctuations in atmospheric turbulence. Right panel: distribution function of waiting times Δt between structures at the smallest scale.
The parameter β is the scaling exponent of the scaling relation PDF(Δt) ~ Δt^−β for the distribution function of waiting times. The turbulent samples have been collected above a grass-covered forest clearing at 5 m above the ground surface and at a sampling rate of 56 Hz (Katul et al., 1997).

Figure 84: Left panel: normalized PDFs of the radial magnetic field collected in RFX magnetic turbulence (Carbone et al., 2000). Right panel: distribution function of waiting times Δt between structures at the smallest scale. The parameter β is the scaling exponent of the scaling relation PDF(Δt) ~ Δt^−β for the distribution function of waiting times.

7.3 What is intermittent in the solar wind turbulence? The multifractal approach

The time dependence of Δu_τ and Δb_τ for three different scales τ is shown in Figures 85 and 86, respectively. These plots show that, as τ becomes small, intense fluctuations become more and more important, and they dominate the statistics. Fluctuations at large scales appear to be smooth while, as the scale becomes smaller, intense fluctuations become visible. These dominating fluctuations represent relatively rare events. Actually, at the smallest scales, the time behavior of both Δu_τ and Δb_τ is dominated by regions where fluctuations are low, in between regions where fluctuations are intense and turbulent activity is very high. Of course, this behavior cannot be described by a global self-similar behavior. It is more convincing to allow the scaling laws to vary with the region of turbulence we are investigating. The behavior we have just described is at the heart of the multifractal approach to turbulence (Frisch, 1995). In that description of turbulence, even if the small scales of the fluid flow cannot be globally self-similar, self-similarity can be reintroduced as a local property. In the multifractal description it is conjectured that turbulent flows can be made up of an infinite set of points S_h(r), each set being characterized by a scaling law ΔZ_ℓ^± ~ ℓ^{h(r)}, that is, the scaling exponent can depend on the position r. The dimension of each set is then not constant, but depends on the local value of h, and is denoted D(h) in the literature. Then, the probability of occurrence of a given fluctuation can be calculated through the weight the fluctuation assumes within the whole flow, i.e.,

$$P(\Delta z_\ell^{\pm}) \sim (\Delta z_\ell^{\pm})^{h} \times \text{(volume occupied by the fluctuations)},$$

and the p-th order structure function is immediately written through the integral over all (continuous) values of h, weighted by a smooth function μ(h) ~ O(1), i.e.,

$$S_p(\ell) = \int \mu(h)\, (\Delta z_\ell^{\pm})^{ph} (\Delta z_\ell^{\pm})^{3 - D(h)}\, dh.$$

Figure 85: Differences of the longitudinal velocity Δu_τ = u(t + τ) − u(t) at three different scales τ, as indicated in the figure.

Figure 86: Differences of the magnetic intensity Δb_τ = B(t + τ) − B(t) at three different scales τ, as indicated in the figure.

A moment of reflection allows us to realize that in the limit ℓ → 0 the integral is dominated by the minimum value (over h) of the exponent and, as shown by Frisch (1995), the integral can be formally solved using the usual saddle-point method. The scaling exponents of the structure function can then be written as

$$\zeta_p = \min_h \left[ ph + 3 - D(h) \right].$$

In this way, the departure of ζ_p from the linear Kolmogorov scaling, and thus intermittency, can be characterized by the continuous change of D(h) as h varies.
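The saddle-point formula ζ_p = min_h [ph + 3 − D(h)] can be evaluated numerically once a form of D(h) is assumed. The sketch below uses a hypothetical parabolic (log-normal-like) D(h), tuned only so that ζ_3 ≈ 1; it illustrates the Legendre-type construction and is not a fit to solar wind data:

import numpy as np

h = np.linspace(0.0, 1.0, 2001)
c = 0.02                                                   # hypothetical width of the D(h) parabola
D = 3.0 - (h - (1.0 + 4.5 * c) / 3.0) ** 2 / (2.0 * c)     # log-normal-like D(h) with zeta_3 ~ 1
p_orders = np.arange(1, 7)
zeta = [float(np.min(q * h + 3.0 - D)) for q in p_orders]  # zeta_p = min_h [p h + 3 - D(h)]
print(zeta)                                                # departs from the linear p/3 scaling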
In this picture, as p varies we are probing regions of the fluid where ever more rare and intense events exist. These regions are characterized by small values of h, that is, by stronger singularities of the gradient of the field. Owing to the famous Landau footnote on the fact that fluctuations of the energy transfer rate must be taken into account in determining the statistics of turbulence, people tried to interpret the non-linear energy cascade typical of turbulence theory within a geometrical framework. The old Richardson picture of turbulent behavior as the result of a hierarchy of eddies at different scales has been modified and, as realized by Kraichnan (1974), once we leave the idea of a constant energy cascade rate we open a "Pandora's box" of possibilities for modeling the energy cascade. By looking at scaling laws for Δz_ℓ^± and introducing the scaling exponents for the energy transfer rate 〈ε_ℓ^p〉 ~ ℓ^{τ_p}, it can be found that ζ_p = p/m + τ_{p/m} (where m = 3 when the Kolmogorov-like phenomenology is taken into account, or m = 4 when the Iroshnikov-Kraichnan phenomenology holds). In this way the intermittency corrections are determined by a cascade model for the energy transfer rate. When τ_p is a non-linear function of p, the energy transfer rate can be described within multifractal geometry (see, e.g., Meneveau, 1991, and references therein), characterized by the generalized dimensions D_p = 1 − τ_p/(p − 1) (Hentschel and Procaccia, 1983). The scaling exponents of the structure functions are then related to D_p by

$$\zeta_p = \left( \frac{p}{m} - 1 \right) D_{p/m} + 1.$$

The correction to the linear scaling p/m is positive for p < m, negative for p > m, and zero for p = m. A fractal behavior where D_p = const. < 1 gives a linear correction with a slope different from 1/m.

7.4 Fragmentation models for the energy transfer rate

Cascade models view turbulence as a collection of fragments at a given scale ℓ, which result from the fragmentation of structures at the scale ℓ' > ℓ, down to the dissipative scale (Novikov, 1969). Sophisticated statistics are applied to obtain the scaling exponents ζ_p of the p-th order structure function. The starting point of fragmentation models is the old β-model, a "pedagogical" fractal model introduced by Frisch et al. (1978) to account for the modification of the cascade in a simple way. In this model, the cascade is realized through the conjecture that active eddies and non-active eddies are present at each scale, the space-filling factor for the fragments being fixed for each scale. Since it is a fractal model, the β-model gives a linear modification to ζ_p. This can account for a fit to the data as far as small values of p are concerned. However, the whole curve ζ_p is clearly nonlinear, and a multifractal approach is needed. The random-β model (Benzi et al., 1984), a multifractal modification of the β-model, can be derived by assuming that the space-filling factor for the fragments at a given scale in the energy cascade is not fixed, but is given by a random variable β. The probability of occurrence of a given β is assumed to be a bimodal distribution where the eddy fragmentation process generates either space-filling eddies with probability ξ or planar sheets with probability (1 − ξ) (for conservation 0 ≤ ξ ≤ 1). It can be found that

$$\zeta_p = \frac{p}{m} - \log_2\left[ 1 - \xi + \xi\, 2^{p/m - 1} \right],$$

where the free parameter ξ can be fixed through a fit to the data.
The p-model (Meneveau, 1991; Carbone, 1993) consists in an eddy fragmentation process described by a two-scale Cantor set with equal partition intervals. An eddy at the scale ℓ, with an energy derived from the transfer rate ε_ℓ, breaks down into two eddies at the scale ℓ/2, with energies με_ℓ and (1 − μ)ε_ℓ. The parameter 0.5 ≤ μ ≤ 1 is not defined by the model, but is fixed from the experimental data. The model gives

$$\zeta_p = 1 - \log_2\left[ \mu^{p/m} + (1 - \mu)^{p/m} \right].$$

In the model by She and Leveque (see, e.g., She and Leveque, 1994; Politano and Pouquet, 1998) one assumes an infinite hierarchy for the moments of the energy transfer rates, leading to ε_ℓ^{(p+1)} ~ [ε_ℓ^{(p)}]^β [ε_ℓ^{(∞)}]^{1−β}, and a divergent scaling law for the infinite-order moment ε_ℓ^{(∞)} ~ ℓ^{−x}, which describes the most singular structures within the flow. The model reads

$$\zeta_p = \frac{p}{m}(1 - x) + C\left[ 1 - \left( 1 - \frac{x}{C} \right)^{p/m} \right].$$

The parameter C = x/(1 − β) is identified as the codimension of the most singular structures. In the standard MHD case (Politano and Pouquet, 1995) x = β = 1/2, so that C = 1, that is, the most singular dissipative structures are planar sheets. On the contrary, in fluid flows C = 2 and the most dissipative structures are filaments. The large-p behavior of the p-model is given by ζ_p ~ (p/m) log_2(1/μ) + 1, so that Equations (64) and (65) give the same results provided μ ≃ 2^{−x}. As shown by Carbone et al. (1996b), all models are able to capture the intermittency of fluctuations in the solar wind. The agreement between the curves ζ_p and the normalized scaling exponents is excellent, and this means that we realistically cannot discriminate between the models reported above. The main problem is that all models are based on a conjecture which gives a curve ζ_p as a function of a single free parameter, and that curve is able to fit the smooth observed behavior of ζ_p. Statistics cannot prove, just disprove. We can distinguish between the fractal model and multifractal models, but we cannot realistically distinguish among the various multifractal models.

7.5 A model for the departure from self-similarity

Besides the idea of self-similarity underlying the process of energy cascade in turbulence, a different point of view can be introduced. The idea is to characterize the behavior of the PDFs through the scaling laws of the parameters which describe how the shape of the PDFs changes when going towards small scales. The model, originally introduced by Castaing et al. (2001), is based on a multiplicative process describing the cascade. In its simplest form the model can be introduced by saying that PDFs of increments δZ_ℓ^±, at a given scale, are built as a sum of Gaussian distributions with different widths σ = 〈(δZ_ℓ^±)^2〉^{1/2}. The distribution of widths is given by G_λ(σ), namely

$$P(\delta Z_\ell^{\pm}) = \frac{1}{\sqrt{2\pi}} \int_0^{\infty} G_\lambda(\sigma) \exp\left( - \frac{(\delta Z_\ell^{\pm})^2}{2\sigma^2} \right) \frac{d\sigma}{\sigma}.$$

In a purely self-similar situation, where the energy cascade generates only a trivial variation of σ with scales, the width of the distribution G_λ(σ) is zero and we invariably recover a Gaussian distribution for P(δZ_ℓ^±). On the contrary, when the cascade is not strictly self-similar, the width of G_λ(σ) is different from zero and the scaling behavior of the width λ² of G_λ(σ) can be used to characterize intermittency.
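Referring back to the fragmentation models of Section 7.4, their ζ_p predictions are one-parameter curves that can be evaluated and compared directly with measured exponents. A minimal sketch with illustrative parameter values (μ ≃ 0.77 as quoted in the text for the p-model; the other values are the standard fluid/MHD choices mentioned above):

import numpy as np

def random_beta(p, xi, m=3.0):
    return p / m - np.log2(1.0 - xi + xi * 2.0 ** (p / m - 1.0))

def p_model(p, mu, m=3.0):
    return 1.0 - np.log2(mu ** (p / m) + (1.0 - mu) ** (p / m))

def she_leveque(p, x, C, m=3.0):
    return (p / m) * (1.0 - x) + C * (1.0 - (1.0 - x / C) ** (p / m))

p = np.arange(1, 7, dtype=float)
print(np.round(p_model(p, mu=0.77), 3))                    # Kolmogorov-like, m = 3
print(np.round(she_leveque(p, x=0.5, C=1.0, m=4.0), 3))    # MHD case: sheets, Kraichnan m = 4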
7.6 Intermittency properties recovered via a shell model

Shell models have remarkable properties which closely resemble those typical of MHD phenomena (Gloaguen et al., 1985; Biskamp, 1994; Giuliani and Carbone, 1998; Plunian et al., 2012). However, the presence of a constant forcing term always induces a dynamical alignment, unless the model is forced appropriately, which invariably brings the system towards a state in which velocity and magnetic fields are strongly correlated, that is, where Z_n^± ≠ 0 and Z_n^∓ = 0. When we want to compare statistical properties of turbulence described by MHD shell models with solar wind observations, this term should be avoided. It is possible to replace the constant forcing term by an exponentially time-correlated Gaussian random forcing which is able to destabilize the Alfvénic fixed point of the model (Giuliani and Carbone, 1998), thus assuring the energy cascade. The forcing is obtained by solving the following Langevin equation:

$$\frac{dF_n}{dt} = - \frac{F_n}{\tau} + \mu(t),$$

where μ(t) is a Gaussian stochastic process δ-correlated in time, 〈μ(t)μ(t')〉 = 2Dδ(t' − t). This kind of forcing will be used to investigate statistical properties.

Figure 87: Kinetic energy spectrum |u_n(t)|² as a function of log_2 k_n for the MHD shell model. The full line refers to the Kolmogorov spectrum k_n^{−2/3}.

A statistically stationary state is reached by the system (Gloaguen et al., 1985; Biskamp, 1994; Giuliani and Carbone, 1998; Plunian et al., 2012), with a well defined inertial range, say a region where Equation (49) is verified. Spectra for both the velocity |u_n(t)|² and magnetic |b_n(t)|² variables, as a function of k_n, obtained in the stationary state using the GOY MHD shell model, are shown in Figures 87 and 88. Fluctuations are averaged over time. The Kolmogorov spectrum is also reported as a solid line. It is worthwhile to remark that, by adding a random term like ik_n B_0(t) Z_n^± to a slightly modified version of the MHD shell models (B_0 being a random function with some statistical characteristics), a Kraichnan spectrum, say E(k_n) ~ k_n^{−3/2}, where E(k_n) is the total energy, can be recovered (Biskamp, 1994; Hattori and Ishizawa, 2001). The term added to the model could represent the effect of the occurrence of a large-scale magnetic field.

Intermittency in the shell model is due to the time behavior of the shell variables. It has been shown (Okkels, 1997) that the evolution of the GOY model consists of short bursts traveling through the shells and long periods of oscillation before the next burst arises. In Figures 89 and 90 we report the time evolution of the real part of both the velocity variables u_n(t) and the magnetic variables b_n(t) at three different shells. It can be seen that, while at smaller k_n the variables seem to be Gaussian, at larger k_n the variables present very sharp fluctuations in between intervals of very low fluctuations.

Figure 88: Magnetic energy spectrum |b_n(t)|² as a function of log_2 k_n for the MHD shell model. The full line refers to the Kolmogorov spectrum k_n^{−2/3}.

The time behavior of the variables at different shells changes the statistics of fluctuations.
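The exponentially time-correlated Gaussian forcing introduced above is an Ornstein-Uhlenbeck process, and can be generated by a straightforward Euler-Maruyama integration of the Langevin equation. A minimal sketch (time step, correlation time and noise amplitude are illustrative choices, not the values used in the cited simulations):

import numpy as np

def ou_forcing(n_steps, dt, tau, D, seed=0):
    # Euler-Maruyama integration of dF/dt = -F/tau + mu(t),
    # with <mu(t) mu(t')> = 2 D delta(t - t'); correlation time tau.
    rng = np.random.default_rng(seed)
    F = np.zeros(n_steps)
    for i in range(1, n_steps):
        F[i] = F[i - 1] - F[i - 1] / tau * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    return F

F = ou_forcing(n_steps=10000, dt=1e-3, tau=0.1, D=1.0)   # one realization of the forcing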
In Figure 91 we report the probability distribution functions P(δu_n) and P(δB_n), for different shells n, of the normalized variables

$$\delta u_n = \frac{\Re e(u_n)}{\sqrt{\langle |u_n|^2 \rangle}} \quad \text{and} \quad \delta B_n = \frac{\Re e(b_n)}{\sqrt{\langle |b_n|^2 \rangle}},$$

where Re indicates that we take the real part of u_n and b_n. Typically we see that the PDFs look different at different shells: at small k_n fluctuations are quite Gaussian distributed, while at large k_n they tend to become increasingly non-Gaussian, developing fat tails. Rare fluctuations have a probability of occurrence larger than for a Gaussian distribution. This is the typical behavior of intermittency as observed in usual fluid flows and described in previous sections. The same phenomenon gives rise to the departure of the scaling laws of structure functions from a Kolmogorov scaling. Within the framework of the shell model the analogues of structure functions are defined as

$$\langle |u_n|^p \rangle \sim k_n^{-\xi_p}; \quad \langle |b_n|^p \rangle \sim k_n^{-\eta_p}; \quad \langle |Z_n^{\pm}|^p \rangle \sim k_n^{-\xi_p^{\pm}}.$$

For MHD turbulence it is also useful to report mixed correlators of the flux variables, i.e.,

$$\langle |T_n^{\pm}|^{p/3} \rangle \sim k_n^{-\beta_p^{\pm}}.$$

Scaling exponents have been determined from a least squares fit in the inertial range 3 ≤ n ≤ 12. The values of these exponents are reported in Table 4. It is interesting to notice that, while the scaling exponents for velocity are the same as those found in the solar wind, the scaling exponents for the magnetic field found in the solar wind reveal a more intermittent character. Moreover, we notice that velocity, magnetic and Elsässer variables are more intermittent than the mixed correlators, and we think that this could be due to cancellation effects among the different terms defining the mixed correlators. Time intermittency in the shell model generates rare and intense events. These events are the result of the chaotic dynamics in the phase space typical of the shell model (Okkels, 1997). That dynamics is characterized by a certain amount of memory, as can be seen through the statistics of waiting times between these events. The distribution P(δt) of waiting times is reported in the bottom panels of Figure 91, at a given shell n = 12. The same statistical law is observed for the bursts of total dissipation (Boffetta et al., 1999).

Figure 89: Time behavior of the real part of the velocity variable u_n(t) at three different shells n, as indicated in the different panels.

Figure 90: Time behavior of the real part of the magnetic variable b_n(t) at three different shells n, as indicated in the different panels.

Figure 91: In the first three panels, PDFs of both velocity (left column) and magnetic (right column) shell variables at three different shells ℓ_n. The bottom panels refer to probability distribution functions of waiting times between intermittent structures at the shell n = 12 for the corresponding velocity and magnetic variables.

Table 4: Scaling exponents for velocity and magnetic variables, Elsässer variables, and fluxes. Errors on β_p^± are about one order of magnitude smaller than the errors shown. (Columns: η_p, ξ_p^+, ξ_p^−, β_p^+, β_p^−.)
8 Observations of Yaglom's Law in Solar Wind Turbulence

To avoid the risk of misunderstanding, let us start by recalling that Yaglom's law (40) has been derived from a set of equations (MHD) and under assumptions which are far from representing an exact mathematical model for the solar wind plasma. Yaglom's law is valid in MHD under the hypotheses of incompressibility, stationarity, homogeneity, and isotropy. Also, the form used for the dissipative terms of the MHD equations is only valid for collisional plasmas, characterized by quasi-Maxwellian distribution functions, and in the case of equal kinematic viscosity and magnetic diffusivity coefficients (Biskamp, 2003). In solar wind plasmas the above hypotheses are only rough approximations, and MHD dissipative coefficients are not even defined (Tu and Marsch, 1995a). At frequencies higher than the ion cyclotron frequency, kinetic processes are indeed present, and a number of possible dissipation mechanisms can be discussed. When looking for Yaglom's law in the solar wind, the strong conjecture that the law remains valid for any form of the dissipative term is needed.

Despite the above considerations, Yaglom's law turns out to be surprisingly well verified in some solar wind samples. Results on the occurrence of Yaglom's law in the ecliptic plane have been reported by MacBride et al. (2008, 2010) and Smith et al. (2009) and, independently, in the polar wind by Sorriso-Valvo et al. (2007). It is worthwhile to note that the occurrence of Yaglom's law in polar wind, where fluctuations are Alfvénic, represents a doubly surprising feature because, according to the usual phenomenology of MHD turbulence, a nonlinear energy cascade should be absent for Alfvénic turbulence. In a first attempt to evaluate phenomenologically the value of the energy dissipation rate, MacBride et al. (2008) analyzed the data from ACE to evaluate the occurrence of both the Kolmogorov 4/5-law and its MHD analog (40). Although some words of caution are in order, related to spikes in wind speed and magnetic field strength caused by shocks and other imposed heliospheric structures that constitute inhomogeneities in the data, the authors found that both relations are more or less verified in solar wind turbulence. They found a distribution for the energy dissipation rate, defined in the above paper as ε = (ε_{ii}^+ + ε_{ii}^−)/2, with an average of about ε ≃ 1.22 × 10^4 J kg^−1 s^−1.

In order to avoid variations of the solar activity and ecliptic disturbances (like slow wind sources, coronal mass ejections, the ecliptic current sheet, and so on), and mainly mixing between fast and slow wind, Sorriso-Valvo et al. (2007) used high speed polar wind data measured by the Ulysses spacecraft. In particular, the authors analyzed the first seven months of 1996, when the heliocentric distance slowly increased from 3 AU to 4 AU, while the heliolatitude decreased from about 55° to 30°. The third-order mixed structure functions have been obtained using 10-day moving averages, during which the fields can be considered as stationary. A linear scaling law, like the one shown in Figure 92, has been observed in a significant fraction of samples in the examined period, with a linear range spanning more than two decades. The linear law generally extends from a few minutes up to 1 day or more, and is present in about 20 periods of a few days in the 7 months considered.
This probably reflects different regimes of driving of the turbulence by the Sun itself, and it is certainly an indication of the nonstationarity of the energy injection process. Following the formal definition of the inertial range in usual fluid flows, the authors attribute to the range where Yaglom's law appears the role of inertial range in solar wind turbulence (Sorriso-Valvo et al., 2007). This range extends to scales larger than the usual range of scales where a Kolmogorov relation has been observed, say up to about a few hours (cf. Figure 25).

Figure 92: An example of the linear scaling for the third-order mixed structure functions Y^±, obtained in the polar wind using Ulysses measurements. A linear scaling law represents a range of scales where Yaglom's law is satisfied. Image reproduced by permission from Sorriso-Valvo et al. (2007), copyright by APS.

Several other periods are found where the linear scaling range is reduced and, in particular, the sign of Y_ℓ^± is observed to be either positive or negative. In some other periods the linear scaling law is observed either for Y_ℓ^+ or for Y_ℓ^− rather than for both quantities. It is worth noting that in a large fraction of cases the sign switches from negative to positive (or vice versa) at scales of about 1 day, roughly indicating the scale where the small scale Alfvénic correlations between velocity and magnetic fields are lost. This should indicate that the nature of fluctuations changes across the break. The values of the pseudo-energy dissipation rates ε^± have been found to be of the order of a few hundred J kg^−1 s^−1, higher than those found in usual fluid flows, which are of the order of 1–50 J kg^−1 s^−1.

The occurrence of Yaglom's law in solar wind turbulence has been evidenced by a systematic study by MacBride et al. (2010), who, using ACE data, found a reasonable linear scaling for the mixed third-order structure functions from about 64 s to several hours at 1 AU in the ecliptic plane. Assuming that the third-order mixed structure function is perpendicular to the mean field, or assuming that this function varies only with the component of the scale ℓ_α that is perpendicular to the mean field and is cylindrically symmetric, Yaglom's law would reduce to a 2D state. On the other hand, if the third-order function is parallel to the mean field or varies only with the component of the scale that is parallel to the mean field, Yaglom's law would reduce to a 1D-like case. In both cases the result will depend on the angle between the average magnetic field and the flow direction, and in both cases the energy cascade rate varies in the range 10^3–10^4 J kg^−1 s^−1 (see MacBride et al., 2010, for further details).

Quite interestingly, Smith et al. (2009) found that the pseudo-energy cascade rates derived from Yaglom's scaling law reveal a strong dependence on the amount of cross-helicity. In particular, they showed that when the correlation between velocity and magnetic fluctuations is higher than about 0.75, the third-order moment of the outward-propagating component, as well as of the total energy and cross-helicity, is negative. As already done by Sorriso-Valvo et al. (2007), they attribute this phenomenon to a kind of inverse cascade, namely a back-transfer of energy from small to large scales within the inertial range of the dominant component.
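In practice, the cascade rates quoted in these studies follow from a linear fit of the mixed third-order structure function against the scale, assuming the standard three-dimensional form Y_ℓ^± = −(4/3) ε^± ℓ for relation (40). A minimal sketch, assuming hypothetical Elsässer-field time series and the Taylor hypothesis to convert time lags into scales (names and parameter choices are illustrative):

import numpy as np

def yaglom_rate(z_same, z_other, lags, dt, V_sw):
    # z_same, z_other: (N, 3) Elsasser-field series (e.g., z+ and z-), in km/s.
    # Taylor hypothesis: spatial lag ell = V_sw * lag * dt.
    Y, ell = [], []
    for lag in lags:
        dzs = z_same[lag:] - z_same[:-lag]
        dzo = z_other[lag:, 0] - z_other[:-lag, 0]        # longitudinal (radial) increment
        Y.append(np.mean(dzo * np.sum(dzs ** 2, axis=1)))  # mixed third-order moment
        ell.append(V_sw * lag * dt)
    slope = np.polyfit(ell, Y, 1)[0]
    return -0.75 * slope                                   # eps from Y = -(4/3) eps ell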
We should point out that experimental values of the energy transfer rate in the incompressive case, estimated with different techniques from different data sets (Vasquez et al., 2007; MacBride et al., 2010), are only partially in agreement with those obtained by Sorriso-Valvo et al. (2007). However, the different nature of the wind (ecliptic vs. polar, fast vs. slow, at different radial distances from the Sun) makes such a comparison only indicative. As far as the scaling law (47) is concerned, Carbone et al. (2009a) found that a linear scaling for W_ℓ^±, as defined in (47), appears in almost the whole Ulysses dataset. In particular, the linear scaling for W_ℓ^± is verified even when there is no scaling at all for Y_ℓ^± (40). It has been observed (Carbone et al., 2009a) that a linear scaling for W_ℓ^+ appears in about half the whole signal, while W_ℓ^− displays scaling on about a quarter of the sample. The linear scaling law generally extends over about two decades, from a few minutes up to one day or more, as shown in Figure 93. At variance with the incompressible case, the two fluxes W_ℓ^± coexist in a large number of cases. The pseudo-energy dissipation rates so obtained are considerably larger than the corresponding values obtained in the incompressible case. In fact it has been found that on average ε^+ ≃ 3 × 10^3 J kg^−1 s^−1. This result shows that the nonlinear energy cascade in solar wind turbulence is considerably enhanced by density fluctuations, despite their small amplitude within the Alfvénic polar turbulence. Note that the new variables Δw_i^± are built by coupling the Elsässer fields with the density before computing the scale-dependent increments. Moreover, the third-order moments are very sensitive to intense field fluctuations, which could arise when density fluctuations are correlated with velocity and magnetic field. Similar results, but with a considerably smaller effect, were found in numerical simulations of compressive MHD (Mac Low and Klessen, 2004).

Figure 93: The linear scaling relation is reported for both the usual third-order structure function Y_ℓ^+ and the same quantity built with the density-mediated variables W_ℓ^+. A linear relation (full line) is clearly observed. Data refer to the Ulysses spacecraft. Image reproduced by permission from Carbone et al. (2009a), copyright by APS.

Finally, it is worth reporting that the presence of Yaglom's law in solar wind turbulence is an interesting theoretical topic, because this is the first real experimental evidence that solar wind turbulence, at least at large scales, can be described within the magnetohydrodynamic model. In fact, Yaglom's law is an exact law derived from the MHD equations and, let us say once more, its occurrence in a medium like the solar wind is a welcome surprise. By the way, the presence of the law in the polar wind solves the paradox of the presence of Alfvénic turbulence first pointed out by Dobrowolny et al. (1980a). Of course, the presence of Yaglom's law has generated some controversy about data selection, reliability, and the extension of the inertial range. The interested reader can find some questions and the relative answers in Physical Review Letters (Forman et al., 2010; Sorriso-Valvo et al., 2010a).
9 Intermittency Properties in the 3D Heliosphere: Taking a Look at the Data

In this section, we present a reasoned look at the main aspects of what has been reported in the literature on the problem of intermittency in solar wind turbulence. In particular, we present results from data analysis.

9.1 Structure functions

Apart from the earliest investigations on the fractal structure of the magnetic field as observed in interplanetary space (Burlaga and Klein, 1986), the starting point for the investigation of intermittency in the solar wind dates back to 1991, when Burlaga (1991a) started to look at the scaling of the bulk velocity fluctuations at 8.5 AU using Voyager 2 data. This author found that anomalous scaling laws for structure functions could be recovered in the range 0.85 ≤ r ≤ 13.6 h. This range of scales has been somewhat arbitrarily identified as a kind of "inertial range", say a region where a linear scaling exists between log S_r^(p) and log r, and the scaling exponents have been calculated as the slope of these curves. However, structure functions of order p ≤ 20 were determined on the basis of only about 4500 data points. Nevertheless the scaling was found to be quite in agreement with that found in ordinary fluid flows. Although the data might be in agreement with the random-β model, from a theoretical point of view Carbone (1993, 1994b) showed that the normalized scaling exponents ζ_p/ζ_4 calculated by Burlaga (1991a) would be better fitted by using a p-model derived from the Kraichnan phenomenology (Kraichnan, 1965; Carbone, 1993), with the parameter μ ≃ 0.77. The same author (Burlaga, 1991b) investigated the multifractal structure of the interplanetary magnetic field near 25 AU and analyzed positive definite fields such as magnetic field strength, temperature, and density using the multifractal machinery of dissipation fields (Paladin and Vulpiani, 1987; Meneveau, 1991). Burlaga (1991c) showed that intermittent events observed in co-rotating streams at 1 AU should be described by a multifractal geometry. Even in this case the number of points used was too low to assure the reliability of high-order moments.

Marsch and Liu (1993) investigated the structure of intermittency of the turbulence observed in the inner heliosphere by using Helios 2 data. They analyzed both bulk velocity and Alfvén speed to calculate structure functions in the whole range from 40.5 s (the instrument resolution) up to 24 h in order to estimate the p-th order scaling exponents. Note that also in this analysis the number of data points used was too small to assure reliability for the order p = 20 structure functions reported by Marsch and Liu (1993). From an analysis analogous to that of Burlaga (1991a), the authors found that anomalous scaling laws are present. A comparison between fast and slow streams at two heliocentric distances, namely 0.3 AU and 1 AU, allowed the authors to conjecture a scenario for high speed streams where Alfvénic turbulence, originally self-similar (or poorly intermittent) near the Sun, "... loses its self-similarity and becomes more multifractal in nature" (Marsch and Liu, 1993), which means that intermittency corrections increase from 0.3 AU to 1 AU. No such behavior seems to occur in the slow solar wind. From a phenomenological point of view, Marsch and Liu (1993) found that the data can be fitted with a piecewise linear function for the scaling exponents ζ_p, namely a β-model ζ_p = 3 − D + p(D − 2)/3, where D ≃ 3 for p ≤ 6 and D ≃ 2.6 for p > 6.
The authors state that "We believe that we see similar indications in the data by Burlaga, who still prefers to fit his whole ζ_p dataset with a single fit according to the non-linear random β-model". We like to comment that this impression by Marsch and Liu (1993) is due to the fact that the number of data points used was very small. As a matter of fact, only structure functions of order p ≤ 4 are reliably described by the number of points used by Burlaga (1991a). However, the data analyses quoted above, which in some sense present contradictory results, are based on high order statistics which are not supported by an adequate number of data points, and the range of scales where scaling laws have been recovered is not easily identifiable. To overcome these difficulties Carbone et al. (1996a) investigated the behavior of the normalized ratios ζ_p/ζ_3 through the ESS procedure described above, using data coming from low-speed stream measurements of the Helios 2 spacecraft. Using ESS the whole range covered by the measurements is linear, and scaling exponent ratios can be reliably calculated. Moreover, to have a dataset with a high number of points, the authors mixed in the same statistics data coming from different heliocentric distances (from 0.3 AU up to 1 AU). This is not correct as far as fast wind fluctuations are taken into account because, as found by Marsch and Liu (1993) and Bruno et al. (2003b), there is a radial evolution of intermittency. Results showed that intermittency is a real characteristic of turbulence in the solar wind, and that the curve ζ_p/ζ_3 is a non-linear function of p even when only values p ≤ 6 are considered.

Marsch et al. (1996) for the first time investigated the geometrical and scaling properties of the energy flux along the turbulent cascade and of the dissipation rate of kinetic energy. They showed the multifractal nature of the dissipation field and estimated, for the first time in solar wind MHD turbulence, the associated singularity spectrum, which turned out to be very similar to those obtained for ordinary fluid turbulence (Meneveau and Sreenivasan, 1987). They also estimated the energy dissipation rate for time scales of 10^2 s to be around 5.4 × 10^−16 erg cm^−3 s^−1. This value was similar to the theoretical heating rate required in the model by Tu (1988) with Alfvén waves to explain the radial temperature dependence observed in the fast solar wind. Looking at the literature, it can be realized that the scaling exponents ζ_p observed mainly in the high-speed streams of the inner solar wind often cannot be explained properly by any cascade model for turbulence. This feature has been attributed to the fact that this kind of turbulence is not in a fully-developed state with a well defined spectral index. Models developed by Tu et al. (1984) and Tu (1988) were successful in describing the evolution of the observed power spectra. Using the same idea, Tu et al. (1996) and Marsch and Tu (1997) investigated the behavior of an extended cascade model developed on the basis of the p-model (Meneveau and Sreenivasan, 1987; Carbone, 1993).
Tu et al. (1996) and Marsch and Tu (1997) conjectured that: i) the scaling laws for fluctuations are still valid in the form δZ_ℓ^± ~ ℓ^h, even when turbulence is not fully developed; ii) the energy cascade rate is not constant, its moments rather depend not only on the generalized dimensions D_p but also on the spectral index α of the power spectrum, say 〈ε_r^p〉 ~ ε^p(ℓ, α) ℓ^{(p−1)D_p}, where the averaged energy transfer rate is assumed to be $$\varepsilon (\ell ,\alpha ) \sim \ell ^{ - (m/2 + 1)} P_\ell ^{\alpha /2} ,$$ P_ℓ ~ ℓ^α being the usual energy spectrum (ℓ ~ 1/k). The model gives $$\zeta _p = 1 + \left( {\frac{p}{m} - 1} \right)D_{p/m} + \left[ {\alpha \frac{m}{2} - \left( {1 + \frac{m}{2}} \right)} \right]\frac{p}{m},$$ where the generalized dimensions are recovered from the usual p-model $$D_p = \frac{{\log _2 [\mu ^p + (1 - \mu )^p ]}}{{(1 - p)}}.$$ In the limit of "fully developed turbulence", say when the spectral slope is α = 2/m + 1, the usual Equation (64) is recovered. The Helios 2 data are consistent with this model for the parameters μ ≃ 0.77 and α ≃ 1.45, and the fit is relatively good (Tu et al., 1996). Recently, Horbury et al. (1997) and Horbury and Balogh (1997) studied the magnetic field fluctuations of the polar high-speed turbulence from Ulysses measurements at 3.1 AU and at 63° heliolatitude. These authors showed that the observed magnetic field fluctuations were in agreement with the intermittent turbulence p-model of Meneveau and Sreenivasan (1987). They also showed that the scaling exponents of structure functions of order p ≤ 6, in the scaling range 20 ≤ r ≤ 300 s, followed the Kolmogorov scaling instead of the Kraichnan scaling as expected. In addition, the same authors (Horbury et al., 1997) estimated the applicability of the model by Tu et al. (1996) and Marsch and Tu (1997) to the spectral transition range, where the spectral index changes during the spectral evolution, and concluded that this model was able to fit the observations much better than the p-model when the values of the parameters are allowed to change continuously with the scale. Analysis of the scaling exponents of p-th order structure functions has also been performed using different datasets from the Ulysses spacecraft. Horbury et al. (1995a) and Horbury et al. (1995c) investigated the structure functions of the magnetic field as obtained from observations recorded between 1.7 and 4 AU, and covering heliographic latitudes between 40° and 80° south. By investigating the spectral index of the second order structure function, they found a decrease with heliocentric distance, attributed to the radial evolution of fluctuations. Further investigations (see, e.g., Ruzmaikin et al., 1995) were obtained using structure functions to study the Ulysses magnetic field data in the range of scales 1 ≤ r ≤ 32 min. Ruzmaikin et al. (1995) showed that intermittency is at work and developed a bi-fractal model to describe Alfvénic turbulence. They found that intermittency may change the spectral index of the second order structure function and this modifies the calculation of the spectral index (Carbone, 1994a). Ruzmaikin et al. (1995) found that polar Alfvénic turbulence should be described by a Kraichnan phenomenology (Kraichnan, 1965). However, the same data can also be fitted with a fluid-like scaling law (Tu et al., 1996) and, due to the relatively small amount of data, it is difficult to decide, on the basis of the second order structure function, which scaling relation appropriately describes intermittency in the solar wind.
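The exponents predicted by this extended model are straightforward to evaluate numerically. The sketch below does so for the parameter values quoted above; the choice m = 4 (a Kraichnan-like cascade) is an illustrative assumption, and the q → 1 limit of the generalized dimensions is handled analytically.

```python
import numpy as np

def D_q(q, mu):
    """Generalized dimensions of the p-model, log2[mu^q + (1-mu)^q] / (1 - q),
    with the q -> 1 limit taken analytically."""
    q = np.atleast_1d(np.asarray(q, dtype=float))
    out = np.empty_like(q)
    near1 = np.isclose(q, 1.0)
    out[near1] = -(mu * np.log2(mu) + (1 - mu) * np.log2(1 - mu))
    out[~near1] = np.log2(mu ** q[~near1] + (1 - mu) ** q[~near1]) / (1 - q[~near1])
    return out

def zeta_p(p, mu=0.77, alpha=1.45, m=4):
    """Scaling exponents of the extended cascade model for spectral slope alpha."""
    p = np.asarray(p, dtype=float)
    return 1 + (p / m - 1) * D_q(p / m, mu) + (alpha * m / 2 - (1 + m / 2)) * (p / m)

p = np.arange(1, 7)
print(zeta_p(p))                    # Helios-like parameters mu = 0.77, alpha = 1.45
print(zeta_p(p, alpha=2 / 4 + 1))   # fully developed limit alpha = 2/m + 1: usual p-model
```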
In a further paper Carbone et al. (1995b) provided evidence for differences in the ESS scaling laws between ordinary fluid flows and solar wind turbulence. Through the analysis of different datasets collected in the solar wind and in ordinary fluid flows, it was shown that the normalized scaling exponents ζ_p/ζ_3 are the same as far as p ≤ 8 is considered. This indicates a kind of universality in the scaling exponents for the velocity structure functions. Differences between scaling exponents calculated in ordinary fluid flows and in solar wind turbulence are confined to high-order moments. Nevertheless, the differences found in the datasets were related to different kinds of singular structures in the model described by Equation (65). Solar wind data can be fitted by that model as soon as the most intermittent structures are assumed to be planar sheets, C = 1 and m = 4, that is, a Kraichnan scaling is used. On the contrary, ordinary fluid flows can be fitted only when C = 2 and m = 3, that is, structures are filaments and the Kolmogorov scaling has been used. However, it is worthwhile to remark that differences have been found for high-order structure functions, just where measurements are unreliable.

9.2 Probability distribution functions

As said in Section 7.2, the statistics of turbulent flows can be characterized by the PDF of field differences over varying scales. At large scales PDFs are Gaussian, while tails become higher than Gaussian (actually, PDFs decay as exp[−δZ_ℓ^±]) at smaller scales. Marsch and Tu (1994) started to investigate the behavior of PDFs of fluctuations against scale and found that PDFs are rather spiky at small scales and quite Gaussian at large scales. The same behavior has been obtained by Sorriso-Valvo et al. (1999, 2001), who investigated Helios 2 data for both velocity and magnetic field. In order to make a quantitative analysis of the energy cascade leading to the scaling dependence of PDFs just described, the distributions obtained in the solar wind have been fitted (Sorriso-Valvo et al., 1999) by using the log-normal ansatz $$G_\lambda (\sigma ) = \frac{1}{{\sqrt {2\pi \lambda } }}\exp \left( { - \frac{{\ln ^2 \sigma /\sigma _0 }}{{2\lambda ^2 }}} \right).$$ The width of the log-normal distribution of σ is given by λ^2(ℓ) = 〈(δσ)^2〉, while σ_0 is the most probable value of σ.

Table 5: Values of the parameters σ_0, μ, and γ obtained in the fit of λ^2(τ) (see Equation (69)) used as a kernel for the scaling behavior of PDFs. FW and SW refer to fast and slow wind, respectively, as obtained from the Helios 2 spacecraft by collecting all periods in a single dataset. Columns refer to the B field (SW), V field (SW), B field (FW), and V field (FW); the table values are not reproduced here.

Equation (66) has been fitted to the experimental PDFs of both velocity and magnetic intensity, and the corresponding values of the parameter λ have been recovered. In Figure 94 the solid lines show the curves relative to the fit. It can be seen that the scaling behavior of PDFs, in all cases, is very well described by Equation (66). At every scale r, we get a single value for the width λ^2(r), which can be approximated by a power law λ^2(r) = μr^−γ for r < 1 h, as can be seen in Figure 95. The values of the parameters μ and γ obtained in the fit, along with the values of σ_0, are reported in Table 5. The fits have been obtained in the range of scales τ ≤ 0.72 h for the magnetic field, and τ ≤ 1.44 h for the velocity field. The analysis of PDFs shows once more that the magnetic field is more intermittent than the velocity field.
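The fitting procedure just described can be sketched numerically as follows: at each scale the empirical PDF of increments is fitted with a superposition of Gaussians whose width σ is log-normally distributed, and the resulting parameter λ² is then followed as a function of scale. The code below is a minimal illustration of that idea under stated assumptions, not a reproduction of the original analysis; bin number, initial guesses, and integration limits are arbitrary choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

def castaing_pdf(dz, lam, sigma0):
    """PDF of increments modeled as a superposition of Gaussians whose standard
    deviation sigma is log-normally distributed with width lam around sigma0."""
    lam, sigma0 = abs(lam), abs(sigma0)
    def integrand(lnsig, x):
        sig = np.exp(lnsig)
        weight = np.exp(-(lnsig - np.log(sigma0)) ** 2 / (2 * lam ** 2)) / np.sqrt(2 * np.pi * lam ** 2)
        gauss = np.exp(-x ** 2 / (2 * sig ** 2)) / np.sqrt(2 * np.pi * sig ** 2)
        return weight * gauss
    lo, hi = np.log(sigma0) - 6 * lam, np.log(sigma0) + 6 * lam
    return np.array([quad(integrand, lo, hi, args=(x,))[0] for x in np.atleast_1d(dz)])

def fit_lambda(increments, bins=51):
    """Fit (lambda, sigma0) to the empirical PDF of the increments at one scale."""
    hist, edges = np.histogram(increments, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    popt, _ = curve_fit(castaing_pdf, centers, hist, p0=(0.3, np.std(increments)))
    return popt

# Repeating fit_lambda for increments at several scales tau, and then fitting the
# resulting lambda^2(tau) with a power law mu * tau**(-gamma), mirrors the procedure above.
```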
Figure 94: Left: normalized PDFs of fluctuations of the longitudinal velocity field at four different scales τ. Right: normalized PDFs of fluctuations of the magnetic field magnitude at four different scales τ. Solid lines represent the fit made by using the log-normal model. Image reproduced by permission from Sorriso-Valvo et al. (1999), copyright by AGU.

The same analysis has been repeated by Forman and Burlaga (2003). These authors used 64 s averages of the radial solar wind speed reported by the SWEPAM instrument on the ACE spacecraft; increments have been calculated over a range of lag times from 64 s to several days. From the PDF obtained through Equation (69) the authors calculated the structure functions and compared the free parameters of the model with the scaling exponents of the structure functions. A fit on the scaling exponents then allows one to calculate the values of λ^2 and σ_0. Once these parameters have been calculated, the whole PDF is evaluated. The same authors found that the PDFs do not precisely fit the data, at least for large values of the moment order. Interestingly enough, Forman and Burlaga (2003) investigated the behavior of PDFs when different kernels G_λ(σ), derived from different cascade models, are taken into account in Equation (66). They discussed the physical content of each model, concluding that a cascade model derived from lognormal or log-Lévy theories, modified by the self-organized criticality proposed by Schertzer et al. (1997), seems to avoid all the problems present in other cascade models.

Figure 95: Scaling laws of the parameter λ^2(τ) as a function of the scales τ, obtained by the fits of the PDFs of both velocity and magnetic variables (see Figure 94). Solid lines represent fits made by power laws. Image reproduced by permission from Sorriso-Valvo et al. (1999), copyright by AGU.

10 Turbulent Structures

The non-linear energy cascade towards smaller scales accumulates fluctuations only in relatively small regions of space, where gradients become singular. From a rather different point of view (see Farge, 1992), these regions can be viewed as localized zones of fluid where phase correlation exists, in some sense coherent structures. These structures, which dominate the statistics of small scales, occur as isolated events with a typical lifetime greater than that of the stochastic fluctuations surrounding them. The idea of turbulence in the solar wind being made of a mixture of structures convected by the wind and stochastic fluctuations is not particularly new (see, e.g., Tu and Marsch, 1995a). However, these large-scale structures cannot be considered as intermittent structures at all scales. Structures continuously appear and disappear, apparently in a random fashion, at some random location of the fluid, and carry a great quantity of the energy of the flow. In this framework intermittency can be considered as the result of the occurrence of coherent (non-Gaussian) structures at all scales, within the sea of stochastic Gaussian fluctuations. This point of view is the result of data analysis of scaling laws of turbulent fluctuations made by using wavelet filters (see Appendix C) instead of the usual Fourier transform. Unlike the Fourier basis, wavelets allow a decomposition both in time and frequency (or space and scale). In analyzing intermittent structures it is useful to introduce a measure of local intermittency, as for example the Local Intermittency Measure (LIM) introduced by Farge (1992) (see Appendix C).
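As an illustration of how such a local measure can be built, the sketch below computes Haar-type wavelet coefficients at a given scale and normalizes their squared amplitude by its time average, which is the essence of the LIM. The use of simple block-mean differences as Haar coefficients is a simplifying assumption of this sketch, not the specific implementation used in the works cited here.

```python
import numpy as np

def haar_coefficients(x, scale):
    """Haar-type wavelet coefficients at a given scale (in samples): difference
    between the means of two adjacent blocks of length `scale`."""
    n = (len(x) // (2 * scale)) * 2 * scale
    block_means = x[:n].reshape(-1, scale).mean(axis=1)
    return (block_means[1::2] - block_means[0::2]) / np.sqrt(2)

def local_intermittency_measure(x, scale):
    """LIM(scale, t) = |w(scale, t)|^2 / <|w(scale, t)|^2>_t: values well above 1
    flag locations where the signal is locally much more energetic than average."""
    w = haar_coefficients(x, scale)
    return w ** 2 / np.mean(w ** 2)
```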
The spatial structures generating intermittency have been investigated by Veltri and Mangeney (1999), using the Haar basis applied to time series of thirteen months of velocity and magnetic data from the ISEE spacecraft. Analyzing intermittent events, they found that these events occur on time scales of the order of a few minutes and that they are one-dimensional structures (in agreement with Carbone et al., 1995b). In particular, they found different types of structures which fall into two different categories:

i. Some of the structures are the well known one-dimensional current sheets, characterized by pressure balance and almost constant density and temperature. When a minimum variance analysis is made on the magnetic field near the structure, it can be seen that the most variable component of the magnetic field changes sign. This component is perpendicular to the average magnetic field, the third component being zero. An interesting property of these structures is that the correlation between velocity and magnetic field within them is opposite with respect to the rest of the fluctuations. That is, when they occur during Alfvénic periods the velocity and magnetic field correlation is low; on the contrary, during non-Alfvénic periods the correlation within the structure increases.

ii. A different kind of structure looks like a shock wave. They can be parallel shocks or slow-mode shocks. In the first case they are observed in the radial component of the velocity field, but are also seen in the magnetic field intensity, proton temperature, and density. In the second case they are characterized by a very low value of the plasma β parameter, constant pressure, anti-correlation between density and proton temperature, no magnetic fluctuations, and velocity fluctuations directed along the average magnetic field.

However, Salem et al. (2009), as already anticipated in Section 3.1.1, demonstrated that a monofractal can be recovered and intermittency eliminated simply by subtracting a small subset of the events at small scales. Given a turbulent time series, as derived in the solar wind, very interesting statistics can be made on the time separation between the occurrence of two consecutive structures. Let us consider a signal, for example u(t) or b(t) derived from the solar wind, and let us define the wavelet set w_s(r, t) as the set which captures, at time t, the occurrence of structures at the scale r. Then define the waiting time δt as the time between two consecutive structures at the scale r, that is, between w_s(r, t) and w_s(r, t + δt). The PDFs of waiting times P(δt) are reported in Figure 82. As can be seen, waiting times are distributed according to a power law P(δt) ~ δt^−β extended over at least two decades. This property is very interesting, because it means that the underlying process for the energy cascade is non-Poissonian. Waiting times occurring between isolated Poissonian events must be distributed according to an exponential function. The power law for P(δt) represents the asymptotic behavior of a Lévy function with characteristic exponent α = β − 1. Such functions describe self-affine processes and are obtained from the central limit theorem by relaxing the hypothesis that the variance of the variables is finite. The power law for waiting times we found is clear evidence that long-range correlation (or, in some sense, "memory") exists in the underlying cascade process.
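A minimal sketch of this waiting-time analysis is given below: intermittent events at a given scale are flagged (here, as one possible choice, where the LIM of the previous sketch exceeds a threshold), the separations between consecutive events are collected, and the exponent β is estimated from a log-binned histogram. The threshold and binning are illustrative assumptions.

```python
import numpy as np

def waiting_times(lim, threshold=5.0, dt=1.0):
    """Separations (in time units of dt) between consecutive coefficients whose
    local intermittency measure exceeds the chosen threshold."""
    events = np.flatnonzero(lim > threshold)
    return np.diff(events) * dt

def waiting_time_exponent(wt, nbins=20):
    """Exponent beta of P(dt) ~ dt^(-beta), from a log-binned, normalized histogram."""
    bins = np.logspace(np.log10(wt.min()), np.log10(wt.max()), nbins)
    hist, edges = np.histogram(wt, bins=bins, density=True)
    centers = np.sqrt(edges[1:] * edges[:-1])
    good = hist > 0
    return -np.polyfit(np.log(centers[good]), np.log(hist[good]), 1)[0]
```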
On the other hand, Bruno et al. (2001), analyzing the statistics of the occurrence of waiting times of magnetic field intensity and wind speed intermittent events for a short time interval within the trailing edge of a high velocity stream, found a possible Poissonian-like behavior with a characteristic time around 30 min for both magnetic field and wind speed. These results are to be compared with previous estimates of the occurrence of interplanetary discontinuities performed by Tsurutani and Smith (1979), who found a waiting time around 14 min. In addition, Bruno et al. (2001), taking into account the wind speed and the orientation of the magnetic field vector at the site of the observation, in the hypothesis of spherical expansion, estimated the corresponding size at the Sun's surface, which resulted to be of the order of the photospheric structures estimated also by Thieme et al. (1989). Obviously, the Poissonian statistics found by these authors does not agree with the clear power law shown in Figure 82. However, Bruno et al. (2001) included intermittent events found at all scales, while the results shown in Figure 82 refer to waiting times between intermittent events extracted at the smallest scale, which is about an order of magnitude smaller than the time resolution used by Bruno et al. (2001). A detailed study on this topic would certainly clarify possible influences on the waiting time statistics due to the selection of intermittent events according to the corresponding scale.

In the same study, Bruno et al. (2001) analyzed in detail an event characterized by a strong intermittent signature in the magnetic field intensity. A comparative study was performed choosing a close-by time interval which, although intermittent in velocity, was not characterized by strong magnetic intermittency. This time interval was located a few hours apart from the previous one. These two intervals are indicated in Figure 96 by the two vertical boxes labeled 1 and 2, respectively. The wind speed profile and the magnetic field magnitude are shown in the first two panels. In the third panel, the blue line refers to the logarithmic value of the magnetic pressure P_m, here indicated by P_B; the red line refers to the logarithmic value of the thermal pressure P_k, here indicated by P_K; and the black line refers to the logarithmic value of the total pressure P_tot, here indicated by P_T = P_B + P_K, including an average estimate of the electron and alpha-particle contributions. Magnetic field intensity residuals, obtained from the LIM technique, are shown in the bottom panel. The first interval is characterized by strong magnetic field intermittency while the second one is not. In particular, the first event corresponds to a relatively strong field discontinuity which separates two regions characterized by a different bulk velocity and a different level of total pressure. While the kinetic pressure (red trace) does not show any major jump across the discontinuity but only a slight trend, the magnetic pressure (blue trace) clearly shows two distinct levels. A minimum variance analysis further reveals the intrinsically different nature of these two intervals, as shown in Figure 97, where the original data have been rotated into the field minimum variance reference system (see Appendix D.1), in which the maximum, intermediate, and minimum variance components are identified by λ3, λ2, and λ1, respectively. Moreover, at the bottom of the column we show the hodogram on the maximum variance plane λ3 − λ2, as a function of time on the vertical axis.
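Minimum variance analysis itself reduces to an eigen-decomposition of the magnetic variance matrix; a minimal sketch is given below, with the convention used in the text that λ1, λ2, and λ3 label the minimum, intermediate, and maximum variance directions. The input layout and the mean-subtraction choice are assumptions of this illustration.

```python
import numpy as np

def minimum_variance_frame(B):
    """Minimum variance analysis of an (N, 3) magnetic field series.

    Returns the eigenvalues in ascending order (lambda1 = minimum, lambda2 =
    intermediate, lambda3 = maximum variance), the corresponding orthonormal
    eigenvectors as columns, and the mean-subtracted data rotated into that frame."""
    M = np.cov(B, rowvar=False)             # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)    # eigh returns eigenvalues in ascending order
    B_rot = (B - B.mean(axis=0)) @ eigvecs  # columns: minimum, intermediate, maximum
    return eigvals, eigvecs, B_rot
```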
The good correlation existing between magnetic and velocity variations for both time intervals highlights the presence of Alfvénic fluctuations. However, only within the first interval does the magnetic field vector describe an arc-like structure larger than 90° on the maximum variance plane (see the rotation from A to B on the 3D graph at the bottom of the left column in Figure 97), in correspondence with the time interval identified, in the profile of the magnetic field components, by the green color. At this location, the magnetic field intensity shows a clear discontinuity, B[λ3] changes sign, B[λ2] shows a hump whose maximum is located where the previous component changes sign and, finally, B[λ1] keeps its value close to zero across the discontinuity. Velocity fluctuations are well correlated with magnetic field fluctuations and, in particular, the minimum variance component V[λ2] has the same value on both sides of the discontinuity, approximately 350 km s^−1, indicating that there is no mass flux through the discontinuity. During this interval, which lasts about 26 min, the minimum variance direction lies close to the background magnetic field direction, at 11.9°, so that the arc is essentially described on a plane perpendicular to the average background magnetic field vector. However, additional, although smaller and less regular, arc-like structures can be recognized on the maximum variance plane λ2 − λ3, and they tend to cover the whole 2τ interval.

Figure 96: From top to bottom: 81 s averages of the wind speed profile in km s^−1, the magnetic field intensity in nT, the logarithmic values of the magnetic (blue line), thermal (red line), and total pressure (black line) in dyne/cm^2, and the field intensity residuals in nT. The two vertical boxes delimit the two time intervals #1 and #2 which were chosen for comparison. While the first interval shows strong magnetic intermittency, the second one does not. Image reproduced by permission from Bruno et al. (2001), copyright by Elsevier.

Within the second interval, the magnetic field intensity is rather constant and the three components do not show any particular fluctuation which could resemble any sort of rotation. In other words, the projection on the maximum variance plane does not show any coherent path. Even in this case, these fluctuations happen to lie in a plane almost perpendicular to the average field direction, since the angle between this direction and the minimum variance direction is about 9.3°. Further insights about the differences between these two intervals can be obtained when we plot the trajectory followed by the tip of the magnetic field vector in the minimum variance reference system, as shown in Figure 98. The main difference between these two plots is that the one relative to the first interval shows a rather patchy trajectory with respect to the second one. As a matter of fact, if we follow the displacements of the tip of the vector as time goes by, we observe that the two intervals have a completely different behavior.

Figure 97: Left column, from top to bottom: magnetic field intensity, maximum (λ3), intermediate (λ2) and minimum (λ1) variance components of the magnetic field (blue color) and of the wind velocity relative to the time interval #1 shown in Figure 96. Right below, we show the hodogram on the maximum variance plane λ3 − λ2, as a function of time (blue line). The red lines are the projections of the blue line. The large arc, from A to B, corresponds to the green segment in the profile of the magnetic field components shown in the upper panel.
The same parameters are shown for interval #2 (Figure 96), in the same format, on the right hand side of the figure. The time resolution of the data is 81 s. Image reproduced by permission from Bruno et al. (2001), copyright by Elsevier.

Within the first time interval, the magnetic field vector experiences for some time small displacements around a given direction in space and then suddenly performs a much larger displacement towards another direction in space, about which it starts to wander again. This process repeats several times within this time interval. In particular, the thick green line extending from label A to label B refers to the arc-like discontinuity shown in Figure 97, which is also the largest directional variation within this time interval. Within the second interval, the vector randomly fluctuates in all directions and, as a consequence, both the 3D trajectory and its projection on the maximum variance plane do not show any large empty spot. In practice, the second time interval, although longer, is similar to any sub-interval corresponding to one of the trajectory patches recognizable in the left hand side panel. As a matter of fact, selecting a single patch from the first interval and performing a minimum variance analysis, the maximum variance plane would turn out to be perpendicular to the local average magnetic field direction and the tip of the vector would randomly fluctuate in all directions. The first interval can be seen as a collection of several sub-intervals similar to interval #2, characterized by different field orientations and, possibly, intensities. Thus, magnetic field intermittent events mark the border between adjacent intervals populated by stochastic Alfvénic fluctuations.

Figure 98: Trajectory followed by the tip of the magnetic field vector (blue line) in the minimum variance reference system for interval #1 (left) and #2 (right). Projections on the three planes (red lines) formed by the three eigenvectors λ1, λ2, λ3, and the average magnetic field vector, with its projections on the same planes, are also shown. The green line extending from label A to label B refers to the arc-like discontinuity shown in Figure 97. The time resolution of the magnetic field averages is 6 s. Image reproduced by permission from Bruno et al. (2001), copyright by Elsevier. (Animations relative to similar time intervals are available: see Figure 99 for a non-intermittent sample and Figure 100 for an intermittent one.)

These differences in the dynamics of the orientation of the field vector can be appreciated by running the two animations behind Figures 99 and 100. Although the data used for these movies do not exactly correspond to the same time intervals analyzed in Figure 96, they show the same dynamics that the field vector has within intervals #1 and #2. In particular, the animation corresponding to Figure 99 represents interval #2 while Figure 100 represents interval #1. The observations reported above led these authors to draw the sketch shown in Figure 101, a simple visualization of hypothetical flux tubes, convected by the wind, which tangle up in space. Each flux tube is characterized by a local field direction and intensity, and within each flux tube the presence of Alfvénic fluctuations makes the magnetic field vector randomly wander about this direction. Moreover, the large scale is characterized by an average background field direction aligned with the local interplanetary magnetic field.
This view, based on the idea that solar wind fluctuations are a superposition of propagating Alfvén waves and convected structures (Bavassano and Bruno, 1989), strongly recalls the work by Tu and Marsch (1990a, 1993), who suggested that solar wind fluctuations are a superposition of pressure balance structure (PBS) type flux tubes and Alfvén waves. In the inner heliosphere these PBS-type flux tubes are embedded in the large structure of fast solar wind streams and would form a kind of spaghetti-like sub-structure, which probably has its origin at the base of the solar atmosphere.

Figure 99: Still from a movie showing the trajectory followed by the tip of the magnetic field vector in the minimum variance reference system during a time interval not characterized by intermittency. The duration of the interval is 2000 × 6 s but the magnetic field vector moves only for 100 × 6 s in order to make a smaller file (movie kindly provided by A. Vecchio). (For the video see the appendix.)

Figure 100: Still from a movie showing the trajectory followed by the tip of the magnetic field vector in the minimum variance reference system during a time interval characterized by intermittent events. The duration of the interval is 2000 × 6 s but the magnetic field vector moves only for 100 × 6 s in order to make a smaller file (movie kindly provided by A. Vecchio). (For the video see the appendix.)

The border between these flux tubes can be a tangential discontinuity where the total pressure on both sides of the discontinuity is in equilibrium or, as in the case of interval #1, the discontinuity can be located between two regions not in pressure equilibrium. If the observer moves across these tubes he will record the patchy configuration shown in Figure 100 relative to interval #1. Within each flux tube he will observe a local average field direction, and the magnetic field vector would mainly fluctuate on a plane perpendicular to this direction. Moving to the next tube, the average field direction would rapidly change and magnetic vector fluctuations would cluster around this new direction. Moreover, if we imagine a situation with many flux tubes, each one characterized by a different magnetic field intensity, moving across them would possibly increase the intermittency level of the fluctuations. On the contrary, moving along a single flux tube, the same observer would constantly be in the situation typical of interval #2, which is mostly characterized by a rather constant magnetic field intensity and directional stochastic fluctuations mainly on a plane quasi-perpendicular to the average magnetic field direction. In such a situation, magnetic field intensity fluctuations would not increase their intermittency.

Figure 101: Simple visualization of hypothetical flux tubes which tangle up in space. Each flux tube is characterized by a local field direction, and within each flux tube the presence of Alfvénic fluctuations makes the magnetic field vector randomly wander about this direction. Moreover, the large scale is characterized by an average background field direction aligned with the local interplanetary magnetic field. Moving across different flux tubes, characterized by different values of |B|, enhances the intermittency level of the magnetic field intensity time series (cf. Bruno et al., 2001).

A recent theoretical effort by Chang et al. (2004), Chang (2003), and Chang and Wu (2002) models MHD turbulence in a way that recalls the interpretation of the interplanetary observations given by Bruno et al.
(2001) and, at the same time, recalls also the point of view expressed by Farge (1992) earlier in this section. These authors stress the fact that propagating modes and coherent, convected structures share a common origin within the general view described by the physics of complexity. Propagating modes experience resonances which generate coherent structures, possibly flux tubes, which, in turn, will migrate, interact, and, eventually, generate new modes. This process, schematically represented in Figure 102, which favors the local generation of coherent structures in the solar wind, fully complements the possible solar origin of the convected component of interplanetary MHD turbulence.

Figure 102: Composite figure made by adapting original figures from the paper by Chang et al. (2004). The first element in the upper left corner represents field-aligned spatio-temporal coherent structures. A cross-section of two of these structures of the same polarity is shown in the upper right corner. Magnetic flux iso-contours and field polarity are also shown. The darkened area represents an intense current sheet during strong magnetic shear. The bottom element of the figure is the result of 2D MHD simulations of interacting coherent structures, and shows the intermittent spatial distribution of intense current sheets. In this scenario, new fluctuations are produced which can provide new resonance sites, possibly nucleating new coherent structures.

10.1 On the statistics of magnetic field directional fluctuations

It is interesting to look at the statistics of the angular jumps relative to the orientation of the magnetic field vector. Studies of this kind can help to infer the relevance of modes and advected structures within MHD turbulent fluctuations. Bruno et al. (2004) found that PDFs of interplanetary magnetic field vector angular displacements within high velocity streams can be reasonably fitted by a double log-normal distribution, reminiscent of multiplicative processes following turbulence evolution. As a matter of fact, the multiplicative cascade notion was introduced by Kolmogorov into his statistical theory of turbulence (Kolmogorov, 1941, 1962, 1991) as a phenomenological framework to accommodate the extreme behavior observed in real turbulent fluids. The same authors, studying the radial behavior of the two lognormal components of this distribution, concluded that they could be associated with Alfvénic fluctuations and advected structures, respectively. In particular, it was also suggested that the nature of these advected structures could be intimately connected to tangential discontinuities separating two contiguous flux tubes (Bruno et al., 2001). Whether or not these fluctuations should be identified with 2D turbulence was uncertain, since the corresponding PDF, differently from the one associated with Alfvénic fluctuations, did not show a clear radial evolution. As a matter of fact, since 2D turbulence is characterized by having its k vectors perpendicular to the local field, it should experience a remarkable evolution, given that the turbulent cascade acts preferentially on wave numbers perpendicular to the ambient magnetic field direction, as suggested by the three-wave resonant interaction (Shebalin et al., 1983). Obviously, an alternative solution would be the solar origin of these fluctuations. However, it is still unclear whether these structures come directly from the Sun or are locally generated by some mechanism.
Some theoretical results (Primavera et al., 2003) would indicate that the coherent structures causing intermittency in the solar wind (Bruno et al., 2003a) might be locally created by the parametric decay of Alfvén waves. As a matter of fact, coherent structures like current sheets are continuously created when the instability is active (Primavera et al., 2003).

Probability distributions of the angular displacements experienced by the magnetic vector on a time scale of 6 s at 0.3 and 0.9 AU, respectively, for fast wind. Solid curves refer to the lognormals contributing to form the thick solid curve which best fits the distribution. Image reproduced by permission from Bruno et al. (2004), copyright EGU.

A more recent analysis (Borovsky, 2008) of changes in the field direction experienced by the solar wind magnetic field vector reproposed the picture that the inner heliosphere is filled with a network of entangled magnetic flux tubes (Bruno et al., 2001) and interpreted these flux tubes as fossil structures that originate at the solar surface. These tubes are characterized by strong changes in the magnetic field direction, as shown by the distribution illustrated in Figure 104, which refers to the occurrence of changes in the magnetic field direction observed by ACE over about 7 years for a time scale of roughly 2 minutes. Two exponential curves have been used to fit the distribution, one for the small angular change population and one for the large angular change population. The small angular-change population is associated with fluctuations active within the flux tube, while the second population would be due to large directional jumps identifying the crossing of the border between adjacent flux tubes. The same authors performed similar analyses on several plasma and magnetic field parameters, like velocity fluctuations, alpha to proton ratio, and proton and electron entropies, and found that also for these parameters small/large changes are associated with small/large angular changes, confirming the different nature of these two populations. Larger flux tubes, originating at the Sun, would eventually reach 1 AU thanks to the wind expansion, which would inhibit reconnection.

Figure 104: Measurements of angular differences of the magnetic field direction on a time scale of 128 s. The data set is from ACE measurements for the years 1998 – 2004. Exponential fits to two portions of the distribution are shown as dashed curves. Image reproduced by permission from Borovsky (2008), copyright by AGU.

In another recent paper, Li (2008) developed a genuine data analysis method to localize individual current sheets from a turbulent solar wind magnetic field sample. He noticed that, in the presence of a current sheet, a scaling law appears for the cumulative distribution function of the angle between two magnetic field vectors separated by some time lag. In other words, if we define the function F(θ, ζ) to represent the frequency with which the measured angle between magnetic vectors separated by a time lag ζ is larger than θ, we expect the following scaling relation: $$F(\theta ,N\zeta ) \sim N F(\theta ,\zeta ).$$ As a matter of fact, if the distribution function F(θ, ζ) above a certain critical angle θ0 is dominated by current-sheet crossings separating two adjacent flux tubes, we expect to find the scaling represented by relation (70). On the contrary, if we are observing these fluctuations within the same side of the current sheet, F(θ, ζ) is dominated by small angular fluctuations and we do not expect to find any scaling.
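A sketch of how such a conditional frequency can be evaluated from a magnetic field time series is given below; the lag values, the angle grid, and the way the scaling test is phrased in the comments are illustrative choices rather than the specific procedure of Li (2008).

```python
import numpy as np

def angle_frequency(B, lag, thetas_deg):
    """F(theta, zeta): fraction of samples for which the angle between B(t) and
    B(t + zeta) exceeds theta, for a lag zeta expressed in samples."""
    b1 = B[:-lag] / np.linalg.norm(B[:-lag], axis=1, keepdims=True)
    b2 = B[lag:] / np.linalg.norm(B[lag:], axis=1, keepdims=True)
    cos_angle = np.clip(np.sum(b1 * b2, axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(cos_angle))
    return np.array([np.mean(angles > th) for th in thetas_deg])

# The current-sheet signature F(theta, N*zeta) ~ N * F(theta, zeta) can then be checked,
# above a critical angle theta_0, by comparing
#   angle_frequency(B, N * lag, thetas)   with   N * angle_frequency(B, lag, thetas).
```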
Using the same methodology, Li et al. (2008) also studied fluctuations in the Earth's magnetotail, to highlight the absence of similar structures and to conclude that most of the advected structures observed in the solar wind must be of solar origin.

10.2 Radial evolution of intermittency in the ecliptic

Marsch and Liu (1993) investigated for the first time solar wind scaling properties in the inner heliosphere. These authors provided some insights on the different intermittent character of slow and fast wind, on the radial evolution of intermittency, and on the different scaling characterizing the three components of velocity. In particular, they found that fast streams were less intermittent than slow streams and that the observed intermittency showed a weak tendency to increase with heliocentric distance. They also concluded that the Alfvénic turbulence observed in fast streams starts from the Sun as self-similar but then, during the expansion, decorrelates, becoming more multifractal. This evolution was not seen in the slow wind, supporting the idea that turbulence in fast wind is mainly made of Alfvén waves and convected structures (Tu and Marsch, 1993), as already inferred by looking at the radial evolution of the level of cross-helicity in the solar wind (Bruno and Bavassano, 1991).

Distribution function for two time periods. The left panels show the dependence of F(θ, ζ) on θ, and the right panels show the dependence of F(θ, ζ) on ζ. The presence of a current sheet makes F(θ, ζ) increase linearly with ζ (dashed lines in the right panels). Image reproduced by permission from Li (2008), copyright by AAS.

Bruno et al. (2003a) investigated the radial evolution of intermittency in the inner heliosphere, using the behavior of the flatness of the PDF of magnetic field and velocity fluctuations as a function of scale. As a matter of fact, probability distribution functions of fluctuating fields affected by intermittency become more and more peaked at smaller and smaller scales. Since the peakedness of a distribution is measured by its flatness factor, they studied the behavior of this parameter at different scales to estimate the degree of intermittency of their time series, as suggested by Frisch (1995). In order to study intermittency they computed the following estimator of the flatness factor F: $$\mathcal{F}(\tau ) = \frac{{\left\langle {S_\tau ^4 } \right\rangle }}{{\left\langle {S_\tau ^2 } \right\rangle ^2 }},$$ where τ is the scale of interest and S_τ^p = 〈|V(t + τ) − V(t)|^p〉 is the structure function of order p of the generic function V(t). They considered a given function to be intermittent if the factor F increased when considering smaller and smaller scales or, equivalently, higher and higher frequencies. In particular, a vector field, like the velocity or magnetic field, encompasses two distinct contributions: a compressive one due to intensity fluctuations, which can be expressed as δ|B(t, τ)| = |B(t + τ)| − |B(t)|, and a directional one due to changes in the vector orientation, δB(t, τ) = [Σ_{i=x,y,z} (B_i(t + τ) − B_i(t))^2]^{1/2}. Obviously, δB(t, τ) also takes into account compressive contributions, and the expression δB(t, τ) ≥ |δ|B(t, τ)|| is always true.
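A compact sketch of this flatness-versus-scale analysis, distinguishing compressive from directional increments as defined above, could look as follows; the set of lags is an arbitrary illustrative choice.

```python
import numpy as np

def flatness(x, lags):
    """Flatness F(tau) = <dx^4> / <dx^2>^2 of the increments of a scalar series."""
    return np.array([np.mean((x[l:] - x[:-l]) ** 4) / np.mean((x[l:] - x[:-l]) ** 2) ** 2
                     for l in lags])

def compressive_and_directional_flatness(B, lags):
    """Flatness of compressive (|B| increments) and directional (full vector
    increments) fluctuations of an (N, 3) field, as a function of the lag."""
    F_comp = flatness(np.linalg.norm(B, axis=1), lags)
    F_dir = []
    for l in lags:
        dB = np.linalg.norm(B[l:] - B[:-l], axis=1)
        F_dir.append(np.mean(dB ** 4) / np.mean(dB ** 2) ** 2)
    return F_comp, np.array(F_dir)

# Intermittent series show F growing towards small lags; a Gaussian process stays near F = 3.
```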
Figure 106: Flatness F vs. time scale τ relative to magnetic field fluctuations. The left column (panels A and C) refers to slow wind and the right column (panels B and D) refers to fast wind. The upper panels refer to compressive fluctuations and the lower panels refer to directional fluctuations. Vertical bars represent the errors associated with each value of F. The three different symbols in each panel refer to different heliocentric distances, as reported in the legend. Image reproduced by permission from Bruno et al. (2003b), copyright by AGU.

Figure 107: Flatness F vs. time scale τ relative to wind velocity fluctuations. In the same format as Figure 106, panels A and C refer to slow wind and panels B and D refer to fast wind. The upper panels refer to compressive fluctuations and the lower panels refer to directional fluctuations. Vertical bars represent the errors associated with each value of F. Image reproduced by permission from Bruno et al. (2003b), copyright by AGU.

Looking at Figures 106 and 107, taken from the work of Bruno et al. (2003a), the following conclusions can be drawn: magnetic field fluctuations are more intermittent than velocity fluctuations; compressive fluctuations are more intermittent than directional fluctuations; slow wind intermittency does not show an appreciable radial dependence; fast wind intermittency, for both magnetic field and velocity, clearly increases with distance; and magnetic and velocity fluctuations have a rather Gaussian behavior at large scales, as expected, regardless of the type of wind or heliocentric distance. Moreover, they also found that, when the components are rotated into the mean field reference system (see Appendix D.1), the most intermittent component of the magnetic field is the one along the mean field, while the other two show a similar level of intermittency within the associated uncertainties. Finally, with increasing radial distance, the component along the mean field becomes more and more intermittent with respect to the transverse components. These results agree with conclusions drawn by Marsch and Tu (1994) who, analyzing fast and slow wind at 0.3 AU in the Solar Ecliptic (SE hereafter) coordinate system, found that the PDFs of the fluctuations of the transverse components of both velocity and magnetic fields, constructed for different time scales, were appreciably more Gaussian-like than the fluctuations observed for the radial component, which resulted to be more and more spiky for smaller and smaller scales. However, at odds with results by Bruno et al. (2003a), Tu et al. (1996) could not establish any radial dependence, due to the fact that their analysis was performed in the SE reference system instead of the mean field reference system used in the analysis of Bruno et al. (2003a). As a matter of fact, the mean field reference system is a more natural reference system in which to study magnetic field fluctuations. The reason is that the components normal to the mean field direction are more influenced by Alfvénic fluctuations and, as a consequence, their fluctuations are more stochastic and less intermittent. In the SE reference system this effect is largely reduced during the radial excursion, mainly because cross-talk between different components is artificially introduced. As a matter of fact, the presence of the large-scale spiral magnetic field breaks the spatial symmetry, introducing a preferential direction parallel to the mean field. The same Bruno et al. (2003b) showed that it was not possible to find a clear radial trend unless magnetic field data were rotated into this more natural reference system. On the other hand, it looks more difficult to reconcile the radial evolution of intermittency found by Bruno et al. (2003b) and Marsch and Liu (1993) in fast wind with the conclusions drawn by Tu et al.
(1996), who stated that "Neither a clear radial evolution nor a clear anisotropy can be established. The value of P1 in high-speed and low-speed wind are not prominent different." However, it is very likely that the conclusions given above are related to how one deals with the flat slope of the spectrum in fast wind near 0.3 AU. Tu et al. (1996) concluded, indeed: "It should be pointed out that the extended model cannot be used to analyze the intermittency of such fluctuations which have a flat spectrum. If the index of the power spectrum is near or less than unity ... P1 would be 0.5. However, this does not mean there is no intermittency. The model simply cannot be used in this case, because the structure function(1) does not represent the effects of intermittency adequately for those fluctuations which have a flat spectrum and reveal no clear scaling behavior". Bruno et al. (2003a) suggested that, depending on the type of solar wind sample and on the heliocentric distance, the observed scaling properties would change accordingly. In particular, as the radial distance increases, convected, coherent structures of the wind assume a more relevant role, since the Alfvénic component of the fluctuations is depleted. This would be reflected in the increased intermittent character of the fluctuations. The coherent nature of the convected structures would contribute to increase intermittency while the stochastic character of the Alfvénic fluctuations would contribute to decrease it. This interpretation would also justify why compressive fluctuations are always more intermittent than directional fluctuations. As a matter of fact, coherent structures would contribute to the intermittency of compressive fluctuations and, at the same time, would also produce intermittency in directional fluctuations. However, since directional fluctuations are greatly influenced by Alfvénic stochastic fluctuations, their intermittency will be more or less reduced depending on the amplitude of the Alfvén waves with respect to the amplitude of the compressive fluctuations. The radial dependence of the intermittency behavior of solar wind fluctuations stimulated Bruno et al. (1999b) to reconsider previous investigations on fluctuation anisotropy reported in Section 3.1.4. These authors studied magnetic field and velocity fluctuation anisotropy for the same co-rotating, high velocity stream observed by Bavassano et al. (1982a), within the framework of the dynamics of non-linear systems. Using the Local Intermittency Measure (Farge et al., 1990; Farge, 1992), Bruno et al. (1999b) were able to resolve the apparent contradiction between the results by Klein et al. (1991) in the outer heliosphere and those by Bavassano et al. (1982a) in the inner heliosphere. Exploiting the possibility offered by this technique to locate in space and time those events which produce intermittency, these authors were able to remove intermittent events and repeat the anisotropy analysis. They found that intermittency strongly affected the radial dependence of magnetic fluctuations while it was less effective on velocity fluctuations. In particular, after intermittency removal, the average level of anisotropy decreased for both magnetic and velocity fields at all distances. Although magnetic fluctuations remained more anisotropic than their kinetic counterpart, the radial dependence was eliminated.
On the other hand, the velocity field anisotropy showed that intermittency, although altering the anisotropy level of the fluctuations, does not markedly change its radial trend.

10.3 Radial evolution of intermittency at high latitude

Recently, Pagel and Balogh (2003) studied intermittency in the outer heliosphere using Ulysses observations at high heliographic latitude, well within high speed solar wind. In particular, these authors used the Castaing distribution (Castaing et al., 2001) to study the Probability Distribution Functions (PDF) of the fluctuations of the magnetic field components (see Section 9.2 for a description of the Castaing distribution and the definition of the related governing parameters λ and σ). They found that the intermittency of small-scale fluctuations, within the inertial range, increased with increasing radial distance from the Sun, as a consequence of the growth of the inertial range to larger scales. As a matter of fact, using the scaling found by Horbury et al. (1996a) for the transition scale (the inverse of the frequency corresponding to the break-point in the magnetic field spectrum), R_B ~ r^{1.1±0.1}, Pagel and Balogh (2003) quantitatively evaluated how the top of the inertial range in their data should shift to larger time scales with increasing heliocentric distance. Moreover, taking into account that inside the inertial range λ^2 ~ τ^−β, i.e., λ^2 = aτ^−β, and that the scaling proposed by Castaing et al. (2001) would be λ^2 ~ const.(τ/T)^−β, we should expect that for τ = T the parameter λ^2 = const. Thus, these authors calculated λ^2 and σ^2 at different heliocentric distances and made the hypothesis of a similar scaling for λ^2 and σ^2, although this is not assured by the model. Figure 108 reports the values of λ^2 and σ^2 vs. distance, calculated for the top of the inertial range at that distance using the above procedure. The radial behavior shown in this figure suggests that there is no radial dependence of these parameters for any of the three components (indicated by different symbols), as expected if the observed radial increase of intermittency in the inertial range is due to a broadening of the inertial range itself. They also found that, in the RTN reference system, the transverse magnetic field components exhibit a less Gaussian behavior with respect to the radial component. This result should be compared with results from similar studies by Marsch and Tu (1994) and Bruno et al. (2003b) who, studying the radial evolution of intermittency in the ecliptic, found that the components transverse to the local magnetic field direction are the most Gaussian ones. Probably, the above discrepancy depends totally on the reference system adopted in these different studies and it would be desirable to perform a new comparison between high and low latitude intermittency in the mean-field reference system.

Figure 108: Values of λ^2 (upper panel) and σ^2 (lower panel) vs. heliocentric distance (see Section 9.2 for a description of the Castaing distribution and the definition of λ and σ). These values have been calculated for the projected low frequency beginning of the inertial range relative to each distance (see text for details). R, T, and N components are indicated by asterisks, crosses and circles, respectively. Image reproduced by permission from Pagel and Balogh (2003), copyright by AGU.

Pagel and Balogh (2002) also focused on the different intermittency level of magnetic field fluctuations during two fast latitudinal scans, the first one during solar minimum and the second one during solar maximum.
Their results showed a strong latitudinal dependence but were probably not, or only slightly, affected by radial dependence, given the short heliocentric radial variations during these time intervals. They analyzed the anomalous scaling of the third-order magnetic field structure functions by looking at the value of the parameter μ obtained from the best fit performed using the p-model (see Section 7.4). In a previous analysis of the same kind, but focused on the first latitudinal scan, the same authors tested three intermittency models, namely the "lognormal", "p" and "G-infinity" models. In particular, this last model was an empirical model introduced by Pierrehumbert (1999) and Cho et al. (2000), and was not intended for turbulent systems. Anyhow, the best fits were obtained with the lognormal and Kolmogorov-p models. These authors concluded that the magnetic field components display a very high level of intermittency throughout the minimum and maximum phases of the solar cycle, and that slow wind shows a lower level of intermittency compared with the Alfvénic polar flows. These results do not seem to agree with ecliptic observations (Marsch and Liu, 1993; Bruno et al., 2003a), which showed that fast wind is generally less intermittent than slow wind, not only for wind speed and magnetic field magnitude, but also for the components. At this point, since it has been widely recognized that low latitude fast wind collected within co-rotating streams and fast polar wind share many common turbulence features, they should be expected to have many similarities also as regards intermittency. Thus, it is possible that also in this case the reference system in which the analysis is performed plays some role in determining some of the results regarding the behavior of the components. In any case, further analyses should clarify the reasons for this discrepancy.

11 Solar Wind Heating by the Turbulent Energy Cascade

The Parker theory of the solar wind (Parker, 1964) predicts an adiabatic expansion from the hot corona without further heating. For such a model, the proton temperature T(r) should decrease with the heliocentric distance r as T(r) ~ r^−4/3. The radial profile of the proton temperature has been obtained from measurements by the Helios spacecraft from 0.3 AU (Marsch et al., 1982; Marsch, 1983; Schwenn, 1983; Freeman, 1988; Goldstein, 1996) up to 100 AU or more by the Voyager and Pioneer spacecraft (Gazis, 1984; Gazis et al., 1994; Richardson et al., 1995). These measurements show that the temperature decay is in fact considerably slower than expected. Fits of the radial temperature profile gave an effective decrease T ~ T_0(r_0/r)^ξ in the ecliptic plane, with the exponent ξ ∈ [0.7, 1], much smaller than the adiabatic case. Actually ξ ≃ 1 within 1 AU, while ξ flattens to ξ ≃ 0.7 beyond 30 AU, where pickup ions probably contribute significantly (Richardson et al., 1995; Zank et al., 1996; Smith et al., 2001b). These observations imply that some heating mechanism must be at work within the wind plasma to supply the energy required to slow down the decay. The nature of the heating process of the solar wind is an open problem. The primary process governing the solar wind heating is probably active locally in the wind. However, since collisions are very rare in the solar wind plasma, the usual viscous coefficients have no meaning; that is, energy must be transferred to very small scales before it can be efficiently dissipated, perhaps by kinetic processes.
As a consequence, the presence of a turbulent energy flux is the crucial first step towards the understanding of solar wind heating (Coleman, 1968; Tu and Marsch, 1995a) because, as said in Section 2.4, the turbulent energy cascade represents nothing but the way for energy to be efficiently dissipated in a high-Reynolds number flow. In other words, before facing the problem of what the physical mechanisms responsible for energy dissipation actually are, if we conjecture that these processes happen at small scales, the turbulent energy flux towards small scales must be of the same order as the heating rate. Using the hypothesis that the energy dissipation rate is equal to the heat addition, one can use the omnidirectional power law spectrum derived by Kolmogorov $$P(k) = C_K \varepsilon _P^{2/3} k^{ - 5/3}$$ (C_K is the Kolmogorov constant that can be obtained from measurements) to infer the energy dissipation rate (Leamon et al., 1999) $$\varepsilon _P = \left[ {\frac{5}{3}P(k)C_K^{ - 1} } \right]^{3/2} k^{5/2} ,$$ where k = 2πf/V (f is the frequency in the spacecraft frame and V is the solar wind speed). The same conjecture can be made by using Elsässer variables, thus obtaining a generalized Kolmogorov phenomenology for the power spectra P^±(k) of the Elsässer variables (Zhou and Matthaeus, 1989, 1990; Marsch, 1991) $$\varepsilon _P^ \pm = C_k^{ - 3/2} P^ \pm (k)\sqrt {P^ \mp (k)} k^{5/2} .$$ Even if the above expressions are affected by the presence of intermittency, namely extreme fluctuations of the energy transfer rate, and an estimated value for the Kolmogorov constant is required, the estimated energy dissipation rates roughly agree with the heating rates derived from gradients of the thermal proton distribution (MacBride et al., 2010). A different estimate for the energy dissipation rate in spherical symmetry can be derived from an expression that uses the adiabatic cooling in combination with the local heating rate ε. In a steady state situation the equation for the radial profile of the ion temperature can be written as (Verma et al., 1995) $$\frac{{dT(r)}}{{dr}} + \frac{4}{3}\frac{{T(r)}}{r} = \frac{{m_p \varepsilon }}{{(3/2)V_{SW} (r)k_B }},$$ where m_p is the proton mass, V_SW(r) is the radial profile of the bulk wind speed, and k_B is the Boltzmann constant. Equation (74) can be solved using the actual radial profile of temperature, thus obtaining an expression for the radial profile of the heating rate needed to heat the wind to the observed values (Vasquez et al., 2007) $$\varepsilon (r) = \frac{3}{2}\left( {\frac{4}{3} - \xi } \right)\frac{{V_{SW} (r)k_B T(r)}}{{rm_p }}.$$ This relation is obtained by considering a polytropic index γ = 5/3 for the adiabatic expansion of the solar wind plasma, the protons being the only particles heated in the process. Such assumptions are only partially correct, since the electrons could play a relevant role in the heat exchange. Heating rates obtained using Equation (75) should thus be seen only as a first approximation that could be improved with better models of the heating processes. Using the expected solar wind parameters at 1 AU, the expected heating rate ranges from 10^2 J kg^−1 s^−1 for cold wind to 10^4 J kg^−1 s^−1 in hot wind.
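The two estimates above are simple enough to evaluate directly. The sketch below implements the heating rate of Equation (75) and a spectral estimate in the spirit of Equation (73); the numerical values (wind speed, temperature, Kolmogorov constant) are illustrative only, and the frequency-to-wavenumber conversion of the measured spectrum is an assumption about how the input is provided, not something prescribed in the text.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
m_p = 1.672622e-27   # proton mass, kg
AU = 1.496e11        # astronomical unit, m

def heating_rate(V_sw, T, r, xi):
    """Heating rate per unit mass (Equation (75)) needed to sustain T(r) ~ r^(-xi)
    against adiabatic cooling, protons only; SI units, result in J kg^-1 s^-1."""
    return 1.5 * (4.0 / 3.0 - xi) * V_sw * k_B * T / (r * m_p)

def epsilon_from_spectrum(P_f, f, V, C_K=1.6):
    """Energy dissipation rate per unit mass from a measured velocity power spectral
    density P(f) (consistent SI units), following the Kolmogorov form of Equation (73)
    with k = 2*pi*f / V and the conversion P(k) = P(f) * V / (2*pi)."""
    k = 2 * np.pi * f / V
    P_k = P_f * V / (2 * np.pi)
    return ((5.0 / 3.0) * P_k / C_K) ** 1.5 * k ** 2.5

# Order-of-magnitude check at 1 AU for a hot, fast wind (illustrative numbers):
print(heating_rate(V_sw=7.0e5, T=2.5e5, r=AU, xi=1.0))   # ~ a few 10^3 J kg^-1 s^-1
```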
Cascade rates estimated from the energy-containing scale of turbulence at 1 AU, obtained by evaluating triple correlations of fluctuations and the correlation length scale of turbulence, give values in this range (Smith et al., 2001a, 2006; Isenberg, 2005; Vasquez et al., 2007). Rather than estimating the heating rate from typical solar wind fluctuations and the Kolmogorov constant, it is perhaps much more convenient to get a direct estimate of the energy dissipation rate from measurements of the turbulent energy cascade using Yaglom's law, that is, from measurements of the third-order mixed moments of fluctuations. In fact, the roughly constant values of Y_ℓ^±/ℓ, or alternatively of their compressible counterpart W_ℓ^±/ℓ, result in an estimate for the pseudo-energy dissipation rates ε^± (at least within a constant of order unity), over a range of scales ℓ, which by definition is unaffected by intermittency. This has been done both in the ecliptic plane (MacBride et al., 2008, 2010) and in the polar wind (Marino et al., 2009; Carbone et al., 2009b). Even preliminary attempts (MacBride et al., 2008) result in an estimate for the energy dissipation rate ε_E which is close to the value required for the heating of the solar wind. However, refined analyses (MacBride et al., 2010) give results which indicate that at 1 AU in the ecliptic plane the solar wind can be heated by a turbulent energy cascade. As a different approach, Marino et al. (2009), using data from the Ulysses spacecraft in the polar wind, calculated the values of the pseudo-energy transfer rates from the relation Y_ℓ^±/ℓ, and compared these values with the radial profile of the heating rate (75) required to maintain the observed temperature against the adiabatic cooling. The Ulysses database provides two different estimates for the temperature, T1, indicated as T_large in the literature, and T2, known as T_small. In general, T1 and T2 are known to sometimes give an overestimate and an underestimate of the true temperature, respectively, so that analyses are performed using both temperatures (Marino et al., 2009). The heating rates are estimated at the same positions for which the energy cascade was observed. As shown in Figure 109, the results indicate that the turbulent transfer rate represents a significant fraction of the expected heating, say the MHD turbulent cascade contributes to the in situ heating of the wind from 8% to 50% (for T1 and T2, respectively), up to 100% in some cases. The authors concluded that, although the turbulent cascade in the polar wind must be considered an important ingredient of the heating, the turbulent cascade alone seems unable to provide all the heating needed to explain the observed slowdown of the temperature decrease, in the framework of the model profile given in Equation (75). The situation is completely different as soon as compressibility is taken into account. In fact, when the pseudo-energy transfer rates are calculated through W_ℓ^±/ℓ, the radial profile of the energy dissipation rate is well reproduced, thus indicating that the turbulent energy cascade provides the amount of energy required to locally heat the solar wind to the observed values.

Figure 109: Radial profile of the pseudo-energy transfer rates obtained from the turbulent cascade rate through the Yaglom relation, for both the compressive and the incompressive case. The solid lines represent the radial profiles of the heating rate required to obtain the observed temperature profile.
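For completeness, a minimal sketch of the Yaglom-type estimate used above is given below. It assumes the standard incompressible MHD (Politano–Pouquet) form of the law, Y_ℓ^+ = 〈δz_L^− |δz^+|^2〉 = −(4/3) ε^+ ℓ, together with the Taylor hypothesis ℓ = Vτ; the numerical prefactor and the data layout are assumptions of this illustration and may differ from the conventions adopted elsewhere in the text.

```python
import numpy as np

def yaglom_epsilon(z_plus, z_minus, lags, dt, V):
    """Pseudo-energy transfer rate eps^+ from the mixed third-order moment
    Y_l^+ = < dz_L^- |dz^+|^2 >, with l = V * lag * dt (Taylor hypothesis).

    z_plus, z_minus: (N, 3) Elsasser-variable series, with the longitudinal
    (sampling) direction stored as component 0. Assumes Y_l^+ = -(4/3) eps^+ l."""
    eps = []
    for lag in lags:
        dzp = z_plus[lag:] - z_plus[:-lag]
        dzm = z_minus[lag:] - z_minus[:-lag]
        Y = np.mean(dzm[:, 0] * np.sum(dzp ** 2, axis=1))
        eps.append(-0.75 * Y / (V * lag * dt))
    return np.array(eps)   # roughly scale-independent in the inertial range if a cascade is present
```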
11.1 Dissipative/dispersive range in the solar wind turbulence
As we saw in Section 8, the energy cascade in turbulence can be recognized by looking at Yaglom's law. The presence of this law in solar wind turbulence showed that an energy cascade is at work, transferring energy to small scales where it is dissipated by some mechanism. While, as we showed before, the inertial range of turbulence in the solar wind can be described more or less within a fluid framework, the small-scale dissipative region can be much more (perhaps completely) different. The main motivation for this is the fact that the collision length in the solar wind, roughly estimated as the thermal velocity divided by the collision frequency, turns out to be of the order of 1 AU. The solar wind therefore behaves formally as a collisionless plasma, that is, the usual viscous dissipation is negligible. At the same time, in a magnetized plasma there are a number of characteristic scales, so understanding the physics of the generation of the small-scale region of turbulence in the solar wind is a challenging topic from the point of view of basic plasma physics. By small scales we mean scales ranging between the ion-cyclotron frequency f_ci = eB/(2πm_i) (which in the solar wind at 1 AU is about f_ci ≃ 0.1 Hz), or the ion inertial length λ_i = c/ω_pi, and the electron-cyclotron frequency. At these scales the usual MHD approximation may break down in favour of a more complex description of the plasma where kinetic processes must take place. Some time ago Leamon et al. (1998) analyzed small-scale magnetic field measurements at 1 AU, using 33 one-hour intervals from the MFI instrument on board the Wind spacecraft. Figure 110 shows the trace of the power spectral density matrix for hour 1300 on day 30 of 1995, which is the typical interplanetary power spectrum measured by Leamon et al. (1998). It is evident that a spectral break exists at about f_br ≃ 0.44 Hz, close to the ion-cyclotron frequency. Below the ion-cyclotron frequency, the spectrum follows the usual power law f^−α, where the spectral index is close to the Kolmogorov value α ≃ 5/3. At small scales, namely at frequencies above f_br, the spectrum steepens significantly, but is still described by a power law with a slope in the range α ∈ [2, 4] (Leamon et al., 1998; Smith et al., 2006), typically α ≃ 7/3. In direct analogy to hydrodynamics, where the steepening of the inertial-range spectrum corresponds to the onset of dissipation, the authors attribute the steepening of the spectrum to the occurrence of a "dissipative" range (Leamon et al., 1998). A statistical analysis by Smith et al. (2006) showed that the distribution of spectral slopes (cf. Figure 2 of Smith et al., 2006) is broader for the high-frequency region, while it is more peaked around the Kolmogorov value in the low-frequency region. Moreover, the high-frequency region of the spectrum seems to be related to the low-frequency region (Smith et al., 2006). In particular, the steepening of the high-frequency range spectrum is clearly dependent on the rate of the energy cascade ε obtained as a rough estimate (cf. Figure 4 of Smith et al., 2006).
Figure 110: a) Typical interplanetary magnetic field power spectrum obtained from the trace of the spectral matrix. A spectral break at about ~ 0.4 Hz is clearly visible. b) Corresponding magnetic helicity spectrum. Image reproduced by permission from Leamon et al. (1998), copyright by AGU.
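A simple way to reproduce this kind of break identification on one's own spectra is sketched below (an illustrative assumption, not the procedure used by Leamon et al.): two power laws are fitted in log-log space on either side of a trial break frequency, and the break is taken where the total residual is smallest.

```python
import numpy as np

def find_break(f, P, candidates):
    """f, P : frequency [Hz] and spectral density arrays; candidates : trial break frequencies.
       Returns the best break frequency and the (low-f, high-f) spectral slopes."""
    logf, logP = np.log10(f), np.log10(P)
    best = None
    for fb in candidates:
        lo, hi = f < fb, f >= fb
        if lo.sum() < 5 or hi.sum() < 5:          # require enough points on both sides
            continue
        res, slopes = 0.0, []
        for sel in (lo, hi):
            a, b = np.polyfit(logf[sel], logP[sel], 1)   # slope, intercept in log-log space
            res += np.sum((logP[sel] - (a * logf[sel] + b)) ** 2)
            slopes.append(a)
        if best is None or res < best[0]:
            best = (res, fb, slopes)
    return best[1], best[2]
```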
Further properties of turbulence in the high-frequency region have been evidenced by looking at solar wind observations by the FGM instrument onboard the Cluster satellites (Alexandrova et al., 2008), spanning a 0.02–0.5 Hz frequency range. The authors found that the same spectral break reported by Leamon et al. (1998) exists when different datasets (Helios for large scales and Cluster for small scales) are used. The break (cf. Figure 1 of Alexandrova et al., 2008) has been found at about f_br ≃ 0.3 Hz, near the ion cyclotron frequency f_ci ≃ 0.1 Hz, which roughly corresponds to spatial scales of about 1900 km ≃ 15λ_i (λ_i ≃ 130 km being the ion skin depth). However, as evidenced in Figure 1 of Alexandrova et al. (2008), the compressible magnetic fluctuations, measured by the magnetic field parallel spectrum S∥, are enhanced at small scales. This means that, after the break, compressible fluctuations become much more important than in the low-frequency part. The ratio 〈S∥〉/〈S〉 ≃ 0.03 in the low-frequency range (S is the total power spectral density and brackets denote average values over the whole range), while compressible fluctuations increase to about 〈S∥〉/〈S〉 ≃ 0.26 in the high-frequency part. The increase of the above ratio was already noted in the paper by Leamon et al. (1998). Moreover, Alexandrova et al. (2008) found that, as in the low-frequency region (cf. Section 7.2), intermittency is a basic property of the high-frequency range. In fact, the authors found that the PDFs of normalized magnetic field increments strongly depend on the scale (Alexandrova et al., 2008), a typical signature of intermittency in fully developed turbulence (cf. Section 7.2). More quantitatively, the behavior of the fourth-order moment K(f) of magnetic fluctuations at different frequencies is shown in Figure 111.
Figure 111: The fourth-order moment K(f) of magnetic fluctuations as a function of frequency f. The dashed line refers to data from the Helios spacecraft while the full line refers to data from the Cluster spacecraft at 1 AU. The inset shows the number of intermittent structures revealed as a function of frequency. Image reproduced by permission from Alexandrova et al. (2008), copyright by AAS.
It is evident that this quantity increases as the scale becomes smaller, thus indicating the presence of intermittency. However, the rate at which K(f) increases is more pronounced above the ion cyclotron frequency, meaning that intermittency in the high-frequency range is much more effective than in the low-frequency region. Recently, by analyzing a different dataset from the Cluster spacecraft, Kiyani et al. (2009), using high-order statistics of magnetic differences, showed that the scaling exponents of structure functions, evaluated at small scales, are no longer anomalous as in the low-frequency range, even if the situation is not yet completely clear (Yordanova et al., 2008, 2009). This is a good example of the absence of universality in turbulence, a topic which has received renewed attention in recent years (Chapman et al., 2009; Lee et al., 2010; Matthaeus, 2009).
12 The Origin of the High-Frequency Region
How is the high-frequency region of the spectrum generated? This has become an urgent topic which must be addressed. Ghosh et al. (1996) appeal to a change of invariants in controlling the flow of spectral energy transfer in the cascade process, and in this picture no dissipation is required to explain the steepening of the magnetic power spectrum.
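The scale-dependent fourth-order moment used here as an intermittency diagnostic can be computed directly from single-spacecraft time series; the sketch below is an illustrative assumption (not the authors' code) computing the flatness of increments of one magnetic field component, which equals 3 for a Gaussian signal and grows towards small scales (high frequencies) for intermittent turbulence.

```python
import numpy as np

def flatness(B, lags):
    """B : 1D array (one magnetic field component); lags : iterable of positive sample lags.
       Returns the flatness <dB^4>/<dB^2>^2 at each lag (scale)."""
    K = []
    for m in lags:
        dB = B[m:] - B[:-m]                       # increments at scale m samples
        K.append(np.mean(dB ** 4) / np.mean(dB ** 2) ** 2)
    return np.array(K)
```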
Furthermore, it is believed that the high-frequency region is highly anisotropic, with a significant fraction of the turbulent energy cascading mostly into quasi-2D structures, perpendicular to the background magnetic field. How magnetic energy is dissipated in the anisotropic energy cascade remains an open question.
12.1 A dissipation range
As we have already said, in their analysis of Wind data Leamon et al. (1998) attribute the presence of the region at frequencies higher than the ion-cyclotron frequency to a kind of dissipative range. Apart from the power spectrum, the authors examined the normalized reduced magnetic helicity σ(f), and they found an excess of negative values at high frequencies. Since this quantity is a measure of the spatial handedness of the magnetic field (Moffatt, 1978) and can be related to the polarization in the plasma frame once the propagation direction is known (Smith et al., 1983), the above observations should be consistent with the ion-cyclotron damping of Alfvén waves. Using a reference system relative to the mean magnetic field direction e_B and the radial direction e_R, namely (e_B × e_R, e_B × (e_B × e_R), e_B), they conclude that transverse fluctuations are less dominant than in the inertial range and that the high-frequency range is best described by a mixture of 46% slab waves and 54% 2D geometry. Since in the low-frequency range they found 11% and 89%, respectively, the increased slab fraction may be explained by the preferential dissipation of oblique structures. Interactions of thermal particles with the 2D slab component may be responsible for the formation of the dissipative range, even if the situation seems to be more complicated. In fact, they found that kinetic Alfvén waves propagating at large angles to the background magnetic field might also be consistent with the observations and may form some portion of the 2D component. Recently the question of the increased anisotropy of the high-frequency region has been addressed by Perri et al. (2009), who investigated the scaling behavior of the eigenvalues of the variance matrix of magnetic fluctuations, which give information on the anisotropy due to the different polarizations of the fluctuations. The authors investigated data from the Cluster spacecraft collected while the satellites orbited in front of the Earth's quasi-parallel bow shock (Perri et al., 2009). The results indicate that magnetic turbulence in the high-frequency region is strongly anisotropic, the minimum variance direction being almost parallel to the background magnetic field at scales larger than the ion cyclotron scale. A very interesting result is the fact that the eigenvalues of the variance matrix have a strongly intermittent behavior, with very high localized fluctuations below the ion cyclotron scale. This behavior, never investigated before, generates a cross-scale effect in magnetic turbulence. Indeed, the PDFs of the eigenvalues evolve with the scale, namely they are almost Gaussian above the ion cyclotron scale and become power laws at scales smaller than the ion cyclotron scale. As a consequence, it is not possible to define a characteristic value (such as the average value) for the eigenvalues of the variance matrix at small scales. Since the wave-vector spectrum of magnetic turbulence is related to the characteristic eigenvalues of the variance matrix (Carbone et al., 1995a), the absence of a characteristic value means that a typical power spectrum at small scales cannot be properly defined.
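The quantity at the heart of this analysis, the scale-dependent variance matrix of magnetic increments and its eigenvalues, can be sketched as follows (an illustrative assumption, not the Perri et al. pipeline); the eigenvector belonging to the smallest eigenvalue gives the minimum-variance direction, whose orientation with respect to the background field quantifies the anisotropy discussed above.

```python
import numpy as np

def variance_eigen(B, m):
    """B : array of shape (N, 3) of magnetic field vectors; m : sample lag defining the scale.
       Returns eigenvalues (ascending) and eigenvectors of the 3x3 variance matrix
       of the increments; columns of V are the eigenvectors, V[:, 0] being the
       minimum-variance direction at that scale."""
    dB = B[m:] - B[:-m]
    S = np.cov(dB, rowvar=False)          # variance matrix of the increments
    w, V = np.linalg.eigh(S)              # symmetric matrix: real eigenvalues, ordered ascending
    return w, V
```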
This is a feature which has received little attention, and represents a further indication of the absence of universal characteristics of turbulence at small scales.
12.2 A dispersive range
The presence of a magnetic power spectrum with a slope close to 7/3 (Leamon et al., 1998; Smith et al., 2006) suggests that the high-frequency region above the ion-cyclotron frequency might be interpreted as a different kind of energy cascade due to dispersive effects. Turbulence in this region can then be described through Hall MHD, which is the simplest model for investigating dispersive effects in a fluid-like framework. In fact, at variance with the usual MHD, when the effect of ion inertia is taken into account the generalized Ohm's law reads $$E = -V \times B + \frac{m_i}{\rho e}(\nabla \times B) \times B,$$ where the second term on the r.h.s. of this equation represents the Hall term (m_i being the ion mass). This means that the MHD equations are enriched by a new term in the induction equation for the magnetic field, $$\frac{\partial B}{\partial t} = \nabla \times \left[ V \times B - \frac{m_i}{\rho e}(\nabla \times B) \times B + \eta \nabla \times B \right],$$ which is quadratic in the magnetic field. The above equation contains three different physical processes characterized by three different times. By introducing a length scale ℓ and characteristic fluctuations ρ_ℓ, B_ℓ, and u_ℓ, we can define an eddy-turnover time T_NL ~ ℓ/u_ℓ, related to the convective process, a Hall time T_H ~ ρ_ℓ ℓ²/B_ℓ, which characterizes typical processes related to the presence of the Hall term, and a dissipative time T_D ~ ℓ²/η. At large scales the first term on the r.h.s. of Equation (76) describes the Alfvénic turbulent cascade, realized in a time T_NL. At very small scales, the dissipative time becomes the smallest timescale, and dissipation takes place.12 However, one can conjecture that at intermediate scales a cascade is realized in a time which is no longer T_NL and not yet T_D; rather, the cascade is realized in a time T_H. This happens when T_H ~ T_NL. Since at these scales density fluctuations become important, the mean volume rate of energy transfer can be defined as ε_V ~ B_ℓ²/T_H ~ B_ℓ³/ℓ²ρ_ℓ, where T_H is used as the characteristic time for the cascade. Using the usual Richardson cartoon of the energy cascade, viewed as a hierarchy of eddies at different scales, and following von Weizsäcker (1951), the ratio of the mass density ρ_ℓ at two successive levels ℓ_ν > ℓ_{ν+1} of the hierarchy is related to the corresponding scale size by $$\frac{\rho_\nu}{\rho_{\nu + 1}} \sim \left( \frac{\ell_\nu}{\ell_{\nu + 1}} \right)^{-3r},$$ where 0 ≤ |r| ≤ 1 is a measure of the degree of compression at each level ℓ_ν. Using a scaling law for compressive effects ρ_ℓ ~ ℓ^{−3r} and assuming a constant spectral energy transfer rate, we have B_ℓ ~ ℓ^{(2/3−2r)}, from which the spectral energy density $$E(k) \sim k^{-7/3 + r}.$$ The observed range of scaling exponents in the solar wind, α ∈ [2, 4] (Leamon et al., 1998; Smith et al., 2006), can then be reproduced by different degrees of compression of the solar wind plasma, −5/6 ≤ r ≤ 1/6.
13 Two Further Questions About Small-Scale Turbulence
The most "conservative" way to describe the presence of a dissipative/dispersive region in solar wind turbulence, as we reported before, is for example through the Hall-MHD model.
While when dealing with large scales we can successfully approach the problem of turbulence by saying that some form of dissipation must exist at small scales, the dissipationless character of the solar wind cannot be avoided when we deal with small scales. The full understanding of the physical mechanisms that allow the dissipation of energy in the absence of collisional viscosity would be a step of crucial importance in the problem of high-frequency turbulence in space plasmas. Another fundamental question concerns the dispersive properties of small-scale turbulence beyond the spectral break. This last question has been reformulated by asking: what are the principal constituent modes of small-scale turbulence? This approach explicitly assumes that small-scale fluctuations in the solar wind can be described within a weak turbulence framework. In other words, a dispersion relation, namely a precise relationship between the frequency ω and the wave-vector k, is assumed. As is well known from basic plasma physics, linear theory for a homogeneous, collisionless plasma yields three kinds of modes at and below the proton cyclotron frequency Ω_p. At wave-vectors transverse to the background magnetic field and for Ω_p > ω_r (ω_r being the real part of the frequency of the fluctuation), two modes are present, namely a left-hand polarized Alfvén cyclotron mode and a right-hand polarized magnetosonic mode. A third, ion-acoustic (slow) mode exists but is damped, except when T_e ≫ T_p, which is not common in solar wind turbulence. At quasi-perpendicular propagation the Alfvénic branch evolves into kinetic Alfvén waves (KAW), while magnetosonic modes may propagate at Ω_p ≪ ω_r as whistler modes. As the wave-vector becomes oblique to the background magnetic field, both modes develop a nonzero magnetic compressibility and parallel fluctuations become important. There are two distinct scenarios for the subsequent energy cascade of KAW and whistlers (Gary and Smith, 2009).
13.1 Whistler modes scenario
This scenario involves a two-mode cascade process: both Alfvénic and magnetosonic modes, which are only weakly damped when the plasma β ≤ 1, transfer energy to quasi-perpendicularly propagating wave-vectors. The KAW are damped by Landau damping, which is proportional to k⊥², so that they cannot contribute to the formation of the dispersive region (except for fluctuations propagating along the perpendicular direction). Even left-hand polarized Alfvén modes at quasi-parallel propagation suffer proton cyclotron damping at scales k∥ ~ ω_p/c and do not contribute. Quasi-parallel magnetosonic modes are not damped at the above scale, so that a weak cascade of right-hand polarized fluctuations can generate a dispersive region of whistler modes (Stawicki et al., 2001; Gary and Borovsky, 2004, 2008; Goldstein et al., 1994). The cascade of weakly damped whistler modes has been reproduced through electron-MHD numerical simulations (Biskamp et al., 1996, 1999; Wareing and Hollerbach, 2009; Cho and Lazarian, 2004) and Particle-in-Cell (PIC) codes (Gary et al., 2008; Saito et al., 2008).
13.2 Kinetic Alfvén waves scenario
In this scenario (Howes, 2008; Schekochihin et al., 2009), long-wavelength Alfvénic turbulence transfers energy to quasi-perpendicular propagation for the primary turbulent cascade up to the thermal proton gyroradius, where fluctuations are subject to proton Landau damping.
The remaining fluctuation energy continues the cascade to small scales as KAW at quasi-perpendicular propagation and at frequencies ω_r < Ω_p (Bale et al., 2005; Sahraoui et al., 2009). Fluctuations are completely damped via the electron Landau resonance at wavelengths of the order of the electron gyroradius. This scenario has been observed in gyrokinetic numerical simulations (Howes et al., 2008b), where the spectral breakpoint k⊥ ~ Ω_p/u_th (u_th being the proton thermal speed) has been observed.
13.3 Where does the fluid-like behavior break down in solar wind turbulence?
Up to now spacecraft observations do not allow us to unambiguously distinguish between the two previous scenarios. As stated by Gary and Smith (2009), at our present level of understanding of linear theory, the best we can say is that quasi-parallel whistlers, quasi-perpendicular whistlers, and KAW all probably could contribute to dispersion-range turbulence in the solar wind. Thus, the critical question is not which mode is present (if any exists in a nonlinear, collisionless medium such as the solar wind), but rather what conditions favor one mode over the others. On the other hand, starting from observations, we cannot rule out the possibility that strong turbulence rather than "modes" is at work to account for the high-frequency part of the magnetic energy spectrum. One of the most striking observations of small-scale turbulence is the fact that the electric field is strongly enhanced after the spectral break (Bale et al., 2005). This means that turbulence at small scales is essentially electrostatic in nature, even if weak magnetic fluctuations are present. The enhancement of the electrostatic part has been viewed as a strong indication for the presence of KAW, because gyrokinetic simulations show the same phenomenon (Howes et al., 2008b). However, as pointed out by Matthaeus et al. (2008) (see also the reply by Howes et al., 2008a, to the comment by Matthaeus et al., 2008), the enhancement of electrostatic fluctuations can be well reproduced by Hall-MHD turbulence, without the presence of KAW modes. Actually, the enhancement of the electric field turns out to be a statistical property of inviscid Hall MHD (Servidio et al., 2008); that is, in the absence of viscous and dissipative terms, the statistical equilibrium ensemble of the Hall-MHD equations in wave-vector space is built up with an enhancement of the electric field at large wave-vectors. This represents a thermodynamic equilibrium property of the equations and has little to do with a non-equilibrium turbulent cascade.13 This means that the enhancement of the electrostatic part of the fluctuations cannot be seen as a proof firmly establishing that KAW are at work in the dispersive region. One of the most peculiar capabilities offered by the Cluster mission was the possibility of separating the time domain from the space domain, using the tetrahedral formation of the four spacecraft (Escoubet et al., 2001). This allows us to obtain a 3D wave-vector spectrum and to identify the actual dispersion relation of solar wind turbulence, if any exists, at small scales. This can be done by using the k-filtering technique, which is based on the strong assumption of plane-wave propagation (Glassmeier et al., 2001). Of course, due to the relatively small distances between the spacecraft, this cannot be applied to large-scale turbulence. Apart from the spectral break identified by Leamon et al.
(1998), a new break has been identified in solar wind turbulence using high-frequency Cluster data, at about a few tens of Hz. In fact, Cluster data in burst mode can reach the characteristic electron inertial scale λ_e and the electron Larmor radius ρ_e. Using the FluxGate Magnetometer and the search coils of the Spatio-Temporal Analysis of Field Fluctuations experiment, Sahraoui et al. (2009) showed that the turbulent spectrum changes shape at wave-vectors of about kρ_e ~ kλ_e ≃ 1. This result, which perhaps identifies the occurrence of a dissipative range in solar wind turbulence, was obtained in the upstream solar wind magnetically connected to the bow shock. However, in these studies the plasma β was of the order of β ≃ 1, thus not allowing the separation of the two scales. Alexandrova et al. (2009), using three instruments onboard the Cluster spacecraft operating in different frequency ranges, resolved the spectrum up to 300 Hz. They confirmed the presence of the high-frequency spectral break at about kρ_e ~ [0.1, 1] and, most interestingly, fitted this part of the spectrum with an exponential decay ~ exp[−√(kρ_e)], thus indicating the onset of dissipation. The 3D spectral shape reveals few surprises: the energy distribution exhibits anisotropic features characterized by a prominently extended structure perpendicular to the mean magnetic field, preferring the ecliptic north direction, and also by a moderately extended structure parallel to the mean field (Narita et al., 2010). Results on the 3D energy distribution suggest the dominance of quasi-2D turbulence toward smaller spatial scales, overall symmetry under a change of sign of the wave vector (reflectional symmetry), and the absence of spherical and axial symmetry. This last was one of the main hypotheses for the Maltese Cross (Matthaeus et al., 1990), even if bias due to the satellite flying through the structures can generate artificial deviations from axisymmetry (Turner et al., 2011). More interestingly, Sahraoui et al. (2010b) investigated the occurrence of a dispersion relation. They claim that the energy cascade should be carried by highly oblique KAW with Doppler-shifted plasma frequency ω_plas ≤ 0.1ω_ci down to k⊥ρ_i ~ 2. Each wave-vector spectrum in the direction perpendicular to an "average" magnetic field B0 shows two scaling ranges separated by a breakpoint in the interval [0.1, 1]k⊥ρ_i, that is, a Kolmogorov scaling followed by a steeper scaling. The authors conjecture that the turbulence undergoes a transition range, where part of the energy is dissipated into proton heating via Landau damping, and the remaining energy cascades down to electron scales where electron Landau damping may dominate. The dispersion relation, compared with linear solutions of the Maxwell–Vlasov equations (cf. Figure 5 of Sahraoui et al., 2010b), seems to identify KAW as responsible for the cascade at small scales. The conjecture by Sahraoui et al. (2010b) does not take into account the fact that Landau damping rapidly saturates under solar wind conditions (Marsch, 2006; Valentini et al., 2008).
Figure: Observed dispersion relations (dots), with estimated error bars, compared to linear solutions of the Maxwell–Vlasov equations for three observed angles between the k vector and the local magnetic field direction (damping rates are represented by the dashed lines). Proton and electron Landau resonances are represented by the black curves L_{p,e}. Proton cyclotron resonances are shown by the curves C_p (the electron cyclotron resonance lies outside the plotted frequency range). Image reproduced by permission from Sahraoui et al. (2010a), copyright by APS.
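Since these breaks are discussed in terms of the ion and electron characteristic scales, it is useful to be able to estimate those scales from bulk plasma parameters; the sketch below collects the standard textbook formulas (the input values are assumptions, not taken from the cited studies) and maps a spatial scale into a spacecraft-frame frequency through the Taylor hypothesis.

```python
import numpy as np

e    = 1.602176634e-19   # elementary charge [C]
m_p  = 1.672621e-27      # proton mass [kg]
m_e  = 9.109383e-31      # electron mass [kg]
k_B  = 1.380649e-23      # Boltzmann constant [J/K]
eps0 = 8.8541878e-12     # vacuum permittivity [F/m]
c    = 2.99792458e8      # speed of light [m/s]

def kinetic_scales(n_cc, B_nT, T_p_K, T_e_K):
    """Ion/electron inertial lengths and gyroradii [m] from density [cm^-3],
       field [nT], and temperatures [K]."""
    n, B = n_cc * 1e6, B_nT * 1e-9
    lam_i = c / np.sqrt(n * e**2 / (eps0 * m_p))   # ion inertial length
    lam_e = c / np.sqrt(n * e**2 / (eps0 * m_e))   # electron inertial length
    rho_i = m_p * np.sqrt(k_B * T_p_K / m_p) / (e * B)   # proton gyroradius
    rho_e = m_e * np.sqrt(k_B * T_e_K / m_e) / (e * B)   # electron gyroradius
    return lam_i, lam_e, rho_i, rho_e

def taylor_frequency(l_m, V_sw_kms):
    """Spacecraft-frame frequency [Hz] corresponding to a spatial scale l [m]."""
    return V_sw_kms * 1e3 / (2.0 * np.pi * l_m)

# Example with assumed 1 AU values: kinetic_scales(5.0, 5.0, 1.0e5, 1.5e5)
# gives an ion inertial length of order 100 km, consistent with the text.
```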
The question of the existence of a dispersion relation was also investigated by Narita et al. (2011a), who analyzed three selected time intervals of magnetic field data from the Cluster FGM in the solar wind. They used a refined version of the k-filtering technique, called the MSR technique, to obtain high-resolution energy spectra in the wave-vector domain. Like the wave telescope, the MSR technique fits the measured data with a propagating plane wave as a function of frequency and wave vector. The main result is the strong spread in the frequency-wave-vector domain: none of the three intervals exhibits a clear organization of the dispersion relation (see Figure 113). Frequencies and wave vectors appear to be strongly scattered, thus not allowing the identification of wave-like behavior.
Figure 113: Top: angles between the wave vectors and the mean magnetic field as a function of the wave number. Bottom: frequency-wave-number diagram of the identified waves in the plasma rest frame. Magnetosonic (MS), whistler (WHL), and kinetic Alfvén wave (KAW) dispersion relations are represented by dashed, straight, and dotted lines, respectively. Image reproduced by permission from Narita et al. (2011a), copyright by AGU.
The above-discussed papers shed some "darkness" on the scenario of small-scale solar wind turbulence as made of "modes", or at least they indicate that solar wind turbulence, at least at small scales, is far from universality. As a further stroke of the grey brush, Perri et al. (2011) simply calculated the frequency of the spectral break as a function of radial distance from the Sun. In fact, since the plasma parameters, and in particular the magnetic field intensity, change when going towards large radial distances, the break frequency should change accordingly. They used Messenger data for the inner heliosphere and Ulysses data for the outer heliosphere. Data from 0.5 AU up to 5 AU are summarized in Figure 2 of Perri et al. (2011). While the characteristic frequencies of the plasma decrease with increasing radial distance, the position of the spectral break remains constant over the whole range of distances investigated. That is, the observed high-frequency spectral break seems to be independent of the distance from the Sun, and hence of both the ion-cyclotron frequency and the proton gyroradius. So, where does the fluid-like behavior break down in solar wind turbulence?
13.4 What physical processes replace "dissipation" in a collisionless plasma?
As we said before, the understanding of the small-scale termination of the turbulent energy cascade in collisionless plasmas is nowadays one of the outstanding unsolved problems in space plasma physics. In the absence of collisional viscosity and resistivity, the dynamics of small scales is kinetic in nature and must be described by the kinetic theory of plasmas. The identification of the physical mechanism that "replaces" dissipation in the collisionless solar wind plasma and establishes a link between the macroscopic and the microscopic scales would open new scenarios in the study of turbulent heating in space plasmas. This problem is still in its infancy. Kinetic theory has long been known in plasma physics; the interested reader can consult the excellent review by Marsch (2006).
However, it is restricted mainly to linear theoretical arguments. The fast technological development of supercomputers nowadays gives the possibility of using kinetic Eulerian Vlasov codes that solve the Vlasov–Maxwell equations in multi-dimensional phase space. The only limitation to the "dream" of solving 3D-3V problems (3D in real space and 3D in velocity space) resides in the technological development of fast enough solvers. The use of almost noiseless codes is crucial and allows for the first time the possibility of analyzing kinetic nonlinear effects such as the nonlinear evolution of the particle distribution function, the nonlinear saturation of Landau damping, etc. Of course, a faster numerical way to address the dissipation issue in collisionless plasmas might consist in using intermediate gyrokinetic descriptions (Brizard and Hahm, 2007) based on gyrotropy and strong anisotropy assumptions, k∥ ≪ k⊥. As we said before, observations of small-scale turbulence have shown the presence of a significant level of electrostatic fluctuations (Gurnett and Anderson, 1977; Gurnett and Frank, 1978; Gurnett et al., 1979; Bale et al., 2005). Old plasma wave measurements on the Helios 1 and 2 spacecraft (Gurnett and Anderson, 1977; Gurnett and Frank, 1978; Gurnett et al., 1979) revealed the occurrence of electric field wave-like turbulence in the solar wind at frequencies between the electron and ion plasma frequencies. Wavelength measurements using the IMP 6 spacecraft provided strong evidence for the presence of electric fluctuations which were identified as ion acoustic waves, Doppler-shifted upward in frequency by the motion of the solar wind (Gurnett and Frank, 1978). Comparison with the Helios results showed that the ion acoustic wave-like turbulence detected in interplanetary space has characteristics essentially identical to those of bursts of electrostatic turbulence generated by protons streaming into the solar wind from the Earth's bow shock (Gurnett and Frank, 1978; Gurnett et al., 1979). Gurnett and Frank (1978) observed that in a few cases of Helios data, ion acoustic wave intensities are enhanced in direct association with abrupt increases in the anisotropy of the solar wind electron distribution. This relationship strongly suggests that the ion acoustic wave-like structures detected by Helios far from the Earth are produced by an electron heat flux instability or by protons streaming into the solar wind from the Earth's bow shock. Further evidence (Marsch, 2006) revealed the strong association between the electrostatic peak and nonthermal features of the particle velocity distribution function, such as temperature anisotropy and the generation of accelerated beams. Araneda et al. (2008), using Vlasov kinetic theory and one-dimensional Particle-in-Cell hybrid simulations, provided a novel explanation of the bursts of ion-acoustic activity occurring in the solar wind. These authors studied the effect on the proton velocity distributions, in a low-β plasma, of compressible fluctuations driven by the parametric instability of Alfvén-cyclotron waves. Simulations showed that field-aligned proton beams are generated during the saturation phase of the wave-particle interaction, with a drift speed which is slightly greater than the Alfvén speed. As a consequence, the main part of the distribution function becomes anisotropic due to phase mixing.
This observation is relevant, because the same anisotropy is typically observed in the velocity distributions measured in the fast solar wind (Marsch, 2006). In recent papers, Valentini et al. (2008) and Valentini and Veltri (2009) used a hybrid Vlasov–Maxwell model where ions are treated as kinetic particles, while electrons are treated as a fluid. Numerical simulations were performed in a 1D-3V phase space (1D in physical space and 3D in velocity space) where a turbulent cascade is triggered by the nonlinear coupling of circularly left-hand polarized Alfvén waves, in the perpendicular plane and in parallel propagation, at a plasma β of the order of unity. Numerical results show that energy is transferred to short scales in longitudinal electrostatic fluctuations of the acoustic form. The numerical dispersion relation in the k–ω plane displays the presence of two branches of electrostatic waves. The upper branch, at higher frequencies, consists of ion-acoustic waves, while the new lower-frequency branch consists of waves propagating with a phase speed of the order of the ion thermal speed. This new branch is characterized by the presence of a plateau around the thermal speed in the ion distribution function, which is a typical signature of the nonlinear saturation of the wave-particle interaction process. Numerical simulations show that energy should be "dissipated" at small scales through the generation of an ion beam in the velocity distribution function, as a consequence of the trapping process and of the nonlinear saturation of Landau damping, which results in bursts of electrostatic activity. Whether or not this picture, which seems to be confirmed by recent numerical simulations (Araneda et al., 2008; Valentini et al., 2008; Valentini and Veltri, 2009), represents the final fate of the real turbulent energy cascade observed at macroscopic scales requires further investigation. Available measurements in interplanetary space, even using the Cluster spacecraft, do not allow analysis at typical kinetic scales.
14 Conclusions and Remarks
Now that the reader has finally reached the conclusions, hoping that he was patient enough to read the whole paper, we suggest that he go back for a moment to the List of Contents, not to start all over again, but just to take a look at the various problems that have been briefly touched upon by this review. He will certainly realize how complex the phenomenon of turbulence is in general and, in particular, in the solar wind. Almost four decades of observations and theoretical efforts have not yet been sufficient to fully understand how this natural and fascinating phenomenon really works in the solar wind. We certainly are convinced that we cannot think of a single mechanism able to reproduce all the details we have directly observed, since physical boundary conditions favor or inhibit different generation mechanisms, like, for instance, velocity shear or parametric decay, depending on where we are in the heliosphere.
On the other hand, there are some aspects which we believe are at the basis of turbulence generation and evolution like: a) we do need non-linear interactions to develop the observed Kolmogorov-like spectrum; b) in order to have non-linear interactions we need to have inward modes and/or convected structures which the majority of the modes can interact with; c) outward and inward modes can be generated by different mechanisms like velocity shear or parametric decay; d) convected structures actively contribute to turbulent development of fluctuations and can be of solar origin or locally generated. In particular, ecliptic observations have shown that what we call Alfvénic turbulence, mainly observed within high velocity streams, tends to evolve towards the more "standard" turbulence that we mainly observe within slow wind regions, i.e., a turbulence characterized by e+ ~ e−, an excess of magnetic energy, and a Kolmogorov-like spectral slope. Moreover, the presence of a well established "background" spectrum already at short heliocentric distances and the low Alfvénicity of the fluctuations suggest that within slow wind turbulence is mainly due to convected structures frozen in the wind which may well be the remnants of turbulent processes already acting within the first layers of the solar corona. In addition, velocity shear, whenever present, seems to have a relevant role in driving turbulence evolution in low-latitude solar wind. Polar observations performed by Ulysses, combined with previous results in the ecliptic, finally allowed to get a comprehensive view of the Alfvénic turbulence evolution in the 3D heliosphere, inside 5 AU. However, polar observations, when compared with results obtained in the ecliptic, do not appear as a dramatic break. In other words, the polar evolution is similar to that in the ecliptic, although slower. This is a middle course between the two opposite views (a non-relaxing turbulence, due to the lack of velocity shear, or a quick evolving turbulence, due to the large relative amplitude of fluctuations) which were popular before the Ulysses mission. The process driving the evolution of polar turbulence still is an open question although parametric decay might play some role. As a matter of fact, simulations of non-linear development of the parametric instability for large-amplitude, broadband Alfvénic fluctuations have shown that the final state resembles values of σc not far from solar wind observations, in a state in which the initial Alfvénic correlation is partially preserved. As already observed in the ecliptic, polar Alfvénic turbulence appears characterized by a predominance of outward fluctuations and magnetic fluctuations. As regards the outward fluctuations, their dominant character extends to large distances from the Sun. At low solar activity, with the polar wind filling a large fraction of the heliosphere, the outward fluctuations should play a relevant role in the heliospheric physics. Relatively to the imbalance in favor of the magnetic energy, it does not appear to go beyond an asymptotic value. Several ways to alter the balance between kinetic and magnetic energy have been proposed (e.g., 2D processes, propagation in a non-uniform medium, and effect of magnetic structures, among others). However, convincing arguments to account for the existence of such a limit have not yet been given, although promising results from numerical simulations seem to be able to qualitatively reproduce the final imbalance in favor of the magnetic energy. 
Definitely, the relatively recent adoption of numerical methods able to highlight scaling-law features hidden from the usual spectral methods has disclosed a new and promising way to analyze turbulent interplanetary fluctuations. Interplanetary space is now looked at as a natural wind tunnel where scaling properties of the solar wind can be studied on scales of the order of (or larger than) 10⁹ times laboratory scales. Within this framework, intermittency represents an important topic in both theoretical and observational studies. Intermittency properties have been recovered via very promising models like the MHD shell models, and the nature of intermittent events has finally been disclosed thanks to new numerical techniques based on wavelet transforms. Moreover, similar techniques have allowed us to tackle the problem of identifying the anisotropic spectral scaling, although no conclusive and final analyses have been reported so far. In addition, recent studies on the intermittency of magnetic field and velocity vector fluctuations, together with analogous analyses of magnitude fluctuations, contributed to sketch a scenario in which propagating stochastic Alfvénic fluctuations and advected structures, possibly flux tubes embedded in the wind, represent the main ingredients of interplanetary turbulence. The varying predominance of one of the two species, waves or structures, would make the observed turbulence more or less intermittent. However, the fact that we can make measurements at just one point of this natural wind tunnel represented by the solar wind does not allow us to discriminate temporal from spatial phenomena. As a consequence, we do not know whether these advected structures are somehow connected to the complicated topology observed at the Sun's surface or can be considered as by-products of chaotically developing phenomena. Comparative studies based on the intermittency phenomenon within fast and slow wind during the wind expansion would suggest a solar origin for these structures, which would form a sort of turbulent background frozen in the wind. As a matter of fact, intermittency in the solar wind is not limited to the dissipation range of the spectrum but abundantly extends orders of magnitude away from dissipative scales, possibly into the inertial range, which can be identified taking into account all the possible caveats related to this problem and briefly reported in this review. This fact introduces serious differences between hydrodynamic turbulence and solar wind MHD turbulence, and the term "intermittency" assumes a different intrinsic meaning when observed in interplanetary turbulence. In practice, coherent structures observed in the wind are at odds with the filaments or vortices observed in ordinary fluid turbulence, since these last ones are dissipative structures continuously created and destroyed by the turbulent motion. Small-scale turbulence, namely observations of turbulent fluctuations at frequencies greater than, say, 0.1 Hz, revealed a rich and yet poorly understood physics, mainly related to the big problem of dissipation in a dissipationless plasma. Data analysis received a strong impulse from the Cluster spacecraft, revealing a small number of well-established and non-contradictory observations, such as the presence of a double spectral break. However, the interpretation of the presence of a power spectrum at small scales is far from clear, and a number of contradictory interpretations can be found in the literature.
Numerical simulations, based on Vlasov–Maxwell, gyrokinetic, and PIC codes, have been made possible by the increasing power of computers. They indicated some possible interpretations of the high-frequency part of the turbulent spectrum, but unfortunately the interpretation is not unequivocal. The study of the high-frequency part of the turbulent spectrum is a rapidly growing field of research; here we reported the up-to-date state of the art, while a more complete, systematic, and thought-out analysis of the wide literature will be given in a future version of the paper. As a final remark, we would like to point out that we have tried to present a particular point of view on turbulence in the solar wind. We apologize for the lack of some aspects of the phenomenon at hand, which can be found in the existing literature. There are still several topics which we did not discuss in this revised version of our review. In particular, we leave for a future version: recent (non-shell) turbulent modeling; simulation of turbulence in the expanding solar wind; numerical simulations of anisotropic turbulence; a deeper view on Vlasov–Maxwell and gyrokinetic approaches. Fortunately, we are writing a Living Review paper and mistakes and/or omissions will be adequately fixed in the next version, also with the help of all our colleagues, whom we strongly encourage to send us comments and/or different points of view on particularly interesting topics which we have not yet taken into account or discussed properly.
Footnotes
This concept will be explained better in the next sections.
A fluid particle is defined as an infinitesimal portion of fluid which moves with the local velocity. As usual in fluid dynamics, infinitesimal means small with respect to the large scales, but large enough with respect to molecular scales.
The translation (Kolmogorov, 1991) of the original paper by Kolmogorov (1941) can also be found in the book by Hunt et al. (1991).
These authors were the first ones to use physical technologies and methodologies to investigate turbulent flows from an experimental point of view. Before them, experimental studies on turbulence were motivated mainly by engineering aspects.
We can use a different definition for the third invariant H(t), for example a positive-definite quantity, without the term (−1)^n and with α = 2. This can be identified as the surrogate of the square of the vector potential, thus investigating a kind of 2D MHD. In this case, we obtain a shell model with λ = 2, a = 5/4, and c = −1/3. However, this model does not reproduce the inverse cascade of the square of the magnetic potential observed in the true 2D MHD equations.
We have already defined fluctuations of a field as the difference between the field itself and its average value. This quantity has been defined as ..... Here, the differences Δψℓ of the field separated by a distance ℓ represent characteristic fluctuations at the scale ℓ, say characteristic fluctuations of the field across specific structures (eddies) that are present at that scale. The reader can appreciate the difference between the two definitions.
To be precise, it is worth remarking again that there are no convincing arguments to identify as inertial range the intermediate range of frequencies where the observed spectral properties are typical of fully developed turbulence. From a theoretical point of view, here the association "intermediate range" ≃ "inertial range" is somewhat arbitrary.
Really, an operative definition of the inertial range of turbulence is the range of scales ℓ where relation (42) (for fluid flows) or (41) (for MHD flows) is verified.
Since the solar wind moves at supersonic speed V_sw, the usual Taylor hypothesis is verified, and we can get information on spatial scaling laws ℓ by using time differences τ = ℓ/V_sw.
Note that, according to the occurrence of Yaglom's law, that is, a third-order moment different from zero, the fluctuations at a given scale in the inertial range must present some non-Gaussian features. From this point of view the calculation of structure functions with the absolute value is inappropriate, because in this way we risk cancelling out non-Gaussian features; namely, we symmetrize the probability density functions of the fluctuations. However, in general, the number of points at our disposal is much lower than that required for a robust estimate of odd structure functions, even in usual fluid flows. Then, as usual, we will obtain structure functions by taking the absolute value, even if some care must be taken with certain conclusions which can be found in the literature.
The lognormal model is derived by using a multiplicative process, where a random variable generates the cascade. Then, according to the Central Limit Theorem, the process converges to a lognormal distribution of finite variance.
The log-Lévy model is a modification of the lognormal model. In such a case, the Central Limit Theorem is used to derive the limit distribution of an infinite sum of random variables by relaxing the hypothesis of finite variance usually used. The resulting limit function is a Lévy function.
For a discussion of non-turbulent mechanisms of solar wind heating cf. Tu and Marsch (1995a).
Of course, this is based on classical turbulence. As said before, in the solar wind the dissipative term is unknown, even if it might act at very small kinetic scales.
It is worthwhile to remark that a turbulent fluid flow is out of equilibrium: the cascade requires the injection of energy (input) and a dissipation mechanism (output), usually lying on well-separated scales, along with a transfer of energy. Without input and output, the nonlinear term of the equations works like an energy redistribution mechanism towards an equilibrium in wave-vector space. This generates an equilibrium energy spectrum which should in general be the same as that obtained when the cascade is at work (cf., e.g., Frisch et al., 1975). However, even if the turbulent spectra could be anticipated by looking at the equilibrium spectra, the physical mechanisms are different. Of course, this should also be the case for Hall MHD.
Acknowledgements
Writing a large review paper is not an easy task and it would not have been possible to accomplish this goal without a good interaction with the colleagues we have been working with in our Institutions. In this regard, we would like to acknowledge the many discussions (more or less "heated") we had with, and the many advices and comments we received from, all of them, particularly from B. Bavassano and P. Veltri. We also acknowledge the use of plasma and magnetic field data from the Helios spacecraft to freshly produce some of the figures shown in the present review. In particular, we would like to thank H. Rosenbauer and R. Schwenn, PIs of the plasma experiment, and F. Mariani and N.F. Ness, PIs of the second magnetic experiment on board Helios. We thank A. Pouquet, H. Politano, and V.
Antoni for the possibility we had to compare solar wind data with both high-resolution numerical simulations and laboratory plasmas. Finally, we owe special thanks to E. Marsch and S.K. Solanki for giving us the opportunity to write this review. We also greatly appreciate the effort made by an anonymous referee in reviewing such a long paper. We would like to thank her/him for her/his meticulous, efficient, and competent analysis of this first updated version of our review, which helped us to produce a better and more readable document.
Supplementary movies:
Movie: An animation built on SOHO/EIT and SOHO/SUMER observations of the solar-wind source regions and the magnetic structure of the chromospheric network. Outflow velocities at the network cell boundaries and lane junctions below the polar coronal hole, reaching up to 10 km s−1, are represented by the blue colored areas (original figures from Hassler et al., 1999).
Movie: A numerical simulation of the incompressible MHD equations in three dimensions, assuming periodic boundary conditions (see details in Mininni et al., 2003a). The left panel shows the power spectra for kinetic energy (green), magnetic energy (red), and total energy (blue) vs. time. The right panel shows the spatially integrated kinetic, magnetic, and total energies vs. time. The vertical (orange) line indicates the current time. These results correspond to a 128³ simulation with an external force applied at wave number k_force = 10 (movie kindly provided by D. Gómez).
Movie: A 128³ numerical simulation, as in Figure 35, but with an external force applied at wave number k_force = 3 (movie kindly provided by D. Gómez).
Movie: Trajectory followed by the tip of the magnetic field vector in the minimum-variance reference system during a time interval not characterized by intermittency. The duration of the interval is 2000 × 6 s, but the magnetic field vector moves only for 100 × 6 s in order to make a smaller file (movie kindly provided by A. Vecchio).
A Some Characteristic Solar Wind Parameters
Although the solar wind is a highly variable medium, it is possible to identify some characteristic values for its most common parameters. Since the wind is an expanding medium, we ought to choose one heliocentric distance to refer to and, usually, this distance is 1 AU. In the following, we provide different tables referring to several solar wind parameters, velocities, characteristic times, and lengths. As can be seen, the solar wind is a super-Alfvénic, collisionless plasma, and MHD turbulence can be investigated for frequencies smaller than ~ 10⁻¹ Hz.
Table 6: Typical values of several solar wind parameters as measured by Helios 2 at 1 AU: number density ~ 15 cm⁻³ (slow wind) and ~ 4 cm⁻³ (fast wind); bulk velocity ~ 350 km s⁻¹ (slow wind); proton temperature ~ 5 × 10⁴ K; electron temperature and α-particle temperature (values missing in this version); magnetic field ~ 6 nT.
Table: Typical values of different speeds obtained at 1 AU.
The Alfvén speed has been measured, while all the others have been obtained from the parameters reported in Table 6: Alfvén speed ~ 30 km s⁻¹; ion sound speed and proton thermal speed (values missing in this version); electron thermal speed ~ 3000 km s⁻¹.
Table: Typical values of different frequencies at 1 AU, obtained from the parameters reported in Table 6: proton cyclotron ~ 0.1 Hz; electron cyclotron ~ 2 × 10² Hz; proton-proton collision ~ 2 × 10⁻⁶ Hz.
Table: Typical values of different lengths at 1 AU, plus the distance traveled by a proton before colliding with another proton, obtained from the parameters reported in Table 6: Debye length ~ 4 m to ~ 15 m; proton gyroradius ~ 130 km; electron gyroradius ~ 1.3 km; distance between two proton collisions ~ 1.2 AU to ~ 40 AU.
B Tools to Analyze MHD Turbulence in Space Plasmas
No matter where we are in the solar wind, short-scale data always look rather random, as shown in Figure 114.
Figure 114: B_Y component of the IMF recorded within a high-velocity stream.
This aspect introduces the problem of determining the time stationarity of the dataset. The concept of stationarity is related to ensemble-averaged properties of a random process. The random process is the collection of the N samples x(t); it is called an ensemble and indicated as {x(t)}. Properties of a random process {x(t)} can be described by averaging over the collection of all the N possible sample functions x(t) generated by the process. So, having chosen a start time t1, we can define the mean value μ_x and the autocorrelation function R_x, i.e., the first moment and the joint moment: $$\mu_x(t_1) = \lim_{N \to \infty} \frac{1}{N}\sum_{k = 1}^{N} x_k(t_1),$$ $$R_x(t_1, t_1 + \tau) = \lim_{N \to \infty} \frac{1}{N}\sum_{k = 1}^{N} x_k(t_1)\,x_k(t_1 + \tau).$$ In case μ_x(t1) and R_x(t1, t1 + τ) do not vary as the time t1 varies, the sample function x(t) is said to be weakly stationary, i.e., $$\mu_x(t_1) = \mu_x,$$ $$R_x(t_1, t_1 + \tau) = R_x(\tau).$$ Strong stationarity would require all the moments and joint moments to be time independent. However, if x(t) is normally distributed, the concept of weak stationarity naturally extends to strong stationarity. Generally, it is possible to describe the properties of {x(t)} simply by computing time averages over just one realization x(t). If the random process is stationary and the time-averaged mean μ_x(k) and autocorrelation R_x(τ, k) do not vary when computed over different sample functions k, the process is said to be ergodic. This is a great advantage for data analysts, especially for those who deal with data from s/c, since it means that properties of stationary random phenomena can be properly measured from a single time history. In other words, we can write: $$\mu_x(k) = \mu_x,$$ $$R_x(\tau, k) = R_x(\tau).$$ Thus, the concept of stationarity, which is related to ensemble-averaged properties, can now be transferred to single time history records whenever the properties computed over a short time interval do not vary from one interval to the next by more than the variation expected from normal dispersion. Fortunately, Matthaeus and Goldstein (1982a) established that the interplanetary magnetic field often behaves as a stationary and ergodic function of time, if coherent and organized structures are not included in the dataset. Actually, they proved the weak stationarity of the data, i.e., the stationarity of the average and of the two-point correlation function.
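In practice, with a single time record, weak stationarity is assessed by comparing the mean and the autocorrelation estimated over successive subintervals with those of the whole record; the sketch below is a minimal illustrative assumption of such a check (not the procedure of Matthaeus and Goldstein).

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Biased sample autocorrelation R(tau) of a single record, tau = 0 .. max_lag-1 samples."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.mean(x[m:] * x[:n - m]) for m in range(max_lag)])

def stationarity_check(x, n_sub, max_lag):
    """Compare subinterval means and autocorrelations with the whole-record values."""
    x = np.asarray(x, dtype=float)
    whole_mean, whole_R = np.mean(x), autocorrelation(x, max_lag)
    pieces = np.array_split(x, n_sub)
    sub_means = [np.mean(p) for p in pieces]
    sub_Rs = [autocorrelation(p, max_lag) for p in pieces]
    return whole_mean, sub_means, whole_R, sub_Rs
```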
In particular, they found that the average and the autocorrelation function computed within a subinterval would converge to the values estimated from the whole interval after a few correlation times t_c. More recent analysis (Perri and Balogh, 2010) extended the above studies to different parameter ranges by using Ulysses data, showing that the stationarity assumption in the inertial range of turbulence, on timescales of 10 min to 1 day, is reasonably satisfied in fast and uniform solar wind flows, but that in mixed, interacting fast and slow solar wind streams the assumption is frequently only marginally valid. If our time series approximates a Markov process (a process whose relation to the past does not extend beyond the immediately preceding observation), its autocorrelation function can be shown (Doob, 1953) to approximate a simple exponential: $$R(t) = R(0)\,e^{-t/t_c},$$ from which we obtain the definition given by Batchelor (1970): $$t_c = \int_0^\infty \frac{R(t)}{R(0)}\,dt.$$ Just to give an idea of the correlation time of magnetic field fluctuations, we show in Figure 115 the magnetic field correlation function computed at 1 AU using Voyager 2 data. In this case, using the above definition, t_c ≃ 3.2 × 10³ s.
B.1 Statistical description of MHD turbulence
When an MHD fluid is turbulent, it is impossible to know the detailed behavior of the velocity field u(x, t) and the magnetic field b(x, t), and the only description available is the statistical one. Very useful is the knowledge of the invariants of the ideal equations of motion, for which the dissipative terms μ∇²b and ν∇²v are equal to zero because the magnetic resistivity μ and the viscosity ν are both equal to zero. Following Frisch et al. (1975), there are three quadratic invariants of the ideal system which can be used to describe MHD turbulence: the total energy E, the cross-helicity H_c, and the magnetic helicity H_m. The above quantities are defined as follows: $$E = \frac{1}{2}\left\langle v^2 + b^2 \right\rangle,$$ $$H_c = \left\langle v \cdot b \right\rangle,$$ $$H_m = \left\langle A \cdot B \right\rangle,$$ where v and b are the fluctuations of the velocity and magnetic field, the latter expressed in Alfvén units (i.e., normalized by √(4πρ)), and A is the vector potential, so that B = ∇ × A. The integrals of these quantities over the entire plasma-containing region are the invariants of the ideal MHD equations: $$E = \frac{1}{2}\int \left( v^2 + b^2 \right)d^3x,$$ $$H_c = \frac{1}{2}\int \left( v \cdot b \right)d^3x,$$ $$H_m = \int \left( A \cdot B \right)d^3x.$$
Figure 115: Magnetic field auto-correlation function at 1 AU. Image reproduced by permission from Matthaeus and Goldstein (1982b), copyright by AGU.
In particular, in order to describe the degree of correlation between v and b, it is convenient to use the normalized cross-helicity σ_c: $$\sigma_c = \frac{2H_c}{E},$$ since this quantity simply varies between +1 and −1.
B.2 Spectra of the invariants in homogeneous turbulence
Statistical information about the state of a turbulent fluid is contained in the n-point correlation functions of the fluctuating fields. In homogeneous turbulence these correlations are invariant under arbitrary translation or rotation of the experimental apparatus.
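Before turning to the spectral versions of these quantities, note that the global invariants and the normalized cross-helicity introduced in Section B.1 can be estimated directly from fluctuation time series; the sketch below is a minimal illustrative assumption of such an estimate (b is assumed to be already expressed in Alfvén units).

```python
import numpy as np

def ideal_invariants(v, b):
    """v, b : arrays of shape (N, 3); b in Alfvén units (same units as v).
       Returns the mean energy E = <v^2 + b^2>/2, the cross-helicity Hc = <v.b>,
       and the normalized cross-helicity sigma_c = 2 Hc / E."""
    E  = 0.5 * np.mean(np.sum(v ** 2 + b ** 2, axis=1))
    Hc = np.mean(np.sum(v * b, axis=1))
    return E, Hc, 2.0 * Hc / E
```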
We can define the magnetic field auto-correlation matrix $$R_{ij}^b (r) = \left\langle {b_i (x)b_j (x + r)} \right\rangle ,$$ the velocity auto-correlation matrix $$R_{ij}^v (r) = \left\langle {v_i (x)v_j (x + r)} \right\rangle ,$$ and the cross-correlation matrix $$R_{ij}^v (r) = \frac{1} {2}\left\langle {v_i (x)b_j (x + r) + b_i (x)v_j (x + r)} \right\rangle .$$ At this point, we can construct the spectral matrix in terms of Fourier transform of R ij $$S_{ij}^b (k) = \frac{1} {{2\pi }}\int {R_{ij}^b (r)e^{ - ik \cdot r} d^3 r,}$$ $$S_{ij}^v (k) = \frac{1} {{2\pi }}\int {R_{ij}^v (r)e^{ - ik \cdot r} d^3 r,}$$ $$S_{ij}^{vb} (k) = \frac{1} {{2\pi }}\int {R_{ij}^{vb} (r)e^{ - ik \cdot r} d^3 r,}$$ However, in space experiments, especially in the solar wind, data from only a single spacecraft are available. This provides values of R ij b , R ij u , and R ij ub , for separations along a single direction r. In this situation, only reduced (i.e., one-dimensional) spectra can be measured. If r1 is the direction of co-linear separations, we may only determine R ij (r1, 0, 0) and, as a consequence, the Fourier transform on R ij yields the reduced spectral matrix $$S_{ij}^r (k_1 ) = \frac{1} {{2\pi }}\int {R_{ij} (r_1 ,0,0)e^{ - ik_1 \cdot r_1 } dr_1 = \int {S_{ij} (k_1 ,k_2 ,k_3 )dk_2 dk_3 .} }$$ Then, we define H m r , H c r , and E r = E b r + E u r as the reduced spectra of the invariants, depending only on the wave number k1. Complete information about S ij might be lost when computing its reduced version since we integrate over the two transverse k. However, for isotropic symmetry no information is lost performing the transverse wave number integrals (Batchelor, (1970). That is, the same spectral information is obtained along any given direction. Coming back to the ideal invariants, now we have to deal with the problem of how to extract information about Hm from R ij (r). We know that the Fourier transform of a real, homogeneous matrix R ij (r) is an Hermitian form S ij , i.e., {ie164}, and that any square matrix A can be decomposed into a symmetric and an antisymmetric part, A s and A a : $$A = A^s + A^a ,$$ $$A^s = \frac{1} {2}(A + \tilde A),$$ $$A^a = \frac{1} {2}(A - \tilde A).$$ Since the Hermitian form implies that $$S = \tilde S* \to s_{ij} = s_{ji}^* ,$$ it follows that $$S^s = \frac{1} {2}(S + \tilde S) = \frac{1} {2}(S_{ij} + S_{ji} ) = real,$$ $$S^a = \frac{1} {2}(S - \tilde S) = \frac{1} {2}(S_{ij} - S_{ji} ) = imaginary.$$ It has been shown (Batchelor, (1970; Matthaeus and Goldstein, (1982b; Montgomery, (1983) that, while the trace of the symmetric part of the spectral matrix accounts for the magnetic energy, the imaginary part of the spectral matrix accounts for the magnetic helicity. In particular, Matthaeus and Goldstein (1982b) showed that $$H_m^r (k_1 ) = 2\operatorname{Im} S_{23}^r (k_1 )/k_1 ,$$ where Hm has been integrated over the two transverse components $$\int {\operatorname{Im} S_{23} (k)dk_2 dk_3 = \frac{{k_1 }} {2}\int {H_m (k)dk_2 dk_3 } }.$$ In practice, if co-linear measurements are made along the X direction, the reduced magnetic helicity spectrum is given by: $$H_m^r (k_1 ) = 2\operatorname{Im} S_{23}^r (k_1 )/k_1 = 2\operatorname{Im} (YZ*)/k_1 ,$$ where Y and Z are the Fourier transforms of B y and B z components, respectively. Hm can be interpreted as a measure of the correlation between the two transverse components, being one of them shifted by 90° in phase at frequency f. 
This parameter gives also an estimate of how magnetic field lines are knotted with each other. Hm can assume positive and negative values depending on the sense of rotation of the correlation between the two transverse components. However, another parameter, which is a combination of Hm and E b , is usually used in place of Hm alone. This parameter is the normalized magnetic helicity $$\sigma _m (k) = kH_m (k)/E_b (k),$$ where E b is the magnetic spectral power density and σm varies between +1 and −1. B.2.1 Coherence and phase Since the cross-correlation function is not necessarily an even function, the cross-spectral density function is generally a complex number: $$W_{xy} (f) = C_{xy} (f) + jQ_{xy} (f),$$ (110a) where the real part C xy (f) is the coincident spectral density function, and the imaginary part Q xy (f) is the quadrature spectral density function (Bendat and Piersol, (1971). While C xy (f) can be thought of as the average value of the product x(t)y(t) within a narrow frequency band (f, f + δf), Q xy (f) is similarly defined but one of the components is shifted in time sufficiently to produce a phase shift of 90° at frequency f. In polar notation $$W_{xy} (f) = \left| {W_{xy} (f)} \right|e^{ - j\theta _{xy} } (f).$$ (110b) In particular, $$\left| {W_{xy} (f)} \right| = \sqrt {C_{xy}^2 (f) + Q_{xy}^2 (f)} ,$$ (110c) and the phase between C and Q is given by $$\theta _{xy} (f) = \arctan \frac{{Q_{xy} (f)}} {{C_{xy} (f)}}.$$ (110d) Moreover, $$\left| {W_{xy} (f)} \right|^2 \leqslant W_{xy} (f)W_y (f),$$ (110e) so that the following relation holds $$\gamma _{xy}^2 (f) = \frac{{\left| {W_{xy} (f)} \right|^2 }} {{W_x (f)W_y (f)}} \leqslant 1.$$ (110f) function γ xy 2 (f), called coherence, estimates the correlation between x(t) and y(t) for a given frequency f. Just to give an example, for an Alfvén wave at frequency f whose k vector is outwardly oriented as the interplanetary magnetic field, we expect to find θ ub (f) = 180° and γ ub 2 (f) = 1, where the indexes u and b refer to the magnetic field and velocity field fluctuations. B.3 Introducing the ElsNasser variables The Alfvénic character of turbulence suggests to use the ElsNasser variables to better describe the inward and outward contributions to turbulence. Following Elsässer (1950); Dobrowolny et al. (1980b); Goldstein et al. (1986); Grappin et al. (1989); Marsch and Tu (1989); Tu and Marsch (1990a); and Tu et al. (1989c), ElsNasser variables are defined as $$z^ \pm = v \pm \frac{b} {{\sqrt {4\pi \rho } }},$$ where v and b are the proton velocity and the magnetic field measured in the s/c reference frame, which can be looked at as an inertial reference frame. The sign in front of b, in Equation (111), is decided by sign[−k · B0]. In other words, for an outward directed mean field B0, a negative correlation would indicate an outward directed wave vector k and vice-versa. However, it is more convenient to define the ElsNassers variables in such a way that z+ always refers to waves going outward and z− to waves going inward. In order to do so, the background magnetic field B0 is artificially rotated by 180° every time it points away from the Sun, in other words, magnetic sectors are rectified (Roberts et al., (1987a,b). B.3.1 Definitions and conservation laws If we express b in Alfvén units, that is we normalize it by √4πρ we can use the following handy formulas relative to definitions of fields and second order moments. 
Fields: $$z^ \pm = v \pm b,$$ $$v = \frac{1} {2}(z^ + + z^ - ),$$ $$b = \frac{1} {2}(z^ + - z^ - ).$$ Second order moments: $$z^ + and z^ - energies \to e^ \pm = \frac{1} {2}\left\langle {(z^ \pm )^2 } \right\rangle ,$$ $$kinetic energy \to e^v = \frac{1} {2}\left\langle {v^2 } \right\rangle ,$$ $$magnetic energy \to e^b = \frac{1} {2}\left\langle {b^2 } \right\rangle ,$$ $$total energy \to e = e^v + e^b ,$$ $$residual energy \to e^r = e^v - e^b ,$$ $$cross - helicity \to e^c = \frac{1} {2}\left\langle {v \cdot b} \right\rangle .$$ Normalized quantities: $$normalized cross - helicity \to \sigma _c = \frac{{e^ + - e^ - }} {{e^ + + e^ - }} = \frac{{2e^c }} {{e^v + e^b }},$$ $$normalized residual - energy \to \sigma _r = \frac{{e^v - e^b }} {{e^v + e^b }} = \frac{{2e^r }} {{e^ + + e^ - }},$$ $$Alfv\'e n ratio \to r_A = \frac{{e^v }} {{e^b }} = \frac{{1 + \sigma _r }} {{1 - \sigma _r }},$$ $$Els\"a sser ratio \to r_E = \frac{{e^ - }} {{e^ + }} = \frac{{1 - \sigma _c }} {{1 + \sigma _c }}.$$ We expect an Alfvèn wave to satisfy the following relations: Table 10: Expected values for Alfvèn ratio rA, normalized cross-helicity σc, and normalized residual energy σr for a pure Alfvèn wave outward or inward oriented. Expected Value r A e V /e B σc (e+ − e−)/(e+ + e−) σr (e V − e B )/(e V + e B ) B.3.2 Spectral analysis using ElsNasser variables A spectral analysis of interplanetary data can be performed using z+ and z− fields. Following Tu and Marsch (1995a) the energy spectrum associated with these two variables can be defined in the following way: $$e_j^ \pm (f_k ) = \frac{{2\delta T}} {n}\delta z_{j,k}^ \pm (\delta z_{j,k}^ \pm )*,$$ where δz j,k ± are the Fourier coefficients of the j-component among x, y, and z, n is the number of data points, δT is the sampling time, and f k = k/nδT, with k = 0, 1, 2, ..., n/2 is the k-th frequency. The total energy associated with the two Alfvèn modes will be the sum of the energy of the three components, i.e., $$e_j^ \pm (f_k ) = \sum\limits_{j = x,y,z} {e_j^ \pm (f_k ).}$$ Obviously, using Equations (125 and 126), we can redefine in the frequency domain all the parameters introduced in the previous section. C Wavelets as a Tool to Study Intermittency Following Farge et al. (1990) and Farge (1992), intermittent events can be viewed as localized zones of fluid where phase correlation exists, in some sense coherent structures. These structures, which dominate the statistics of small scales, occur as isolated events with a typical lifetime which is greater than that of stochastic fluctuations surrounding them. Structures continuously appear and disappear, apparently in a random fashion, at some random location of fluid, and they carry most of the flow energy. In this framework, intermittency can be considered as the result of the occurrence of coherent (non-Gaussian) structures at all scales, within the sea of stochastic Gaussian fluctuations. It follows that, since these structures are well localized in spatial scale and time, it would be advisable to analyze them using wavelets filter instead of the usual Fourier transform. Unlike the Fourier basis, wavelets allow a decomposition both in time and frequency (or space and scale). The wavelet transform W{f(t)} of a function f(t) consists of the projection of f(t) on a wavelet basis to obtain wavelet coefficients w(τ, t). 
These coefficients are obtained through a convolution between the analyzed function and a shifted and scaled version of an optional wavelet base $$w(\tau ,t) = \int {f(t')\frac{1} {{\sqrt \tau }}\Psi \left( {\frac{{t - t'}} {\tau }} \right)dt'} ,$$ where the wavelet function $$\Psi _{t',\tau } (t) = \frac{1} {{\sqrt \tau }}\Psi \left( {\frac{{t - t'}} {\tau }} \right)$$ has zero mean and compact support. Some examples of translated and scaled version of this function for a particular wavelet called "charro", because its profile resembles the Mexican hat "El Charro", are given in Figure 116, and the analytical expression for this wavelet is $$\Psi _{t',\tau } (t) = \frac{1} {{\sqrt \tau }}\left[ {\left( {1 - \left( {\frac{{t - t'}} {\tau }} \right)^2 } \right)\exp \left( { - \frac{1} {2}\left( {\frac{{t - t'}} {\tau }} \right)^2 } \right)} \right].$$ Since the Parceval's theorem exists, the square modulus |ω(τ, t)|2 represents the energy content of fluctuations f(t + τ) − f(t) at the scale τ at position t. In analyzing intermittent structures it is useful to introduce a measure of local intermittency, as for example the Local Intermittency Measure (LIM) introduced by Farge (see, e.g., Farge et al., (1990; Farge, (1992) $$LIM = \frac{{\left| {w(\tau ,t)} \right|^2 }} {{\left\langle {\left| {w(\tau ,t)} \right|^2 } \right\rangle _t }}$$ (averages are made over all positions at a given scale τ). The quantity from Equation (128) represents the energy content of fluctuations at a given scale with respect to the standard deviation of fluctuations at that scale. The whole set of wavelets coefficients can then be split in two sets: a set which corresponds to "Gaussian" fluctuations ω g (τ, t), and a set which corresponds to "structure" fluctuations ω g (τ, t), that is, the whole set of coefficients ω(τ, t) = ω g (τ, t) ⊕ ω s (τ, t) (the symbol ⊕ stands here for the union of disjoint sets). A coefficient at a given scale and position will belong to a structure or to the Gaussian background according whether LIM will be respectively greater or lesser than a threshold value. An inverse wavelets transform performed separately on both sets, namely f g (t) = W−1{ω g (τ, t)} and f s (t) = W−1{ω s (τ, t)}, gives two separate fields: a field f g (t) where the Gaussian background is collected, and the field f s (t) where only the non-Gaussian fluctuations of the original turbulent flow are taken into account. Looking at the field f s (t) one can investigate the spatial behavior of structures generating been applied to time series of thirteen months of velocity and magnetic data from ISEE space experiment for the first time by Veltri and Mangeney (1999). Some examples of Mexican Hat wavelet, for different values of the parameters τ and t'. In our analyses we adopted a recursive method (Bianchini et al., (1999; Bruno et al., (1999a) similar to the one introduced by Onorato et al. (2000) to study experimental turbulent jet flows. The method consists in eliminating, for each scale, those events which cause LIM to exceed a given threshold. Subsequently, the flatness value for each scale is checked and, in case this value exceeds the value of 3 (characteristic of a Gaussian distribution), the threshold is lowered, new events are eliminated and a new flatness is computed. The process is iterated until the flatness is equal to 3, or reaches some constant value, for each scale of the wavelet decomposition. 
This process is usually accomplished eliminating only a few percent of the wavelet coefficients for each scale, and this percentage reduces moving from small to large scales. The black curve in Figure 117 shows the original profile of the magnetic field intensity observed by Helios 2 between day 50 and 52 within a highly velocity stream at 0.9 AU. The overlapped red profile refers to the same time series after intermittent events have been removed using the LIM method. Most of the peaks, present in the original time series, are not longer present in the LIMed curve. The intermittent component that has been removed can be observed as the blue curve centered around zero. The black curve indicates the original time series, the red one refers to the LIMed data, and the blue one shows the difference between these two curves. D Reference Systems Interplanetary magnetic field and plasma data are provided, usually, in two main reference systems: RTN and SE. The RTN system (see top part of Figure 118) has the R axis along the radial direction, positive from the Sun to the s/c, the T component perpendicular to the plane formed by the rotation axis of the Sun Ω and the radial direction, i.e., T = Ω × R, and the N component resulting from the vector product N = R × T. The Solar Ecliptic reference system SE, is shown (see bottom part of Figure 118) in the configuration used for Helios magnetic field data, i.e., s/c centered, with the X-axis positive towards the Sun, and the Y-axis lying in the ecliptic plane and oriented opposite to the orbital motion. The third component Z is defined as Z = X × Y. However, solar wind velocity is given in the Sun-centered SE system, which is obtained from the previous one after a rotation of 180. around the Z-axis. Sometimes, studies are more meaningful if they are performed in particular reference systems which result to be rotated with respect to the usual systems, in which the data are provided in the data centers, for example RTN or SE. Here we will recall just two reference systems commonly used in data analysis. D.1 Minimum variance reference system The minimum variance reference system, i.e., a reference system with one of its axes aligned with a direction along whit the field has the smallest fluctuations (Sonnerup and Cahill, (1967). This method provides information on the spatial distribution of the fluctuations of a given vector. Given a generic field B(x, y, z), the variance of its components is $$\left\langle {B_x^2 } \right\rangle - \left\langle {B_x } \right\rangle ^2 ;\left\langle {B_y^2 } \right\rangle - \left\langle {B_y } \right\rangle ^2 ;\left\langle {B_z^2 } \right\rangle - \left\langle {B_z } \right\rangle ^2 .$$ Similarly, the variance of B along the direction S would be given by $$V_S = \left\langle {B_S^2 } \right\rangle - \left\langle {B_S } \right\rangle ^2 .$$ The top reference system is the RTN while the one at the bottom is the Solar Ecliptic reference system. This last one is shown in the configuration used for Helios magnetic field data, with the X-axis positive towards the Sun. 
Let us assume, for sake of simplicity, that all the three components of B fluctuate around zero, then $$\left\langle {B_x } \right\rangle = \left\langle {B_y } \right\rangle = \left\langle {B_z } \right\rangle = 0 \Rightarrow \left\langle {B_S } \right\rangle = x\left\langle {B_x } \right\rangle + y\left\langle {B_y } \right\rangle + z\left\langle {B_z } \right\rangle = 0.$$ Then, the variance V S can be written as $$V_S = \left\langle {B_S^2 } \right\rangle = x^2 \left\langle {B_x^2 } \right\rangle + y^2 \left\langle {B_y^2 } \right\rangle + z^2 \left\langle {B_z^2 } \right\rangle + 2xy\left\langle {B_x B_y } \right\rangle + 2xz\left\langle {B_x B_z } \right\rangle + 2yz\left\langle {B_y B_z } \right\rangle ,$$ which can be written (omitting the sign of average 〈〉) as $$V_S = x\left\langle {xB_x^2 + yB_x B_y + zB_x B_z } \right\rangle + y\left\langle {yB_y^2 + xB_x B_y + zB_y B_z } \right\rangle + z\left\langle {zB_z^2 + xB_x B_z + yB_y B_z } \right\rangle .$$ This expression can be interpreted as a scalar product between a vector S(x, y, z) and another vector whose components are the terms in parentheses. Moreover, these last ones can be expressed as a product between a matrix M built with the terms B x 2 , B y 2 , B z 2 , B x B y , B x B z , B y B z , and a vector S(x, y, z). Thus, $$V_S = (S,MS),$$ $$S \equiv \left( {\begin{array}{*{20}c} x \\ y \\ z \\ \end{array} } \right)$$ (128g) $$M \equiv \left( {\begin{array}{*{20}c} {Bx^2 } & {B_x B_y } & {B_x B_z } \\ {B_x B_y } & {B_y ^2 } & {B_y B_z } \\ {B_x B_z } & {B_y B_z } & {B_z ^2 } \\ \end{array} } \right).$$ (128h) At this point, M is a symmetric matrix and is the matrix of the quadratic form V S which, in turn, is defined positive since it represents a variance. It is possible to determine a new reference system [x, y, z] such that the quadratic form V S does not contain mix terms, i.e., $$V_S = {x'}^2 {B'}_x^2 + {y'}^2 {B'}_y^2 + {z'}^2 {B'}_z^2 .$$ (128i) Thus, the problem reduces to compute the eigenvalues λ i and eigenvectors Ṽ i of the matrix M. The eigenvectors represent the axes of the new reference system, the eigenvalues indicate the variance along these axes as shown in Figure 119. Original reference system [x, y, z] and minimum variance reference system whose axes are V1, V2, and V3 and represent the eigenvectors of M. Moreover, λ1, λ2, and λ3 are the eigenvalues of M. At this point, since we know the components of unit vectors of the new reference system referred to the old reference system, we can easily rotate any vector, defined in the old reference system, into the new one. D.2 The mean field reference system The mean field reference system (see Figure 120) reduces the problem of cross-talking between the components, due to the fact that the interplanetary magnetic field is not oriented like the axes of the reference system in which we perform the measurement. As a consequence, any component will experience a contribution from the other ones. Let us suppose to have magnetic field data sampled in the RTN reference system. If the largescale mean magnetic field is oriented in the [x, y, z] direction, we will look for a new reference system within the RTN reference system with the x-axis oriented along the mean field and the other two axes lying on a plane perpendicular to this direction. 
Thus, we firstly determine the direction of the unit vector parallel to the mean field, normalizing its components $$\begin{array}{*{20}c} {e_{x1} = B_x /\left| B \right|,} \\ {e_{x2} = B_y /\left| B \right|,} \\ {e_{x3} = B_z /\left| B \right|,} \\ \end{array}$$ (128j) so that ê x '(ex1, ex2, ex3) is the orientation of the first axis, parallel to the ambient field. As second direction it is convenient to choose the radial direction in RTN, which is roughly the direction of the solar wind flow, ê R (1, 0, 0). At this point, we compute a new direction perpendicular to the plane ê R − ê x $$\hat e'_z (e_{z1} ,e_{z2} ,e_{z3} )\hat e'_x \times \hat e_R .$$ Consequently, the third direction will be $$\hat e'_y (e_{y1} ,e_{y2} ,e_{y3} )\hat e'_z \times \hat e'_x .$$ (128l) At this point, we can rotate our data into the new reference system. Data indicated as B(x, y, z) in the old reference system, will become B'(x', y', z') in the new reference system. The transformation is obtained applying the rotation matrix A $$A = \left( {\begin{array}{*{20}c} {e_{x1} } & {e_{x2} } & {e_{x3} } \\ {e_{y1} } & {e_{x2} } & {e_{x3} } \\ {e_{z1} } & {e_{z2} } & {e_{z3} } \\ \end{array} } \right)$$ (128m) to the vector B, i.e., B' = AB. Mean field reference system. E On-board Plasma and Magnetic Field Instrumentation In this section, we briefly describe the working principle of two popular instruments commonly used on board spacecraft to measure magnetic field and plasma parameters. For sake of brevity, we will only concentrate on one kind of plasma and field instruments, i.e., the top-hat ion analyzer and the flux-gate magnetometer. Ample review on space instrumentation of this kind can be found, for example, in Pfaff et al. (1998a,b). E.1 Plasma instrument: The top-hat The top-hat electrostatic analyzer is a well known type of ion deflector and has been introduced by Carlson et al. (1982). It can be schematically represented by two concentric hemispheres, set to opposite voltages, with the outer one having a circular aperture centered around the symmetry axis (see Figure 121). This entrance allows charged particles to penetrate the analyzer for being detected at the base of the electrostatic plates by the anodes, which are connected to an electronic chain. To amplify the signal, between the base of the plates and the anodes are located the Micro- Channel Plates (not shown in this picture). The MCP is made of a huge amount of tiny tubes, one close to the next one, able to amplify by a factor up to 106 the electric charge of the incoming particle. The electron avalanche that follows hits the underlying anode connected to the electronic chain. The anode is divided in a certain number of angular sectors depending on the desired angular resolution. Outline of a top-hat plasma analyzer. The electric field E(r) generated between the two plates when an electric potential difference δV is applied to them, is simply obtained applying the Gauss theorem and integrating between the internal (R1) and external (R2) radii of the analyzer $$E(r) = \delta V\frac{{R_1 R_2 }} {{R_1 - R_2 }}\frac{1} {{r^2 }}.$$ In order to have the particle q to complete the whole trajectory between the two plates and hit the detector located at the bottom of the analyzer, its centripetal force must be equal to the electric force acting on the charge. 
From this simple consideration we easily obtain the following relation between the kinetic energy of the particle E k and the electric field E(r): $$\frac{{E_k }} {q} = \frac{1} {2}E(r)r.$$ Replacing E(r) with its expression from Equation (129) and differentiating, we get the energy resolution of the analyzer $$\frac{{\delta E_k }} {{E_k }} = \frac{{\delta r}} {r} = const.,$$ where δr is the distance between the two plates. Thus, δE k /E k depends only on the geometry of the analyzer. However, the field of view of this type of instrument is limited essentially to two dimensions since δΨ is usually rather small (~5°). However, on a spinning s/c, a full coverage of the entire solid angle 4π is obtained by mounting the deflector on the s/c, keeping its symmetry axis perpendicular to the s/c spin axis. In such a way the entire solid angle is covered during half period of spin. Such an energy filter would be able to discriminate particles within a narrow energy interval (E k , E k + δE k ) and coming from a small element dΩ of the solid angle. Given a certain energy resolution, the 3D particle velocity distribution function would be built sampling the whole solid angle 4π, within the energy interval to be studied. E.2 Measuring the velocity distribution function In this section, we will show how to reconstruct the average density of the distribution function starting from the particles detected by the analyzer. Let us consider the flux through a unitary surface of particles coming from a given direction. If f(u x , u y , u z ) is the particle distribution function in phase space, f(u x , u y , u z ) du x du y du z is the number of particles per unit volume (pp/cm3) with velocity between u x and u x + du x , u y and du y , uz and u z + du z , the consequent incident flux Φ i through the unit surface is $$\Phi _i = \iiint {vfd^3 \omega ,}$$ where d3ω = u2du sin θ dθ dφ is the unit volume in phase space (see Figure 122). The transmitted flux C t will be less than the incident flux Φ i because not all the incident particles will be transmitted and Φ i will be multiplied by the effective surface S(< 1), i.e., $$C^t = \iiint {Svfd^3 \omega = \iiint {Svfv^2 dv\sin \theta d\theta d\varphi }}$$ Since for a top-hat Equation 131 is valid, then $$v^2 dv = v^3 \frac{{dv}} {v} \sim v^3 $$ . We have that the counts recorded within the unit phase space volume would be given by $$C^t _{\varphi ,\theta ,v} = f_{\varphi ,\theta ,v} Sv^4 \delta \theta \delta \varphi \frac{{dv}} {v}\sin \theta = f_{\varphi ,\theta ,v} v^4 G,$$ where G is called Geometrical Factor and is a characteristic of the instrument. Then, from the previous expression it follows that the phase space density function f φ, θ, u can be directly reconstructed from the counts $$f_{\varphi ,\theta ,v} = \frac{{C^t _{\varphi ,\theta ,v} }} {{v^4 G}}.$$ Unit volume in phase space. E.3 Computing the moments of the velocity distribution function Once we are able to measure the density particle distribution function f φ,θ,u , we can compute the most used moments of the distribution in order to obtain the particle number density, velocity, pressure, temperature, and heat-flux (Paschmann et al., (1998). 
If we simply indicate with f(u) the density particle distribution function, we define as moment of order n of the distribution the quantity M n , i.e., $$M_n = \int {v_n f(v)d^3 \omega }.$$ It follows that the first 4 moments of the distribution are the following: the number density $$n = \int {(v)d^3 \omega },$$ the number flux density vector $$nV = \int {f(v)vd^3 \omega },$$ the momentum flux density tensor $$\Pi = m\int {f(v)vvd^3 \omega },$$ the energy flux density vector $$Q = \frac{m} {2}\int {f(v)v^2 vd^3 \omega }.$$ Once we have computed the zero-order moment, we can obtain the velocity vector from Equation (138). Moreover, we can compute Π and Q in terms of velocity differences with respect to the bulk velocity, and Equations (139) and (140) become $$P = m\int {f(v)(v - V)(v - V)d^3 \omega },$$ $$H = \frac{m} {2}\int {f(v)\left| {v - V} \right|^2 (v - V)d^3 \omega }.$$ The new Equations (141) and (142) represent the pressure tensor and the heat flux vector, respectively. Moreover, using the relation P = nKT we extract the temperature tensor from Equations (141) and (137). Finally, the scalar pressure P and temperature T can be obtained from the trace of the relative tensors $$P = \frac{{Tr(P_{ij} )}} {3}$$ $$T = \frac{{Tr(T_{ij} )}} {3}.$$ E.4 Field instrument: The flux-gate magnetometer There are two classes of instruments to measure the ambient magnetic field: scalar and vector magnetometers. While nuclear precession and optical pumping magnetometers are the most common scalar magnetometers used on board s/c (see Pfaff et al., (1998b, for related material), the flux-gate magnetometer is, with no doubt, the mostly used one to perform vector measurements of the ambient magnetic field. In this section, we will briefly describe only this last instrument just for those who are not familiar at all with this kind of measurements in space. The working principle of this magnetometer is based on the phenomenon of magnetic hysteresis. The primary element (see Figure 123) is made of two bars of high magnetic permeability material. A magnetizing coil is spooled around the two bars in an opposite sense so that the magnetic field created along the two bars will have opposite polarities but the same intensity. A secondary coil wound around both bars will detect an induced electric potential only in the presence of an external magnetic field. Outline of a flux-gate magnetometer. The driving oscillator makes an electric current, at frequency f, circulate along the coil. This coil is such to induce along the two bars a magnetic field with the same intensity but opposite direction so that the resulting magnetic field is zero. The presence of an external magnetic field breaks this symmetry and the resulting field ≠ 0 will induce an electric potential in the secondary coil, proportional to the intensity of the component of the ambient field along the two bars. The field amplitude BB produced by the magnetizing field H is such that the material periodically saturates during its hysteresis cycle as shown in Figure 124. In absence of an external magnetic field, the magnetic field B1 and B2 produced in the two bars will be exactly the same but out of phase by 180° since the two coils are spooled in an opposite sense. As a consequence, the resulting total magnetic field would be 0 as shown in Figure 124. In these conditions no electric potential would be induced on the secondary coil because the magnetic flux Φ through the secondary is zero. 
Left panel: This figure refers to any of the two sensitive elements of the magnetometer. The thick black line indicates the magnetic hysteresis curve, the dotted green line indicates the magnetizing field H, and the thin blue line represents the magnetic field B produced by H in each bar. The thin blue line periodically reaches saturation producing a saturated magnetic field B. The trace of B results to be symmetric around the zero line. Right panel: magnetic fields B1 and B2 produced in the two bars, as a function of time. Since B1 and B2 have the same amplitude but out of phase by 180°, they cancel each other. On the contrary, in case of an ambient field HA ≠ 0, its component parallel to the axis of the bar is such to break the symmetry of the resulting B (see Figure 125). HA represents an offset that would add up to the magnetizing field H, so that the resulting field B would not saturate in a symmetric way with respect to the zero line. Obviously, the other sensitive element would experience a specular effect and the resulting field B = B1 + B2 would not be zero, as shown in Figure 125. In these conditions the resulting field B, fluctuating at frequency f, would induce an electric potential V = dΦ/dt, where Φ is the magnetic flux of B through the secondary coil. At this point, the detector would measure this voltage which would result proportional to the component of the ambient field HA along the axis of the two bars. To have a complete measurement of the vector magnetic field B it will be sufficient to mount three elements on board the spacecraft, like the one shown in Figure 123, mutually orthogonal, in order to measure all the three Cartesian components. Left panel: the net effect of an ambient field HA is that of introducing an offset which will break the symmetry of B with respect to the zero line. This figure has to be compared with Figure 124 when no ambient field is present. The upper side of the B curve saturates more than the lower side. An opposite situation would be shown by the second element. Right panel: trace of the resulting magnetic field B = B1 + B2. The asymmetry introduced by HA is such that the resulting field B is different from zero. Time derivative of the curve B = B1 + B2 shown in Figure 125 assuming the magnetic flux is referred to a unitary surface. F Spacecraft and Datasets Measurements performed by spacecrafts represent a unique chance to investigate a wide range of scales of low-frequency turbulence in a magnetized medium. The interested readers are strongly encouraged to visit the web pages of each specific space mission or, more simply, the Heliophysics Data Portal (formerly VSPO) (http://heliophysicsdata.gsfc.nasa.gov) as a wide source of information. This portal represents an easy way to access all the available datasets, related to magnetospheric and heliospheric missions, allowing the user to quickly find data files and interfaces to data from a large number of space missions and ground-based observations. Two of the s/c which have contributed most to the study of MHD turbulence are the old Helios and Voyager spacecraft, which explored the inner and outer heliosphere, respectively, providing us with an almost complete map of the gross features of low-frequency plasma turbulence. The Helios project was a German-American mission consisting in two interplanetary probes: Helios 1, which was launched in December 1974, and Helios 2, launched one year later. 
These s/c had a highly elliptic orbit, lying in the ecliptic, which brought the s/c from 1 AU to 0.3 AU in only 6 months. Helios dataset is, with no doubt, the most important and unique one to study MHD turbulence in the inner heliosphere. Most of the knowledge we have today about this topic is based on Helios data mainly because this s/c is the only one that has gone so close to the Sun. As a matter of fact, the orbit of this s/c allowed to observe the radial evolution of turbulence within regions of space (< 0.7 AU) where dynamical processes between fast and slow streams have not yet reprocessed the plasma. The two Voyagers were launched in 1977. One of them, Voyager 1 will soon reach the termination shock and enter the interstellar medium. As a consequence, for the first time, we will be able to measure interstellar particles and fields not affected by the solar wind. Within the study of MHD turbulence, the importance of the two Voyagers in the outer heliosphere is equivalent to that of the two Helios in the inner regions of the heliosphere. However, all these s/c have been limited to orbit in the, or close to the ecliptic plane. Finally, in October 1990, Ulysses was launched and, after a fly-by with Jupiter it reached its final orbit tilted at 80.2° with respect to the solar equator. For the first time, we were able to sample the solar wind coming from polar coronal holes, the pure fast wind not "polluted" by the dynamical interaction with slow equatorial wind. As a matter of fact, the Ulysses scientific mission has been dedicated to investigate the heliospheric environment out of the ecliptic plane. This mission is still providing exciting results. Another spacecraft called WIND was launched in November 1994 and is part of the ISTP Project. WIND, was initially located at the Earth-Sun Lagrangian point L1 to sample continuously the solar wind. Afterwards, it was moved to a much more complicated orbit which allows the spacecraft to repeatedly visit different regions of space around Earth, while continuing to sample the solar wind. The high resolution of magnetic field and plasma measurements of WIND makes this spacecraft very useful to investigate small scales phenomena, where kinetic effects start to play a key role. The Advanced Composition Explorer (ACE) represents another solar wind monitor located at L1. This spacecraft was launched by NASA in 1997 and its solar wind instruments are characterized by a high rate sampling. Finally, we like to call the attention of the reader on the possibility to easily view and retrieve from the web real time solar wind data from both WIND and ACE. A few years ago, Voyager 1 and Voyager 2 reached the termination shock, extending our exploration to almost the whole heliosphere. However, the exploration will not be complete until we will reach the base of the solar corona. All the fundamental physical processes concurring during the birth of the solar wind take place in this part of the heliosphere. Moreover, this is a key region also for the study of turbulence, since here non-linear interactions between inward and outward modes start to be active and produce the turbulence spectrum that we observe in the heliosphere. This region is so important for our understanding of the solar wind that both ESA and NASA are planning space mission dedicated to explore it. In particular, the European Space Agency is planning to launch the Solar Orbiter mission in January 2017 (http://sci.esa.int/ solarorbiter). 
Solar Orbiter is proposed as a space mission dedicated to study the solar surface, the corona, and the solar wind by means of remote sensing and in-situ measurements, respectively. Consequently, the s/c will carry a heliospheric package primarily designed to measure ions and electrons of the solar wind, energetic particles, radio waves, and magnetic fields and a remote sensing package for for EUV/X-ray spectroscopy and imaging of the disk and the extended corona. In particular, the high resolution imaging of the Sun will give close-up observations of the solar atmosphere, with the capability of resolving fine structures (of the order of 100 km) in the transition region and corona. This will certainly represent a major step forward in investigating the occurrence of intermittent structures of turbulence at very small scales. The observations provided by Helios 25 years ago and, more recently, by Ulysses suggest that the local production of Alfvén waves is much stronger in the region just inside 0.3 AU, and Solar Orbiter, repeatedly reaching 0.28 AU, will provide excellent observations to study problems related to local generation and non-linear coupling between outward and inward waves. Moreover, the high data sampling will provide extremely useful and totally new insight about wave dissipation via wave-particle coupling mechanism and the role that the damping of slow, fast, and Alfvén waves can have in the heating of the solar wind ions. Finally, the opportunity given by Solar Orbiter to correlate in-situ plasma measures with the simultaneous imaging of the same flow element of the solar wind during the co-rotation phase, will provide the possibility to separate temporal effects from spatial effects for the first time in the solar wind. This will be of primary importance for finally understand the physical mechanisms at the basis of the solar wind generation. A similar mission, Solar Probe Plus (http://solarprobe.jhuapl.edu/), is under development by NASA, on a schedule to launch no later than 2018. Solar Probe Plus will orbit the Sun 24 times, gradually approaching the Sun with each pass. On the final three orbits, Solar Probe Plus will fly to within 8.5 solar radii of the Sun's surface. This mission, although very risky, will allow us to tremendously advance our knowledge about the physical processes that heat and accelerate the solar wind. Thus, future key missions for investigating turbulence properties in the solar wind plasma are not just behind the corner and, for the time being, we have to use observations from already flown or still flying spacecraft. This does not mean that exciting results are over, while we wait for these new missions. The main difference with the past is that now we are in a different phase of our research. This phase aims to refine many of the concepts we learned in the past, especially those concerning the radial evolution and the local production of turbulence. As a consequence more refined data analysis and computer simulations are now discovering new and very interesting aspects of MHD turbulence which, we hope, we contributed to illustrate in this review. Alexandrova, O., Carbone, V., Veltri, P. and Sorriso-Valvo, L., 2008, "Small-Scale Energy Cascade of the Solar Wind Turbulence", Astrophys. J., 674, 1153–1157. [DOI], [ADS], [arXiv:0710.0763] (Cited on pages 146 and 147.)ADSCrossRefGoogle Scholar Alexandrova, O., Saur, J., Lacombe, C., Mangeney, A., Mitchell, J., Schwartz, S.J. 
and Robert, P., 2009, "Universality of Solar-Wind Turbulent Spectrum from MHD to Electron Scales", Phys. Rev. Lett., 103(16), 165003. [DOI], [ADS], [arXiv:0906.3236 [physics.plasm-ph]] (Cited on page 152.)ADSCrossRefGoogle Scholar Araneda, J.A., Marsch, E. and F.-Viñas, A., 2008, "Proton Core Heating and Beam Formation via Parametrically Unstable Alfvén-Cyclotron Waves", Phys. Rev. Lett., 100, 125003. [DOI], [ADS] (Cited on pages 154 and 155.)ADSCrossRefGoogle Scholar Arge, C.N. and Pizzo, V.J., 2000, "Improvement in the prediction of solar wind conditions using near-real time solar magnetic field updates", J. Geophys. Res., 105, 10,465–10,479. [DOI], [ADS] (Cited on page 34.)ADSCrossRefGoogle Scholar Bale, S.D., Kellogg, P.J., Mozer, F.S., Horbury, T.S. and Reme, H., 2005, "Measurement of the Electric Fluctuation Spectrum of Magnetohydrodynamic Turbulence", Phys. Rev. Lett., 94, 215002. [DOI], [ADS], [arXiv:physics/0503103] (Cited on pages 150, 151, and 154.)ADSCrossRefGoogle Scholar Balogh, A., Horbury, T.S., Forsyth, R.J. and Smith, E.J., 1995, "Variances of the components and magnitude of the polar heliospheric magnetic field", in Solar Wind Eight, Proceedings of the Eighth International Solar Wind Conference, Dana Point, CA 1995, (Eds.) Winterhalter, D., Gosling, J.T., Habbal, S.R., Kurth, W.S., Neugebauer, M., AIP Conference Proceedings, 382, pp. 38–43, American Institute of Physics, Woodbury, NY. [ADS] (Cited on page 75.)Google Scholar Balogh, A., Forsyth, R.J., Lucek, E.A., Horbury, T.S. and Smith, E.J., 1999, "Heliospheric magnetic field polarity inversions at high heliographic latitudes", Geophys. Res. Lett., 26, 631–634. [DOI], [ADS] (Cited on page 34.)ADSCrossRefGoogle Scholar Barnes, A., 1979, "Hydromagnetic waves and turbulence in the solar wind", in Solar System Plasma Physics, (Eds.) Parker, E.N., Kennel, C.F., Lanzerotti, L.J., 1, pp. 249–319, North-Holland, Amsterdam; New York (Cited on page 96.)Google Scholar Barnes, A., 1981, "Interplanetary Alfvénic fluctuations: a stochastic model", J. Geophys. Res., 86, 7498–7506. [DOI], [ADS] (Cited on page 72.)ADSCrossRefGoogle Scholar Barnes, A. and Hollweg, J.V., 1974, "Large-amplitude hydromagnetic waves", J. Geophys. Res., 79, 2302–2318. [DOI], [ADS] (Cited on page 36.)ADSCrossRefGoogle Scholar Batchelor, G.K., 1970, Theory of Homogeneous Turbulence, Cambridge University Press, Cambridge; New York. [Google Books]. Originally published 1953 (Cited on pages 44, 55, 162, 164, and 165.)zbMATHGoogle Scholar Bavassano, B. and Bruno, R., 1989, "Evidence of local generation of Alfvénic turbulence in the solar wind", J. Geophys. Res., 94(13), 11,977–11,982. [DOI], [ADS] (Cited on pages 89 and 133.)ADSCrossRefGoogle Scholar Bavassano, B. and Bruno, R., 1992, "On the role of interplanetary sources in the evolution of low-frequency Alfvénic turbulence in the solar wind", J. Geophys. Res., 97(16), 19,129–19,137. [DOI], [ADS] (Cited on pages 71 and 89.)ADSCrossRefGoogle Scholar Bavassano, B. and Bruno, R., 1995, "Density fluctuations and turbulent Mach number in the inner solar wind", J. Geophys. Res., 100, 9475–9480. [ADS] (Cited on pages 31, 96, and 100.)ADSCrossRefGoogle Scholar Bavassano, B. and Bruno, R., 2000, "Velocity and magnetic field fluctuations in Alfvénic regions of the inner solar wind: Three-fluid observations", J. Geophys. Res., 105(14), 5113–5118. [DOI], [ADS] (Cited on pages 62 and 63.)ADSCrossRefGoogle Scholar Bavassano, B., Dobrowolny, M., Fanfoni, G., Mariani, F. 
and Ness, N.F., 1982a, "Statistical properties of MHD fluctuations associated with high-speed streams from Helios 2 observations", Solar Phys., 78, 373–384. [DOI], [ADS] (Cited on pages 48, 54, 82, 94, and 141.)ADSCrossRefGoogle Scholar Bavassano, B., Dobrowolny, M., Mariani, F. and Ness, N.F., 1982b, "Radial evolution of power spectra of interplanetary Alfvénic turbulence", J. Geophys. Res., 87, 3617–3622. [DOI], [ADS] (Cited on pages 44, 46, 47, 64, 76, and 94.)ADSCrossRefGoogle Scholar Bavassano, B., Bruno, R. and Klein, L., 1995, "Density-Temperature correlation in solar wind MHD fluctuations: a test for nearly incompressible models", J. Geophys. Res., 100, 5871–5875. [DOI], [ADS] (Cited on pages 96 and 100.)ADSCrossRefGoogle Scholar Bavassano, B., Bruno, R. and Rosenbauer, H., 1996a, "Compressive fluctuations in the solar wind and their polytropic index", Ann. Geophys., 14(5), 510–517. [DOI], [ADS] (Cited on pages 98 and 99.)ADSCrossRefGoogle Scholar Bavassano, B., Bruno, R. and Rosenbauer, H., 1996b, "MHD compressive turbulence in the solar wind and the nearly incompressible approach", Astrophys. Space Sci., 243, 159–169. [DOI] (Cited on page 99.)ADSCrossRefGoogle Scholar Bavassano, B., Woo, R. and Bruno, R., 1997, "Heliospheric plasma sheet and coronal streamers", Geophys. Res. Lett., 24, 1655–1658. [DOI], [ADS] (Cited on pages 36 and 37.)ADSCrossRefGoogle Scholar Bavassano, B., Pietropaolo, E. and Bruno, R., 1998, "Cross-helicity and residual energy in solar wind turbulence. Radial evolution and latitudinal dependence in the region from 1 to 5 AU", J. Geophys. Res., 103(12), 6521–6530. [DOI], [ADS] (Cited on page 86.)ADSCrossRefGoogle Scholar Bavassano, B., Pietropaolo, E. and Bruno, R., 2000a, "On the evolution of outward and inward Alfvénic fluctuations in the polar wind", J. Geophys. Res., 105(14), 15,959–15,964. [DOI], [ADS] (Cited on pages 75, 85, and 87.)ADSCrossRefGoogle Scholar Bavassano, B., Pietropaolo, E. and Bruno, R., 2000b, "Alfvénic turbulence in the polar wind: A statistical study on cross helicity and residual energy variations", J. Geophys. Res., 105(14), 12,697–12,704. [DOI], [ADS] (Cited on pages 62 and 86.)ADSCrossRefGoogle Scholar Bavassano, B., Pietropaolo, E. and Bruno, R., 2001, "Radial evolution of outward and inward Alfvénic fluctuations in the solar wind: A comparison between equatorial and polar observations by Ulysses", J. Geophys. Res., 106(15), 10,659–10,668. [DOI], [ADS] (Cited on pages 67 and 88.)ADSCrossRefGoogle Scholar Bavassano, B., Pietropaolo, E. and Bruno, R., 2002a, "Alfvénic turbulence in high-latitude solar wind: Radial versus latitudinal variations", J. Geophys. Res., 107(A12), 1452. [DOI], [ADS] (Cited on pages 86 and 87.)CrossRefGoogle Scholar Bavassano, B., Pietropaolo, E. and Bruno, R., 2002b, "On parametric instability and MHD turbulence evolution in high-latitude heliosphere", in Solspa 2001, Proceedings of the Second Solar Cycle and Space Weather Euroconference, 24–29 September 2001, Vico Equense, Italy, (Ed.) Sawaya-Lacoste, H., ESA Conference Proceedings, SP-477, pp. 313–316, ESA Publications Division, Noordwijk. [ADS] (Cited on page 86.)Google Scholar Bavassano, B., Pietropaolo, E. and Bruno, R., 2004, "Compressive fluctuations in high-latitude solar wind", Ann. Geophys., 22(2), 689–696. [DOI], [ADS] (Cited on pages 99, 100, 101, and 102.)ADSCrossRefGoogle Scholar Belcher, J.W. and Davis Jr, L., 1971, "Large-Amplitude Alfvén Waves in the Interplanetary Medium, 2", J. Geophys. Res., 76(16), 3534–3563. 
[DOI] (Cited on pages 34, 48, 52, 58, 59, 60, 63, and 64.)ADSCrossRefGoogle Scholar Belcher, J.W. and Solodyna, C.V., 1975, "Alfvén waves and directional discontinuities in the interplanetary medium", J. Geophys. Res., 80(9), 181–186. [DOI], [ADS] (Cited on pages 34, 48, 52, 58, 59, and 83.)ADSCrossRefGoogle Scholar Bendat, J.S. and Piersol, A.G., 1971, Random Data: Analysis and Measurement Procedures, Wiley-Interscience, New York. [Google Books] (Cited on page 165.)zbMATHGoogle Scholar Benzi, R., Paladin, G., Vulpiani, A. and Parisi, G., 1984, "On the multifractal nature of fully developed turbulence and chaotic systems", J. Phys. A: Math. Gen., 17, 3521–3531. [DOI], [ADS] (Cited on page 113.)ADSMathSciNetCrossRefGoogle Scholar Benzi, R., Ciliberto, S., Tripiccione, R., Baudet, C., Massaioli, F. and Succi, S., 1993, "Extended self-similarity in turbulent flows", Phys. Rev. E, 48, 29–35. [DOI] (Cited on pages 105 and 106.)ADSCrossRefGoogle Scholar Bianchini, L., Pietropaolo, E. and Bruno, R., 1999, "An improved method for local intermittency recognition", in Magnetic Fields and Solar Processes, Proceedings of the 9th European Meeting on Solar Physics, 12–18 September 1999, Florence, Italy, (Ed.) Wilson, A., ESA Conference Proceedings, SP-448, pp. 1141–1146, ESA Publications Division, Noordwijk. [ADS] (Cited on page 169.)Google Scholar Bieber, J.W., Wanner, W. and Matthaeus, W.H., 1996, "Dominant two-dimensional solar wind turbulence with implications for cosmic ray transport", J. Geophys. Res., 101(A2), 2511–2522. [DOI], [ADS] (Cited on page 51.)ADSCrossRefGoogle Scholar Biferale, L., 2003, "Shell Models of Energy Cascade in Turbulence", Annu. Rev. Fluid Mech., 35, 441–468. [DOI], [ADS] (Cited on page 25.)ADSMathSciNetzbMATHCrossRefGoogle Scholar Bigazzi, A., Biferale, L., Gama, S.M.A. and Velli, M., 2006, "Small-Scale Anisotropy and Intermittence in High- and Low-Latitude Solar Wind", Astrophys. J., 638, 499–507. [DOI], [ADS], [arXiv:astroph/0412320] (Cited on page 48.)ADSCrossRefGoogle Scholar Biskamp, D., 1993, Nonlinear Magnetohydrodynamics, Cambridge Monographs on Plasma Physics, 1, Cambridge University Press, Cambridge; New York. [Google Books] (Cited on pages 7, 16, 28, 88, and 104.)Google Scholar Biskamp, D., 1994, "Cascade models for magnetohydrodynamic turbulence", Phys. Rev. E, 50, 2702–2711. [DOI], [ADS] (Cited on page 115.)ADSCrossRefGoogle Scholar Biskamp, D., 2003, Magnetohydrodynamic Turbulence, Cambridge University Press, Cambridge; New York. [Google Books] (Cited on pages 7, 16, and 119.)zbMATHCrossRefGoogle Scholar Biskamp, D., Schwarz, E. and Drake, J.F., 1996, "Two-Dimensional Electron Magnetohydrodynamic Turbulence", Phys. Rev. Lett., 76, 1264–1267. [DOI], [ADS] (Cited on page 150.)ADSCrossRefGoogle Scholar Biskamp, D., Schwarz, E., Zeiler, A., Celani, A. and Drake, J.F., 1999, "Electron magnetohydrodynamic turbulence", Phys. Plasmas, 6, 751–758. [DOI], [ADS] (Cited on page 150.)ADSMathSciNetCrossRefGoogle Scholar Boffetta, G., Carbone, V., Giuliani, P., Veltri, P. and Vulpiani, A., 1999, "Power Laws in Solar Flares: Self-Organized Criticality or Turbulence?", Phys. Rev. Lett., 83, 4662–4665. [DOI], [ADS] (Cited on page 119.)ADSCrossRefGoogle Scholar Bohr, T., Jensen, M.H., Paladin, G. and Vulpiani, A., 1998, Dynamical Systems Approach to Turbulence, Cambridge Nonlinear Science Series, 8, Cambridge University Press, Cambridge; New York. 
MATF: a multi-attribute trust framework for MANETs Muhammad Saleem Khan1, Majid Iqbal Khan1, Saif-Ur-Rehman Malik1, Osman Khalid2, Mukhtar Azim1 & Nadeem Javaid1 EURASIP Journal on Wireless Communications and Networking volume 2016, Article number: 197 (2016) To enhance the security of mobile ad hoc networks (MANETs), various trust-based security schemes have been proposed. However, in most of the trust-based security schemes, a node's trust is computed based on a single trust attribute criterion, such as data forwarding. Using a single trust attribute criterion may cause the bootstrapping problem, which refers to the time required by the trust-based scheme to build trust and reputation among nodes in the network. The bootstrapping problem in these schemes may provide more opportunities to misbehaving nodes to drop packets and remain undetected for a longer time in the network. Moreover, using a single trust attribute criterion does not effectively deal with selective misbehavior by a smart malicious node. In this work, we propose a scheme that is based on multi-attribute trust criteria to minimize the bootstrapping time, which ultimately improves the performance of the scheme in terms of a high malicious node detection rate and low false-positive and packet loss rates. The contributions of this paper are (a) identification of trust attributes along with the development of a comprehensive multi-attribute trust framework (MATF) using multiple watchdogs for malicious node identification and isolation, (b) formal modeling and verification of our proposed MATF using HLPN, SMT-Lib, and Z3 Solver, and (c) simulation-based validation and evaluation of the proposed trust framework in the context of the optimized link state routing (OLSR) protocol against various security threats, such as message dropping, message modification, and link withholding attacks. The simulation results revealed that the proposed trust framework achieves about a 98 % detection rate of malicious nodes with only 1–2 % false positives. Moreover, the proposed MATF has an improved packet delivery ratio as compared to the single attribute-based scheme. Due to the non-availability of a central authority and the unreliability of wireless links, the routing protocols in mobile ad hoc networks (MANETs) are vulnerable to various types of security threats [1]. The resource-constrained nature of MANETs, with continuously evolving topology and frequent network partitioning, complicates the security challenges in MANETs' routing. Most of the secure routing protocols for MANETs utilize some form of cryptography to ensure the network security [2–4]. However, there are scenarios where cryptographic techniques fail to capture the malicious behavior of a node. For example, (a) to disrupt the network topology, a node may provide falsified routing information to other nodes, (b) to preserve its battery, a node may not participate in the routing functions, and (c) a node may drop data packets instead of forwarding them because of malicious intent. To address these issues, trust-based security schemes [5–10] have been proposed to augment the security of traditional cryptography-based approaches. In MANETs, trust can be defined as the extent to which a node fulfills the expectations of other node(s) as per the specification of an underlying communication protocol [11]. In trust-based security schemes, each node within the network manages an independent trust table to compute and store the trust values of other nodes.
The routing decisions are based on the computed trust values of the nodes. Although a lot of research work has been carried out in the field of trust- and reputation-based systems in MANETs, almost all the proposed schemes suffer from one basic problem known as the bootstrapping problem [12]. It refers to the time required by the trust-based scheme to build trust and reputation among nodes in the network. Such delay in the accumulation of trust and reputation is often not acceptable in time-critical applications. Due to the slow trust building process, a misbehaving node may have more opportunities to drop packets before being detected as malicious. One of the basic reasons for the aforementioned bootstrapping problem is that in most of the trust-based security schemes, an evaluated node's trust is computed based on a single trust attribute, such as data forwarding [13–17]. Moreover, using a single trust attribute may not effectively deal with the problem of selective misbehavior [12]. A smart malicious node may misbehave in the context of one network function and behave properly for other network functions. For example, a node may misbehave in the context of data forwarding while demonstrating good behavior when dealing with control packet forwarding. As the existing schemes [7–10, 13–17] use a single trust attribute, such a selectively misbehaving node is declared malicious and isolated from the routing path, and hence is no longer available for other network functions. In trust-based security schemes, each node collects two major types of information about other nodes: first-hand information (based on self-observations) and second-hand information (based on other nodes' observations). In the literature, efforts have been made to minimize the bootstrapping time and to increase the detection rate by using second-hand information to evaluate the trustworthiness of the nodes [14, 17]. However, the aforementioned schemes still suffer from the data sparsity problem [14]. In trust-based security schemes, data sparsity is a situation where a lack of information or insufficient interaction experience makes it difficult to evaluate a node's trust, especially in the early phase of network establishment. Moreover, using second-hand information without any filtration may cause bad-mouthing and false praise attacks [11], which ultimately cause high false positive and false negative rates. In a bad-mouthing attack, a misbehaving node propagates dishonest and unfair recommendations against an innocent node with a negative intention to confuse the trust model. Similarly, in a false praise attack, a misbehaving node propagates unfairly positive recommendations in favor of a malicious node to mislead the trust model. It is also of critical importance to prove the correctness of trust-based security schemes in dynamic and unpredictable environments, such as MANETs. A well-established approach to prove the correctness of a system's model is to employ a formal verification process [18]. To minimize the bootstrapping time and expedite the trust building process, and to effectively deal with selective misbehavior, there is a strong need for a mechanism that works on a multi-attribute trust strategy. Each node should be observed in the context of all the possible network functions, such as control message generation, control message forwarding, and data packet forwarding.
Moreover, an efficient recommendation filtration technique is required to filter both the source of information and the information itself. To avoid bad-mouthing and false praise attacks, second-hand information from only designated and trustworthy nodes must be considered in the trust computation process. Our contributions: In this work, we address the bootstrapping and delusive trust dissemination problems that arise when using second-hand information. We propose a trust-based security scheme which uses multi-attribute trust criteria, such as control packet generation, control packet forwarding, and data packet forwarding. Using multi-attribute trust criteria minimizes the bootstrapping time and expedites the trust building process, as nodes are assessed in the context of the different network functions mentioned above. Moreover, to avoid bad-mouthing and false praise attacks, second-hand information is considered only from designated nodes, called watchdog nodes, whose trust values are above some threshold. Furthermore, only second-hand information from recommender nodes with a trust deviation (τ dev) value less than the deviation threshold (τ dev−th) will be considered in the trust computation process. This paper has the following major contributions. Identification of the trust attributes for a node's trust building process. Development of a comprehensive multi-attribute and multiple-watchdog-node trust framework (MATF) for malicious node detection and isolation. Formal verification of our proposed MATF using high-level Petri nets (HLPNs), satisfiability modulo theories-library (SMT-Lib), and Z3 Solver. Implementation of the proposed trust framework in the context of the optimized link state routing (OLSR) protocol in NS-2 [19]. Simulation-based validation and evaluation of the proposed MATF in comparison with the recently proposed trust scheme by Shabut et al. [14] (a single trust-attribute-based scheme), against various security threats, such as message dropping, message modification, and link withholding attacks. Security analysis of the proposed MATF. The rest of the paper is organized as follows. In Section 2, we present the related work. Section 3 presents the discussion on trust and its formulation in MANETs along with a multi-attribute trust framework. Section 4 presents the formal modeling and verification of the proposed framework. Section 5 presents the simulation results and summarizes the performance evaluation of the proposed model. Security analysis of the proposed scheme is presented in Section 6, and the paper is concluded in Section 7. Trust-based security schemes are one of the active research areas for ensuring security in MANETs [20]. In recent years, different trust-based security schemes have been proposed to enhance the security of MANETs. In these schemes, nodes evaluate their neighbor nodes based on first-hand information or recommendations from other nodes [12, 20]. Though these schemes paid some attention to the bootstrapping and delusive trust dissemination problems, an efficient mechanism to mitigate these problems is still a challenging issue in MANETs. We categorize the state-of-the-art schemes into the following categories. Watchdog and path-rater schemes One of the key works in trust-based schemes was presented by Marti et al. [13]. They proposed watchdog and path-rater mechanisms implemented on the dynamic source routing (DSR) protocol to minimize the impact of malicious nodes on the throughput of the network.
The aforementioned approach detects the misbehaving nodes by using only source node as a monitoring node. However, the proposed scheme has some major shortcomings, such as it cannot detect the misbehaving nodes in the case of ambiguous collision, receiver collision, limited transmission power, partial dropping, and collaborative attacks [21]. Moreover, watchdog and path-rater mechanisms utilize only first-hand information for node misbehavior detection that causes the aforementioned issues. Feedback-based schemes To solve the issues in the watchdog and path-rater schemes, various approaches were proposed, such as acknowledgment-based detection systems including two network-layer acknowledgment-based schemes, termed as TWOACK [22], adaptive acknowledgment (AACK) [23], and enhanced adaptive acknowledgment (EEACK) [21]. The TWOACK scheme has focused to solve the receiver collision and limited transmission power problems of the watchdog and path-rater approach. Every data packet transmitted is acknowledged by every three consecutive nodes along the path from the source to the destination. Sheltami et al. [23] proposed an improved version of the acknowledgment-based scheme, AACK. The AACK is the intrusion detection system which is a combination of TWOACK and end-to-end acknowledgement scheme. Although, AACK has significantly reduced the overhead as compared to TWOACK scheme, it still suffers from the problem of detecting malicious nodes generating false misbehavior report and forged acknowledgment packets. To remove the shortcomings of the acknowledgement-based schemes, Shakshuki et al. [21] proposed EAACK protocol to detect misbehavior nodes in MANETs' environment using digital signature algorithm (DSA) [24] and Rivest-Shamir-Adleman (RSA) algorithm [25] digital signatures. Although, their technique can validate and authenticate the acknowledgement packets, yet at the expense of extra resources, and it also requires pre-distributed keys for digital signatures. Network monitoring-based schemes Buchegger et al. presented a cooperation of nodes-fairness in distributed ad-hoc networks (CONFIDANT) protocol [7] to detect misbehaving nodes in the network. In addition to first-hand information, second-hand information is also used while computing a node's trustworthiness. In CONFIDANT protocol, first-hand information is propagated after every 3 s, while weight given to the second-hand information is 20 %. To avoid false praise attack [11], only negative experiences as second-hand information are shared among nodes. One of the shortcomings in CONFIDANT protocol is that ALARM messages used in the protocol can be exploited by the bad-mouthing nodes. Bad-mouthing nodes may generate ALARM messages against the legitimate nodes to induce biasness in the protocol's results [22]. Similarly, a collaborative reputation mechanism to enforce node cooperation in MANETs called CORE [9] also uses the second-hand information to compute the reputation of a node. Only positive experiences are shared by the node with other nodes in the network to avoid bad-mouthing attack. In contrast to CONFIDANT and CORE [9], observation-based cooperation enforcement in ad hoc networks (OCEAN) protocol [26] uses only first-hand observation to avoid false praising and bad-mouthing type of attack. In OCEAN, avoid-list strategy is implemented to not forward the traffic from misbehaving nodes. However, if a node identifies that its ID is inserted to the avoid-list, it may change its strategy. 
A tamper-proof hardware is required to secure the avoid-list to avoid the aforementioned incident. To filter the second-hand information, [14] proposed a defence trust scheme based on three parameters: (a) confidence value, indicating how many interactions took place between a recommender node and an evaluated node, (b) deviations in the opinions of recommender node and evaluating node, and (c) closeness value, indicating the distance-wise close of recommender node and the evaluating node. On the basis of the aforementioned values, an evaluating node filters the second-hand information in the proposed trust scheme. However, the second-hand information filtration mechanism in the proposed scheme may not work well in some scenario. For example, recommender nodes R 1,R 2…R N send the bad reputation value of misbehaving node M to evaluating node E, while node E has a good reputation value about node M based on its own first-hand information. In the aforementioned proposed scheme [14], such recommendations are filtered out because of more deviation in the trust values. In contrast, our proposed MATF scheme filters the recommendation by using the following methodology. When recommendations received at the evaluating node from the recommender node about some particular evaluated node, the evaluating node averages the recommendations already received from all the watchdog nodes (recommender nodes) then, finds the trust deviation of the recommender node's trust value from the average trust value. If the deviation in trust values is less than certain deviation threshold, weight is given to the recommendations in the trust computation; otherwise, no weight is given to these recommendations. Li et al. [27] proposed a simple trust model which takes into account the packet forwarding ratio as metric to evaluate the trustworthiness of neighbor nodes. A node's trust is computed by the weighted sum of packet forwarding ratio. To find a path trust, continued product of node's trust values in a routing path is computed. The aforementioned approach only considers packet forwarding behavior as a trust metric. A trust prediction model based on the node's historical behavior called trust-based source routing (TSR) protocol was presented in [28]. On the basis of assessment and prediction results, the nodes can select the shortest trusted route to transmit the required packets. One of the weaknesses of this work is that no second-hand information is considered for trust computation that may result in bootstrapping and data sparsity problem [14]. Trust-based security schemes like [16] only consider the security of data traffic, while schemes like [29, 30] only consider the security of control traffic. Moreover, the aforementioned solutions result in more energy consumption due to excessive information propagation and detection messages. In [31], energy efficiency is considered as one of the parameters and have improved previously existing trust-based algorithms. To summarize, the trust-based security schemes discussed in this section have some open problems that need to be solved. Most of the existing schemes use single trust criteria for the trust building process that causes the bootstrapping and data sparsity problem. Minimizing the bootstrapping time and the data sparsity problem is still an open issue [12, 14]. 
Moreover, using all the available information from each and every node in the network does help in building reputation and trust among nodes quickly, but as discussed earlier, it makes the system vulnerable to false report attacks. To solve the aforementioned false praise and bad-mouthing attacks, there should be a mechanism which filters the spurious second-hand information. Although, the aforementioned approaches suggest the misbehavior detection schemes, these schemes use single trust attributes like data forwarding. Moreover, second-hand information are considered from recommender nodes without any filtration that can result in erroneous trust estimation, especially under high nodal mobility. In contrast, our proposed MATF is based on multiple trust attributes with multiple observer nodes that results in better trust estimation. Second-hand information are considered from recommender nodes with deviation values less than the deviation threshold, which results in better trust estimation, especially under high nodal mobility. MATF: the proposed scheme In this section, we present the trust attributes, trust formulation in the proposed MATF, a mechanism for trust deviation test, and watchdog node selection process. In the proposed MATF, the watchdog node is the designated neighbor node of the evaluating node to monitor the activities of the evaluated node B on the basis of defined trust attributes and is represented by W. It can be the evaluating node itself or any other node that has been assigned the monitoring task by the evaluating node. The evaluating node computes the final trust of the evaluated node based on its own observations and those reported by the watchdog nodes. Our proposed trust model consists of three steps. The first step is the monitoring step, in which an evaluating node S and watchdog nodes W n observe the behavior of an evaluated node B in the context of trust attributes ρ. For clarity, in the following equations, we treat an evaluating node as one of the watchdog nodes. In the second step, an evaluating node aggregates its own observations and the watchdog nodes' observations in the context of each trust attribute. Finally, an evaluating node computes the final trust of an evaluated node in the context of all the trust attributes using the weighted sum. Also, the value range of ρ is [0,1], 0 being the minimum and 1 the maximum. Trust attributes Trust attributes are the factors responsible for shaping the trust levels and denoted by ρ. Each trust attribute value ranges between 0 and 1. Before going into the details as how we applied trust in MANETs, first, we discuss the basic trust attributes and then, define our trust model. We have identified the following trust attributes in the context of control and data traffic for the proposed trust model. Control packet generation (ρ cpg) Control packet is the protocol-specific information that nodes exchange to build routes and maintain topology. By using this trust attribute, an evaluating node assesses the trustworthiness of the evaluated node in the context of control packet generation behavior as specified in the underlying routing protocol. 
Observations of a node W about node B in terms of control packet generation is given in the following equation: $$ \rho_{\text{cpg}}^{W,B}\left(t,t+\Omega \right)= \frac{p}{p_{\text{exp}}}, $$ where t is the current time, Ω is the trust update period, p is the total actual number of control messages generated in the time interval (t,t+Ω) by node B as observed by W, and p exp is the expected number of control messages that should have been generated by node B. An evaluating node then aggregates its observations and the observations reported by the watchdog nodes to build a reputation about node B as shown in the following equation: $$ \rho_{\text{cpg}}(t,t+\Omega)= \alpha\rho_{\text{cpg}}^{S,B}+(1-\alpha)\left(\frac{1}{n}\sum\limits_{{i}=1}^{n} \rho_{\text{cpg}}^{W_{i},B}\right), $$ where α is the weight factor given to an evaluating node observation and watchdog node observations. Control packet forwarding (ρ cpf) Nodes in a MANET depend on mutual cooperation to forward traffic. A non-cooperative forwarding node may drop packet or forward control packet with delay that can result in the inconsistent view of the network topology. Let us denote the packets that are successfully overheard as p ack. The observations of a node W regarding node B in terms of control packet forwarding can be computed using following equation: $$ \rho_{\text{cpf}}^{W,B}\left(t,t+\Omega \right) =1- \frac{p - p_{\text{ack}}}{p}. $$ According to the above equation, the minimum possible packet loss rate observed at an evaluating/watchdog node W is 0, while the maximum possible packet loss rate is equal to 1, i.e., all the sent packets are dropped by misbehaving nodes. An evaluating node then aggregates its own observations and that of watchdog nodes to obtain an aggregated reputation of node B in terms of control packet forwarding as follows: $$ \rho_{\text{cpf}}\left(t,t+\Omega \right)=\alpha\rho_{\text{cpf}}^{(S,B)}+(1-\alpha)\frac{1}{n}\sum\limits_{{i}=1}^{n} \left(\rho_{\text{cpf}}^{(W_{i},B)}\right), $$ where α is the weight factor given to an evaluating node observations and watchdog node observations in the above equation. Data packet forwarding (ρ dpf) In addition of control traffic, nodes are also responsible of relaying data packets. A node may drop the data packet and forward data packets with delay or with maliciously modified contents. The observations of node W regarding node B in terms of data packet forwarding can be computed using the following equation: $$ \rho_{\text{dpf}}^{W,B}\left(t,t+\Omega \right) = 1-\frac{\xi - p_{\text{ack}}}{\xi}, $$ where ξ is the total number of data packet sent and p ack is the data packet successfully overheard at watchdog node W. Aggregating evaluating node's and watchdog node's observations, we get the aggregated reputation of an evaluated node in the context of data packet forwarding as given in the following equation: $$ \rho_{\text{dpf}}\left(t,t+\Omega \right) =\alpha\rho_{\text{dpf}}^{(S,B)}+ (1-\alpha)\frac{1}{n}\sum\limits_{{i}=1}^{n} \left(\rho_{\text{dpf}}^{(W_{i},B)}\right). $$ Trust formulation and algorithm We are now able to combine the equations introduced so far into our mathematical model for the multi-attribute trust computation. By combining Eqs. 2, 4 and 6, we obtain $$ {\tau_{S}^{B}}\left(t,t+\Omega \right) = \frac{\delta\rho_{\text{cpg}}+\beta\rho_{\text{cpf}}+\gamma\rho_{\text{dpf}}}{\delta+\beta+\gamma}, $$ where δ,β, and γ are weight factors assigned to each metric and δ+β+γ=3. 
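The computation in Eqs. (2), (4), (6), and (7) can be sketched in a few lines of Python. The function names, the α = 0.5 split, and the observation values below are illustrative assumptions rather than part of the MATF specification; all observations are assumed to be already normalised to [0, 1].

```python
def aggregate_attribute(own_obs, watchdog_obs, alpha=0.5):
    """Combine first-hand and averaged watchdog observations for one trust attribute (Eqs. 2, 4, 6)."""
    if not watchdog_obs:            # no second-hand information available yet
        return own_obs
    second_hand = sum(watchdog_obs) / len(watchdog_obs)
    return alpha * own_obs + (1.0 - alpha) * second_hand


def matf_trust(rho_cpg, rho_cpf, rho_dpf, delta=1.0, beta=1.0, gamma=1.0):
    """Weighted multi-attribute trust of Eq. (7); delta + beta + gamma = 3 with the defaults."""
    return (delta * rho_cpg + beta * rho_cpf + gamma * rho_dpf) / (delta + beta + gamma)


# Example: node B generated 4 of the 5 expected control packets, and forwarding was
# overheard for 18 of 20 control packets and 45 of 50 data packets.
rho_cpg = aggregate_attribute(4 / 5, [0.80, 0.75])
rho_cpf = aggregate_attribute(18 / 20, [0.85, 0.90])
rho_dpf = aggregate_attribute(45 / 50, [0.88, 0.92])

trust_of_B = matf_trust(rho_cpg, rho_cpf, rho_dpf)
print(round(trust_of_B, 3))         # compared against the trust threshold (0.4 in the experiments)
```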
The weights can be tuned based on the specific security goal to be achieved. For example, if a higher throughput and packet delivery is concerned, we consider the data traffic as vital, so data forwarding parameter carry more weight than other parameters, such as control packet generation and forwarding. An evaluating node S aggregates the trust computed for evaluated node B during the time interval (t,t+Ω) in the context of each trust attribute ρ and assigns weights to each aforementioned attributes in the above equation. The trust computed in Eq. (7) is compared with a threshold value to make a decision regarding trustworthiness of a node. Algorithm 1 presents the pseudo code for the MATF. In the proposed algorithm, an evaluating node and designated watchdog nodes observe the evaluated node in terms of different network functions during the monitoring period (lines 1–4). A filtration criteria is applied on the recommendations received from watchdog nodes (line 5). Based on the filtered recommendations, an evaluating node computes the trust of an evaluated node (lines 8–10). If the trust of an evaluated node is lower than a threshold (lines 12–13), it is isolated from the routing path and a new route selection process is initiated (Line 14). Trust deviation The trust computed by the watchdog nodes will be used as a second-hand information in the proposed scheme. To avoid bad-mouthing and false praise attacks, only those information will be used by the evaluating node which is received from the designated nodes and have a trust deviation value less than the deviation threshold. Trust deviation can be computed as given in the following equation. $$ \tau_{\text{dev}}=\left|\left(\frac{1}{k-1}\sum\limits_{{i}=1}^{k-1}\tau_{(W_{i},j)}\right)-\tau_{(W_{k},j)}\right| \leq \tau_{\mathrm{dev-th}}, $$ where \(\tau _{(W_{i},j)}\) is the average trust already received from the watchdog node W i about the evaluated node j and \(\tau _{(W_{k},j)}\) is the trust recommendation received from watchdog node W k about the evaluated node j. Watchdog selection process In order to avoid the bad-mouthing and false praise attacks, the second-hand information in the proposed MATF is considered from only designated and trustworthy watchdog nodes, as discussed in the previous subsection. In this section, we discuss the selection process of the watchdog nodes, which will perform the monitoring task. When the network is initialized, each evaluating node selects a set of neighboring nodes called watchdog set to monitor the behavior of a particular evaluated node. The proposed security scheme allows flexibility in the watchdog selection. Depending on the available network topology, one or multiple watchdogs may be selected. There is no fixed ratio per node of watchdog nodes to be selected. It will be varying depending on the available network topology. It is worth mentioning that in case of any change in network topology, an evaluating node will re-compute the watchdog nodes. The criteria and selection process of watchdog nodes are presented in Algorithm 2, which is a modified version of the relay node selection algorithm presented in [32]. An example scenario of the detailed working of the watchdog selection algorithm is presented below. In the given scenario, node S discovers its neighbors through exchange of control messages and calculates the one-hop neighbor set N 1 and the two-hop neighbor set N 2 (used as an input in Algorithm 2). 
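Before the selection example is completed below, the deviation test of Eq. (8) can be illustrated with a short sketch. The helper name and the recommendation values are assumptions for illustration, and the 0.4 threshold is the value later found to be optimal in the experiments.

```python
def accept_recommendation(previous_recs, new_rec, dev_threshold=0.4):
    """Accept a watchdog recommendation about node j only if it passes the Eq. (8) deviation test."""
    if not previous_recs:                       # nothing received about j so far: accept by default
        return True
    mean_previous = sum(previous_recs) / len(previous_recs)
    return abs(mean_previous - new_rec) <= dev_threshold


already_received = [0.82, 0.78, 0.80]           # earlier watchdog reports about node j
print(accept_recommendation(already_received, 0.75))   # True: consistent with the other watchdogs
print(accept_recommendation(already_received, 0.10))   # False: likely bad-mouthing, filtered out
```

Recommendations that fail this test receive no weight in the aggregation step, as described earlier.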
From the set N 1, each evaluating node S computes the relay node set R(S) (lines 6–18) and the watchdog set W(S), having a trust value greater than the trust threshold (lines 20–26). R(S) is the smallest possible subset of N 1(S) required to reach all nodes in N 2(S). As an example, in Fig. 1, R(S)={B,C,E} contains the minimum number of one-hop neighbors of S required to reach all two-hop neighbors of S. Thereafter, node S selects the watchdog set for each node present in R(S)={B,C,E}. To calculate the watchdog set for each node in the R(S), the node S takes the intersection of the one-hop neighbor set N1(S) and the one-hop neighbor set of each relay node. An example working scenario of the MATF Node S broadcasts the W(S) to the neighboring nodes by appending it in the periodic control messages along with R(S). This enables the neighboring nodes of S to check whether or not they have been selected as a watchdog. By utilizing the broadcast information sent by the node S, each node builds the watchdog selector set. The watchdog selector set consists of all those nodes that have selected node W as a watchdog. For example, as reflected in Fig. 1, node S populates the W(S){A,H} for the relay node C by taking the intersection of sets N 1(S)={A,B,C,D,E,H} and N 1(C)={A,S,H,X}. Thereafter, node S broadcasts the watchdog set to inform both nodes A and H that from now onward, these nodes have to monitor node C. Formal modeling and verification of the MATF Formal verification is the process verifying that algorithms work correctly with respect to some formal property [33]. Formally modeling systems helps to analyze the interconnection of components and processes and how the information is processed in the system [34]. Formal modeling provides valuable tools to design, evaluate, and verify such protocols [35]. To verify the correctness of the MATF, we use HLPNs for the modeling and analysis [18]. HLPNs provide a mathematical representation and help to analyze the behavior and structural properties of the system. To perform a formal verification of the MATF, the HLPN models are first translated into SMT-Lib [36] using the Z3 Solver [37]. Then, the correctness properties were identified and verified to observe the expected behavior of the models. In this section, we present a brief overview of HLPNs and a formal verification of the MATF. High-level Petri nets Petri nets are used to model systems which are non-deterministic, distributed, and parallel in nature. HLPNs are a variation of conventional Petri nets. A HLPN is a structure comprised of a seven-tuple, N=(P,T,F,φ,R,L,M 0). The meaning of each variable is provided in Table 1. Table 1 Variables and meaning SMT-Lib and Z3 Solver SMT is an area of automated deduction for checking the satisfiability of formulas over some theories of interest and has the roots from Boolean satisfiability solvers (SAT) [34]. The SMT-Lib is an international initiative that provides a standard benchmarking platform that works on common input/output framework. In this work, we used Z3, a high-performance theorem solver and satisfiability checker developed by Microsoft Research [38]. Modeling and verification of the MATF To model and verify the design of the MATF, the places P and the associated types need to be specified. The data type refers to a non-empty set of data items associated with a P. The data types used in the HLPN model of the MATF are described in Table 2. Figure 2 a present the HLPN model for the relay and watchdog node selection in the MATF. 
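As a concrete companion to the Fig. 1 watchdog example above, the sketch below derives the watchdog set for relay C from the quoted neighbour sets. The function name and the trust values assigned to A and H are assumptions for illustration; Algorithm 2 itself, including the relay selection, is not reproduced here.

```python
def watchdogs_for_relay(n1_source, n1_relay, trust, source, relay, trust_threshold=0.4):
    """Common one-hop neighbours of the source and the relay whose trust exceeds the threshold."""
    common = (n1_source & n1_relay) - {source, relay}
    return {node for node in common if trust.get(node, 0.0) > trust_threshold}


n1_S = {"A", "B", "C", "D", "E", "H"}    # one-hop neighbours of S in Fig. 1
n1_C = {"A", "S", "H", "X"}              # one-hop neighbours of relay C in Fig. 1
trust = {"A": 0.9, "H": 0.8}             # assumed current trust values held at node S

print(watchdogs_for_relay(n1_S, n1_C, trust, "S", "C"))   # {'A', 'H'}, as in the example
```

Node S would then append this set to its periodic control messages so that A and H learn that they are to monitor C, as described above.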
Moreover, message forwarding, trust computation, and malicious node isolation are depicted in the HLPN model shown in Fig. 2 b. As depicted in Fig. 2 a, there are six places in relay and watchdog selection HPLN, whereas seven places in the HLPN model for trust computation, as shown in Fig. 2 b. The names of places and description are given in Table 3. The next step is to define the set of rules, pre-conditions, and post-conditions to map to T. The mapping of transition T to the processes used in the MATF, referred to as rules (R). After defining the notations, we can now define formulas (pre- and post-conditions) to map on transitions in the following. The set of transitions T ={Gen-Nlist, Gen-WDN, Gen-relay, Broadcast, Forward, Trust-obs, Comp-Mali }. The following are the rules used for modeling and verification. HLPN of the MATF (a, b) Table 2 Data types and their descriptions Table 3 Places and mappings of data types to the places The rule R1 depicts the HELLO message processing. When the network is initialized, nodes exchange the HELLO messages with each other to discover the neighbors in the network. The HELLO message contains the list of one-hop neighbors of a node. On the basis of received HELLO messages, a node compute the one-hop and one-hop neighbors. $${} \begin{aligned} & \mathbf{R(Gen-Nlist)}=\forall hp\in HP,\forall gn\in G-Node\mid \forall 1hl\in 1HL\\ &| 1hl[2]:= GenNeighbour \left(hp,gn[1]\right)\wedge 1HL\prime=1HL\cup \\ &\left\{\left(gn[1],1hl[2]\right)\right\}\wedge \forall 2hl\in 2HL | 2hl[2]:=Gen-2HN\\ &(hp,\ gn[1])\wedge 2HL\prime=2HL\cup \{(gn[1],2hl[2])\} \end{aligned} $$ (R1) After populating one-hop and two-hop list, watchdog nodes and relay nodes are selected for monitoring and packet relaying purpose, respectively, as depicted in Algorithm 1. In rule R2, using the one-hop list, watchdog nodes are selected. The nodes that are not relay nodes and in the one-hop list of the relay node and source node are selected as watchdog. Also, the set of relay nodes are selected from one-hop neighbor list to reach two-hop neighbors. The transition Gen-WDN and Gen-relay is mapped to the following rules R2 and R3, respectively. $${} \begin{aligned} & \mathbf{R(Gen-WDN)}=\! \forall g1\in G-1HL, \forall \ mpl\in \!G-MPL,\forall wdl \\ &\in WDL,\forall gn\in Gn | gn[1] \notin mpl\longrightarrow wdl[1]:=mpl [1]\wedge\\ &wdl\left [2\right] :=mpl\left [2\right]\wedge wdl[3]:=gn[1] \wedge WDL\prime=WDL\cup\\ &\left\{\left(wdl[1], wdl[2], wdl [3] \right)\right\} \end{aligned} $$ $${} \begin{aligned} & \mathbf{R(Gen-relay)}=\forall \ g1\in G-1HL,\forall g2\in G-2HL,\forall gn\in\\ &GN, \forall mpl\in relay-L| \left[Con\left(g1{[1]}_{i:g1[1]}, gn[1], g2[1]\right)>\right.\\ &Con \left(g1{[1]}_{j:g1[1] \wedge j\ne i}, gn[1] g2[1]\right)\vee Con-iso\left(g1[1], gn\right.\\ &\left.\left.\![1], g2 [1]\right)=True\right] \longrightarrow \ mpl[\!1]:=gn[1]\wedge mpl[2]:=g1\\ & [1]\wedge relay-L\prime= relay-L\cup \{(mpl[1],mpl[2])\} \end{aligned} $$ So far, the watchdog and the relay nodes are selected. Now, the source node generates a message and wants to broadcast it into the network. In rule R4, the same process is depicted, where the source node generates the message and in response the watchdog nodes overhear it and the respective relay node receives it. 
$${} \begin{aligned} & \mathbf{R(Broadcast)}= \forall m\in Msg,\forall oh-sn\in OH-SN, \forall \ rm\in \\ &Rec-SN|oh-sn [1]:=m[1]\wedge oh-sn [2]:=m[2]\wedge oh-\\ &sn [4] :=m[4] \wedge oh-sn[5]:=m[5] \wedge OH-SN\prime=OH-\\ &SN \cup \lbrace(oh-sn[1],oh-sn[2],oh-sn[3],oh-sn[4],\\ &oh-sn[5],oh-sn[6],oh-sn [7])\rbrace \wedge rm[1]:=m[1]\wedge \\ &rm[2]:=m[2]\wedge rm[4]:=m[4]\wedge rm[5]:=m[5]\wedge Rec-\\ &SN\prime=Rec-SN \cup (rm[1], rm[2], rm[3],rm[4],rm[5], \\ &rm[6],rm[7],rm[8])\rbrace \end{aligned} $$ The relay node forwards the message that it received from the source node. When the relay node forwards the message, the watchdog nodes and the source node overhear the message forwarded by the relay node. The same is depicted in rule R5. We compute the trust of the relay nodes by (a) computing the number of messages forwarded by the relay nodes by analyzing the overheard messages of source node and watchdog nodes, (b) checking the contents of the message forwarded by the relay node, and (c) by investigating if the relay node generate its own control messages. The computations are performed the same way as explained in Algorithm 1. $${} \begin{aligned} & \mathbf{R(Forward)}=\forall ohm \in OH-relay \forall rem \in Get-Msg,\forall f\\ &\in Flood,\forall ohsn \in OH-SNMP \mid rem[2]\neq NULL \wedge rem \\ &[8]=Send \left(\right) \longrightarrow (f[1]:=rem[1] \wedge f[2]:=rem[2]\wedge f[3]\\ &\!\!:=rem[4]\wedge f[4]:=rem[5] \wedge Flood=Flood \cup (f[1],f[2],\\ &f[3],f[4])\wedge (ohm[1]:=rem[1]\wedge ohm[2]:=rem[2]\wedge\\ &ohm[\!3]:=rem[\!3]\wedge ohm[4]:=rem[4]\wedge ohm[5]:=rem[5]\\ &\wedge ohm[6]:=rem[6]\wedge ohm[7]:=rem[7] \wedge OH-relay\prime=\\ &OH-relay \cup (ohm[\!1],ohm[\!2],ohm[3], ohm[\!4],ohm[\!5],\\ &ohm[6],ohm[\!7])ohsn[\!6]:=rem[2]\wedge OH-SNMP=OH\\ &\!-SNMP\! \cup \!\lbrace(ohsn[1],ohsn[2],ohsn[3],ohsn[4],ohsn[5],\\ &ohsn[6],ohsn[7]) \rbrace) \end{aligned} $$ In rule R6, the source node computes the trust of relay node based on its own observations and those received from watchdog nodes according to Eq. 7. In rule R7, trust computed in rule R6 is compared to the trust threshold and if certain node trust falls below threshold, the node will be isolated from the routing path. $${} \begin{aligned} & \mathbf{R(Trust-Obs)}=\forall gsno \in Get-SNO,\forall gwdo \in Get-\\ &WDO,\forall t\in Trust \mid gsno[\!4]=gwdo[\!4] \wedge gsno[\!2]=gwdo[\!2]\\ &\!\longrightarrow t[1](gsno[\!6])\cup (gwdo[6]) gsno[2] \alpha \wedge Content(gsno\\ &[2],gsno[6])= Content (gwdo[\!2],gwdo[\!6])\longrightarrow t[2] \beta \wedge\\ &gsno[5]=TC \wedge gwdo[5]=TC \wedge Gen-TC-Pack (gwdo\\ &[4],gsno[4])> gsno[7] \longrightarrow t[3]\gamma \wedge t[4],gsno[4]\wedge T\prime=\\ &T \cup \lbrace(t[1],t[2],t[3],t[4])\rbrace \end{aligned} $$ $${} \begin{aligned} & \mathbf{R(Comp-Mali)}=\forall \ gto \in Get-T,\forall gth \in Get-Th\forall\\ &cm \in Comp-Mali \mid Sum(gto[1], vgto[2],gto[3])< gth\\ &\!\longrightarrow \!cm[\!1]:= gto[\!4] \wedge cm[\!2]:=Sum(gto[\!1],gto[\!2],gto[\!3])\\ &CM\prime= CM\cup (cm[1],cm[2]) \end{aligned} $$ Verification of properties In our analysis, we aim at verification of the following correctness properties. Property 1: common neighbors of source node S and relay node x having a trust greater than the trust threshold must be selected as watchdog nodes. second-hand information must be considered from only those nodes which are designated as watchdog nodes. 
Property 3: second-hand information is considered only from nodes having a trust value greater than the trust threshold and whose trust deviation is less than the deviation threshold. Property 4: the trust of a malicious node M misbehaving in the context of one of the trust attributes must be decremented as per the specification of the MATF.
Verification results
To perform the verification of the HLPN models using Z3, we unroll the model M and the formula f (the properties), which yields $M_k$ and $f_k$, respectively. These are then passed to Z3 to check whether $M_k \models f_k$, i.e., whether the formula f holds in the model M up to the bound k (execution time). The solver performs the verification and reports the result as satisfiable (sat) or unsatisfiable (unsat). If the answer is sat, the solver generates a counterexample, which depicts a violation of the property f. If the answer is unsat, the property f holds in M up to the bound k (in our case, k is the execution time). Here we verify the properties listed above. It is worth mentioning that the formal verification establishes the correctness properties of the proposed scheme, not its performance (for the performance evaluation, please refer to Section 5). Because verification is a time-consuming process, execution time is an important metric when verifying the properties of the MATF. Figure 3 depicts the time taken by the Z3 solver to prove that the properties discussed in Subsection 4.4 hold in the model.
Fig. 3: Verification time taken by the Z3 Solver.
Experimental performance analysis
In this section, we evaluate the performance of the MATF in comparison with the scheme proposed in [14], referred to as the single attribute-based trust framework (SATF) in what follows. Network Simulator 2 (NS-2) [19, 39] is used to implement and analyze the performance of the proposed MATF. For the simulation experiments, the mobility speed of the nodes is varied between 1 and 10 m/s. For data traffic, 30 % of the nodes in the network are selected as source-destination pairs (sessions), spread randomly over the network. Only 512-byte data packets are sent, and the packet sending rate of each pair is varied to change the offered load in the network. All traffic sessions are established at random times near the beginning of the simulation run and stay active until the end. Moreover, the widely used random waypoint mobility model [40] is adopted for node mobility: each node selects a random destination and moves towards it at a randomly chosen speed, uniformly distributed between 0 and a predefined maximum speed. The trust threshold is set to 0.4 in this set of experiments [14]; it represents the maximum tolerated misbehavior for a node to remain part of the network [41], i.e., the trust level that a node has to maintain to be considered legitimate. To handle the high-dimensional parameter space, we fix the commonly used simulation parameters stated in Table 4. The number of simulation runs is chosen large enough to obtain 95 % confidence intervals for the results.
Table 4: Simulation parameters.
Experimental adversarial model
In our adversarial model, the malicious node count is set to 10–30 % of the total nodes in the network.
In order to evaluate the proposed scheme thoroughly against adversary nodes, malicious nodes are selected at random so that their distribution in the network is uniform. In our experiments, we simulated the packet dropping attack by having malicious nodes drop control and data packets, randomly or selectively, with 25 % probability. Malicious nodes also misbehave by launching the withholding attack against legitimate nodes: a misbehaving node does not generate control traffic as per the specification of the routing protocol. Because of this behavior, legitimate nodes are unable to maintain a consistent and up-to-date view of the network. Furthermore, the number of malicious nodes exercising bad-mouthing and false-praise attacks in collusion is varied from 10 to 50 % of the total nodes in the simulation scenarios.
Simulation results and analysis
We now discuss the results of the comparison between the MATF and the SATF in terms of several performance metrics.
Impact of trust deviation threshold
The trust deviation threshold means that second-hand information whose deviation from an evaluating node's observations is greater than this threshold is filtered out when computing the evaluated node's trustworthiness. To select the best trust deviation threshold for filtering second-hand information, we simulate the MATF for varying deviation thresholds with an increasing number of dishonest nodes. For this set of simulations, the mobility speed is set to 1–4 m/s. Dishonest nodes exercise the false-praise and bad-mouthing attacks to show the impact on the detection rate and the false positive rate, respectively (Fig. 4).
Fig. 4: Trust value computation vs. simulation time.
Figure 5a, b illustrates the impact of an increasing number of dishonest nodes on the detection rate and the false positive rate under different trust deviation thresholds. It can be inferred from Fig. 5a that the detection rate first increases as the deviation threshold is raised to 0.4 and then decreases for larger thresholds as the number of dishonest nodes increases. The reason is that with a higher trust deviation threshold, false recommendations from bad-mouthing nodes are not filtered out during the trust computation of evaluated nodes, which gives misbehaving nodes more opportunities to remain undetected.
Fig. 5: Impact of trust deviation threshold on detection rate and false positive rate (a, b).
Similarly, Fig. 5b shows the impact of varying the trust deviation threshold for an increasing number of dishonest nodes. It is evident from the figure that the false positive rate increases with the trust deviation threshold. The reason is that with a higher deviation threshold, such as 0.5 or 0.6, only false recommendations from bad-mouthing nodes deviating by more than 50–60 % are filtered out, so legitimate nodes end up being treated as misbehaving nodes, hence a higher false positive rate. It can be concluded from these results that 0.4 is an optimal trust deviation threshold in terms of detection rate and false positives. We therefore use a trust deviation threshold of 0.4 for the rest of the simulation scenarios.
Trust values
Figure 4 shows the trust value computed for a specific misbehaving node at different simulation time instants.
As shown in the figure, the MATF decrements the trust of the misbehaving node more rapidly towards the threshold because of its multi-attribute criteria and efficient filtering of dishonest recommendations, and hence supports more informed decisions. The MATF evaluates the evaluated node on the basis of different network functions, so more informed and prompt decisions about the trustworthiness of nodes can be taken. In the SATF, by contrast, the trust builds up slowly due to the high bootstrapping time and the data sparsity problem, because evaluated nodes are observed in the context of data forwarding only. It can be inferred from Fig. 4 that the MATF overcomes the bootstrapping and data sparsity problems at network start-up more efficiently than the SATF.
Detection time and detection rate
Detection time refers to the time taken by the trust-based security scheme to detect and declare a misbehaving node as malicious. Similarly, the malicious node detection rate is calculated as the percentage of malicious nodes detected among the total number of malicious nodes within the network. Figure 6a shows the malicious node detection time for increasing node speed in the MATF and the SATF. The figure shows that the time required by the MATF for increasing node speed is smaller than that required by the SATF; the detection time for misbehaving nodes in the SATF is almost double that of the MATF. The reason for this behavior is the slow trust building process discussed in the analysis of Fig. 4. Overall, the detection time increases with node speed, because at higher node speeds nodes have less time to interact, and it therefore takes longer to build trust under high node mobility.
Fig. 6: Effect on detection rate (a–e).
Figure 6b shows the detection rate for increasing node speed. As shown in the figure, the detection rate is higher for the MATF. The reason is that in the MATF the node's trust is analyzed in multiple contexts, which speeds up detection. Similarly, Fig. 6c shows the malicious node detection rate over the simulation time. The figure shows that the percentage of malicious nodes detected is higher for the MATF than for the SATF: the detection rate reaches 100 % at time t = 500 s in the MATF, whereas only half of the malicious nodes have been detected in the SATF. Figure 6d illustrates the impact of increasing the number of nodes on the detection rate while keeping the mobility fixed at 1–6 m/s. It can be inferred from the figure that there is a slight increase in the detection rate with increasing node density. This is because, under high node density, more watchdogs are available to observe the behavior of an evaluated node, which leads to a better detection rate. The impact of colluding dishonest attackers on the detection rate is shown in Fig. 6e. As the figure shows, the MATF scheme is able to keep the detection rate at nearly 90 % even with a higher number of false-praise nodes, in contrast to the SATF. The reason is the efficient trust deviation criterion, which supports more confident decisions: recommendations from colluding dishonest attackers are filtered out and are not considered in the trust computation of an evaluated node.
False positive rate
The false positive rate is the ratio of the legitimate nodes declared as malicious to the total number of legitimate nodes.
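To make these two metrics concrete, the following is a minimal sketch, not taken from the paper's implementation, of how the detection rate and the false positive rate could be computed from simulation output; it assumes per-node ground-truth labels and per-node verdicts are available, and all names and the example data are illustrative assumptions.

```python
# Minimal sketch: computing detection rate and false positive rate from
# simulation output. Assumes each node carries a ground-truth label
# (malicious or legitimate) and a verdict from the trust scheme
# ("flagged" if its trust fell below the 0.4 trust threshold).
# Names and data layout are illustrative, not taken from the MATF code.

def detection_and_false_positive_rate(nodes):
    """nodes: iterable of dicts like {"malicious": bool, "flagged": bool}."""
    total_malicious = sum(1 for n in nodes if n["malicious"])
    detected = sum(1 for n in nodes if n["malicious"] and n["flagged"])
    total_legitimate = sum(1 for n in nodes if not n["malicious"])
    false_positives = sum(1 for n in nodes if not n["malicious"] and n["flagged"])

    # Detection rate: detected malicious nodes / total malicious nodes.
    detection_rate = detected / total_malicious if total_malicious else 0.0
    # False positive rate: legitimate nodes declared malicious / total legitimate nodes.
    fpr = false_positives / total_legitimate if total_legitimate else 0.0
    return detection_rate, fpr

# Example: 3 malicious nodes (2 caught), 7 legitimate nodes (1 wrongly flagged).
nodes = (
    [{"malicious": True, "flagged": True}] * 2
    + [{"malicious": True, "flagged": False}]
    + [{"malicious": False, "flagged": True}]
    + [{"malicious": False, "flagged": False}] * 6
)
print(detection_and_false_positive_rate(nodes))  # (0.666..., 0.1428...)
```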
The effect of node speed on the false positive rate under the MATF and the SATF is shown in Fig. 7a. The figure illustrates that the false positive rate is much lower in the MATF than in the SATF. The reason is that the MATF uses second-hand information only from designated nodes whose deviation in trust values is less than the deviation threshold, and hence makes more informed decisions about a node's trustworthiness, whereas the SATF uses second-hand information from all neighbor nodes to compute the trustworthiness of a node. Since some of the deployed nodes exercise the bad-mouthing attack against legitimate nodes, this causes a higher false positive rate in the SATF. Overall, the figure shows that the false positive rate increases with node speed. This is because an evaluating node and the watchdog nodes cannot differentiate between intentional and unintentional malicious activities: even if a node fails to forward a packet because of network conditions, this is regarded as a malicious activity. As a result, the false positive rate increases at high node speeds.
Fig. 7: Effect on false positives (a–c).
Similarly, Fig. 7b shows the effect of increasing node density on the false positive rate. The figure illustrates that, as node density increases, the false positive rate of the MATF is lower than that of the SATF. The reason is that more legitimate nodes are selected as watchdogs, which provides accurate and precise information about the trustworthiness of the evaluated nodes, and that an efficient filtration criterion is used to filter out dishonest recommendations. In the SATF, the false positive rate increases as the number of bad-mouthing and false-praising nodes increases, which leads to false trust estimates for legitimate nodes. Figure 7c shows the impact of dishonest colluding attackers on the false positive rate. It is evident from the figure that the MATF withstands the increasing number of dishonest nodes effectively in terms of false positives. The reason is the efficient trust deviation criterion used in the proposed scheme, as previously discussed in the analysis of Fig. 6e.
Packet delivery ratio
The packet delivery ratio (PDR) is the ratio of the number of data packets received at the destination to the number of data packets generated by the source node. With the malicious node count set to 20 % of the total number of deployed nodes, the control and data packet dropping and withholding attacks are implemented. Figure 8a illustrates the effect of the mobility speed of the nodes on the PDR while keeping the data rate constant at 4 kbps. Figure 8a shows that the MATF has a higher PDR than the SATF, as it isolates malicious nodes from the routing paths much earlier (as shown in Fig. 6c). Moreover, it can also be observed that the PDR decreases with increasing node speed, because at higher node speeds packets are dropped due to frequent link changes. These results illustrate that the MATF eliminates malicious nodes from the network in a timely manner and improves the PDR by 10–12 % across the mobility speeds considered.
Fig. 8: Effect on PDR and packet loss rate (a, b).
Packet loss rate
In this section, we present the packet loss analysis of the proposed MATF.
Although the packet delivery ratio already gives a broad picture of the efficiency and effectiveness of a scheme, the reason for presenting a separate packet loss analysis is to show the effectiveness of the MATF in reducing the packet loss caused specifically by misbehaving nodes. There are many causes of packet loss in MANETs, such as link errors, queue overflow, frequent link changes, and malicious drops [42, 43]; in these simulation results, we consider only the packet loss caused by malicious nodes dropping packets. Figure 8b shows the packet loss rate for increasing node speed in the MATF and the SATF. The results show that the MATF has an 8–15 % lower packet loss rate than the SATF. The reason for this behavior is that misbehaving nodes are detected and isolated in good time on the basis of the multi-attribute trust criteria, whereas in the SATF the misbehaving nodes are detected and isolated very late in the simulation (as shown in Fig. 6c), which gives them more opportunities to drop packets.
Energy consumption
The major causes of energy consumption in MANETs are packet transmission and reception. To compute the energy consumed by the nodes in both the MATF and the SATF, we use the generic energy model supported by NS-2, which can estimate the energy consumption for continuous and variable transmission power levels. The parameters we used are as follows: 100 J of initial energy, 0.05 W for transmission, 0.02 W for reception, 0.01 W for idling, and 0.0 W when sleeping. It is worth mentioning that the energy consumed is reported as a percentage of a node's initial energy. The energy consumption of the proposed MATF in comparison with the SATF is shown in Fig. 9a. As there is no extra message communication in the MATF compared with the SATF, the figure shows that the energy consumption of the two schemes is almost equal. The slight increase in energy consumption for the MATF arises because its nodes require some extra processing to compute the trust of other nodes on the basis of the multi-attribute trust criteria. Moreover, the packet delivery ratio is higher and the packet loss due to malicious nodes is lower in the MATF than in the SATF, which also increases energy consumption: more packets are successfully delivered and travel longer paths through the network, consuming energy at the nodes along the routing path.
Fig. 9: Effect on energy consumption and NRL (a, b).
Normalized routing load
The normalized routing load (NRL) is the ratio of the total number of control packets transmitted by the nodes to the total number of data packets received at the destination nodes. It is used to evaluate the efficiency of a routing protocol. Figure 9b illustrates that the NRL is smaller in the MATF than in the SATF. The reason is the higher packet delivery per control packet in the MATF: since the SATF suffers from greater packet loss, as shown in the figure, the number of control packets sent per delivered data packet is higher, which results in a higher NRL. Overall, the routing overhead increases with node speed in both schemes, because more control packets must be transmitted to maintain routes under high node mobility.
Security analysis
In this section, we present the security analysis of the proposed MATF against various attacks.
Security against bad-mouthing and false-praise attacks
In the MATF, second-hand information is considered only from nodes which are designated as watchdog nodes, whose trust value is greater than the trust threshold, and whose trust deviation is less than the deviation threshold. Owing to these criteria for accepting second-hand information, the MATF effectively withstands bad-mouthing and false-praise attacks.
Security against selective misbehavior
A smart adversary node may misbehave selectively, for example dropping data packets while forwarding control packets. Depending on the security requirements and the flexibility provided by the MATF, an evaluating node can selectively use such smart misbehaving nodes for different network functions. For example, if an adversary node misbehaves by dropping data packets only, an evaluating node can still use that node for other network functions, such as control packet forwarding.
Security against colluding attackers
Because an evaluating node bases the trust attributes on local state and its own observations, collusion attacks are largely ineffective against the proposed scheme. The only collusion attack that is possible against the scheme is the publication of false-praise and bad-mouthing information against legitimate nodes. In the proposed MATF, an efficient trust deviation criterion is used to filter out such false-praise and bad-mouthing information, as discussed for Figs. 6e and 7c. The results presented in those figures show that the proposed MATF withstands colluding attackers comprising up to 30 % of the total nodes.
Conclusion and future work
In this work, we proposed a scheme based on multi-attribute trust criteria to minimize the bootstrapping time and to deal with selective misbehavior. The proposed trust model augments the security of a MANET by enabling a node to identify and remove malicious nodes from the routing paths by overhearing transmissions at multiple nodes (the evaluating node and the watchdog nodes). The proposed security scheme not only provides a way to detect attacks and malicious behavior accurately and in a timely fashion, but also reduces the number of false positives by using the concept of multi-watchdogs. The proposed trust model is evaluated in the context of the OLSR routing protocol. Moreover, to prove the correctness of the proposed scheme, we also presented a formal verification of the MATF using HLPN, SMT-Lib, and the Z3 solver. The comparison between the MATF and the SATF has shown that our proposed scheme detects malicious nodes more efficiently, and that the MATF gives promising results under high node mobility and frequent topology changes. Simulation results show that the proposed trust model achieves a 98–100 % detection rate of malicious nodes with only 1–2 % false positives. In a network with malicious nodes, the proposed MATF achieves an improved packet delivery ratio of about 90–75 %, compared with 80–65 % for the SATF. We plan to extend our work by using an adaptive mechanism for assigning weights to the different trust attributes based on run-time network conditions. Moreover, we will evaluate the proposed scheme as an extension to a reactive routing protocol such as DSR to analyze the effect of the underlying routing protocol.
1 The difference between the trust values of a recommender node and an evaluating node about a particular evaluated node.
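The footnote above defines the trust deviation used for filtering. As a concrete but non-authoritative illustration of the acceptance criteria described in the security analysis (designated watchdog, recommender trust above the 0.4 trust threshold, deviation below the 0.4 deviation threshold), here is a minimal sketch; the final aggregation is a simple placeholder average rather than the paper's Eq. 7, and all names and values are illustrative assumptions.

```python
# Minimal sketch of the second-hand information filtering described above.
# A recommendation is used only if (a) the recommender is a designated
# watchdog, (b) the recommender's own trust exceeds the trust threshold,
# and (c) the deviation between the recommended value and the evaluating
# node's first-hand observation is below the deviation threshold.
# The aggregation below is a placeholder average, NOT the paper's Eq. 7.

TRUST_THRESHOLD = 0.4       # trust threshold used in the experiments (assumed here)
DEVIATION_THRESHOLD = 0.4   # deviation threshold found optimal in the experiments

def accept(rec, own_observation, watchdog_ids, recommender_trust):
    """rec: dict like {"from": node_id, "value": float}."""
    return (
        rec["from"] in watchdog_ids
        and recommender_trust[rec["from"]] > TRUST_THRESHOLD
        and abs(rec["value"] - own_observation) < DEVIATION_THRESHOLD
    )

def aggregate_trust(own_observation, recommendations, watchdog_ids, recommender_trust):
    used = [r["value"] for r in recommendations
            if accept(r, own_observation, watchdog_ids, recommender_trust)]
    # Placeholder combination of first-hand and accepted second-hand evidence.
    return (own_observation + sum(used)) / (1 + len(used))

recs = [{"from": "w1", "value": 0.35},  # accepted
        {"from": "w2", "value": 0.9},   # filtered: deviation too large
        {"from": "n9", "value": 0.1}]   # filtered: not a designated watchdog
print(aggregate_trust(0.3, recs, watchdog_ids={"w1", "w2"},
                      recommender_trust={"w1": 0.8, "w2": 0.7, "n9": 0.9}))
# -> 0.325 (only w1's recommendation survives the filters)
```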
S Zhao, A Aggarwal, S Liu, H Wu, in IEEE Wireless Communications and Networking Conference (WCNC2008). A secure routing protocol in proactive security approach for mobile ad-hoc networks (IEEELas Vegas, 2008), pp. 2627–2632. doi:10.1109/WCNC.2008.461. YC Hu, A Perrig, DB Johnson, Ariadne: A secure on-demand routing protocol for ad hoc networks. Wirel. Netw. 11:, 21–38 (2005). P Papadimitratos, ZJ Haas, in IEEE Applications and the Internet Workshops. Secure link state routing for mobile ad hoc networks (IEEEOrlando, 2003), pp. 379–383. MS Obaidat, I Woungang, SK Dhurandher, V Koo, A cryptography-based protocol against packet dropping and message tampering attacks on mobile ad hoc networks security and communication networks (John Wiley & Sons, Ltd, Malden MA, 2014). T Zahariadis, P Trakadas, HC Leligou, S Maniatis, P Karkazis, A novel trust-aware geographical routing scheme for wireless sensor networks. Wirel. Pers. Commun. 69(2), 805–826 (2013). G Zhan, W Shi, J Deng, Design and implementation of TARF: a Trust-Aware Routing Framework for WSNs. IEEE Trans. Dependable Secure Comput. 9(2), 184–197 (2012). S Buchegger, JY Le Boudec, in Proceedings of the 3rd ACM international symposium on Mobile ad hoc networking & computing. Performance analysis of the CONFIDANT protocol (ACMNew York, 2002), pp. 226–236. A Chakrabarti, V Parekh, A Ruia, in Advances in Computer Science and Information Technology.Networks and Communications (Springer). A trust based routing scheme for wireless sensor networks (SpringerBerlin Heidelberg, 2012), pp. 159–169. P Michiardi, R Molva, in Advanced communications and multimedia security. Core: a collaborative reputation mechanism to enforce node cooperation in mobile ad hoc networks (SpringerUSA, 2002), pp. 107–121. S Ganeriwal, LK Balzano, MB Srivastava, Reputation-based framework for high integrity sensor networks. ACM Trans. Sens. Netw. (TOSN). 4(3), 15 (2008). O Khalid, SU Khan, SA Madani, K Hayat, MI Khan, N MinAllah, J Kolodziej, L Wang, S Zeadally, D Chen, Comparative study of trust and reputation systems for wireless sensor networks. Secur. Commun. Netw. 6(6), 669–688 (2013). A Ahmed A, KA Bakar, MI Channa, K Haseeb, AW Khan, A survey on trust based detection and isolation of malicious nodes in ad hoc and sensor networks. Front. Comput. Sci. 9(2), 280–296 (2015). S Marti, TJ Giuli, K Lai, M Baker, in ACM Proceedings of the 6th annual international conference on Mobile computing and networking. Mitigating routing misbehavior in mobile ad hoc networks (ACMNew York, 2000), pp. 255–265. AM Shabut, KP Dahal, SK Bista, IU Awan, Recommendation based trust model with an effective defence scheme for MANETs. IEEE Trans. Mob. Comput. 14(10), 2101–2115 (2015). FS Proto, A Detti, C Pisa, G Bianchi, in IEEE International Conference on Communications (ICC). A framework for packet-droppers mitigation in OLSR wireless community networks (IEEEKyoto, 2011), pp. 1–6. JM Robert, H Otrok, A Chriqi, RBC-OLSR: Reputation-based clustering OLSR protocol for wireless ad hoc networks. Comput. Commun. 35(4), 487–499 (2012). D Zhang, CK Yeo, Distributed court system for intrusion detection in mobile ad hoc networks. Comput. Secur. 30(8), 555–570 (2011). SU Malik, SU Khan, Formal methods in LARGE-SCALE computing systems. ITNOW. 55(2), 52–53 (2013). T Issariyakul, E Hossain, Introduction to network simulator NS2 (Springer Science & Business Media, USA, 2011). S Tan, X Li, Q Dong, Trust based routing mechanism for securing OSLR-based MANET. Ad Hoc Netw. 30:, 84–98 (2015). 
EM Shakshuki, N Kang, TR Sheltami, EAACK—a secure intrusion-detection system for MANETs. IEEE Trans. Ind. Electron. 60(3), 1089–1098 (2013). K Liu, J Deng, PK Varshney, K Balakrishnan, An acknowledgment-based approach for the detection of routing misbehavior in MANETs. IEEE Trans. Mob. Comput. 6(5), 536–550 (2007). TR Sheltami, A Basabaa, EM Shakshuki, A3ACKs: adaptive three acknowledgments intrusion detection system for MANETs. J. Ambient Intell. Humanized Comput. 5(4), 611–620 (2014). P Gallagher, C Furlani, Digital signature standard (DSS). Federal Information Processing Standards Publications, volume FIPS (2013), 186–3 (2013). RL Rivest, A Shamir, L Adleman, A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM. 26(1), 96–99 (1983). S Bansal, M Baker, Observation-based cooperation enforcement in ad hoc networks. Research Report cs.NI/0307012, Stanford University, 120–130 (2003). X Li, Z Jia, P Zhang, R Zhang, H Wang, Trust-based on-demand multipath routing in mobile ad hoc networks. IET Inf. Secur. 4(4), 212–232 (2010). H Xia, Z Jia, X Li, L Ju, EH Sha, Trust prediction and trust-based source routing in mobile ad hoc networks. Ad Hoc Netw. 11(7), 2096–2114 (2013). A Adnane, C Bidan, RT de Sousa, Trust-based security for the OLSR routing protocol. Comput. Commun. 36(10), 1159–1171 (2013). A Adnane, in Proceedings of the 2008 ACM symposium on Applied computing. Autonomic trust reasoning enables misbehavior detection in OLSR (ACMNew York, 2008), pp. 2006–2013. D Kukreja, SK Dhurandher, BVR Reddy, Enhancing the Security of Dynamic Source Routing Protocol Using Energy Aware and Distributed Trust Mechanism in MANETs. Intelligent Distributed Computing (Springer International Publishing, Springer Switzerland, 2015). R Abdellaoui, J Robert, in 4th Conference on Security in Network Architectures and Information Systems (SAR-SSI). Su-olsr: A new solution to thwart attacks against the olsr protocol (Luchon, 2009), pp. 239–245. D Câmara, AA Loureiro, F Filali, in IEEE Global Telecommunications Conference (GLOBECOM'07). Methodology for formal verification of routing protocols for ad hoc wireless networks (IEEEWashington, 2007), pp. 705–709. SU Malik, SU Khan, SK Srinivasan, Modeling and analysis of state-of-the-art VM-based cloud management platforms. IEEE Trans. Cloud Comput. 1(1), 1–1 (2013). F Ghassemi, S Ahmadi, W Fokkink, A Movaghar, Model checking MANETs with arbitrary mobility, (2013). C C Barrett, A Stump, C Tinelli, The Satisfiability Modulo Theories Library (SMT-LIB), (2010). http://smtlib.cs.uiowa.edu/. Accessed 15 Jan 2016. L De Moura, N Bjørner, in Tools and Algorithms for the Construction and Analysis of Systems. Z3: An efficient SMT solver (SpringerBerlin Heidelberg, 2008), pp. 337–340. SU Malik, SK Srinivasan, SU Khan, L Wang, in 12th International Conference on Scalable Computing and Communications (ScalCom). A methodology for OSPF routing protocol verification (IEEEChangzhou, 2012). P Whigham, The VINT project, the network simulator - ns-2 (University of Otago, 2003). http://www.isi.edu/nsnam/ns/. Accessed 05 Jan 2016. J Broch, DA Maltz, DB Johnson, YC Hu, J Jetcheva, in Proceedings of the 4th annual ACM/IEEE international conference on Mobile computing and networking. A performance comparison of multi-hop wireless ad hoc network routing protocols (ACMNew York, 1998), pp. 85–97. MS Khan, D Midi, MI Khan, E Bertino, in IEEE Trustcom/BigDataSE/ISPA, Vol.1. 
Adaptive trust threshold strategy for misbehaving node detection and isolation (IEEE, Helsinki, 2015), pp. 718–725. Z Wei, H Tang, FR Yu, P Mason, in IEEE Military Communications Conference (MILCOM). Trust establishment based on Bayesian networks for threat mitigation in mobile ad hoc networks (IEEE, Perundurai, 2014), pp. 171–177. Y Lu, Y Zhong, B Bhargava, Packet Loss in Mobile Ad Hoc Networks (IEEE, Baltimore, 2003). Technical Report CSD-TR 03-009, Department of Computer Science, Purdue University (2003). http://docs.lib.purdue.edu/cstech/1558/. Retrieved 26 Dec 2015.
The work reported in this paper has been partially supported by the Higher Education Commission (HEC), Pakistan.
Department of Computer Sciences, COMSATS Institute of Information Technology, Islamabad, Pakistan: Muhammad Saleem Khan, Majid Iqbal Khan, Saif-Ur-Rehman Malik, Mukhtar Azim & Nadeem Javaid. Department of Computer Sciences, COMSATS Institute of Information Technology, Abbottabad, Pakistan: Osman Khalid.
Correspondence to Nadeem Javaid.
Khan, M., Khan, M., Malik, S. et al. MATF: a multi-attribute trust framework for MANETs. J Wireless Com Network 2016, 197 (2016). https://doi.org/10.1186/s13638-016-0691-4
Why unlike terms cannot be simplified?
"Why can $x^2+3xy^2+4xy+7x^2y$ not be simplified? Why can these terms not be simplified?" I would like an explanation that is understandable by 8th-grade students. The only proof I know (based on linear independence of powers of any variable) is not elementary at all. P.S. I don't use "simplification" as a mechanical and by-the-law procedure. The question is deeper. For example, "why is $x^2+3xy^2+4xy+7x^2y \neq x^3-3xy+9x^2y^3$?". Why can't we reduce the number of monomials? Behzad
I am not sure I understand your question, could you make it more explicit (with examples, context, etc.)? – Benoît Kloeckner Dec 15 '15 at 16:39
@MichaelE2 Well, when they asked so, I asked them back "Let's simplify it. What's your suggestion?" and disproved their suggestions by giving $x$ and $y$ appropriate values. I'm not looking for a proof. An explanation is enough. – Behzad Dec 15 '15 at 18:44
Part of the reason this is a hard question to respond to is because your counterexample seems unnecessarily complicated and arbitrary. What is the crux of the problem? Would you be satisfied with an explanation of why $4x + 2x^2$ can't be "simplified" to a single term? Is it important that there be two variables in the expression? If so, would you be satisfied with an explanation of why $4x + 3y$ can't be combined into a single term? Or do there need to be higher-degree combinations? In other words, can you delineate for what kinds of expressions you think students require an explanation? – mweiss Dec 15 '15 at 19:44
If such a simplification existed for some expression, you could always move all terms to one side of the equation, so your question is equivalent to: "Why is a non-zero polynomial never identical to zero?" – Dag Oskar Madsen Dec 16 '15 at 1:33
By the way I think this is a great question. Something that is taken for granted in school turns out not to be so obvious. – Dag Oskar Madsen Dec 16 '15 at 1:46
Since you want to negate an implicit universal quantifier, what you have to do is nothing other than show that there is a counterexample to any simplification equality one could come up with. Instead of a proof, at this stage in one's curriculum, giving a strong conviction based on a meaningful insight is probably the best possible expectation. The best idea I can come up with is to use one of the rigorous proofs to cook up a counterexample which shows the idea. For example, take the proof using the asymptotic behavior of monomials as $x$ and $y$ go to infinity. Why is $x^2+3xy^2+4xy+7x^2y \neq x^3-3xy+9x^2y^3$? Let's take $x=1$ and $y=1000$. Then clearly $x^2+3xy^2+4xy+7x^2y$ is about $3$ million, while $x^3-3xy+9x^2y^3$ is roughly $9$ billion. After a few cases, the idea appears and a smart kid may be able to construct the counterexamples him or herself, and see why such a simplification cannot happen. The point is that the proof of the independence of monomial functions is really simple; it is only quite cumbersome to write out precisely. As a side point, note that this proof relies on a somewhat subtle property of real numbers (their order), for a good reason: the statement fails in finite fields, even for one-variable polynomials.
If you don't know about finite fields, let's say they are sets of "numbers" with rules for addition and multiplication that have the same basic properties as for usual numbers, but with only finitely many numbers. The point of this side note is to stress that your desire to prove the non-simplification statement is really sane: it is by no means obvious, and cannot be proved just by applying the usual rules of computation (factorization, distributivity, etc.), which also hold in finite fields. As a side side note, I think this side note is a good example of why it is nice for a teacher to have a much higher education than the level at which one teaches: it helps in distinguishing subtle things from trivial ones and gives perspective.
Thanks. 1. In the side note, do you mean something like $2x+3y+5x=3y$ in $\mathbb{Z}_5$ or have a more complicated example in mind? 2. While proving independence of monomials, isn't it necessary to use the fact that every polynomial of one variable has finitely many roots? – Behzad Dec 15 '15 at 21:51
@Behzad An example would be $x^5=x$ (as functions) in $\mathbb{F}_5$. – Dag Oskar Madsen Dec 16 '15 at 1:38
@Behzad wrt 2. I do not see a necessity for this. When there is only one variable $x$, taking $x$ large enough shows independence; if there are two variables $x,y$, then for each value of $y$, taking $x$ large enough reduces independence to the vanishing of polynomials in $y$ whose coefficients are coefficients from the original linear combination, and that's it. – Benoît Kloeckner Dec 16 '15 at 15:38
I start by asking them how they would order wallpaper or floor tile. "Square Feet". Then we talk about the third dimension, height. This gives us volume as opposed to just area. I take a step back and ask what the distance is to school. Are its units 'square feet'? Of course not. It's measured in feet or miles. I then ask how they'd add 12 feet to 9 square feet. At that point they start to see the question is absurd. My 2000 sq ft house and the 2000 ft I walk to the end of my street can't be combined in any way. But if your house is 2500 sq ft, I can add them, and say that combined, we'd have 4500 sq ft of living space. It's then a small move to explain how $x$, $x^2$, and $x^3$ represent length, area, and volume.
Let's say we have this expression: $$2 + 3 + x + 2x + 2x^2 + 2x^2$$ First of all, we know that we can combine $2 + 3$ as $5$, but let's take a closer look. We're going to break the $2$ and the $3$ up into smaller pieces. To make things easy, let's do this so that all of the pieces that we wind up with are exactly the same size: $1$ is an easy number to work with, so that's the size that our pieces should be. The $2$ breaks up into $$1 + 1$$ and the $3$ breaks up into $$1 + 1 + 1$$ If we put these groups of pieces together and get ready to add them, we have this: $$1 + 1 + 1 + 1 + 1$$ I count five $1$'s, which means that our $2 + 3$, which became $1+1+1+1+1$, is equal to $5$. If you take things down to the most basic level, this is how addition works. You break things up into units and then you count up the number of those units that you have. Now let's do the same thing with $x + 2x$, but this time our smallest piece will be $x$. $$x + 2x$$ $$x + x + x$$ I count three $x$'s, and we write that result as $3x$. Next up is our $2x^2 + 2x^2$. Our pieces are going to be $x^2$ for this part. $$2x^2 + 2x^2$$ $$x^2 + x^2 + x^2 + x^2$$ I count four $x^2$'s, and we write that result as $4x^2$.
Let's put our parts together... $$5 + 3x + 4x^2$$ Can we combine, or add together, $3x$ and $4x^2$? In order for it to work, we'd need to be able to break each of them up into some identical parts, or units, and then count how many units that we have. We've seen that we can break the second thing up into $x$'s. We've seen that we can break the third thing up into $x^2$'s. Unfortunately, these units don't match. Maybe we can keep breaking things up until we get units that match. I can break $x$ up into $\frac 12 x + \frac 12 x$ and I can break $x^2$ up into $\frac 12 x^2 + \frac 12 x^2$ but we're still not going to find a match. We could break these things up in to any size parts that we want, meaning that we could choose any unit size that we want to break up these terms in to, but we'll never wind up with a unit from $3x$ that matches a unit from $4x^2$. If we can't do that, we can't add them together. JasonJason $\begingroup$ There must be something more going on, since the statement is false in finite fields. See Benoît Kloeckner's answer. $\endgroup$ – Dag Oskar Madsen Dec 16 '15 at 12:38 $\begingroup$ Finite fields? I thought OP wanted to help frustrated eight graders with basic algebra. Most of these answers seem like they belong in math.stackexchange, not matheducators.stackexchange. $\endgroup$ – Jason Dec 16 '15 at 14:02 $\begingroup$ I am only saying that some of the simplest arguments cannot be logically sound since they are contradicted in other (admittedly more abstract) mathematical settings. $\endgroup$ – Dag Oskar Madsen Dec 16 '15 at 14:33 $\begingroup$ The reasoning proposed here is not invalidated by finite fields, as it uses division by integers which may be zero in a given finite field. However, this answer really assume that $x$ and $x^2$ are fundamentally not interchangeable, which is basically what is asked to be explained. Also, the "breaking into parts" point of view may provoke misconceptions, as not all numbers are rationals. If the kid has been exposed to non-integer square roots, or is to be exposed to them, it may conflict with what he or she will be or has been told. $\endgroup$ – Benoît Kloeckner Dec 16 '15 at 15:44 $\begingroup$ To make my criticism of this answer clearer: a good answer should make clear the difference between $x+x^2$ not simplifying, and $\sin^2 x + \cos^2 x$ simplifying to $1$. Short of this, the core point has not been explained. $\endgroup$ – Benoît Kloeckner Dec 16 '15 at 15:49 We can "simplify" the sum of two or more terms, combining them into single term, if they are each numerical multiples of the same algebraic expression. We can simplify $2x^2y + 5x^2y$ because $2x^2y$ and $5x^2y$ are numerical multiples of the same algebraic expression, namely $x^2y$. For any given combination of values for $x$ and $y$, you will always get the same result for $x^2y$. If $x=2$ and $y=3$, you will always get $x^2y= 12$. If $x=3$ and $y=7$, we will always get $x^2y=63$. So, for whatever values $x$ and $y$ may have, $x^2y$ will be a common factor in the expression $2x^2y + 5x^2y$. Therefore, we can write: $2x^2y + 5x^2y=x^2y(2+5)=(2+5)x^2y= 7x^2y$ We cannot simplify $2x^2y + 5x^3y$ because $2x^2y$ and $5x^3y$ are not numerical multiples of the same algebraic expression. Notice that the exponents are different. If $x=2$ and $y=1$, we have different values for $x^2y$ and $x^3y$, namely $x^2y=4$ and $x^3y=8$. So, there is no common factor in the expression $2x^2y + 5x^3y$. $\begingroup$ Thanks, but that's not the point! 
The question is not the law or mechanical procedure of simplification. $\endgroup$ – Behzad Dec 15 '15 at 18:53 $\begingroup$ Do they understand common factors? In expressions that can be simplified in this way, the common algebraic expression is like a common factor. You would be using the distribution rule in reverse: $ax + bx = x(a+b)$. If $a$ and $b$ are numbers, we can calculate their sum, say $c$. Then we would have $ax+bx=cx$. In the example you gave, the algebraic expressions are all different, i.e. they need not always have the same value. $\endgroup$ – user6104 Dec 15 '15 at 19:06 $\begingroup$ I understand and appreciate your answer, but that does not solve the problem. Please read the "P.S." in the post. $\endgroup$ – Behzad Dec 15 '15 at 19:15 $\begingroup$ Expanding on your counter-example idea then, when we say that $x^2+3xy^2+4xy+7x^2y = x^3-3xy+9x^2y^3$, we mean that this is true for every combination of $x$ and $y$. It is true for $x=0$ and $y=0$, but it is false for $x=2$ and $y=0$. So, this equation is not always true. Then explain that identical algebraic expressions in $x$ and $y$ will always give the same value for every combination of $x$ and $y$, and can therefore be used as a common factor. $\endgroup$ – user6104 Dec 15 '15 at 19:54 $\begingroup$ See edit to my answer. $\endgroup$ – user6104 Dec 15 '15 at 20:05 The "high level" answer is that a polynomial algebra is a free commutative algebra generated by the indeterminates. But this would probably not be a very satisfying answer to 8th graders. A possible discussion starts with the cliche that "you cannot add apples and oranges." And you cannot "add lengths and weights." But you can multiply them! After all, we do use "foot$*$pound" as a unit in physics (at least in the US), and so we could just as well talk about "apple$*$orange" if we wanted to. So we can combine $x$ and $y$ into a single unit $x*y=xy$. But $x+y$ is just kind of stuck out there with no place to go. The main reason why you can or can't do things to algebraic expressions is because you can or can't do those things with numerical expressions. The rules for manipulating algebraic expressions are exactly the same rules for manipulating numeric expressions, with some extra restrictions introduced by the fact that you don't know what some of the numbers are (or that they represent all possible numbers). I suggest talking through the ways that you can rearrange expressions made entirely of numbers without changing their values. A good start is to talk through different strategies to calculate expressions where "like terms" will make it easier to do the calculation, such as $3 + 2\times3 + 3\times7$. Then you can give students an expression and ask them to rearrange it so that the "answer" is the same. Get many different students to talk through their rearrangement and why it is the same. After that, give them several different expressions which are similar and ask them to decide if they have the same answer without actually calculating the final result. Then you could take the same activities above and replace copies of the same number with a letter and ask the same question. This might help to explain what you can and can't do with algebraic expressions, including the inability to add unlike terms. DavidButlerUofADavidButlerUofA I am going to disagree (slightly) with Benoit Kloeckner's observation that the OP's implicit claim is false in finite fields. 
If we are going to allow ourselves the sophistication to look at fields other than $\mathbb{R}$, we should also oblige ourselves to distinguish between a polynomial and a polynomial function. Given any ring $R$, a polynomial over $R$ is an element of the ring $R[x]$. A polynomial can be written uniquely as a finite sum of terms of the form $a_n x^n$, where $a_n \in R$ and $x$ is a formal variable. A polynomial function is what you get when you interpret a polynomial as a function on $R$ in the obvious way. A polynomial is an expression, but a polynomial function is (at the formal level) a set of ordered pairs. Every polynomial naturally induces a polynomial function, but they remain different kinds of objects. When we talk about "simplifying polynomials" we are referring to operations in $R[x]$. For example, the fact that $(x-1)(x+1)(x+2)$ can be simplified to $x^3+2x^2-x-2$ is a consequence of the way the ring operations are defined in $R[x]$ (assuming $R$ is $\mathbb{Z}$, $\mathbb{R}$, or some other ring in which the symbols $1$ and $2$ have a natural interpretation). The fact that $x^3+2x^2-x-2$ cannot be further reduced to an expression with fewer terms is also true in $R[x]$ for any such $R$. Now, it turns out (surprisingly!) that if $R=\mathbb{Z}/(3)$ this polynomial induces the exact same function as does $2x^2-2$. But (and here is where my disagreement with Benoit lies) despite the fact that $x^3+2x^2-x-2$ and $2x^2-2$ are identical when considered as functions over $\mathbb{Z}/(3)$, they are still different polynomials. They have different degrees, different numbers of terms, and different leading coefficients, and they generate different ideals. So while $t^3+2t^2-t-2 = 2t^2-2$ may be true for all $t \in \mathbb{Z}/(3)$, I would not describe the replacement of the left-hand side by the right-hand side as "simplification". Having gone through all of these preliminaries, let's go back to the OP's original question, which we can now tease apart into two different questions: Why is it that "unlike terms" in a polynomial can't be combined into a single term? Why is it that, when working over the real numbers, two different polynomials never induce the same function? To answer the first question, I would turn it back onto the asker (or his students): Why do you think they should be combined? What do you think they should be combined into? The fact that like terms (e.g. $5x^2y + 3x^2y$) can be combined (continuing the example, into $8x^2y$) depends on the distributive property, but the OP does not want an explanation that has to do with formal properties. So that means we first need an informal explanation for why you can combine like terms; then we can try to see why the same reasoning does not work for unlike terms. Informally, "combining like terms" (as in our example $5x^2y + 3x^2y = 8x^2y$) can be explained by noting that five things plus three things equals eight things, regardless of what the 'things' are. Five bags of 100 marbles, plus three bags of 100 marbles, add up to eight bags of 100 marbles. Five 12-packs of bottled water plus three 12-packs of bottled water add up to eight 12-packs of bottled water. In this case, the things are expressions of the form $x^2y$. If you have five of them and three of them, then you have eight of them. But if you have unlike things -- say, five bags of 100 marbles and three 12-packs of bottled water -- what can you combine them into? There are eight "somethings", but the "somethings" cannot be given a simple name.
Likewise, if you want to combine $5x^2y+3xy^2$, there may be a strong temptation to want to say that there are 8 "somethings", but how can you say what the "somethings" are? That, at least, is how I would informally address the formal question of why unlike terms in a polynomial cannot be combined. Which brings us to the second question: Why is it that two different polynomials over the reals never induce the same function? As has been noted, this is a rather special property of the reals. It is false when working over any finite ring; it is true when working over any infinite ring that is also an integral domain. The most general conditions under which this property holds are given in the answers to this question on MathOverflow. mweissmweiss Thank you for clarifying that by "simplify", you mean "reduce the number of mononomials". It seems that you are defining "like terms" to have enough in common that you could use the distributive property to combine them into a single mononomial. Thus, the answer is obvious: "Unlike terms" do not have enough in common to use the distributive property to factor out all the variables, and combine the constants into a single constant expression. Thus, you cannot combine two unlike terms into a single mononomial. JasperJasper $\begingroup$ No, that's not the point. I don't use "simplification" as a mechanical and by-the-law process. The question is "Why $x^2+3xy^2+4xy+7x^2y \neq x^3-3xy+9x^2y^3$?" (for example!). Why can't we do "anything" to this polynomial? $\endgroup$ – Behzad Dec 15 '15 at 18:49 To the "Why is $...\neq ...$": just find a counter example. The simplification you are referring to seems to be the distibutive law " backwards". Do your students know this already? Without this law, you probably have to go with the mechanical approach. $\begingroup$ Note that the non-simplification cannot be reduced to the distributive law and its companion (see the side note in my answer). $\endgroup$ – Benoît Kloeckner Dec 15 '15 at 21:08 Suppose $x$ and $y$ represent lengths, then terms can represent geometric shapes and explain why unlike terms cannot be added together. What do you get when you add a square and two more identical squares? Three squares. So $x^2+2x^2=3x^2$ What do you get when you add a square and a rectangle? A Shape that cannot necessarily be defined in terms of either the original shapes. And so $x^2+xy$ cannot be simplified into one term. Although in this case it could make a new rectangle with length $x$ and width $x+y$, so $x(x+y)$. What do you get when you add a square ($x^2$) plus three rectangular prisms ($xy^2$) plus four rectangles ($xy$) plus 7 different rectangular prisms ($x^2y$)? Pieter RousseauPieter Rousseau James Brennan explains Simplifying Algebraic Expressions in the following way: By "simplifying" an algebraic expression, we mean writing it in the most compact or efficient manner, without changing the value of the expression. This mainly involves collecting like terms, which means that we add together anything that can be added together. The rule here is that only like terms can be added together. Like terms are those terms which contain the same powers of same variables. They can have different coefficients, but that is the only difference. Examples: $3x$, $x$, and $–2x$ are like terms. $2x^2$, $–5x^2$, and are like terms. $xy^2$, $3y^2 x$, and $3xy^2$ are like terms. $xy^2$ and $x^2 y$ are NOT like terms, because the same variable is not raised to the same power. 
Combining like terms
Combining like terms is permitted because of the distributive law. For example, $3x^2 + 5x^2 = (3 + 5)x^2 = 8x^2$. What happened here is that the distributive law was used in reverse: we "undistributed" a common factor of $x^2$ from each term. The way to think about this operation is that if you have three x-squareds, and then you get five more x-squareds, you will then have eight x-squareds. Example: $x^2 + 2x + 3x^2 + 2 + 4x + 7$. Starting with the highest power of $x$, we see that there are four x-squareds in all $(1x^2 + 3x^2)$. Then we collect the first powers of $x$, and see that there are six of them $(2x + 4x)$. The only thing left is the constants $2 + 7 = 9$. Putting this all together we get $x^2 + 2x + 3x^2 + 2 + 4x + 7 = 4x^2 + 6x + 9$. varunk
This entire answer is plagiarised from here. – ArtOfCode Dec 24 '15 at 12:53
While it can be fine and helpful to reproduce content that exists elsewhere on the internet here, one must make sure to attribute it and should also make an effort to see if this reproduction is allowed. Please do not contribute unattributed content; it can cause problems in various ways. – quid♦ Dec 24 '15 at 13:32
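Several answers in this thread rely on the same move: find one pair of values at which the two expressions disagree. The short sketch below automates that check for the expressions from the original question; the brute-force scan over small integer points is an illustrative choice of ours, not something proposed by any answerer.

```python
# Small sketch: show two polynomial expressions are NOT identical by finding
# a pair (x, y) where they evaluate differently (the counterexample strategy
# discussed in the thread). The expressions are those from the question.

def lhs(x, y):
    return x**2 + 3*x*y**2 + 4*x*y + 7*x**2*y

def rhs(x, y):
    return x**3 - 3*x*y + 9*x**2*y**3

def find_counterexample(f, g, points):
    for x, y in points:
        if f(x, y) != g(x, y):
            return (x, y, f(x, y), g(x, y))
    return None  # no difference found on the sampled points

sample = [(x, y) for x in range(-3, 4) for y in range(-3, 4)]
print(find_counterexample(lhs, rhs, sample))
# Any single differing point proves the two expressions are not equal as
# polynomial functions over the reals, so no "simplification" of one into
# the other can exist.
```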
If $\limsup\limits_{n\rightarrow \infty} a_n=a< \infty$, then $\forall\epsilon>0$ $\exists N\in \mathbb{N}:a_n\leq a+\epsilon$
If $\limsup\limits_{n\rightarrow \infty} a_n=a< \infty$, then for every $\varepsilon >0$ there exists an $N\in \mathbb{N}$ such that $a_n\leq a+\varepsilon$ for all $n\in \mathbb{N}$, $n\geq N$.
My attempt: Suppose there are infinitely many elements $a_{n_1},a_{n_2}, a_{n_3},...$ with $a_{n_k}\geq a$. The sequence $(a_{n_k})_k$ is bounded above, otherwise $(a_n)_n$ wouldn't be bounded and $\limsup\limits_{n\rightarrow \infty} a_n=+\infty$. Hence $(a_{n_k})_k$ has, according to Weierstraß and Bolzano, a convergent subsequence $(a_{n_{k_j}})$ with a limit $\geq a$, since $a_{n_{k_j}}\geq a$ for every $j\in \mathbb{N}$.
real-analysis sequences-and-series proof-verification limsup-and-liminf ParabolicAlcoholic
How do you define $\limsup_na_n$? – José Carlos Santos Jun 2 at 12:03
Like that: $\limsup\limits_{n\rightarrow \infty}a_n:=\begin{cases}\sup H, \text{ if } (a_n)_n \text{ is bounded above} \\ \infty, \text{ else}\end{cases}$. $H$ is the set of limit points. – ParabolicAlcoholic Jun 2 at 12:06
You don't need to go as far as using the Bolzano-Weierstraß theorem. We can prove your statement directly. Recall that $\limsup\limits_{n\rightarrow \infty} a_n = \lim\limits_{n\to\infty} \sup\limits_{k\geq n} a_k$. Let us fix $\epsilon > 0$. By definition of the limit, we know that there is some $N\in \mathbb N$ such that $\sup\limits_{k\geq N} a_k < a+\epsilon$. Thus, by definition of the supremum, we conclude that $a_k < a + \epsilon$ for every $k\geq N$, which is the conclusion you wanted. Suzet
Hi, thank you so much! :D I know it's kind of inappropriate to ask this here. But do you have any idea how to solve this: math.stackexchange.com/questions/3247389/… – ParabolicAlcoholic Jun 2 at 12:08
Suppose otherwise. That is, suppose that, for each $N\in\mathbb N$, there is an $n\in\mathbb N$ such that $n\geqslant N$ and $a_n\geqslant a+\varepsilon$. So, there is a sequence $(a_{n_k})_{k\in\mathbb N}$ such that $(\forall k\in\mathbb N):a_{n_k}\geqslant a+\varepsilon$. And, since $(a_n)_{n\in\mathbb N}$ is bounded, $(a_{n_k})_{k\in\mathbb N}$ is bounded too. So, it has a convergent subsequence, by the Bolzano-Weierstrass theorem. The limit of this subsequence must be greater than or equal to $a+\varepsilon$, but this is impossible, since, by definition, $a$ is the supremum of the set of limit points. José Carlos Santos
Thanks a million! :) – ParabolicAlcoholic Jun 2 at 12:22
I know it's kind of inappropriate to ask this here. But do you have any idea how to solve this: math.stackexchange.com/questions/3247389/… ? – ParabolicAlcoholic Jun 2 at 12:31
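As a purely numerical illustration of the statement being proved (not a replacement for either argument above), the following sketch checks, for one arbitrarily chosen sample sequence with $\limsup_{n\to\infty} a_n = 1$, that beyond some index every term stays below $a+\varepsilon$; the particular sequence and the finite truncation are assumptions made only for this demo.

```python
# Numerical illustration of the result: if limsup a_n = a, then for every
# eps > 0 the terms a_n eventually satisfy a_n <= a + eps.
# Example sequence (an arbitrary choice): a_n = (-1)^n + 1/n, whose limsup is a = 1.

def a(n):
    return (-1) ** n + 1 / n

N_MAX = 10_000
eps = 0.01
a_limsup = 1.0

def tail_sup(n):
    # sup_{k >= n} a_k, approximated over a finite tail of the sequence.
    return max(a(k) for k in range(n, N_MAX + 1))

first_good = next(n for n in range(1, N_MAX)
                  if all(a(k) <= a_limsup + eps for k in range(n, N_MAX + 1)))
print(first_good, tail_sup(first_good))
# Every term from index `first_good` onwards lies below a + eps,
# matching the conclusion of the theorem for this sample sequence.
```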
VOL. 10 · NO. 1 | March, 1982 Ann. Statist. 10 (1), (March, 1982) William Gemmell Cochran 1909-1980 G. S. Watson Ann. Statist. 10 (1), 1-10, (March, 1982) DOI: 10.1214/aos/1176345687 Special Invited Paper A Review of Selected Topics in Multivariate Probability Inequalities Morris L. Eaton Ann. Statist. 10 (1), 11-43, (March, 1982) DOI: 10.1214/aos/1176345688 KEYWORDS: multivariate probability inequalities, partial orderings, FKG inequality, associated random variables, majorization, reflection groups, elliptically contoured distributions, 62H99, 62E99 This paper contains a review of certain multivariate probability inequalities. The inequalities discussed include the FKG inequalities and the related association inequalities, inequalities resulting from Schur convexity and its extension to reflection groups, and inequalities for probabilities of certain convex symmetric sets. Asymptotic Lognormality of $P$-Values Diane Lambert, W. J. Hall KEYWORDS: $p$-value, Bahadur efficiency, slope, one-sample tests, two-sample tests, 62F20, 62G20 Sufficient conditions for asymptotic lognormality of exact and approximate, unconditional and conditional $P$-values are established. It is pointed out that the mean, which is half the Bahadur slope, and the standard deviation of the asymptotic distribution of the log transformed $P$-value together, but not the mean alone, permit approximation of both the level and power of the test. This provides a method of discriminating between tests that have Bahadur efficiency one. The asymptotic distributions of the log transformed $P$-values of the common one- and two-sample tests for location are derived and compared. Natural Exponential Families with Quadratic Variance Functions Carl N. Morris KEYWORDS: exponential families, natural exponential families, quadratic variance function, normal distribution, Poisson distribution, gamma distribution, exponential distribution, Binomial distribution, negative binomial distribution, geometric distribution, hyperbolic secant distribution, orthogonal polynomials, moments, Cumulants, large deviations, Infinite divisibility, limits in distribution, variance function, 60E05, 60E07, 60F10, 62E10, 62E15, 62E30 The normal, Poisson, gamma, binomial, and negative binomial distributions are univariate natural exponential families with quadratic variance functions (the variance is at most a quadratic function of the mean). Only one other such family exists. Much theory is unified for these six natural exponential families by appeal to their quadratic variance property, including infinite divisibility, cumulants, orthogonal polynomials, large deviations, and limits in distribution. Selecting a Minimax Estimator of a Multivariate Normal Mean James O. Berger KEYWORDS: minimax, normal mean, quadratic loss, risk function, prior information, Bayes risk, 62C99, 62F15, 62F10, 62H99 The problem of estimating a $p$-variate normal mean under arbitrary quadratic loss when $p \geq 3$ is considered. Any estimator having uniformly smaller risk than the maximum likelihood estimator $\delta^0$ will have significantly smaller risk only in a fairly small region of the parameter space. A relatively simple minimax estimator is developed which allows the user to select the region in which significant improvement over $\delta^0$ is to be achieved. Since the desired region of improvement should probably be chosen to coincide with prior beliefs concerning the whereabouts of the normal mean, the estimator is also analyzed from a Bayesian viewpoint. 
Simultaneous Estimation of Several Poisson Parameters Under $K$-Normalized Squared Error Loss Kam-Wah Tsui, S. James Press Ann. Statist. 10 (1), 93-100, (March, 1982) DOI: 10.1214/aos/1176345692 KEYWORDS: Minimax estimator, Poisson distributions, simultaneous estimation, unbiased risk deterioration estimate, 62C15, 62F10, 62C25 In this study, we consider the simultaneous estimation of the parameters of the distributions of $p$ independent Poisson random variables using the loss function $L_k(\lambda, \hat{\lambda}) = \sum (\lambda_i - \hat{\lambda}_i)^2/\lambda^k_i$ for a given positive integer $k$. New estimators are derived, which include the minimax estimators proposed by Clevenson and Zidek (1975), as special cases. The case when more than one observation is taken from some of the variables is considered. Piecewise Exponential Models for Survival Data with Covariates Ann. Statist. 10 (1), 101-113, (March, 1982) DOI: 10.1214/aos/1176345693 KEYWORDS: Asymptotic theory, Censored data, Log-linear model, maximum likelihood estimation, piecewise exponential model, survival data, 62E20, 62F10 A general class of models for analysis of censored survival data with covariates is considered. If $n$ individuals are observed over a time period divided into $I(n)$ intervals, it is assumed that $\lambda_j(t)$, the hazard rate function of the time to failure of the individual $j$, is constant and equal to $\lambda_{ij} > 0$ on the $i$th interval, and that the vector $\ell = \{\log \lambda_{ij}: j = 1, \ldots, n; i = 1, \ldots, I(n)\}$ lies in a linear subspace. The maximum likelihood estimate $\hat{\ell}$ of $\ell$ provides a simultaneous estimate of the underlying hazard rate function, and of the effects of the covariates. Maximum likelihood equations and conditions for existence of $\hat{\ell}$ are given. The asymptotic properties of linear functionals of $\hat{\ell}$ are studied in the general case where the true hazard rate function $\lambda_0(t)$ is not a step function, and $I(n)$ increases without bound as the maximum interval length decreases. In comparison with recent work on regression analysis of survival data, the asymptotic results are obtained under more relaxed conditions on the regression variables. Diagnostic Tests for Multiple Time Series Models D. S. Poskitt, A. R. Tremayne KEYWORDS: Multiple autoregressive-moving average models, score test, local alternatives, portmanteau statistics, equivalences, 62M10, 62F05 This paper is concerned with the development and application of diagnostic checks for vector linear time series models. A hypothesis testing procedure based upon the score, or Lagrangean multiplier, principle is advocated and the distributions of the test statistic both under the null hypothesis and under a Pitman sequence of alternatives are discussed. Consideration of alternative models with singular sensitivity matrices when the null hypothesis is true leads to an interpretation of the score test as a pure significance test and to a notion of an equivalence class of local alternatives. Portmanteau tests of model adequacy are also investigated and are seen to be equivalent to score tests. The Evaluation of Certain Quadratic Forms Occurring in Autoregressive Model Fitting R. J. 
Bhansali KEYWORDS: stationary process, inverse of covariance matrix, convergence of a sequence of matrices, autoregressive model fitting, inverse covariance function, Moving average process, 62M20, 60G10 Let $\mathbf{R}$ be an infinite dimensional stationary covariance matrix, let $\mathbf{R}(k)$ and $\mathbf{W}(k)$ denote the top $k \times k$ left hand corners of $\mathbf{R}$ and $\mathbf{R}^{-1}$ respectively and let $\mathbf{\Sigma}(k)$ and $\mathbf{\Gamma}(k)$ denote the approximations for $\mathbf{R}(k)^{-1}$ suggested by Whittle (1951) and Shaman (1976) respectively. We consider quadratic forms of the type $Q(k) = \beta(k)' \mathbf{R}(k)^{-1}\alpha (k)$, when the vectors $\beta(k)$ and $\alpha(k)$ constitute the first $k$ elements of the infinite absolutely summable sequences $\{\beta_j\}$ and $\{\alpha_j\}$. If $\chi_1(k) = \beta (k)' \mathbf{W}(k) \mathbf{\alpha}(k)$ and $\chi_2(k) = \beta (k)' \mathbf{\Sigma(k)}\mathbf{\alpha}(k)$, then, as $k \rightarrow \infty, Q(k)$ and $\chi_1(k)$ converge to the same limiting value for all such $\alpha (k)$ and $\beta(k)$, but $\chi_2(k)$ does not necessarily do so. Further, if $\tilde\mathbf{\alpha}(k) = (\alpha_k, \cdots, \alpha_1)'$ and $\tilde\mathbf{\beta}(k) = (\beta_k, \cdots, \beta_1)'$ then $\chi_1(k) \equiv \tilde\mathbf{\beta}(k)'\mathbf{\Gamma}(k)\tilde\mathbf{\alpha}(k)$. We discuss the use of $\mathbf{W}(k)$ for evaluating the asymptotic covariance structure of the autoregressive estimates of the inverse covariance function and the moving average parameters. A Central Limit Theorem for Stationary Processes and the Parameter Estimation of Linear Processes Yuzo Hosoya, Masanobu Taniguchi KEYWORDS: Stationary processes, central limit theorem, linear processes, Spectral density, periodogram, Gaussian maximum likelihood estimate, robustness, autoregressive signal with white noise, Newton-Raphson iteration, 60F15, 62M15, 60G10, 60G35 A central limit theorem is proved for the sample covariances of a linear process. The sufficient conditions for the theorem are described by more natural ones than usual. We apply this theorem to the parameter estimation of a fitted spectral model, which does not necessarily include the true spectral density of the linear process. We also deal with estimation problems for an autoregressive signal plus white noise. A general result is given for efficiency of Newton-Raphson iterations of the likelihood equation. Least Squares Estimates in Stochastic Regression Models with Applications to Identification and Control of Dynamic Systems Tze Leung Lai, Ching Zong Wei KEYWORDS: Stochastic regressors, least squares, system identification, adaptive control, dynamic models, strong consistency, asymptotic normality, Martingales, 62J05, 62M10, 60F15, 60G45, 93B30, 93C40 Strong consistency and asymptotic normality of least squares estimates in stochastic regression models are established under certain weak assumptions on the stochastic regressors and errors. We discuss applications of these results to interval estimation of the regression parameters and to recursive on-line identification and control schemes for linear dynamic systems. Sequential Estimation Through Estimating Equations in the Nuisance Parameter Case Pedro E. Ferreira KEYWORDS: Estimating equation, stopping rule, nuisance parameter, 62F10, 62L12 Let $(X_1, X_2, \cdots)$ be a sequence of random variables and let the p.d.f. of $\mathbf{X}_n = (X_1, \cdots, X_n)$ be $p(\mathbf{x}_n, \theta)$, where $\theta = (\theta_1, \theta_2)$. 
An estimating equation rule for $\theta_1$ is a sequence of functions $g(x_1, \theta_1), g(x_1, x_2, \theta_1), \cdots$. If the random sample size $N = n$, we estimate $\theta_1$ through the estimating equation $g(\mathbf{X}_n, \theta_1) = 0$. In this paper, optimum estimation rules are obtained and, in particular, sufficient conditions for the optimality of the maximum conditional likelihood estimation rule are given. In addition, Bhapkar's concept of information in an estimating equation is used to discuss stopping criteria. Contamination Distributions Michael Goldstein KEYWORDS: contamination, Outliers, robust estimation, Bayesian estimation, 62F15, 62G35 A simple class of models for possibly contaminated data is considered, for which the effect of large observations on our beliefs and on our procedures is small. Various properties are derived, and the effects of differing prior opinions are considered. De Finetti's Theorem for Symmetric Location Families David Freedman, Persi Diaconis KEYWORDS: exchangeability, robustness, characteristic functions, 62C10, 62E10 Necessary and sufficient conditions are obtained for an exchangeable sequence of random variables to be a mixture of symmetric location families. Confidence Intervals for the Coverage of Low Coverage Samples Warren W. Esty KEYWORDS: coverage, occupancy problem, unobserved species, total probability, 62G15 The coverage of a random sample from a multinomial population is defined to be the sum of the probabilities of the observed classes. The problem is to estimate the coverage of a random sample given only the number of classes observed exactly once, twice, etc. This problem is related to the problem of estimating the number of classes in the population. Non-parametric confidence intervals are given when the coverage is low such that a Poisson approximation holds. These intervals are related to a coverage estimator of Good (1953). Nonparametric Interval and Point Prediction Using Data Trimmed by a Grubbs-Type Outlier Rule Ronald W. Butler KEYWORDS: prediction, tolerance intervals, nonparametric prediction, Outliers, 62G15, 62G35, 62G05, 62M20 For a fixed probability $0 < \gamma < 1$, the "most outlying" $100(1 - \gamma){\tt\%}$ subset of the data from a location model may be located with a Grubbs outlier subset test statistic. This subset is essentially located in terms of its complement, which is the connected $100\gamma{\tt\%}$ span of the data which supports the smallest sample variance. We show that this range of the data may be characterized approximately as the $100\gamma{\tt\%}$ span such that its midpoint is equal to the trimmed mean averaged over the span. Such a range forms a tolerance interval for predicting a future observation from the location model, and the asymptotic laws for its location, coverage, and center are presented. Qualitative Robustness of Rank Tests Helmut Rieder KEYWORDS: Equicontinuity of power functions and laws, Prokhorov, Kolmogorov, Levy, total variation distances, gross errors, breakdown points of tests and test statistics, consistency of tests and tests statistics, one-sample rank statistics, laws of large numbers for rank statistics, 62G35, 62E20, 62G10 An asymptotic notion of robust tests is studied which is based on the requirement of equicontinuous error probabilities. If the test statistics are consistent, their robustness in Hampel's sense and robustness of the associated tests turn out to be equivalent. Uniform extensions are considered. Moreover, test breakdown points are defined. 
The main applications are on rank statistics: they are generally robust, under a slight condition even uniformly so; their points of final breakdown coincide with the breakdown points of the corresponding $R$ - estimators. Estimated Sampling Distributions: The Bootstrap and Competitors KEYWORDS: sampling distribution, bootstrap estimates, jackknife, asymptotic minimax, Edgeworth, 62G05, 62E20 Let $X_1, X_2, \cdots, X_n$ be i.i.d random variables with d.f. $F$. Suppose the $\{\hat{T}_n = \hat{T}_n(X_1, X_2, \cdots, X_n); n \geq 1\}$ are real-valued statistics and the $\{T_n(F); n \geq 1\}$ are centering functionals such that the asymptotic distribution of $n^{1/2}\{\hat{T}_n - T_n(F)\}$ is normal with mean zero. Let $H_n(x, F)$ be the exact d.f. of $n^{1/2}\{\hat{T}_n - T_n(F)\}$. The problem is to estimate $H_n(x, F)$ or functionals of $H_n(x, F)$. Under regularity assumptions, it is shown that the bootstrap estimate $H_n(x, \hat{F}_n)$, where $\hat{F}_n$ is the sample d.f., is asymptotically minimax; the loss function is any bounded monotone increasing function of a certain norm on the scaled difference $n^{1/2}\{H_n(x, \hat{F}_n) - H_n(x, F)\}$. The estimated first-order Edgeworth expansion of $H_n(x, F)$ is also asymptotically minimax and is equivalent to $H_n(x, \hat{F}_n)$ up to terms of order $n^{- 1/2}$. On the other hand, the straightforward normal approximation with estimated variance is usually not asymptotically minimax, because of bias. The results for estimating functionals of $H_n(x, F)$ are similar, with one notable difference: the analysis for functionals with skew-symmetric influence curve, such as the mean of $H_n(x, F)$, involves second-order Edgeworth expansions and rate of convergence $n^{-1}$. Binary Experiments, Minimax Tests and 2-Alternating Capacities Tadeusz Bednarski KEYWORDS: Robust testing, minimax testing, Capacities, binary experiments, 62G35, 62B15 The concept of Choquet's 2-alternating capacity is explored from the viewpoint of Le Cam's experiment theory. It is shown that there always exists a least informative binary experiment for two sets of probability measures generated by 2-alternating capacities. This result easily implies the Neyman-Pearson lemma for capacities. Moreover, its proof gives a new method of construction of minimax tests for problems in which hypotheses are generated by 2-alternating capacities. It is also proved that the existence of least informative binary experiments is sufficient for a set of probability measures to be generated by a 2-alternating capacity. This gives a new characterization of 2-alternating capacities, closely related to that of Huber and Strassen. On the Limiting Distribution of and Critical Values for the Multivariate Cramer-Von Mises Statistic Derek S. Cotterill, Miklos Csorgo KEYWORDS: Multivariate Cramer-von Mises statistic, Invariance principles, 62H10, 62H15, 60G15 Let $Y_1, Y_2, \cdots, Y_n (n = 1, 2, \cdots)$ be independent random variables (r.v.'s) uniformly distributed over the $d$-dimensional unit cube, and let $\alpha_n(\cdot)$ be the empirical process based on this sequence of random samples. Let $V_{n, d}(\cdot)$ be the distribution function of the Cramer-von Mises functional of $\alpha_n(\cdot)$, and define $V_d(\cdot) = \lim_{n \rightarrow \infty} V_{n, d}(\cdot), \Delta_{n, d} = \sup_{0 < x < \infty}|V_{n, d}(x) - V_d(x)|$. 
We deduce that $\Delta_{n,d} = O(n^{-1}), d \geq 1$, and calculate also the "usual" levels of significance of the distribution function $V_d(\cdot)$ for $d = 2$ to 50, using expansion methods. Previously these were known only for $d = 1, 2, 3$. Admissibility in Linear Estimation Lynn Roy LaMotte KEYWORDS: General linear model, linear estimation and admissibility, 62J99, 62F10 Necessary and sufficient conditions for a linear estimator to be admissible among linear estimators are described. The model assumed is general, allowing for relations between elements of the mean vector and covariance matrix, and allowing the covariance matrix to vary in an arbitrary subset of nonnegative definite symmetric matrices. Maximum Likelihood and Least Squares Estimation in Linear and Affine Functional Models C. Villegas KEYWORDS: Bayesian inference, logical priors, inner statistical inference, Invariance, conditional confidence, Multivariate analysis, 62A05, 62F15 In a linear (or affine) functional model the principal parameter is a subspace (respectively an affine subspace) in a finite dimensional inner product space, which contains the means of $n$ multivariate normal populations, all having the same covariance matrix. A relatively simple, essentially algebraic derivation of the maximum likelihood estimates is given, when these estimates are based on single observed vectors from each of the $n$ populations and an independent estimate of the common covariance matrix. A new derivation of least squares estimates is also given. Combining Independent Noncentral Chi Squared or $F$ Tests John I. Marden KEYWORDS: Hypothesis tests, generalized Bayes tests, Chi squared variables, $F$ variables, Admissibility, complete class, significance levels, combination procedures, 62C07, 62C15, 62H15, 62C10 The problem of combining several independent Chi squared or $F$ tests is considered. The data consist of $n$ independent Chi squared or $F$ variables on which tests of the null hypothesis that all noncentrality parameters are zero are based. In each case, necessary conditions and sufficient conditions for a test to be admissible are given in terms of the monotonicity and convexity of the acceptance region. The admissibility or inadmissibility of several tests based upon the observed significance levels of the individual test statistics is determined. In the Chi squared case, Fisher's and Tippett's procedures are admissible, the inverse normal and inverse logistic procedures are inadmissible, and the test based upon the sum of the significance levels is inadmissible when the level is less than a half. The results are similar, but not identical, in the $F$ case. Several generalized Bayes tests are derived for each problem. Monotone Regression Estimates for Grouped Observations F. T. Wright KEYWORDS: isotone regression, grouped observations, interpolation, asymptotic distribution and rates of convergence, 62F10, 62E20 The maximum likelihood estimator of a nondecreasing regression function with normally distributed errors has been considered in the literature. Its asymptotic distribution at a point is related to a solution of the heat equation, and its rate of convergence to the underlying regression function is of order $n^{-1/3}$. This estimator can be modified by grouping adjacent observations and then "isotonizing" the corresponding means. It is shown that the resulting estimator has an asymptotic normal distribution for certain group sizes and its rate of convergence is of order $n^{-2/5}$. 
The results of a simulation study for small sample sizes are presented and grouping procedures are discussed. Asymptotic Distributions of Slope-of-Greatest-Convex-Minorant Estimators Sue Leurgans KEYWORDS: isotonic estimation, asymptotic distribution theory, 60F05, 62E20, 62G05, 62G20 Isotonic estimation involves the estimation of a function which is known to be increasing with respect to a specified partial order. For the case of a linear order, a general theorem is given which simplifies and extends the techniques of Prakasa Rao and Brunk. Sufficient conditions for a specified limit distribution to obtain are expressed in terms of a local condition and a global condition. It is shown that the rate of convergence depends on the order of the first non-zero derivative and that this result can obtain even if the function is not monotone over its entire domain. The theorem is applied to give the asymptotic distributions of several estimators. An Inequality Comparing Sums and Maxima with Application to Behrens-Fisher Type Problem Siddhartha R. Dalal, Peter Fortini KEYWORDS: majorization, inequality, sums of powers, Multiple comparisons, Behrens-Fisher problem, 62F25, 60E15 A sharp inequality comparing the probability content of the $\ell_1$ ball and that of $\ell_\infty$ ball of the same volume is proved. The result is generalized to bound the probability content of the $\ell_p$ ball for arbitrary $p \geq 1$. Examples of the type of bound include $P\{(|X_1|^p + |X_2|^p)^{1/p} \leq c\} \geq F^2(c/2^{1/2p}),\quad p \geq 1,$ where $X_1, X_2$ are independent each with distribution function $F$. Applications to multiple comparisons in Behrens-Fisher setting are discussed. Multivariate generalizations and generalizations to non-independent and non-exchangeable distributions are also discussed. In the process a majorization result giving the stochastic ordering between $\Sigma a_i X_i$ and $\Sigma b_i X_i$, when $(a^2_1, a^2_2, \cdots, a^2_n)$ majorizes $(b^2_1, b^2_2, \cdots, b^2_n)$, is also proved. Bounds on Mixtures of Distributions Arising in Order Restricted Inference Tim Robertson, F. T. Wright KEYWORDS: Order restricted inference, tests for and against a trend, Chi-bar-squared distribution, $E$-bar-squared distribution, tail probability bounds, least favorable configurations, 62E15, 62G10 In testing hypotheses involving order restrictions on a collection of parameters, distributions arise which are mixtures of standard distributions. Since tractable expressions for the mixing proportions generally do not exist even for parameter collections of moderate size, the implementation of these tests may be difficult. Stochastic upper and lower bounds are obtained for such test statistics in a variety of these kinds of problems. These bounds are also shown to be tight. The tightness results point out some situations in which the bounds could be used to obtain approximate methods. These results can also be applied to obtain the least favorable configuration when testing the equality of two multinomial populations versus a stochastic ordering alternative. Invariance Principles for Recursive Residuals Pranab Kumar Sen KEYWORDS: Constancy of regression (location) relationship over time, CUSUM tests, Invariance principles, orthonormal transformations, recursive residuals, tightness, unknown transition points, Wiener process, 60F17, 62F05 A general class of recursive residuals is defined by means of lower-triangular, orthonormal transformations. 
For these residuals, some weak invariance principles are established under appropriate regularity conditions. The theory is then incorporated in the study of robustness of some tests for change of parameters occurring at unknown time points. Covariance Stabilizing Transformations and a Conjecture of Holland C. C. Song KEYWORDS: Jacobian matrix, multinomial distribution, 62H99, 60E05 This paper gives a proof of a conjecture of P. W. Holland concerning the non-existence of certain covariance stabilizing transformations. The Consistency of Nonlinear Regression Minimizing the $L_1$-Norm Walter Oberhofer KEYWORDS: Nonlinear regression, $L_1$-norm, consistency, 62F12, 62J07 We consider conditions in a nonlinear regression model for the consistency of the estimator obtained by minimizing the $L_1$-norm, i.e. the sum of absolute deviations. Corrections: Uniform Asymptotic Normality of the Maximum Likelihood Estimator T. J. Sweeting Ann. Statist. 10 (1), 320, (March, 1982) DOI: 10.1214/aos/1176345717 Corrections: Properties of Student's $t$ and of the Behrens-Fisher Solution to the Two Means Problem G. K. Robinson Corrections: Simultaneous Confidence Bounds Charles H. Alexander
CommonCrawl
Why are we using a biased and misleading standard deviation formula for $\sigma$ of a normal distribution? It came as a bit of a shock to me the first time I did a normal distribution Monte Carlo simulation and discovered that the mean of $100$ standard deviations from $100$ samples, all having a sample size of only $n=2$, proved to be much less than, i.e., averaging $ \sqrt{\frac{2}{\pi }}$ times, the $\sigma$ used for generating the population. However, this is well known, if seldom remembered, and I sort of did know, or I would not have done a simulation. Here is a simulation. Here is an example for predicting 95% confidence intervals of $N(0,1)$ using 100, $n=2$, estimates of $\text{SD}$, and $\text{E}(s_{n=2})=\sqrt\frac{\pi}{2}\text{SD}$. RAND() RAND() Calc Calc N(0,1) N(0,1) SD E(s) -1.1171 -0.0627 0.7455 0.9344 1.7278 -0.8016 1.7886 2.2417 1.2379 0.4896 0.5291 0.6632 -1.8354 1.0531 2.0425 2.5599 0.0344 -0.1892 0.8188 1.0263 mean E(.) SD pred E(s) pred -1.9600 -1.9600 -1.6049 -2.0114 2.5% theor, est 1.9600 1.9600 1.6049 2.0114 97.5% theor, est 0.3551 -0.0515 2.5% err -0.3551 0.0515 97.5% err Drag the slider down to see the grand totals. Now, I used the ordinary SD estimator to calculate 95% confidence intervals around a mean of zero, and they are off by 0.3551 standard deviation units. The E(s) estimator is off by only 0.0515 standard deviation units. If one estimates standard deviation, standard error of the mean, or t-statistics, there may be a problem. My reasoning was as follows, the population mean, $\mu$, of two values can be anywhere with respect to a $x_1$ and is definitely not located at $\frac{x_1+x_2}{2}$, which latter makes for an absolute minimum possible sum squared so that we are underestimating $\sigma$ substantially, as follows w.l.o.g. let $x_2-x_1=d$, then $\Sigma_{i=1}^{n}(x_i-\bar{x})^2$ is $2 (\frac{d}{2})^2=\frac{d^2}{2}$, the least possible result. That means that standard deviation calculated as $\text{SD}=\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}$ , is a biased estimator of the population standard deviation ($\sigma$). Note, in that formula we decrement the degrees of freedom of $n$ by 1 and dividing by $n-1$, i.e., we do some correction, but it is only asymptotically correct, and $n-3/2$ would be a better rule of thumb. For our $x_2-x_1=d$ example the $\text{SD}$ formula would give us $SD=\frac{d}{\sqrt 2}\approx 0.707d$, a statistically implausible minimum value as $\mu\neq \bar{x}$, where a better expected value ($s$) would be $E(s)=\sqrt{\frac{\pi }{2}}\frac{d}{\sqrt 2}=\frac{\sqrt\pi }{2}d\approx0.886d$. For the usual calculation, for $n<10$, $\text{SD}$s suffer from very significant underestimation called small number bias, which only approaches 1% underestimation of $\sigma$ when $n$ is approximately $25$. Since many biological experiments have $n<25$, this is indeed an issue. For $n=1000$, the error is approximately 25 parts in 100,000. 
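As a quick numerical illustration of the scale of this bias (not part of the original spreadsheet; a minimal Python sketch assuming numpy is available), one can repeat the $n=2$ experiment many times and compare the plain SD, the $\sqrt{\pi/2}$-corrected SD, and the route through variances:

import numpy as np

rng = np.random.default_rng(0)
pairs = rng.standard_normal((100_000, 2))   # many samples of size n = 2 from N(0, 1)
sd = pairs.std(axis=1, ddof=1)              # the usual "n - 1" sample SD
print(sd.mean())                            # approx sqrt(2/pi) = 0.798: biased low
print((np.sqrt(np.pi / 2) * sd).mean())     # approx 1.0: the corrected estimator
print(np.sqrt((sd**2).mean()))              # approx 1.0: averaging variances instead

The first number reproduces the $\sqrt{2/\pi}$ shrinkage quoted above; the other two show that either correcting each SD or averaging variances removes it.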
In general, small number bias correction implies that the unbiased estimator of population standard deviation of a normal distribution is $\text{E}(s)\,=\,\,\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{2}}>\text{SD}=\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}\; .$ From Wikipedia under creative commons licensing one has a plot of SD underestimation of $\sigma$ Since SD is a biased estimator of population standard deviation, it cannot be the minimum variance unbiased estimator MVUE of population standard deviation unless we are happy with saying that it is MVUE as $n\rightarrow \infty$, which I, for one, am not. Concerning non-normal distributions and approximately unbiased $SD$ read this. Now comes the question Q1 Can it be proven that the $\text{E}(s)$ above is MVUE for $\sigma$ of a normal distribution of sample-size $n$, where $n$ is a positive integer greater than one? Hint: (But not the answer) see How can I find the standard deviation of the sample standard deviation from a normal distribution?. Next question, Q2 Would someone please explain to me why we are using $\text{SD}$ anyway as it is clearly biased and misleading? That is, why not use $\text{E}(s)$ for most everything? Supplementary, it has become clear in the answers below that variance is unbiased, but its square root is biased. I would request that answers address the question of when unbiased standard deviation should be used. As it turns out, a partial answer is that to avoid bias in the simulation above, the variances could have been averaged rather than the SD-values. To see the effect of this, if we square the SD column above, and average those values we get 0.9994, the square root of which is an estimate of the standard deviation 0.9996915 and the error for which is only 0.0006 for the 2.5% tail and -0.0006 for the 95% tail. Note that this is because variances are additive, so averaging them is a low error procedure. However, standard deviations are biased, and in those cases where we do not have the luxury of using variances as an intermediary, we still need small number correction. Even if we can use variance as an intermediary, in this case for $n=100$, the small sample correction suggests multiplying the square root of unbiased variance 0.9996915 by 1.002528401 to give 1.002219148 as an unbiased estimate of standard deviation. So, yes, we can delay using small number correction but should we therefore ignore it entirely? The question here is when should we be using small number correction, as opposed to ignoring its use, and predominantly, we have avoided its use. Here is another example, the minimum number of points in space to establish a linear trend that has an error is three. If we fit these points with ordinary least squares the result for many such fits is a folded normal residual pattern if there is non-linearity and half normal if there is linearity. In the half-normal case our distribution mean requires small number correction. If we try the same trick with 4 or more points, the distribution will not generally be normal related or easy to characterize. Can we use variance to somehow combine those 3-point results? Perhaps, perhaps not. However, it is easier to conceive of problems in terms of distances and vectors. normal-distribution standard-deviation expected-value unbiased-estimator umvue CarlCarl $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. 
$\endgroup$ – whuber♦ Dec 8 '16 at 14:16 $\begingroup$ Q1: See the Lehmann-Scheffe theorem. $\endgroup$ – Scortchi - Reinstate Monica♦ Dec 8 '16 at 15:57 $\begingroup$ Nonzero bias of an estimator is not necessarily a drawback. For example, if we wish to have an accurate estimator under square loss, we are willing to induce bias as long as it reduces the variance by a sufficiently large amount. That is why (biased) regularized estimators may perform better than the (unbiased) OLS estimator in a linear regression model, for example. $\endgroup$ – Richard Hardy Dec 14 '16 at 20:20 $\begingroup$ @Carl many terms are used differently in different application areas. If you're posting to a stats group and you use a jargon term like "bias", you would naturally be assumed to be using the specific meaning(s) of the term particular to statistics. If you mean anything else, it's essential to either use a different term or to define clearly what you do mean by the term right at the first use. $\endgroup$ – Glen_b -Reinstate Monica Dec 15 '16 at 3:35 $\begingroup$ "bias" is certainly a term of jargon -- special words or expressions used by a profession or group that are difficult for others to understand seems pretty much what "bias" is. It's because such terms have precise, specialized definitions in their application areas (including mathematical definitions) that makes them jargon terms. $\endgroup$ – Glen_b -Reinstate Monica Dec 15 '16 at 3:50 For the more restricted question Why is a biased standard deviation formula typically used? the simple answer Because the associated variance estimator is unbiased. There is no real mathematical/statistical justification. may be accurate in many cases. However, this is not necessarily always the case. There are at least two important aspects of these issues that should be understood. First, the sample variance $s^2$ is not just unbiased for Gaussian random variables. It is unbiased for any distribution with finite variance $\sigma^2$ (as discussed below, in my original answer). The question notes that $s$ is not unbiased for $\sigma$, and suggests an alternative which is unbiased for a Gaussian random variable. However it is important to note that unlike the variance, for the standard deviation it is not possible to have a "distribution free" unbiased estimator (*see note below). Second, as mentioned in the comment by whuber the fact that $s$ is biased does not impact the standard "t test". First note that, for a Gaussian variable $x$, if we estimate z-scores from a sample $\{x_i\}$ as $$z_i=\frac{x_i-\mu}{\sigma}\approx\frac{x_i-\bar{x}}{s}$$ then these will be biased. However the t statistic is usually used in the context of the sampling distribution of $\bar{x}$. In this case the z-score would be $$z_{\bar{x}}=\frac{\bar{x}-\mu}{\sigma_{\bar{x}}}\approx\frac{\bar{x}-\mu}{s/\sqrt{n}}=t$$ though we can compute neither $z$ nor $t$, as we do not know $\mu$. Nonetheless, if the $z_{\bar{x}}$ statistic would be normal, then the $t$ statistic will follow a Student-t distribution. This is not a large-$n$ approximation. The only assumption is that the $x$ samples are i.i.d. Gaussian. (Commonly the t-test is applied more broadly for possibly non-Gaussian $x$. This does rely on large-$n$, which by the central limit theorem ensures that $\bar{x}$ will still be Gaussian.) 
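As a quick check of that last point (a sketch only, assuming numpy and scipy; the sample size $n=5$ and replication count are arbitrary), the $t$ statistic built from the biased $s$ nevertheless has exactly the advertised Student-$t$ tail probabilities:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps, mu = 5, 200_000, 3.0
x = mu + rng.standard_normal((reps, n))
t = (x.mean(axis=1) - mu) / (x.std(axis=1, ddof=1) / np.sqrt(n))

c = stats.t.ppf(0.975, df=n - 1)        # Student-t critical value with n - 1 df
print((np.abs(t) > c).mean())           # close to 0.05: no bias correction needed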
*Clarification on "distribution-free unbiased estimator" By "distribution free", I mean that the estimator cannot depend on any information about the population $x$ aside from the sample $\{x_1,\ldots,x_n\}$. By "unbiased" I mean that the expected error $\mathbb{E}[\hat{\theta}_n]-\theta$ is uniformly zero, independent of the sample size $n$. (As opposed to an estimator that is merely asymptotically unbiased, a.k.a. "consistent", for which the bias vanishes as $n\to\infty$.) In the comments this was given as a possible example of a "distribution-free unbiased estimator". Abstracting a bit, this estimator is of the form $\hat{\sigma}=f[s,n,\kappa_x]$, where $\kappa_x$ is the excess kurtosis of $x$. This estimator is not "distribution free", as $\kappa_x$ depends on the distribution of $x$. The estimator is said to satisfy $\mathbb{E}[\hat{\sigma}]-\sigma_x=\mathrm{O}[\frac{1}{n}]$, where $\sigma_x^2$ is the variance of $x$. Hence the estimator is consistent, but not (absolutely) "unbiased", as $\mathrm{O}[\frac{1}{n}]$ can be arbitrarily large for small $n$. Note: Below is my original "answer". From here on, the comments are about the standard "sample" mean and variance, which are "distribution-free" unbiased estimators (i.e. the population is not assumed to be Gaussian). This is not a complete answer, but rather a clarification on why the sample variance formula is commonly used. Given a random sample $\{x_1,\ldots,x_n\}$, so long as the variables have a common mean, the estimator $\bar{x}=\frac{1}{n}\sum_ix_i$ will be unbiased, i.e. $$\mathbb{E}[x_i]=\mu \implies \mathbb{E}[\bar{x}]=\mu$$ If the variables also have a common finite variance, and they are uncorrelated, then the estimator $s^2=\frac{1}{n-1}\sum_i(x_i-\bar{x})^2$ will also be unbiased, i.e. $$\mathbb{E}[x_ix_j]-\mu^2=\begin{cases}\sigma^2&i=j\\0&i\neq{j}\end{cases} \implies \mathbb{E}[s^2]=\sigma^2$$ Note that the unbiasedness of these estimators depends only on the above assumptions (and the linearity of expectation; the proof is just algebra). The result does not depend on any particular distribution, such as Gaussian. The variables $x_i$ do not have to have a common distribution, and they do not even have to be independent (i.e. the sample does not have to be i.i.d.). The "sample standard deviation" $s$ is not an unbiased estimator, $\mathbb{s}\neq\sigma$, but nonetheless it is commonly used. My guess is that this is simply because it is the square root of the unbiased sample variance. (With no more sophisticated justification.) In the case of an i.i.d. Gaussian sample, the maximum likelihood estimates (MLE) of the parameters are $\hat{\mu}_\mathrm{MLE}=\bar{x}$ and $(\hat{\sigma}^2)_\mathrm{MLE}=\frac{n-1}{n}s^2$, i.e. the variance divides by $n$ rather than $n^2$. Moreover, in the i.i.d. Gaussian case the standard deviation MLE is just the square root of the MLE variance. However these formulas, as well as the one hinted at in your question, depend on the Gaussian i.i.d. assumption. Update: Additional clarification on "biased" vs. "unbiased". 
Consider an $n$-element sample as above, $X=\{x_1,\ldots,x_n\}$, with sum-square-deviation $$\delta^2_n=\sum_i(x_i-\bar{x})^2$$ Given the assumptions outlined in the first part above, we necessarily have $$\mathbb{E}[\delta^2_n]=(n-1)\sigma^2$$ so the (Gaussian-)MLE estimator is biased $$\widehat{\sigma^2_n}=\tfrac{1}{n}\delta^2_n \implies \mathbb{E}[\widehat{\sigma^2_n}]=\tfrac{n-1}{n}\sigma^2 $$ while the "sample variance" estimator is unbiased $$s^2_n=\tfrac{1}{n-1}\delta^2_n \implies \mathbb{E}[s^2_n]=\sigma^2$$

Now it is true that $\widehat{\sigma^2_n}$ becomes less biased as the sample size $n$ increases. However $s^2_n$ has zero bias no matter the sample size (so long as $n>1$). For both estimators, the variance of their sampling distribution will be non-zero, and depend on $n$.

As an example, the below Matlab code considers an experiment with $n=2$ samples from a standard-normal population $z$. To estimate the sampling distributions for $\bar{x},\widehat{\sigma^2},s^2$, the experiment is repeated $N=10^6$ times. (You can cut & paste the code here to try it out yourself.)

% n=sample size, N=number of samples
n=2; N=1e6;
% generate standard-normal random #'s
z=randn(n,N); % i.e. mu=0, sigma=1
% compute sample stats (Gaussian MLE)
zbar=sum(z)/n; zvar_mle=sum((z-zbar).^2)/n;
% compute ensemble stats (sampling-pdf means)
zbar_avg=sum(zbar)/N, zvar_mle_avg=sum(zvar_mle)/N
% compute unbiased variance
zvar_avg=zvar_mle_avg*n/(n-1)

Typical output is like

zbar_avg = 1.4442e-04
zvar_mle_avg = 0.49988
zvar_avg = 0.99977

confirming that \begin{align} \mathbb{E}[\bar{z}]&\approx\overline{(\bar{z})}\approx\mu=0 \\ \mathbb{E}[s^2]&\approx\overline{(s^2)}\approx\sigma^2=1 \\ \mathbb{E}[\widehat{\sigma^2}]&\approx\overline{(\widehat{\sigma^2})}\approx\frac{n-1}{n}\sigma^2=\frac{1}{2} \end{align}

Update 2: Note on fundamentally "algebraic" nature of unbiased-ness. In the above numerical demonstration, the code approximates the true expectation $\mathbb{E}[\,]$ using an ensemble average with $N=10^6$ replications of the experiment (i.e. each is a sample of size $n=2$). Even with this large number, the typical results quoted above are far from exact. To numerically demonstrate that the estimators are really unbiased, we can use a simple trick to approximate the $N\to\infty$ case: simply add the following line to the code

% optional: "whiten" data (ensure exact ensemble stats)
[U,S,V]=svd(z-mean(z,2),'econ'); z=sqrt(N)*U*V';

(placing after "generate standard-normal random #'s" and before "compute sample stats") With this simple change, even running the code with $N=10$ gives results like

GeoMatt22

$\begingroup$ @amoeba Well, I'll eat my hat. I squared the SD-values in each line then averaged them and they come out unbiased (0.9994), whereas the SD-values themselves do not. Meaning that you and GeoMatt22 are correct, and I am wrong. $\endgroup$ – Carl Dec 8 '16 at 7:27

$\begingroup$ @Carl: It's generally true that transforming an unbiased estimator of a parameter doesn't give an unbiased estimate of the transformed parameter except when the transformation is affine, following from the linearity of expectation. So on what scale is unbiasedness important to you? $\endgroup$ – Scortchi - Reinstate Monica♦ Dec 8 '16 at 8:29

$\begingroup$ Carl: I apologize if you feel my answer was orthogonal to your question. It was intended to provide a plausible explanation of Q:"why a biased standard deviation formula is typically used?" A:"simply because the associated variance estimator is unbiased, vs.
any real mathematical/statistical justification". As for your comment, typically "unbiased" describes an estimator whose expected value is correct independent of sample size. If it is unbiased only in the limit of infinite sample size, typically it would be called "consistent". $\endgroup$ – GeoMatt22 Dec 9 '16 at 6:38 $\begingroup$ (+1) Nice answer. Small caveat: That Wikipedia passage on consistency quoted in this answer is a bit of a mess and the parenthetical statement made related to it is potentially misleading. "Consistency" and "asymptotic unbiasedness" are in some sense orthogonal properties of an estimator. For a little more on that point, see the comment thread to this answer. $\endgroup$ – cardinal Dec 10 '16 at 21:45 $\begingroup$ +1 but I think @Scortchi makes a really important point in his answer that is not mentioned in yours: namely, that even for Gaussian population, the unbiased estimate of $\sigma$ has higher expected error than the standard biased estimate of $\sigma$ (due to the high variance of the former). This is a strong argument in favour of not using an unbiased estimator even if one knows that the underlying distribution is Gaussian. $\endgroup$ – amoeba says Reinstate Monica Dec 13 '16 at 14:52 The sample standard deviation $S=\sqrt{\frac{\sum (X - \bar{X})^2}{n-1}}$ is complete and sufficient for $\sigma$ so the set of unbiased estimators of $\sigma^k$ given by $$ \frac{(n-1)^\frac{k}{2}}{2^\frac{k}{2}} \cdot \frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n+k-1}{2}\right)} \cdot S^k = \frac{S^k}{c_k} $$ (See Why is sample standard deviation a biased estimator of $\sigma$?) are, by the Lehmann–Scheffé theorem, UMVUE. Consistent, though biased, estimators of $\sigma^k$ can also be formed as $$ \tilde{\sigma}^k_j= \left(\frac{S^j}{c_j}\right)^\frac{k}{j} $$ (the unbiased estimators being specified when $j=k$). The bias of each is given by $$\operatorname{E}\tilde{\sigma}^k_j - \sigma^k =\left( \frac{c_k}{c_j^\frac{k}{j}} -1 \right) \sigma^k$$ & its variance by $$\operatorname{Var}\tilde{\sigma}^{k}_j=\operatorname{E}\tilde{\sigma}^{2k}_j - \left(\operatorname{E}\tilde{\sigma}^k_j\right)^2=\frac{c_{2k}-c_k^2}{c_j^\frac{2k}{j}} \sigma^{2k}$$ For the two estimators of $\sigma$ you've considered, $\tilde{\sigma}^1_1=\frac{S}{c_1}$ & $\tilde{\sigma}^1_2=S$, the lack of bias of $\tilde{\sigma}_1$ is more than offset by its larger variance when compared to $\tilde{\sigma}_2$: $$\begin{align} \operatorname{E}\tilde{\sigma}_1 - \sigma &= 0 \\ \operatorname{E}\tilde{\sigma}_2 - \sigma &=(c_1 -1) \sigma \\ \operatorname{Var}\tilde{\sigma}_1 =\operatorname{E}\tilde{\sigma}^{2}_1 - \left(\operatorname{E}\tilde{\sigma}^1_1\right)^2 &=\frac{c_{2}-c_1^2}{c_1^2} \sigma^{2} = \left(\frac{1}{c_1^2}-1\right) \sigma^2 \\ \operatorname{Var}\tilde{\sigma}_2 =\operatorname{E}\tilde{\sigma}^{2}_1 - \left(\operatorname{E}\tilde{\sigma}_2\right)^2 &=\frac{c_{2}-c_1^2}{c_2} \sigma^{2}=(1-c_1^2)\sigma^2 \end{align}$$ (Note that $c_2=1$, as $S^2$ is already an unbiased estimator of $\sigma^2$.) 
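To see these constants in action, here is a small Python sketch (numpy and scipy assumed; the helper function c is ad hoc, not from any library) that evaluates $c_k$ from the Gamma-function expression above and checks by simulation that $S/c_1$ is unbiased while $c_2=1$:

import numpy as np
from scipy.special import gammaln

def c(k, n):
    # c_k = E[S^k] / sigma^k for an i.i.d. Gaussian sample of size n,
    # computed through log-Gamma for numerical stability
    return np.exp(0.5 * k * np.log(2.0 / (n - 1))
                  + gammaln((n + k - 1) / 2.0) - gammaln((n - 1) / 2.0))

n = 2
print(c(1, n), c(2, n))                     # c_1 = sqrt(2/pi) here, and c_2 = 1

rng = np.random.default_rng(2)
s = rng.standard_normal((200_000, n)).std(axis=1, ddof=1)
print(s.mean(), (s / c(1, n)).mean())       # roughly c_1, and roughly 1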
The mean square error of $a_k S^k$ as an estimator of $\sigma^2$ is given by $$ \begin{align} (\operatorname{E} a_k S^k - \sigma^k)^2 + \operatorname{E} (a_k S^k)^2 - (\operatorname{E} a_k S^k)^2 &= [ (a_k c_k -1)^2 + a_k^2 c_{2k} - a_k^2 c_k^2 ] \sigma^{2k}\\ &= ( a_k^2 c_{2k} -2 a_k c_k + 1 ) \sigma^{2k} \end{align} $$ & therefore minimized when $$a_k = \frac{c_k}{c_{2k}}$$ , allowing the definition of another set of estimators of potential interest: $$ \hat{\sigma}^k_j= \left(\frac{c_j S^j}{c_{2j}}\right)^\frac{k}{j} $$ Curiously, $\hat{\sigma}^1_1=c_1S$, so the same constant that divides $S$ to remove bias multiplies $S$ to reduce MSE. Anyway, these are the uniformly minimum variance location-invariant & scale-equivariant estimators of $\sigma^k$ (you don't want your estimate to change at all if you measure in kelvins rather than degrees Celsius, & you want it to change by a factor of $\left(\frac{9}{5}\right)^k$ if you measure in Fahrenheit). None of the above has any bearing on the construction of hypothesis tests or confidence intervals (see e.g. Why does this excerpt say that unbiased estimation of standard deviation usually isn't relevant?). And $\tilde{\sigma}^k_j$ & $\hat{\sigma}^k_j$ exhaust neither estimators nor parameter scales of potential interest—consider the maximum-likelihood estimator† $\sqrt{\frac{n-1}{n}}S$, or the median-unbiased estimator $\sqrt{\frac{n-1}{\chi^2_{n-1}(0.5)}}S$; or the geometric standard deviation of a lognormal distribution $\mathrm{e}^\sigma$. It may be worth showing a few more-or-less popular estimates made from a small sample ($n=2$) together with the upper & lower bounds, $\sqrt{\frac{(n-1)s^2}{\chi^2_{n-1}(\alpha)}}$ & $\sqrt{\frac{(n-1)s^2}{\chi^2_{n-1}(1-\alpha)}}$, of the equal-tailed confidence interval having coverage $1-\alpha$: The span between the most divergent estimates is negligible in comparison with the width of any confidence interval having decent coverage. (The 95% C.I., for instance, is $(0.45s,31.9s)$.) There's no sense in being finicky about the properties of a point estimator unless you're prepared to be fairly explicit about what you want you want to use it for—most explicitly you can define a custom loss function for a particular application. A reason you might prefer an exactly (or almost) unbiased estimator is that you're going to use it in subsequent calculations during which you don't want bias to accumulate: your illustration of averaging biased estimates of standard deviation is a simple example of such (a more complex example might be using them as a response in a linear regression). In principle an all-encompassing model should obviate the need for unbiased estimates as an intermediate step, but might be considerably more tricky to specify & fit. † The value of $\sigma$ that makes the observed data most probable has an appeal as an estimate independent of consideration of its sampling distribution. Scortchi - Reinstate Monica♦Scortchi - Reinstate Monica Q2: Would someone please explain to me why we are using SD anyway as it is clearly biased and misleading? This came up as an aside in comments, but I think it bears repeating because it's the crux of the answer: The sample variance formula is unbiased, and variances are additive. So if you expect to do any (affine) transformations, this is a serious statistical reason why you should insist on a "nice" variance estimator over a "nice" SD estimator. In an ideal world, they'd be equivalent. But that's not true in this universe. 
You have to choose one, so you might as well choose the one that lets you combine information down the road. Comparing two sample means? The variance of their difference is sum of their variances. Doing a linear contrast with several terms? Get its variance by taking a linear combination of their variances. Looking at regression line fits? Get their variance using the variance-covariance matrix of your estimated beta coefficients. Using F-tests, or t-tests, or t-based confidence intervals? The F-test calls for variances directly; and the t-test is exactly equivalent to the square root of an F-test. In each of these common scenarios, if you start with unbiased variances, you'll remain unbiased all the way (unless your final step converts to SDs for reporting). Meanwhile, if you'd started with unbiased SDs, neither your intermediate steps nor the final outcome would be unbiased anyway. civilstatcivilstat $\begingroup$ Variance is not a distance measurement, and standard deviation is. Yes, vector distances add by squares, but the primary measurement is distance. The question was what would you use corrected distance for, and not why should we ignore distance as if it did not exist. $\endgroup$ – Carl Dec 11 '16 at 3:39 $\begingroup$ Well, I guess I'm arguing that "the primary measurement is distance" isn't necessarily true. 1) Do you have a method to work with unbiased variances; combine them; take the final resulting variance; and rescale its sqrt to get an unbiased SD? Great, then do that. If not... 2) What are you going to do with a SD from a tiny sample? Report it on its own? Better to just plot the datapoints directly, not summarize their spread. And how will people interpret it, other than as an input to SEs and thus CIs? It's meaningful as an input to CIs, but then I'd prefer the t-based CI (with usual SD). $\endgroup$ – civilstat Dec 11 '16 at 22:35 $\begingroup$ I do no think that many clinical studies or commercial software programs with $n<25$ would use standard error of the mean calculated from small sample corrected standard deviation leading to a false impression of how small those errors are. I think even that one issue, even if that is the only one, should be ignored. $\endgroup$ – Carl Dec 11 '16 at 23:00 $\begingroup$ "so you might as well choose the one that lets you combine information down the road" and "the primary measurement is distance" isn't necessarily true. Farmer Jo's house is 640 acres down the road? One uses the appropriate measurement correctly for each and every situation, or one has a higher tolerance for false witness than I. My only question here is when to use what, and the answer to it is not "never." $\endgroup$ – Carl Dec 12 '16 at 3:11 This post is in outline form. (1) Taking a square root is not an affine transformation (Credit @Scortchi.) (2) ${\rm var}(s) = {\rm E} (s^2) - {\rm E}(s)^2$, thus ${\rm E}(s) = \sqrt{{\rm E}(s^2) -{\rm var}(s)}\neq{\sqrt{\rm var(s)}}$ (3) $ {\rm var}(s)=\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}$, whereas $\text{E}(s)\,=\,\,\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{2}}$$\neq\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{n-1}}={\sqrt{\rm var(s)}}$ (4) Thus, we cannot substitute ${\sqrt{\rm var(s)}}$ for $\text{E}(s)$, for $n$ small, as square root is not affine. (5) ${\rm var}(s)$ and $\text{E}(s)$ are unbiased (Credit @GeoMatt22 and @Macro, respectively). 
(6) For non-normal distributions $\bar{x}$ is sometimes (a) undefined (e.g., Cauchy, Pareto with small $\alpha$) and (b) not UMVUE (e.g., Cauchy ($\rightarrow$ Student's-$t$ with $df=1$), Pareto, Uniform, beta). Even more commonly, variance may be undefined, e.g. Student's-$t$ with $1\leq df\leq2$. Then one can state that $\text{var}(s)$ is not UMVUE for the general case distribution. Thus, there is then no special onus to introducing an approximate small number correction for standard deviation, which likely has similar limitations to $\sqrt{\text{var}(s)}$, but is additionally less biased, $\hat\sigma = \sqrt{ \frac{1}{n - 1.5 - \tfrac14 \gamma_2} \sum_{i=1}^n (x_i - \bar{x})^2 }$ , where $\gamma_2$ is excess kurtosis. In a similar vein, when examining a normal squared distribution (a Chi-squared with $df=1$ transform), we might be tempted to take its square root and use the resulting normal distribution properties. That is, in general, the normal distribution can result from transformations of other distributions and it may be expedient to examine the properties of that normal distribution such that the limitation of small number correction to the normal case is not so severe a restriction as one might at first assume. For the normal distribution case: A1: By Lehmann-Scheffe theorem ${\rm var}(s)$ and $\text{E}(s)$ are UMVUE (Credit @Scortchi). A2: (Edited to adjust for comments below.) For $n\leq 25$, we should use $\text{E}(s)$ for standard deviation, standard error, confidence intervals of the mean and of the distribution, and optionally for z-statistics. For $t$-testing we would not use the unbiased estimator as $\frac{ \bar X - \mu} {\sqrt{\text{var}(n)/n}}$ itself is Student's-$t$ distributed with $n-1$ degrees of freedom (Credit @whuber and @GeoMatt22). For z-statistics, $\sigma$ is usually approximated using $n$ large for which $\text{E}(s)-\sqrt{\text{var}(n)}$ is small, but for which $\text{E}(s)$ appears to be more mathematically appropriate (Credit @whuber and @GeoMatt22). $\begingroup$ A2 is incorrect: following that prescription would produce demonstrably invalid tests. As I commented to the question, perhaps too subtly: consult any theoretical account of a classical test, such as the t-test, to see why a bias correction is irrelevant. $\endgroup$ – whuber♦ Dec 9 '16 at 21:24 $\begingroup$ There's a strong meta-argument showing why bias correction for statistical tests is a red herring: if it were incorrect not to include a bias-correction factor, then that factor would already be included in standard tables of the Student t distribution, F distribution, etc. To put it another way: if I'm wrong about this, then everybody has been wrong about statistical testing for the last century. $\endgroup$ – whuber♦ Dec 9 '16 at 21:30 $\begingroup$ Am I the only one who's baffled by the notation here? Why use $\operatorname{E}(s)$ to stand for $\frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sqrt{\frac{\Sigma_{i=1}^{n}(x_i-\bar{x})^2}{2}}$, the unbiased estimate of standard deviation? What's $s$? $\endgroup$ – Scortchi - Reinstate Monica♦ Dec 9 '16 at 21:58 $\begingroup$ @Scortchi the notation apparently came about as an attempt to inherit that used in the linked post. There $s$ is the sample variance, and $E(s)$ is the expected value of $s$ for a Gaussian sample. In this question, "$E(s)$" was co-opted to be a new estimator derived from the original post (i.e. something like $\hat{\sigma}\equiv s/\alpha$ where $\alpha\equiv\mathbb{E}[s]/\sigma$). 
If we arrive at a satisfactory answer for this question, probably a cleanup of the question & answer notation would be warranted :) $\endgroup$ – GeoMatt22 Dec 9 '16 at 22:20 $\begingroup$ The z-test assumes the denominator is an accurate estimate of $\sigma$. It's known to be an approximation that is only asymptotically correct. If you want to correct it, don't use the bias of the SD estimator--just use a t-test. That's what the t-test was invented for. $\endgroup$ – whuber♦ Dec 9 '16 at 22:58 I want to add the Bayesian answer to this discussion. Just because your assumption is that the data is generated according to some normal with unknown mean and variance, that doesn't mean that you should summarize your data using a mean and a variance. This whole problem can be avoided if you draw the model, which will have a posterior predictive that is a three parameter noncentral scaled student's T distribution. The three parameters are the total of the samples, total of the squared samples, and the number of samples. (Or any bijective map of these.) Incidentally, I like civilstat's answer because it highlights our desire to combine information. The three sufficient statistics above are even better than the two given in the question (or by civilstat's answer). Two sets of these statistics can easily be combined, and they give the best posterior predictive given the assumption of normality. Neil GNeil G $\begingroup$ How then does one calculate an unbiased standard error of the mean from those three sufficient statistics? $\endgroup$ – Carl Dec 14 '16 at 17:44 $\begingroup$ @carl You can easily calculate it since you have the number of samples $n$, you can multiply the uncorrected sample variance by $\frac{n}{n-1}$. However, you really don't want to do that. That's tantamount to turning your three parameters into a best fit normal distribution to your limited data. It's a lot better to use your three parameters to fit the true posterior predictive: the noncentral scaled T distribution. All questions you might have (percentiles, etc.) are better answered by this T distribution. In fact, T tests are just common sense questions asked of this distribution. $\endgroup$ – Neil G Dec 15 '16 at 0:30 $\begingroup$ How can one then generate a true normal distribution RV from Monte Carlo simulations(s) and recover that true distribution using only Student's-$t$ distribution parameters? Am I missing something here? $\endgroup$ – Carl Dec 15 '16 at 2:57 $\begingroup$ @Carl The sufficient statistics I described were the mean, second moment, and number of samples. Your MLE of the original normal are the mean and variance (which is equal to the second moment minus the squared mean). The number of samples is useful when you want to make predictions about future observations (for which you need the posterior predictive distribution). $\endgroup$ – Neil G Dec 15 '16 at 3:24 $\begingroup$ Though a Bayesian perspective is a welcome addition, I find this a little hard to follow: I'd have expected a discussion of constructing a point estimate from the posterior density of $\sigma$. It seems you're rather questioning the need for a point estimate: this is something well worth bringing up, but not uniquely Bayesian. (BTW you also need to explain the priors.) $\endgroup$ – Scortchi - Reinstate Monica♦ Dec 16 '16 at 14:22 Not the answer you're looking for? Browse other questions tagged normal-distribution standard-deviation expected-value unbiased-estimator umvue or ask your own question. 
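As an aside on the "combining information" point made in the Bayesian answer above, the following hedged Python sketch (numpy assumed; the numbers are arbitrary and the helper suff is hypothetical, not a library function) shows how two batches of data can be merged exactly through the sufficient statistics $(n,\sum x,\sum x^2)$, with the usual mean and unbiased variance recovered only at the final step; the full posterior predictive advocated in that answer would be built from the same three numbers.

import numpy as np

def suff(x):
    # sufficient statistics (n, sum of x, sum of x^2) for a batch of data
    x = np.asarray(x, dtype=float)
    return np.array([x.size, x.sum(), (x**2).sum()])

a = suff([0.3, 1.7, -0.4])
b = suff([2.1, 0.9])
n, s1, s2 = a + b                        # merging two batches is just addition
mean = s1 / n
var = (s2 - n * mean**2) / (n - 1)       # usual unbiased sample variance
print(mean, var)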
CommonCrawl
1 Prioritize The Physicist's Companion to Current Fluctuations: One-Dimensional Bulk-Driven Lattice Gases Version 1 Released on 14 August 2015 under Creative Commons Attribution 4.0 International License Alexandre Lazarescu1,2 Physics and Materials Science Research Unit (PHYMS), Faculty of Science, Technology and Communication (FSTC) - University of Luxembourg Department of Physics and Astronomy - KU Leuven Classical transport processes Nonequilibrium driven systems Stochastic processes, fluctuation phenomena One of the main features of statistical systems out of equilibrium is the currents they exhibit in their stationary state: microscopic currents of probability between configurations, which translate into macroscopic currents of mass, charge, etc. Understanding the general behaviour of these currents is an important step towards building a universal framework for non-equilibrium steady states akin to the Gibbs-Boltzmann distribution for equilibrium systems. In this review, we consider one-dimensional bulk-driven particle gases, and in particular the asymmetric simple exclusion process (ASEP) with open boundaries, which is one of the most popular models of one-dimensional transport. We focus, in particular, on the current of particles flowing through the system in its steady state, and on its fluctuations. We show how one can obtain the complete statistics of that current, through its large deviation function, by combining results from various methods: exact calculation of the cumulants of the current, using the integrability of the model ; direct diagonalisation of a biased process in the limits of very high or low current ; hydrodynamic description of the model in the continuous limit using the macroscopic fluctuation theory (MFT). We give a pedagogical account of these techniques, starting with a quick introduction to the necessary mathematical tools, as well as a short overview of the existing works relating to the ASEP. We conclude by drawing the complete dynamical phase diagram of the current. We also remark on a few possible generalisations of these results. Our world is a complex and chaotic place. In order to understand it better, we look for the fundamental laws that govern it, hoping that they are simple enough to be found, and universal enough to be useful. This is how, for instance, by observing the behaviour of various massive objects in different situations, Newton deduced his laws of motion, or how, by studying how gases react to changes in their environment, Clapeyron discovered the ideal gas law. Both of these laws are strikingly simple, and, although they are both approximations, they apply to an extremely wide range of systems. In fact, since a gas is a collection of massive objects, it must obey both laws, but at different levels of description: at the microscopic level, each individual atom follows Newton's laws ; at the macroscopic level, the whole gas follows Clapeyron's law. Moreover, Newton's equation of motion, although conceptually simple, has $6N$ variables for a gas with $N$ atoms (the position and velocity of each atom), and the ideal gas law has only $3$ (density, temperature, pressure). How these two descriptions are compatible is not entirely obvious, although this example is one of the simplest (but there are more complex ones, such as how neurons make a brain, to give an example from the other extreme). Understanding how one goes from one law to the other is just as essential as knowing the two laws themselves. 
It is the goal of statistical physics to bridge that gap, and to find out how simplicity can emerge from large numbers. Let us consider a system made of a large number of components, each obeying simple laws. There are several formalisms that we can use to that effect. If the system is isolated and stationary, with a well-defined interaction potential between its constituents, then it is reasonable to assume that its states are distributed according to the Gibbs-Boltzmann law: the system is at equilibrium. Given that assumption, one can then compute averages of macroscopic observables with respect to that distribution, and recover all the thermodynamic properties of the system, usually by obtaining the free energy, which gives the probability distribution of macroscopic variables. Non-analyticities in that free energy are the sign of phase transitions, which are the most interesting features of equilibrium systems, and have been studied extensively. However, in that framework, one is limited to static observables, as the Gibbs-Boltzmann distribution gives no information on dynamics. Should one be interested in dynamical observables, or if the system is driven by some external field and cannot be described by an interaction potential, one may take a step back and assume not the form of the stationary distribution, but that of the rates of transition between microstates and of the distribution of the time elapsed between successive transitions. This is equivalent to assuming a certain probability density for dynamical trajectories in space and time, instead of a probability for a static configuration. The simplest and most common choice here is to assume Markovian dynamics: the evolution of a trajectory depends only on its state, and not on its history, which implies exponentially distributed waiting times. Under reasonable assumptions, and if the dynamics do not depend explicitly on time, the system can be shown to relax to a steady state, the distribution of which is not a given, in contrast to equilibrium: it is a contraction of the distribution on trajectories, and is in general difficult to obtain. We may then distinguish two classes of Markovian dynamics: those that have detailed balance, and those that do not. In the first case, the steady state will have an equilibrium distribution, and the microscopic currents of probability between microstates will all be exactly zero: the state is not merely stationary, but entirely inert. Without detailed balance, some of those currents will be non-zero, and their magnitude is an indication of the work done by the environment in order to maintain that flow. That probability current, which translates into macroscopic currents of particles, charges, heat, etc., is what defines the system as being out of equilibrium. It is this type of system that will be the focus of our attention.

Non-equilibrium systems are quite ubiquitous in nature, and in particular in biology: any system which is meant to transport objects, cells, energy, etc. from one point to another is, by definition, out of equilibrium. In many cases, the system is one-dimensional (think of cells in a blood vessel, for instance, or molecular motors on actin), and the main quantity of interest is the flux that goes through it (see fig.1). The question is then to deduce the macroscopic behaviour of the system, and in particular the statistics of that current, from the microscopic definition of the model, and identify what, in that behaviour, may be generic among similar systems.

Figure 1.
Sketch of a one-dimensional non-equilibrium system. Particles move in a one-dimensional channel, between two reservoirs at fixed densities $\rho_a$ and $\rho_b$. The particles interact with each-other, and are subject to a driving field $V$. Because of the field, and the possible imbalance between the reservoirs, there is a net flux of particles from one side of the system to the other. There are many approaches to that problem. One of them, which has yielded major results in many other sub-fields of statistical physics, is to find a toy model which is simple enough to be mathematically tractable, yet complex enough to be physically relevant. In the case of equilibrium statistical physics, the Ising model has played such a role, and has had a central part in our understanding equilibrium phase transitions. Its counterpart for non-equilibrium systems is the simple exclusion process: particles jump stochastically from site to site on a finite lattice, and may not jump to a site which is already occupied. One-dimensional versions of this model, with uniform jumping rates and either periodic or open boundary conditions, are exactly solvable, and allow for very precise calculations, such as relatively simple expressions for the full steady state distribution and current fluctuations. However, these results and calculations do not extend easily to other models, precisely because resolvability is not generic. Another approach is to start from an effective mesoscopic description, which requires more assumptions but applies to a broader range of systems. For equilibrium models, one may think, for instance, of the mean-field approach, where it is assumed that spatial correlations can be neglected. A similar framework for one-dimensional non-equilibrium particle systems is the macroscopic fluctuation theory, where the probability of a trajectory is assumed to depend only on the local averages of the density and current of particles. It is then quite natural to combine these two approaches: the first can be used to verify the assumptions made for the second, and the second can help in generalising the results obtained through the first. In this review, we propose to give an overview of the methods and results pertaining to these two approaches in the case of the one-dimensional asymmetric simple exclusion process with open boundaries. The layout of this review is as follows. In section 2., we give the main definitions and results related to the mathematical framework that is relevant to our interests: that of large deviations. In particular, we look at large deviations of time-additive observables for Markov processes in continuous time, and we define the so-called 's-ensemble', which is a statistical ensemble for Markov processes where the current is seen as a free parameter. In section 3., we get acquainted with the asymmetric simple exclusion process. In the first part of the section, we give the definition of the model and briefly review existing variants and results. In the second part, we look at the steady state of the system, first through the mean-field approach, and then through the exact expression for the stationary distribution (the so-called 'matrix Ansatz'). In section 4., we present an exact expression for the complete generating function of the cumulants of the current in the open ASEP, and we take the limit of large sizes to extract the asymptotic behaviour of that expression, obtaining a different result for each phase in the ASEP's diagram. 
We then look at what the corresponding behaviours are for the large deviation function of the current in each phase, which are valid for small fluctuations of the current. In section 5., we take the limit of infinitely high or low currents, and calculate the corresponding limits for the large deviation function through direct diagonalisation of the conditioned process. In the low current limit, we obtain a perturbative expansion around a diagonal matrix. In the high current limit, we obtain a system equivalent to an open XX spin chain, diagonalisable through free fermion techniques. In section 6., we use the macroscopic fluctuation theory to obtain the full dynamical phase diagram of the current for the open ASEP. We compare these results with those obtained from the exact calculations of the previous sections. We also remark that the method used in that section is in principle applicable to any one-dimensional bulk-driven particle gas. The present review is largely based on the author's PhD manuscript [1]. It is intended to give a self-contained and reasonably detailed account of the tools involved in determining the large deviations of the current in the asymmetric simple exclusion process, as much as of the results that they yield. For the sake of brevity and legibility, some of the finer details of those calculations, as well as most technical aspects of everything related to the integrability of the model, have been omitted, but should the readers be in need of clarifications, they may refer to [1] or to the references given in section 3.1.. That section contains most of the bibliographical references of this review, and although it is far from being exhaustive, it should provide an adequate starting point for the curious reader. A crash course in large deviations This first section contains a brief introduction to the mathematical objects that we will be manipulating in the rest of the review, namely: large deviation functions. We first define the concept of large deviations in general, and give a few useful theorems. We then apply the concept to time-additive observables for Markov processes in continuous time, and look at two specific examples with interesting properties: the time-integrated empirical vector, and the entropy production. For a thorough review of this topic, one may refer to [2] and [3]. Definition and a few useful results Consider a system defined by a size $N$, and an observable $a$ intensive in $N$, which has a probability distribution ${\rm P}_N(a)$ for each $N$. It is said that $a$ obeys a large deviations principle with rate $g(a)$ if the limit \begin{equation}\label{I-1-g} g(a)=\lim\limits_{N\rightarrow\infty}\Bigl[-\frac{\log({\rm P}_N(a))}{N}\Bigr] \end{equation} is defined and finite for every $a$. In other terms, $g(a)$ is the rate of exponential decay of ${\rm P}_N(a)$ with respect to $N$. Its minimum is the most probable value of $a$. We then write: \begin{equation}\label{I-1-Pgtossa}\boxed{ {\rm P}_N(a)\approx {\rm e}^{-N g(a)}} \end{equation} where the $\approx$ signifies precisely what is written in eq.(\ref{I-1-g}). Note that $N$ is not necessarily an actual size or a number of elements: it can be a time span, a number of events, or any variable that can be taken to infinity. Also note that $a$ doesn't have to be a scalar observable. It can be a function (in which case $g[a]$ is a large deviation functional), or any mathematical object for which a probability can be defined in the system under consideration. 
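As a minimal numerical illustration of eqs.(\ref{I-1-g}) and (\ref{I-1-Pgtossa}) (a sketch, not taken from the derivations above; the biased coin and all parameter values are arbitrary choices), consider the empirical mean $a$ of $N$ independent coin flips with bias $p$, whose rate function is known in closed form, $g(a)=a\log(a/p)+(1-a)\log\bigl((1-a)/(1-p)\bigr)$:

```python
# Minimal illustration of the large deviation principle (illustrative sketch only):
# empirical mean of N biased coin flips, P(head) = p.
import numpy as np
from scipy.stats import binom

p = 0.3                      # bias of the coin
a = 0.5                      # an atypical value of the empirical mean
g = a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))   # closed-form rate function

for N in [10, 100, 1000, 10000]:
    k = int(a * N)           # number of heads corresponding to the mean a
    rate_estimate = -binom.logpmf(k, N, p) / N
    print(f"N = {N:6d}   -log P_N(a)/N = {rate_estimate:.4f}   g(a) = {g:.4f}")

# Gaertner-Ellis check: E(mu) = log(1 - p + p*exp(mu)) is the scaled cumulant
# generating function of a single flip; its Legendre transform should give g(a).
mus = np.linspace(-5, 5, 20001)
E = np.log(1 - p + p * np.exp(mus))
print("Legendre transform:", np.max(mus * a - E), "  g(a) =", g)
```

The estimate converges to $g(a)$ as $N$ grows, the remaining discrepancy coming from the sub-exponential pre-factor of ${\rm P}_N(a)$; the last two lines anticipate the Legendre duality between $g$ and the generating function of the cumulants discussed next.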
From the large deviation function of $a$, one can obtain that of an observable $b=f(a)$, where the function $f$ is not necessarily injective, by writing \begin{equation} {\rm P}_N(b)=\int {\rm P}_N(a)\delta\bigl(b-f(a)\bigr){\rm d}a=\int {\rm e}^{-N g(a)}\delta(b-f(a)){\rm d}a\sim {\rm e}^{-N \min\limits_{f(a)=b}[g(a)]}, \end{equation} the last expression being obtained from a saddle-point approximation for large $N$. If $f$ is injective, one simply obtains a change of variables from $a$ to $b$. If, on the contrary, $f$ is not injective, which is to say that $b$ is a contraction of $a$, we get the contraction principle: the large deviation function $\tilde{g}(b)$ is given by \begin{equation}\label{I-1-contraction}\boxed{ \tilde{g}(b)=\min\limits_{f(a)=b}[g(a)].} \end{equation} In principle, $a$ may be any mathematical object and $f$ any function, but we will mostly consider linear transformations, where $a$ is a vector and $f$ a matrix with a non-maximal rank. An alternative way to treat a probability distribution which decays fast enough is through its rescaled cumulants $E_k$, or their exponential generating function $E(\mu)=\sum\limits_{k=1}^{\infty}E_k\frac{\mu^k}{k!}$, defined through \begin{equation}\label{I-1-ma} {\rm e}^{N E(\mu)}\equiv\langle{\rm e}^{\mu N a}\rangle=\int {\rm P}_N(a){\rm e}^{\mu N a} {\rm d}a \end{equation} which is to say that the generating function of cumulants is the logarithm of the exponential generating function of moments. If we replace ${\rm P}_N(a)$ by its limit under the large deviation principle: ${\rm P}_N(a)\rightarrow {\rm e}^{-N g(a)}$, we get, in the large $N$ limit: \begin{equation}\label{I-1-Eg} {\rm e}^{N E(\mu)}\rightarrow \int {\rm e}^{-N(g(a)-\mu a)}{\rm d}a \end{equation} which yields, through a saddle-point approximation, \begin{equation}\label{I-1-Eg2} E(\mu)=\max_a[\mu a-g(a)] \end{equation} or equivalently \begin{equation}\label{I-1-Eg3}\boxed{ E(\mu)=\mu a^\star-g(a^\star)~~,~~\frac{{\rm d}}{{\rm d}a} g(a^\star)=\mu} \end{equation} where $a^\star$ is the value of $a$ at which the maximum in eq.(\ref{I-1-Eg2}) is attained. That is to say that $E$ and $g$ are Legendre transforms of one another and contain essentially the same information (unless $g$ has a non-convex part). The inverse transformation formula is then \begin{equation}\label{I-1-Eg3.5} g(a)=\max_\mu[\mu a-E(\mu)] \end{equation} or \begin{equation}\label{I-1-Eg4}\boxed{ g(a)=\mu^\star a-E(\mu^\star)~~,~~\frac{{\rm d}}{{\rm d}\mu} E(\mu^\star)=a} \end{equation} where $\mu^\star$ is the value of $\mu$ at which the maximum in eq.(\ref{I-1-Eg3.5}) is attained. This last equation is part of the Gärtner-Ellis theorem, which states that if $E(\mu)$, defined through eq.(\ref{I-1-ma}), is well-behaved, then $a$ obeys a large deviation principle with a rate $g(a)$ obtained through eq.(\ref{I-1-Eg3.5}) [2]. One of the most useful features of the cumulant approach is how it combines with the contraction principle. Consider a vector $a$ and a non-injective matrix $f$.
The generating function of the cumulants of a contracted observable $b=f\cdot a$ is given by \begin{equation} \tilde{E}(\tilde{\mu})=\max_b[\tilde{\mu}\cdot b-\tilde{g}(b)]=\max_b[\tilde{\mu} \cdot b-\min\limits_{f\cdot a=b}[g(a)]]=\max_a[\tilde{\mu} \cdot f\cdot a-g(a)]=E(\tilde{\mu}\cdot f) \end{equation} which is to say that the function $\tilde{E}$ is in fact the function $E$ applied to a variable $\tilde{\mu}\cdot f$ which has fewer degrees of freedom than $\mu$ (because $\tilde{\mu}$ is conjugate to $b$ which has fewer degrees of freedom than $a$). In other words, contracting at the level of cumulants reduces to taking special values of $\mu$, which is often much easier than finding the minimum in eq.(\ref{I-1-contraction}). One last remark we should make is that ${\rm P}_N(a)$ has a sub-exponential pre-factor which we may (and did) neglect entirely, as long as it has no poles in $a$. In the case that it does, all the saddle-point approximations that we performed have to be modified to take into account contour integrals around these poles, which may dominate the large $N$ limit [4]. Dynamical large deviations for Markov processes We will now see what can be said of large deviations in the context of continuous time Markov processes on a finite state space. Consider a Markov matrix $M$ acting on states $\{\cal C\}$, with rates $w({\cal C}',{\cal C})$ from $\cal C$ to $\cal C'$, and escape rates $r({\cal C})=\sum\limits_{{\cal C}'} w({\cal C}',{\cal C})$: \begin{equation}\label{I-2-M} M=\sum\limits_{\mathcal{C}\neq\mathcal{C}'}w(\mathcal{C},\mathcal{C}')|\mathcal{C}\rangle\langle\mathcal{C}'|-\sum\limits_{\mathcal{C}}r(\mathcal{C})|\mathcal{C}\rangle\langle\mathcal{C}|. \end{equation} This defines the time evolution of a probability vector $|P_{t}\rangle$ through the master equation \begin{equation}\label{I-2-MP} \frac{d}{dt}|P_{t}\rangle=M|P_{t}\rangle \end{equation} of which the solution is, formally: \begin{equation}\label{I-2-eMP} |P_{t}\rangle={\rm e}^{t M}|P_{0}\rangle \end{equation} where $|P_{0}\rangle$ is the initial probability distribution. In the limit of long times, if $M$ is not reducible, this vector converges to a steady state \begin{equation}\label{I-2-Pstar} \lim\limits_{t\rightarrow\infty} |P_t\rangle=|P^{\star}\rangle~~~~{\rm with}~~~~M|P^{\star}\rangle=0. \end{equation} Equivalently, we can define the probability density ${\rm P}\bigl({\cal C}(t) \bigr)$ of a history ${\cal C}(t)$ going through configurations ${\cal C}_i$ with waiting times $t_i$: the Markovianity of the process tells us that the waiting times are Poisson-distributed with respect to the escape rates, which gives us \begin{equation} {\rm P}[\mathcal{C}(t)]={\rm e}^{-t_N r({\cal C}_N)}w({\cal C}_N,{\cal C}_{N-1})~{\rm e}^{-t_{N-1} r({\cal C}_{N-1})}\dots{\rm e}^{-t_2 r({\cal C}_2)}w({\cal C}_2,{\cal C}_1){\rm e}^{-t_1r({\cal C}_1)} \end{equation} where time increases from right to left. This can then be recast in a more compact form: \begin{equation}\label{I-2-Phist}\boxed{ \log\Bigl({\rm P}[\mathcal{C}(t)]\Bigr)=-\int\limits_{t=0}^{t_f} r\bigl(C(t)\bigr){\rm d}t+\sum\limits_{i=1}^{N-1}\log{w({\cal C}_{i+1},{\cal C}_i)}.} \end{equation} The entries of ${\rm e}^{t_f M}$ are then given by the sum of the probabilities of all paths that start and end at the corresponding microstates. Time-additive observables We can now define time-dependent observables on those histories and look at their large deviations in the long time limit. 
Let us consider an observable $A_t$ defined as a functional of a history ${\cal C}(t)$: \begin{equation}\label{I-2-At} A_t={\rm F}[\mathcal{C}(t)]. \end{equation} For the sake of simplicity, we are only interested in observables that are additive in time, meaning that if a history ${\cal C}(t)$ is the concatenation of two shorter ones ${\cal C}_1(t)$ and ${\cal C}_2(t)$, which we will write as $\mathcal{C}_1(t)\oplus\mathcal{C}_2(t)$, the functional $F$ distributes over them: \begin{equation}\label{I-2-Fadd} {\rm F}[\mathcal{C}_1(t)\oplus\mathcal{C}_2(t)]={\rm F}[\mathcal{C}_1(t)]+{\rm F}[\mathcal{C}_2(t)]. \end{equation} This forces $F$ to be local in time (independent of time correlations). We also assume $F$ to be time-invariant, i.e. independent on the value of the initial time $t_0$ in ${\cal C}(t)$. These constraints allow us to find the general form of such functionals. Consider first a history without any transitions: ${\cal C}(t)={\cal C}_1$. In this case, time additivity can be used to show that $F[{\cal C}(t)]$ is proportional to the duration $t_1$ of the process. The proportionality coefficient may depend on ${\cal C}_1$, and we will call it $V({\cal C}_1)$. Consider now a history with one transition: the system is in ${\cal C}_1$ for a duration $t_1$, and in ${\cal C}_2$ for a duration $t_2$. By cutting out, using additivity, the portion of history before $t_1-\varepsilon$ and that after $t_1+\varepsilon$, with $\varepsilon$ going to $0$, one is left with just the transition, which has a contribution to $F$ that depends only on ${\cal C}_1$ and ${\cal C}_2$. We will call it $U({\cal C}_2,{\cal C}_1)$. Putting those pieces together, and considering that any history can be decomposed into portions containing at most one transition, we can finally write: \begin{equation}\label{I-2-F}\boxed{ {\rm F}[\mathcal{C}(t)]=\int_{t_0}^{t_f} V\bigl(\mathcal{C}(t)\bigr)dt+\sum\limits_{i=1}^{N}U(\mathcal{C}_{i},\mathcal{C}_{i-1})} \end{equation} which is expressed schematically on fig.-2. Figure 2 Figure 2. Functional $F$ over a schematised history. Each straight portion contributes a simple term to the whole function. Waiting periods (in blue) give a contribution that is extensive in time, and depends only on one configuration. Transitions (in red) give a term that depends on the two configurations involved. The first part of that expression, containing $V$, is a state observable, depending only on the empirical vector, which is to say the relative time spent in each microstate $\cal C$. It contains no direct information on the currents between microstates, although it may depend indirectly on the system being in or out of equilibrium, through time-correlations. The second part, containing $U$, is a jump observable, and is a direct measure of those currents. Each $U(\mathcal{C}_{i},\mathcal{C}_{i-1})$ can be seen as a counter for the transition between $\mathcal{C}_{i-1}$ and $\mathcal{C}_{i}$: every time it is used in the evolution of the system, the value of $A_t$ increases by one quantum of $U$. Whether each value of $U$ is taken as an independent variable, or given a precise value, determines what information is monitored regarding the way those transitions are used, but in many cases, one thing that makes a crucial difference on the resulting behaviour of $A_t$ is whether the system is in equilibrium or not, as we will see shortly. But first, let us have a look at the cumulants of this observable. 
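Before doing so, a concrete illustration of eq.(\ref{I-2-F}) may be useful (a minimal sketch; the three-state rates and the choices of $V$ and $U$ below are arbitrary, and are not taken from the text): one can simulate a trajectory with the Gillespie algorithm and accumulate $A_t$ along it.

```python
# Illustrative sketch of a time-additive observable, eq. (I-2-F): Gillespie
# simulation of a 3-state Markov process, accumulating
# A_t = int V(C(t)) dt + sum_jumps U(C_new, C_old).
import numpy as np

rng = np.random.default_rng(0)
# w[c_new, c_old] = rate from c_old to c_new (same convention as in the text)
w = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
r = w.sum(axis=0)                      # escape rates r(C)
V = np.array([0.5, -1.0, 2.0])         # state part of the observable (arbitrary)
U = np.log(1.0 + w)                    # jump part of the observable (arbitrary)

# --- one long trajectory ---
c, t, A = 0, 0.0, 0.0
T = 2e4
while t < T:
    dt = rng.exponential(1.0 / r[c])   # exponentially distributed waiting time
    A += V[c] * dt
    t += dt
    c_new = rng.choice(3, p=w[:, c] / r[c])
    A += U[c_new, c]
    c = c_new
print("time average  a_t = A_t / t =", A / t)

# --- steady-state prediction for the long-time average ---
M = w - np.diag(r)                     # Markov matrix, eq. (I-2-M)
evals, evecs = np.linalg.eig(M)
P = np.real(evecs[:, np.argmax(np.real(evals))])
P /= P.sum()                           # stationary distribution P*
prediction = (V * P).sum() + sum(U[i, j] * w[i, j] * P[j]
                                 for i in range(3) for j in range(3) if i != j)
print("stationary prediction       =", prediction)
```

For long times the time average $A_t/t$ settles, up to statistical error, onto the value predicted by the stationary distribution; its fluctuations around that value are precisely what the cumulants describe.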
The generating function $E_t(\nu)$ of the cumulants of $a_t=\frac{1}{t}A_t$ (the intensive version of $A_t$) can be expressed as: \begin{equation}\label{I-2-Amean} {\rm e}^{t E(\nu)}=\langle {\rm e}^{t\nu a_t}\rangle=\int {\rm e}^{\nu{\rm F}[\mathcal{C}(t)]}{\rm P}[\mathcal{C}(t)]{\cal D}[\mathcal{C}(t)] \end{equation} where ${\cal D}[\mathcal{C}(t)]$ is the measure associated with histories (this can be simply defined in the discrete time case, and then taken as a formal limit for small $\delta t$). Replacing ${\rm P}[\mathcal{C}(t)]$ by the expression in eq.(\ref{I-2-Phist}), we see that \begin{align}\label{I-2-Phist2} \log\bigl({\rm e}^{\nu{\rm F}[\mathcal{C}(t)]}{\rm P}[\mathcal{C}(t)]\bigr)=-\int\limits_{t=0}^{t_f} \Bigl(r\bigl(C(t)\bigr)-\nu V[C(t)]\Bigr){\rm d}t+\sum\limits_{i=1}^{N-1}\big(\log{w({\cal C}_{i+1},{\cal C}_i)}+\nu U({\cal C}_{i+1},{\cal C}_i)\big) \end{align} which is to say that it is the (un-normalised) weight produced by a modified Markov matrix $M_\nu$ defined as \begin{equation}\label{I-2-MF}\boxed{ M_\nu=\sum\limits_{\mathcal{C}\neq\mathcal{C}'}{\rm e}^{\nu U(\mathcal{C},\mathcal{C}')}w(\mathcal{C},\mathcal{C}')|\mathcal{C}\rangle\langle\mathcal{C}'|-\sum\limits_{\mathcal{C}}\bigl(r(\mathcal{C})-\nu V(\mathcal{C})\bigr)|\mathcal{C}\rangle\langle\mathcal{C}|} \end{equation} (where we take $U(\mathcal{C},\mathcal{C})=0$). By replacing $M$ by $M_\nu$ in (\ref{I-2-eMP}), we get \begin{equation}\label{I-2-PtF} |P_\nu(t)\rangle={\rm e}^{t M_\nu}|P_{0}\rangle=\sum\limits_{\mathcal{C}_N}\int {\rm e}^{\nu{\rm F}[\mathcal{C}(t)]}{\rm P}[\mathcal{C}(t)]{\cal D}[\mathcal{C}(t)]~|\mathcal{C}_N\rangle \end{equation} where, as intended, the probabilities of histories have received an extra ${\rm e}^{\nu{\rm F}[\mathcal{C}(t)]}$ factor. We can finally sum over the final configuration ${\mathcal{C}_N}$ by projecting to the left on the uniform vector $\langle 1|$ (of which all entries are $1$) and write \begin{equation}\label{I-2-Amean2} {\rm e}^{t E(\nu)}=\langle 1|P_\nu(t)\rangle=\langle 1|{\rm e}^{t M_\nu}|P_{0}\rangle. \end{equation} Note that we use $\nu$ as a generic parameter, which can in fact be a function of the configuration (as in section 2.2.3.) or of the transition. Moreover, as long as $\nu$, $V$ and $U$ are real, the Perron-Frobenius theorem applies: the largest eigenvalue $\Lambda_\nu$ of $M_\nu$ is non-degenerate. If we write the corresponding eigenvectors as $|P_\nu\rangle$ and $\langle \tilde{P}_\nu|$, we get, for large times, \begin{equation}\label{I-2-MPF} {\rm e}^{t M_\nu} \approx {\rm e}^{t \Lambda_\nu}|P_\nu\rangle\langle \tilde{P}_\nu|. \end{equation} By combining equations (\ref{I-2-Amean2}) and (\ref{I-2-MPF}) for $t$ large, we get ${\rm e}^{t E(\nu)}\approx{\rm e}^{t\Lambda_\nu} \langle 1|P_\nu\rangle\langle \tilde{P}_\nu|P_0\rangle$, which is to say that \begin{equation}\label{I-2-MPF2}\boxed{ E(\nu)=\Lambda_\nu.} \end{equation} The generating function of the cumulants of any additive observable is therefore equal to the largest eigenvalue of the associated deformed Markov matrix $M_\nu$. This is a classic result from the Donsker-Varadhan theory of temporal large deviations [5–9]. Notice that this is a property of the deformed matrix, and not of the initial or final configurations: regardless of those, the long time behaviour of the generating function of the cumulants will be the same. 
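The result of eq.(\ref{I-2-MPF2}) is straightforward to check numerically (a minimal sketch on the same kind of three-state toy model as before; all rates and the choices of $V$ and $U$ are arbitrary): one builds $M_\nu$ as in eq.(\ref{I-2-MF}), takes its largest eigenvalue as $E(\nu)$, and verifies for instance that $E'(0)$ reproduces the stationary average of the observable.

```python
# Illustrative sketch of eqs. (I-2-MF) and (I-2-MPF2): the scaled cumulant
# generating function E(nu) is the largest eigenvalue of the deformed matrix M_nu.
import numpy as np

w = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])          # w[c_new, c_old]
r = w.sum(axis=0)
V = np.array([0.5, -1.0, 2.0])
U = np.log(1.0 + w)

def E(nu):
    """Largest eigenvalue of M_nu (real, by Perron-Frobenius)."""
    M_nu = np.exp(nu * U) * w - np.diag(r - nu * V)
    return np.max(np.real(np.linalg.eigvals(M_nu)))

# First cumulant: E'(0) should equal the stationary average of the observable.
eps = 1e-6
print("E'(0)              =", (E(eps) - E(-eps)) / (2 * eps))

M = w - np.diag(r)
evals, evecs = np.linalg.eig(M)
P = np.real(evecs[:, np.argmax(np.real(evals))]); P /= P.sum()
mean = (V * P).sum() + sum(U[i, j] * w[i, j] * P[j]
                           for i in range(3) for j in range(3) if i != j)
print("stationary average =", mean)
```

As stated above, the same value of $E(\nu)$ is obtained for any admissible initial distribution $|P_0\rangle$, which only enters through sub-exponential pre-factors.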
One can easily make sense of this: if the duration of the process is large enough, then the system only takes a small time at first to reach its steady state, and gets out of it very near the end. The rest of the evolution can be considered to be around the steady state, whatever the initial and final distributions are, and the part of $A_t$ which is extensive in time comes only from there. The initial distribution only gives a time-independent term $\langle 1|P_\nu\rangle\langle \tilde{P}_\nu|P_0\rangle$, which is negligible (unless, as we mentioned earlier, it has poles). The eigenvectors $|P_\nu\rangle$ and $\langle\tilde{P}_\nu|$ also carry information on current fluctuations. Considering eq.(\ref{I-2-PtF}) for $t$ large, we can regroup all the histories for which ${\rm F}[\mathcal{C}(t)]=t f$ and the final configuration is $\mathcal{C}_t$. Writing that probability ${\rm P}(f~\&~\mathcal{C}_t)$ as ${\rm P}(\mathcal{C}_t|f){\rm P}(f)$ (where ${\rm P}(A|B)$ is the probability of $A$, conditioned on $B$), and invoking the large deviations principle ${\rm P}(f)\approx {\rm e}^{-t g(f)}$, we get \begin{equation}\label{I-2-PtF3} {\rm e}^{t E(\nu)}|P_\nu\rangle\approx\sum\limits_{\mathcal{C}_t}|\mathcal{C}_t\rangle\int {\rm P}(\mathcal{C}_t|f){\rm e}^{t(\nu f-g(f))}{\rm d}f. \end{equation} Finally, just as in eq.(\ref{I-1-Eg}), a saddle-point approximation on ${\rm e}^{t(\nu f-g(f))}$ yields ${\rm e}^{t E(\nu)}$ and fixes the value of $f$ to $\frac{{\rm d}}{{\rm d}\nu} E(\nu)$. Injecting this in (\ref{I-2-PtF3}), we get \begin{equation}\label{I-2-PtF4} P_\nu({\mathcal C})={\rm P}\Bigl({\mathcal C}_t={\mathcal C}~\Big|~f\!=\!\frac{{\rm d}}{{\rm d}\nu} E(\nu)\Bigr). \end{equation} This tells us that the vector $|P_\nu\rangle$ is in fact the probability vector of the final configuration, knowing that the value of $f$ through the evolution of the system was $\frac{{\rm d}}{{\rm d}\nu} E(\nu)$ [10]. A similar calculation on the left eigenvector $\langle\tilde{P}_\nu|$ shows it to be the probability vector of what the initial configuration was, knowing that the value of $f$ is $\frac{{\rm d}}{{\rm d}\nu} E(\nu)$: \begin{equation}\label{I-2-PtF42} \tilde{P}_\nu({\mathcal C})={\rm P}\Bigl({\mathcal C}_0={\mathcal C}~\Big|~f\!=\!\frac{{\rm d}}{{\rm d}\nu} E(\nu)\Bigr) \end{equation} Finally, the product of the two gives the probability of observing a configuration at any point during the evolution of the system (but far enough from the initial and final time), conditioned on the value of $f$: \begin{equation}\label{I-2-sensemble}\boxed{ P_\nu({\mathcal C})\tilde{P}_\nu({\mathcal C})={\rm P}\Bigl({\mathcal C}~\Big|~f\!=\!\frac{{\rm d}}{{\rm d}\nu} E(\nu)\Bigr).} \end{equation} Note that these vectors are implicitly normalised appropriately. Those three distributions are quite different from one another, and in particular their most probable states are in general not the same. We will be mostly interested in eq.(\ref{I-2-sensemble}), for reasons that will become apparent later, in section 4.. When the observable of interest is the entropy production (which we will examine in section 2.2.4.), the statistical ensemble defined by those probabilities, where $\nu$ is considered as a free parameter (similar to a temperature, as it is conjugate to the entropy), is sometimes called the 's-ensemble' [11]. 
It is quite natural to consider this ensemble: since entropy production plays an important part in the system being out of equilibrium, being able to control its value and look at how the system responds provides us with useful information on its behaviour. Moreover, much like in equilibrium, where one can think of a Lagrange multiplier as an extra interaction, one may think of $M_\mu$ as equivalent to a process with modified dynamics, which has typical values for observables given by the fluctuation values in the original process [12,13]. The Markov matrix for that new process can be obtained through a so-called 'Doob transform', which consists in shifting its eigenvalues by $-E(\nu)$ and conjugating the matrix through $\tilde{P}_\nu$ so that its largest eigenvalue be $0$ and its dominant left eigenvector be uniform. Its steady state is then given by $P_\nu({\mathcal C})\tilde{P}_\nu({\mathcal C})$, which is another argument for that distribution being the most natural one. In the following sub-sections, we will consider two special cases of time-additive observables: the empirical vector, which is a state observable, and the entropy production, which is a jump observable. Large deviations of the empirical vector The first special case that we will consider is that of the time-integrated empirical vector, which is the vector of the total time spent in each configuration. We will write its intensive counterpart as $\rho$, defined by \begin{equation} \rho(c)=\frac{1}{t}\int_{0}^{t} \delta_{c,\mathcal{C}(\tau)}d\tau \end{equation} which contains the fractions of time spent in each configuration $c$. We recognise a special case of eq.(\ref{I-2-F}) where $V(\mathcal{C})=\delta_{c,\mathcal{C}}$ and $U(\mathcal{C}',\mathcal{C})=0$. This is the most general state observable, which can give any other through contraction. The deformed Markov matrix corresponding to $\rho$ is given by \begin{equation}\label{I-2-Mh} M_h=\sum\limits_{\mathcal{C}\neq\mathcal{C}'}w(\mathcal{C},\mathcal{C}')|\mathcal{C}\rangle\langle\mathcal{C}'|-\sum\limits_{\mathcal{C}}\bigl(r(\mathcal{C})-h_{\mathcal{C}}\bigr)|\mathcal{C}\rangle\langle\mathcal{C}|=M+H \end{equation} where $H$ is a diagonal matrix with entries $h_{\mathcal{C}}$. The largest eigenvalue $E(h)$ of $M_h$ contains all the cumulants of the empirical vector, including n-point functions. For instance, \begin{equation} \frac{\rm d^2}{{\rm d}h_{\mathcal{C}_1}{\rm d}h_{\mathcal{C}_2}}E(0)=\langle \rho(\mathcal{C}_1)\rho(\mathcal{C}_2)\rangle_t-\langle \rho(\mathcal{C}_1)\rangle_t\langle \rho(\mathcal{C}_2)\rangle_t \end{equation} where $\langle\cdot\rangle_t$ refers to the average in time (which we will soon need to distinguish from an ensemble average). There is another empirical vector $\pi$ which we may define, by looking at the probability of being at $\mathcal{C}$ at a given time $t$, averaged over a large number $N$ of copies of the process. If the probability vector at time $t$ is $|P_t\rangle$, the exponential generating function of the cumulants of $\pi$ is given by \begin{equation} \mathcal{E}(h)=\log\bigl(\langle 1|{\rm e}^H|P_t\rangle\bigr) \end{equation} which involves ensemble averages $\langle\cdot\rangle_e$. It is important to note that these two generating functions, as well as the corresponding large deviation functions, are in principle different: fluctuations over time are not identical to fluctuations among copies of the system. They do however have one common feature. 
We may write the first cumulant from $E(h)$ as \begin{equation} \langle\rho(\mathcal{C})\rangle_t=\frac{\rm d}{{\rm d}h_{\mathcal{C}}}E(0)=\lim\limits_{t\rightarrow\infty}\frac{1}{t}\frac{\rm d}{{\rm d}h_{\mathcal{C}}}\log\bigl(\langle 1|{\rm e}^{t(M+H)}|P_0\rangle\bigr)\Big|_{h=0}=\lim\limits_{t\rightarrow\infty}\frac{1}{t}\int_{0}^{t}{\rm P}_\tau(\mathcal{C}){\rm d}\tau={\rm P}^{\star}(\mathcal{C}) \end{equation} where we recall that ${\rm P}^{\star}$ is the distribution of the steady state of $M$. That same first cumulant from $\mathcal{E}(h)$ gives, in the limit of long times: \begin{equation} \langle\pi(\mathcal{C})\rangle_e=\frac{\rm d}{{\rm d}h_{\mathcal{C}}}\mathcal{E}(0)=\lim\limits_{t\rightarrow\infty}\frac{\rm d}{{\rm d}h_{\mathcal{C}}}\log\bigl(\langle 1|{\rm e}^H|P_t\rangle\bigr)\Big|_{h=0}=\frac{\rm d}{{\rm d}h_{\mathcal{C}}}\log\bigl(\langle 1|{\rm e}^H|P^\star\rangle\bigr)\Big|_{h=0}={\rm P}^{\star}(\mathcal{C}). \end{equation} Combining this identity with any contraction of $\rho$ or $\pi$, we conclude that time averages and ensemble averages of state observables are identical, which is to say that the Markov process we are considering is ergodic. All the other cumulants are in principle different, which can be understood by the fact that time-correlations (i.e. the dynamics of the system) play a role in $E$ which they don't in $\mathcal{E}$. This implies that the absolute minima of both large deviation functions have the same locus, but that their shape (stiffness, skewness, etc.) around those minima is different. It is also interesting to note that this remains true for averages in a conditioned process. Let us for instance take a generic deformed Markov matrix $M_\mu$, with dominant eigenvectors $|P_\mu\rangle$ and $\langle\tilde{P}_\mu|$, and consider the two generating functions \begin{equation} E(\mu,h)=\lim\limits_{t\rightarrow\infty}\frac{1}{t}\log\bigl(\langle 1|{\rm e}^{t(M_\mu+H)}|P_0\rangle\bigr) \end{equation} and \begin{equation} \mathcal{E}(\mu,h)=\log\bigl(\langle \tilde{P}_\mu|{\rm e}^H|P_\mu\rangle\bigr) \end{equation} (where the product between the two eigenvectors $|P_\mu\rangle$ and $\langle\tilde{P}_\mu|$ means we are looking at a time which is somewhere in the middle of a long evolution). A calculation similar to what we did earlier yields: \begin{equation}\boxed{ \langle\rho_\mu(\mathcal{C})\rangle_t=\frac{\rm d}{{\rm d}h_{\mathcal{C}}}E(\mu,0)={\rm P}_\mu(\mathcal{C})\tilde{{\rm P}}_\mu(\mathcal{C})=\frac{\rm d}{{\rm d}h_{\mathcal{C}}}\mathcal{E}(\mu,0)=\langle\pi_\mu(\mathcal{C})\rangle_e} \end{equation} where the product ${\rm P}_\mu(\mathcal{C})\tilde{{\rm P}}_\mu(\mathcal{C})$ is normalised.

Large deviations of the entropy production

We now consider a special jump observable: the entropy production $S_t$ for an evolution between times $0$ and $t$, defined as \begin{equation}\label{I-2-ent1} S_t[{\cal C}(\tau)]=\log\Biggl( \frac{{\rm P}[{\cal C}(\tau)]}{{\rm P}[{\cal C}^R(\tau)]} \Biggr), \end{equation} which measures how probable a history $\mathcal{C}(\tau)$ is compared to its time-reversal $\mathcal{C}^R(\tau)=\mathcal{C}(t-\tau)$. It corresponds to a time-additive observable with \begin{equation}\label{I-2-ent} V({\cal C})=0~~,~~U({\cal C},{\cal C}')=\log\bigl( w({\cal C},{\cal C}')\bigr)-\log \bigl(w({\cal C}',{\cal C})\bigr).
\end{equation} so that the corresponding deformed Markov matrix is given by: \begin{equation}\label{I-2-Ment} M_\nu=\sum\limits_{\mathcal{C},\mathcal{C}'}w(\mathcal{C},\mathcal{C}')^{1+\nu}w(\mathcal{C}',\mathcal{C})^{-\nu}|\mathcal{C}\rangle\langle\mathcal{C}'|. \end{equation} We notice that it has a peculiar symmetry: if we replace $\nu$ by $-1-\nu$, the exponents in $M_\nu$ are swapped, which has the same effect as transposing the matrix. That is to say, \begin{equation}\label{I-2-GC}\boxed{ M_\nu=~^t\!M_{-1-\nu}.} \end{equation} This has interesting consequences on its eigensystem: all the eigenvalues are symmetric with respect to $\nu\leftrightarrow -1-\nu$, and the associated right and left eigenvectors are exchanged. This is the famous 'Gallavotti-Cohen symmetry' [14–16], which is one of the only universal features known for non-equilibrium systems. In particular, the generating function of the cumulants of the intensive entropy production $s$, and the conditional probabilities that we have defined earlier, verify: \begin{equation}\label{I-2-GC2} E(\nu)=E(-1-\nu)~~~~,~~~~P_\nu({\cal C})=\tilde{P}_{-1-\nu}({\cal C}). \end{equation} which becomes, for the large deviation function, \begin{equation}\label{I-2-GC22} g(s)-g(-s)=-s \end{equation} or, equivalently, \begin{equation}\label{I-2-GC4} {\rm P}(-s)={\rm e}^{-t s}{\rm P}(s) \end{equation} for $t\rightarrow\infty$. This last equation is called the 'fluctuation theorem'. It was first observed by Evans, Cohen and Morriss in [17], then proven by Evans and Searles in [18], and later led to Gallavotti and Cohen's formulation of their symmetry. The theorem means that a negative entropy production rate is much less probable than its positive counterpart, but not impossible. This does not, as it might seem, contradict the second law of thermodynamics, which is expressed only for the average of $s$. It in fact validates it, since it implies that the mean value of $s$ must be positive. We may finally note that, for equilibrium systems, where the detailed balance condition imposes that \begin{equation}\label{I-2-DB} P^\star({\cal C})w({\cal C}',{\cal C})=P^\star({\cal C}')w({\cal C},{\cal C}') \end{equation} for any two configurations ${\cal C}$ and ${\cal C}'$, the deformed Markov matrix turns out to be similar to the un-deformed one, so that they have the same eigenvalues. Consequently, the generating function of the cumulants of the entropy production rate is identically zero, and its large deviation function is a delta function: \begin{equation}\label{I-2-Eeq} E(\nu)=0~~~~{\rm and}~~~~{\rm P}(s)=\delta(s). \end{equation} There is therefore no entropy production whatsoever in the case of an equilibrium system. This, as in eq.(\ref{I-2-MPF2}), is a property of the transition rates, and not of the initial or final distributions. There can still be a conservative exchange of entropy between the initial and final configurations of a history (if their equilibrium probabilities are not equal). The asymmetric simple exclusion process In this section, we are introduced to the Asymmetric Simple Exclusion Process. After giving its definition, we briefly go through the existing literature related to that model, including its many variants and connections with other problems in physics, mathematics and biology. We then examine the steady state of the model, first through a mean-field approach and then through an exact calculation. 
Definition of the model and variants Consider a one-dimensional lattice with $L$ sites (or a row of $L$ boxes), numbered from $1$ to $L$. Each site can be empty, or carry one particle. Those particles jump stochastically from site to site, with a rate $p$ if the jump is to the right, from site $i$ to site $i+1$ (and which will be set to $p=1$ by choosing the rate of forward jumps as a time scale), and a rate $q<1$ if the jump is to the left, from site $i$ to site $i-1$. The jumping rate is larger to the right than to the left in order to mimic the action of a field driving the particles in the bulk of the system. Each end of the system is connected to a reservoir of particles, so that they may enter the system at site $1$ with rate $\alpha$ or at site $L$ with rate $\delta$, and leave it from site $1$ with rate $\gamma$ or from site $L$ with rate $\beta$. Those rates allow us to define the effective densities of the two reservoirs. In all of these operations, the only constraint that must be obeyed is that of exclusion, which is to say that there cannot be more than one particle on a given site at a given time, so that a particle cannot jump to a site that is already occupied. These rules are represented schematically on fig.-3. Figure 3 Figure 3. Dynamical rules for the ASEP with open boundaries. The rate of forward jumps has been normalised to 1. Backward jumps occur with rate $q < 1$. All other parameters are arbitrary. The jumps shown in green are allowed by the exclusion constraint. Those shown in red and crossed out are forbidden. Configurations of the system are written as strings of $0$'s and $1$'s, where $0$ indicates an empty site, and $1$ an occupied site. The Markov matrix governing that process is a sum of local jump operators, each carrying the rates of jumps over one of the bonds in the system: \begin{equation}\label{II-1-M} M=m_0+\sum\limits_{i=1}^{L-1}M_i +m_L \end{equation} with \begin{equation}\label{II-1-M2} m_0=\begin{bmatrix} -\alpha & \gamma \\ \alpha & -\gamma \end{bmatrix}~,~ M_{i}=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & -q & 1 & 0 \\ 0 & q & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}~,~m_L=\begin{bmatrix} -\delta & \beta \\ \delta & -\beta \end{bmatrix}. \end{equation} It is implied here that $m_0$ acts as written on site $1$ (and is represented in basis $\{0,1\}$ for the occupancy of the first site), and as the identity on all the other sites. Likewise, $m_L$ acts as written on site $L$, and $M_i$ on sites $i$ and $i+1$ (and is represented in basis $\{00,01,10,11\}$ for the occupancy of those two sites). Each of the non-diagonal entries represents a transition between two configurations that are one particle jump away from each other. As we mentioned earlier, because of the asymmetry of the jumps, the particles flow to the right, which results in a macroscopic current deeply connected to the non-equilibrium nature of the system. It is a special case of time-additive observable (as presented in section 2.2.2.) where $V=0$ and $U$ is taken as $1$ for the transitions where a particle jumps to the right, and $-1$ for those where one jumps to the left. We will see in section 4. that it is strongly related to the entropy production (they are in fact equal, up to a constant), and we will be referring to probabilities conditioned on the current as the s-ensemble as well. All the results presented in this review are related to the characterisation of that current and its fluctuations. Variants of the ASEP There are a few simpler cases that one can consider. 
The first is to force the particles to jump only to the right, by taking $q=\gamma=\delta=0$. In this case, the model is called the totally asymmetric simple exclusion process (or TASEP), and we will often use it in our calculations, as its behaviour is identical to that of the ASEP for all intents and purposes, but much easier to deal with. The second is the opposite limit, where the jumps are as probable to the left as they are to the right: $q=1$. This case is called the symmetric simple exclusion process (or SSEP). It is an example of a 'boundary-driven diffusive systems' (as opposed to the ASEP, which is bulk-driven by the asymmetry, and is therefore not diffusive). Its behaviour is quite different from that of the ASEP, and we will not consider this limit in the present review. Sitting somewhere between the SSEP and the ASEP is the weakly asymmetric simple exclusion process (or WASEP), where the asymmetry $1-q$ is taken to scale with the size of the system as $L^{-1}$. This is done in order to make the integral of the field in the bulk, which is of order $L(1-q)$, comparable with the difference of chemical potential between the reservoirs, which is a constant with respect to $L$. The ASEP and the WASEP correspond to two different ways to take the large $L$ limit in the system: in the ASEP, no rescaling is done to the driving field, so that the large size limit corresponds to a system of increasing length, with the lattice spacing remaining constant, which is relevant to model a system which is really discrete (think for instance of ribosomes on a long string of mRNA, or any other example of discrete biological transport). In the WASEP, on the contrary, the field is rescaled as $L^{-1}$, so that the large size limit corresponds to a system of fixed length, with a smaller and smaller lattice spacing, going to a continuous system when $L$ reaches infinity. We will be using the WASEP as a starting point in section 6.. One can also consider different geometries for the model. Take for instance the ASEP with periodic boundary conditions, i.e. on a ring (fig.-4-b). In this case, the system is not connected to any reservoir, and the number of particles is conserved. This makes it somewhat easier to deal with: the steady-state distribution is uniform, and the coordinate version of the Bethe Ansatz can be used to solve it, as we will see in section 4.2.1.. The ASEP can be defined on an infinite lattice instead (c.f. lower part of fig.-4-d). In this case, there is in general no convergence to a steady state (for generic initial conditions), and the observable of choice is instead the large time behaviour of the transient regime. Finally, one can put more than one type of particles in the system, and consider the multispecies ASEP (fig.-4-c). The exchange rates must then be defined between any two different species of particles. The simplest case to consider (and the most tractable one) is that where the types of particles are numbered, from $0$ (for holes) to $K$ (for the 'fastest' particles), and where a particle of type $k$ sees all lower types $k'<k$ as holes, which is to say that the rates of exchange of two particles of types $k_1<k_2$ are $1$ for $k_2 k_1\rightarrow k_1 k_2$ and $q$ for $k_1 k_2\rightarrow k_2 k_1$ (those rates are represented on fig.-4-c, where different species of particles bear different colours, and are numbered by their rank). 
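Before turning to the literature, the definition above can be made concrete with a small numerical sketch (the parameter values are arbitrary choices, and the snippet is purely illustrative): for a modest size, the Markov matrix (\ref{II-1-M}) can be built explicitly by tensoring the local operators $m_0$, $M_i$ and $m_L$ with identities, and its zero-eigenvalue eigenvector gives the steady state, from which the density profile and the average current follow.

```python
# Illustrative sketch: the open ASEP Markov matrix from the local operators of
# eq. (II-1-M), for a small system (L = 6, i.e. 2^6 = 64 configurations).
import numpy as np

L, q = 6, 0.5
alpha, beta, gamma, delta = 0.6, 0.7, 0.2, 0.1

m0 = np.array([[-alpha, gamma], [alpha, -gamma]])
mL = np.array([[-delta, beta], [delta, -beta]])
Mi = np.array([[0, 0, 0, 0],
               [0, -q, 1, 0],
               [0, q, -1, 0],
               [0, 0, 0, 0]], dtype=float)   # basis {00, 01, 10, 11} for sites (i, i+1)

def embed(op, first_site, n_sites):
    """Tensor op, acting on n_sites consecutive sites starting at first_site, with identities."""
    left = np.eye(2 ** (first_site - 1))
    right = np.eye(2 ** (L - first_site - n_sites + 1))
    return np.kron(left, np.kron(op, right))

M = embed(m0, 1, 1) + embed(mL, L, 1)
for i in range(1, L):
    M += embed(Mi, i, 2)

# Steady state: eigenvector of M with eigenvalue 0.
evals, evecs = np.linalg.eig(M)
P = np.real(evecs[:, np.argmax(np.real(evals))])
P /= P.sum()

occ = np.array([[(conf >> (L - i)) & 1 for i in range(1, L + 1)]
                for conf in range(2 ** L)])          # occ[conf, i-1] = n_i
n_mean = P @ occ                                      # density profile <n_i>

# Mean current through each bulk bond: <n_i (1 - n_{i+1})> - q <(1 - n_i) n_{i+1}>;
# in the steady state it is the same for every bond.
J = [P @ (occ[:, i] * (1 - occ[:, i + 1]) - q * (1 - occ[:, i]) * occ[:, i + 1])
     for i in range(L - 1)]
print("density profile:", np.round(n_mean, 4))
print("bond currents  :", np.round(J, 6))
```

For such small sizes this brute-force construction (the matrix has $2^L$ rows) is a convenient check on the exact results discussed in the rest of this review.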
Brief overview of the ASEP's family tree We mentioned biological transport earlier for a good reason: the first definition of an ASEP-like model was made in 1968 in [19,20] precisely in order to study the dynamics of ribosomes on mRNA (fig.-4-a). It is still used today in that context, often with a few modifications to make it slightly more realistic, such as making the particle reservoirs finite [21] or even shared between several systems [22], changing the jumping rates from site to site [23], changing the jumping cycle by adding an inactive state for particles [24], allowing them to attach or detach in the middle of the chain [25], and so on. These are only a few recent examples, but a thorough review can be found in [26]. It has also been noticed that the ASEP is strongly related to the XXZ spin chain with spin $\frac{1}{2}$ [27]: the Markov matrix of the ASEP and the Hamiltonian of the spin chain are related through a matrix similarity. This fact goes deeper than a simple mapping between two systems: the XXZ spin chain is well known and well studied, as it has the mathematical property of being 'integrable', meaning that it can be solved exactly, for instance through the Bethe Ansatz [28], and that we can expect precise analytical results from it [29]. For that reason, many results have been obtained for the ASEP by adapting the Bethe Ansatz to its formalism [30–38], and even results that have been found by other means (such as those presented in [39]) are in fact consequences of that property. The downside of this fact, one might argue, is that those methods are only transposable to other integrable systems, but the undeniable upside is that we might have access to very precise results, which could lead to discovering universal features of non-equilibrium systems. Figure 4 Figure 4. The ASEP's family album: a) Ribosomes on mRNA, b) Periodic ASEP, c) Multispecies ASEP, d) Surface growth. Another model related to the ASEP is that of random surface growth [40] (fig.-4-d). In that model, a wall, made of square blocks (with corners pointing up and down), grows by a procedure where blocks can fall in valleys at rate $1$, or lift off of peaks at rate $q$. The relation to the ASEP is rather obvious if one replaces upward slopes (when reading from left to right) by holes, and downward slopes by particles (adding a block means replacing 'down up' by 'up down', i.e. $10$ by $01$, and removing one is the opposite operation). In this context, the situation that is usually considered is that of the infinite ASEP, with a simple initial condition, such as a given mean density to the right of site $0$ and another one to the left (the simplest one being all $1$s to the left and all $0$s to the right [41,42], as represented by the dashed line in fig.-4-d), although more general ones can be considered [43]. One of the most interesting quantities, just as for finite size, is the total current of particles that went over the bond at the centre of the system, which is equal to the number of blocks that have been added above that site, i.e. to the height of the surface (represented in green in fig.-4-d; each green block corresponds to one of the particles that crossed to the right of the system). After a first breakthrough by Johansson in [41], the fluctuations of that height were conjectured [44] and then proven [45] to be related to the famous Tracy-Widom distributions governing the eigenvalues of random matrices [46,47]. 
Some even more complex quantities have been studied, such as the general n-point correlations of the height [48]. Moreover, there is a whole class of systems, the KPZ universality class (named after Kardar, Parisi and Zhang, authors of the seminal article that started it all [40]), that are governed by the same laws, such as the directed random polymer [49,50], or the delta-Bose gas [51]. One can in particular recover the KPZ equation [52] from the WASEP, or an equivalent solid on solid model, through a particular rescaling [53], as well as some of the critical exponents through renormalisation group techniques [54]. One of the main features of the KPZ universality class is its dynamical exponent $z=\frac{3}{2}$: in the language of the exclusion process, the time it will take on average for a local perturbation of the density of particles, or a marker particle, to spread over a region of size $L$ scales as $t\sim L^{3/2}$, as opposed to $t\sim L^2$ for a diffusion and $t\sim L$ for a ballistic system. That property relates to the relaxation of the system at long but finite times, which we will not consider in the present work (as we will be looking at steady states, i.e. at infinite times), but we will observe the appearance of that exponent in certain places, such as in equation (\ref{IV-1-EHDMC2}). Experimental evidence of the relevance of that model has been obtained in a liquid crystal undergoing a phase transition [55]. This subject has generated many more works than could be summed up here, and the reader can find more information in reviews such as [56–59]. The ASEP can be related to many more models and mathematical objects, such as chains of quantum dots [60], alternating sign matrices [61,62] (through its connection to the XXZ chain), continued fractions [63], Brownian excursions [64–66], Askey-Wilson polynomials [67–69], and a large family of combinatorial objects which all have a connection to Catalan numbers [70]. Earlier results All these interesting connections notwithstanding, the ASEP is a very popular model in itself [71–73] (it has even been referred to as the Ising model of non-equilibrium systems [74]), and has been the subject of a tremendous number of works. The SSEP, for one, has established itself as an archetype of diffusive systems with interactions, for which many universal results have been found, such as the cumulants of the current in a periodic system [75] or an open one [76]. Those results all have to do with the so-called 'macroscopic fluctuation theory' (or MFT) [77–80], developed to deal with the fluctuations of diffusive systems through a hydrodynamic approach [81]. As for results more specific to the SSEP or the WASEP, the large deviation functional of the density profiles was expressed in [82], leading to the joint large deviations functional for the current and the density [83] which we will be using in section 6.. The cumulants of the current for the open SSEP were found in [84], and were observed to depend on a single variable and not on the two boundary densities independently. This lead to the discovery of a surprising symmetry connecting the non-equilibrium SSEP (with different reservoir densities) to a system at equilibrium [85–87]. The full cumulants of the current for the periodic WASEP were found in [88]. In that case, as was found in [89] and further analysed in [90,91], the system undergoes a phase transition in the s-ensemble where, for a low enough current, the optimal density profiles become time-dependent. 
A similar transition can be found for the activity (the sum of jumps, regardless of direction) in the SSEP [92], or even for combinations of current and activity [93]. Recently, the large deviations of the position of a tracer in one-dimensional single-file diffusion (a continuous equivalent of the SSEP) were obtained using the MFT [94,95]. See [73] for a review of some of these results. The periodic ASEP, with its fixed number of particles and its trivial steady state (all the configurations are equally probable, as long as they have the correct number of particles), has mostly been studied for the fluctuations of the current. The full generating function of those was found for the TASEP in 1998 [96,97], and although the second cumulant for the ASEP was found prior to that [98], the complete generating function was only obtained more than 10 years later [30–33]. Some other results were obtained for the periodic TASEP, such as the gap (i.e. the characteristic time of the transient regime) [100,99], and, very recently, the whole distribution of the spectrum of the Markov matrix and the behaviour of the low excitations [101,102]. The s-ensemble was also investigated, for the limit of very large currents, and the probabilities of the configurations were found to be those of a Dyson-Gaudin gas (the discrete analogue of a Coulomb gas) [103]. We will come back to that last observation in section 5.2.. The open ASEP is richer than the periodic case, but much harder to handle. The structure of the steady state itself is quite intricate: it was first found in [104] for the TASEP thanks to some surprising recurrence relations between the weights of the configurations for successive sizes. It was then generalised to the ASEP by expressing those relations in algebraic form [39], giving birth to the 'matrix Ansatz', which we will present in section 3.2.2.. Depending on the values of the two reservoir densities, the system can find itself in three different phases, which was discovered for the TASEP in [105] as an interesting feature of non-equilibrium systems (since, for equilibrium systems with short range interactions, transitions cannot be induced by boundaries). This phase diagram was refined in [106] where sub-phases were found with different correlation lengths. Those results were extended to the ASEP in [107,68] (part of which we present in section 3.2.2.). The 2-point correlation function [108] and then the complete n-point function [64] were calculated for the TASEP, for some values of the boundary densities, and the same was later done for the ASEP in [109,69]. Most of these results rely on the matrix Ansatz, and a review of those results and methods can be found in [110]. See also [72] for a review of various results for the steady state of the open ASEP. More recently, a matrix Ansatz was found for the steady state of the two-species open ASEP [111]. Other properties of the steady state were analysed, such as the static density, current and activity distributions [112,113], the large deviation function of the density profiles [114], or the reverse bias regime (where the boundaries impose a current flowing to the left) [115,116].
A hydrodynamic description, named 'domain wall theory' (or DWT) [117–119,81,120], where states of the system are approximated by regions of constant density separated by discontinuities called shocks, was proposed to describe the large scale dynamics of the system, even in the transient regime, and the hydrodynamic quasi-potential was obtained along with the optimal relaxation pathways in [121], but no full equivalent of the MFT has yet been devised. One of the main reasons why the open ASEP is more difficult to study than its circular sibling is that the Bethe Ansatz cannot be used as easily in this case. The coordinate version of the Ansatz which we presented in section 3.2.1., where the particles are treated as plane waves, relies on the number of particles being fixed, and breaks down in the open case. Variants of the coordinate Bethe Ansatz have been used successfully to build excited eigenstates of the system for some special cases of the boundary parameters [34,35,38] (in particular with triangular boundary matrices), and, in conjunction with numerical analysis, to find the relaxation speed of the system (i.e. the gap of the Markov matrix) [36,37,122], as well as the asymptotic large deviation function of the current inside the Gaussian phases [123]. A generalisation of the matrix Ansatz was used in [124] to calculate the second cumulant of the current in the totally asymmetric case. In [125], an alternative to the Bethe Ansatz, named the 'Q-operator' method, is used to obtain an expression for the complete generating function of the cumulants of the current of the open ASEP, with the exact same structure as for the periodic case [32], proving a conjecture previously made in [126]. We will give a brief account of that method in section 4.2.2.. Many variants of the ASEP have also been studied. A matrix Ansatz, akin to the one we mentioned before, was found for the steady state of the periodic multispecies ASEP [127,128]. The case of a single defect particle was analysed, for itself [129–131] or used as a way to mark the position of a shock [132,133]. Different update procedures were considered and compared for the discrete time case [134,135]. A system with two interacting chains was studied in [136]. The ASEP was also considered with entry and exit of particles in the bulk of the system [137], disordered [138] or smoothly varying [139] jumping rates, a single slow bond [140,141], repulsive nearest-neighbour interactions [142], or on a two-dimensional grid [143,144]. Finally, on the numerical front, the ASEP has been used to develop and test numerical algorithms aimed at producing and analysing rare events, such as a variant of the 'density matrix renormalisation group' (DMRG) algorithm, first in [145] and later in [146,147], as well as numerical implementations of the MFT [148–150], and the so-called 'cloning algorithm' [151–154].

Steady state of the open ASEP

Before we look into the fluctuations of the current in the ASEP, let us first see what can be said of the average current and of the steady state of the model. We will start with a simple mean field approach, which gives good results in the large size limit, and we will then present the exact expressions of these quantities, in terms of the so-called 'matrix Ansatz' [39].
Mean field
As we recall, the master equation reads: \begin{equation}\label{II-2-MP} \frac{d}{dt}|P_{t}\rangle=M|P_{t}\rangle \end{equation} where $M$ is the Markov matrix of the open ASEP: \begin{equation}\label{II-2-M} M=m_0+\sum\limits_{i=1}^{L-1}M_i +m_L \end{equation} with \begin{equation}\label{II-2-M2} m_0=\begin{bmatrix} -\alpha & \gamma \\ \alpha & -\gamma \end{bmatrix}~,~ M_{i}=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & -q & 1 & 0 \\ 0 & q & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}~,~m_L=\begin{bmatrix} -\delta & \beta \\ \delta & -\beta \end{bmatrix}. \end{equation} We shall write the configurations of the system as ${\cal C}=\{n_i\}_{i:1..L}$, where $n_i\in\{0,1\}$ is the occupancy of site $i$. If we trace equation (\ref{II-2-MP}) over all $n_j$'s except for one at site $i$ which is taken to be $1$ (which means projecting it onto $\langle 1|\delta_{n_i,1}$), we get an equation for the time evolution of the mean density at that site: \begin{equation}\label{II-1-nMP} \frac{d}{dt}\langle n_i\rangle=\langle 1|\delta_{n_i,1}M|P_{t}\rangle. \end{equation} The term $\delta_{n_i,1}$ only affects two matrices from the sum in eq.(\ref{II-2-M}), and each of the matrices in that sum is a Markov matrix in its own right (so that $\langle 1|$ annihilates it from the left), so that only a few terms remain in the right-hand side. A straightforward calculation yields the following expression: \begin{equation}\label{II-1-dn}\boxed{ \frac{d}{dt}\langle n_i\rangle=J_{i-1}-J_i} \end{equation} with \begin{align} J_0&=\alpha\langle(1-n_1)\rangle-\gamma\langle n_1\rangle,\label{II-1-Ji1}\\ J_i&=\langle n_{i-1}(1-n_i)\rangle-q\langle(1-n_{i-1})n_i\rangle,\label{II-1-Ji2}\\ J_L&=\beta\langle n_L\rangle-\delta\langle(1-n_L)\rangle.\label{II-1-Ji3} \end{align} Equation (\ref{II-1-dn}) shows, as expected, that matter is conserved in the system: the variation of density at a site is equal to the current coming from the left minus the current leaving to the right. No approximation has been made so far, but we cannot solve these equations as they are: each current involves two-point correlations, so that the evolution of the densities $\langle n_i\rangle$ is not autonomous. In principle, one would then have to compute, in a similar way, quantities such as $\frac{d}{dt}\langle n_in_{i+1}\rangle$, which would involve three-point correlations, and so on until one reaches the final $L$-point functions [104,106]. We will not be so ambitious, and will instead make a crude approximation which will allow us to solve everything directly: we will assume that all the local densities are uncorrelated, which is to say $\langle n_i n_{j}\rangle\sim\langle n_i\rangle\langle n_j\rangle$. The resulting system of equations is autonomous. We are interested in the steady state, for which all time derivatives are zero, so that eq.(\ref{II-1-dn}) imposes that all the currents $J_i$ are equal (in one dimension, a divergence-free current is necessarily uniform). Equations (\ref{II-1-Ji1} - \ref{II-1-Ji3}) then become \begin{align} J&=\alpha(1-\langle n_1\rangle)-\gamma\langle n_1\rangle\label{II-1-Jmf1}\\ &=\langle n_{i-1}\rangle(1-\langle n_i\rangle)-q\langle n_i\rangle(1-\langle n_{i-1}\rangle)\label{II-1-Jmf2}\\ &=\beta\langle n_L\rangle-\delta(1-\langle n_L\rangle)\label{II-1-Jmf3}. \end{align} The second of these equations gives a recursion relation between $\langle n_i\rangle$ and $\langle n_{i-1}\rangle$, which can then be used $L-1$ times to express $\langle n_L\rangle$ as a function of $\langle n_1\rangle$.
Then, the first and last equations can be used to get a single equation on $J$, which fixes its value as a function of all the parameters of the system ($L$, $q$ and the four boundary rates). These calculations can be found in [104] for the TASEP. We are only interested in the large size limit, so we will instead use a method similar to that of [105]. Writing $\langle n_i\rangle=\rho(\frac{i-1/2}{L})$, and taking $L$ to infinity, equation (\ref{II-1-Jmf2}) becomes \begin{equation}\label{II-1-Jx}\boxed{ J=(1-q)\rho(1-\rho)-\frac{1+q}{2L}\nabla\rho} \end{equation} with \begin{equation} \frac{d}{dt}\rho(x)=-\nabla J=0. \end{equation} Note that we have rescaled time in this last equation, by a factor $L$, and that we are keeping the gradient in eq.(\ref{II-1-Jx}) even though its pre-factor goes to zero, as this will give us more information on the shape of the density profile for large sizes, and is in fact necessary to find the correct steady state. There remains the problem of finding the correct boundary conditions from eqs.(\ref{II-1-Jmf1}) and (\ref{II-1-Jmf3}), assuming that only the boundary rates from each side influence the corresponding boundary condition. This can be done by considering the situation where the steady state is homogeneous: $\rho(x)=\rho$. Equations (\ref{II-1-Jmf1}) and (\ref{II-1-Jmf2}) then give \begin{equation} \alpha(1-\rho)-\gamma\rho=(1-q)\rho(1-\rho) \end{equation} which is solved by $\rho=\rho_a=\frac{1}{1+a}$, with \begin{equation}\label{II-1-a2} a=\frac{1}{2\alpha}\Bigl[(1-q-\alpha+\gamma)+\sqrt{(1-q-\alpha+\gamma)^2+4\alpha\gamma}\Bigr]. \end{equation} Doing the same at the right boundary, we get a density $\rho_b=\frac{b}{1+b}$ with: \begin{equation}\label{II-1-b2} b=\frac{1}{2\beta}\Bigl[(1-q-\beta+\delta)+\sqrt{(1-q-\beta+\delta)^2+4\beta\delta}\Bigr]. \end{equation} Those two densities $\rho_a$ and $\rho_b$ can be considered as the effective densities of the reservoirs to which the system is connected. We will take them as boundary conditions for $\rho$, namely $\rho(0)=\rho_a$ and $\rho(1)=\rho_b$, in all future calculations. Now that the equation and the boundary conditions are set, it is time to solve them. In principle, the standard way to do this would be to propagate the left boundary condition $\rho_a$ to the right end of the system through eq.(\ref{II-1-Jx}), keeping $J$ as an unknown, and identifying the result with $\rho_b$, thus obtaining an equation for $J$ in terms of the boundary densities. Plugging this back into eq.(\ref{II-1-Jx}) and solving it then yields $\rho(x)$. It is in fact much simpler to fix $J$ instead, to find all the density profiles compatible with that value, and to determine which boundary conditions are appropriate only at the end. Looking at equation (\ref{II-1-Jx}), we see that the sign of $\nabla\rho$ depends on the difference between $J$ and $(1-q)\rho(1-\rho)$. Moreover, the gradient of $\rho$ needs to be of order $L$ to compensate for any finite difference between these terms. We can therefore argue that $J$ cannot be larger than $\frac{1-q}{4}$ (which is the maximal value taken by $(1-q)\rho(1-\rho)$) or smaller than $0$, otherwise $|\nabla\rho|$ would be larger than some constant of order $L$, and $\rho$ would diverge. This means that there is a density $\rho_c\leq 1/2$ such that $J=(1-q)\rho_c(1-\rho_c)$.
We then have (fig.-5): \begin{align} &\nabla\rho<0~~~~{\rm for}~~~~\rho<\rho_c,\\ &\nabla\rho>0~~~~{\rm for}~~~~\rho_c<\rho<(1-\rho_c),\\ &\nabla\rho<0~~~~{\rm for}~~~~(1-\rho_c)<\rho, \end{align} which is to say that, as $x$ increases, $\rho$ moves away from $\rho_c$ and towards $(1-\rho_c)$.
Figure 5. Variations of $\rho$ depending on its position with respect to $\rho_c$ and $1-\rho_c$. For $x$ increasing, $(1-\rho_c)$ is an attractive fixed point, and $\rho_c$ is repulsive.
Moreover, it is straightforward to check that $\rho(x)$ is a hyperbolic tangent between $\rho_c$ and $1-\rho_c$, and approximately an exponential otherwise, with a scale $\frac{1}{L}$. On fig.-6, we draw some of the possible profiles for a certain value of $J<\frac{1-q}{4}$.
Figure 6. Possible density profiles for a given $\rho_c$. All the profiles with $\rho_a$ in the red region converge to $\rho_b=1-\rho_c$. All the profiles with $\rho_b$ in the orange region come from $\rho_a=\rho_c$.
Since $\rho_c$ is repulsive and $1-\rho_c$ attractive, there are in fact only a few possibilities:
- $\rho_a=\rho_c$ and $\rho_b<1-\rho_c$, so that $J=(1-q)\rho_a(1-\rho_a)$; this is represented in orange on fig.-6, and requires $\rho_a<\frac{1}{2}$ and $\rho_a<1-\rho_b$; this is called the Low Density phase: the left boundary imposes its (low) density on the whole system, apart from an exponential boundary layer at the right boundary.
- $\rho_b=1-\rho_c$ and $\rho_a>\rho_c$, so that $J=(1-q)\rho_b(1-\rho_b)$; this is represented in red on fig.-6, and requires $\rho_b>\frac{1}{2}$ and $\rho_a>1-\rho_b$; this is called the High Density phase: the right boundary imposes its (high) density on the whole system, apart from an exponential boundary layer at the left boundary; all this can be obtained from the Low Density phase through a left$\leftrightarrow$right and particle$\leftrightarrow$hole symmetry.
- $\rho_a=\rho_c$ and $\rho_b=1-\rho_c$, so that $J=(1-q)\rho_a(1-\rho_a)=(1-q)\rho_b(1-\rho_b)$; this is represented in green on fig.-6, and requires $\rho_a=1-\rho_b<\frac{1}{2}$; this is called the Shock Line, since the domain wall going from $\rho_c$ to $1-\rho_c$ is a shock, which can be positioned anywhere in the system; the steady state is in fact a superposition of all possible shock positions, and the average density is linear from $\rho_a$ to $\rho_b$.
- The only situation which is not accounted for by these three cases is $\rho_a>\frac{1}{2}$ and $\rho_b<\frac{1}{2}$; since $\rho$ cannot decrease between $\rho_c$ and $1-\rho_c$, this requires $\rho_c=1-\rho_c=\frac{1}{2}$, so that $J=\frac{1-q}{4}$; this being the largest current allowed, this is called the Maximal Current phase; the density is close to $\frac{1}{2}$ in the whole system, apart from algebraic boundary layers on both sides.
We can summarise this by drawing the phase diagram of the system (fig.-7). The transitions between the MC phase and the HD and LD phases are continuous in both the current and the density profiles. The transition across the SL, however, is discontinuous in the profiles (the mean density goes from $\rho_c$ to $1-\rho_c$), but still continuous in the current.
Figure 7. Phase diagram of the open ASEP. The values of the mean current in each phase are given in blue, and the mean density profiles are represented in the insets.
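This phase diagram can be checked against brute-force numerics on small systems, where the steady state is simply the kernel of the full Markov matrix. The following sketch (illustrative rates, hypothetical helper names) builds $M$ for a small $L$, extracts the stationary current from eq.(\ref{II-1-Ji1}), and compares it with the large-$L$ phase-diagram value derived above.
\begin{verbatim}
import numpy as np
from itertools import product

# Brute-force steady state of a small open ASEP (illustrative rates).
L, q = 8, 0.5
alpha, gamma, beta, delta = 0.8, 0.2, 0.7, 0.1

configs = list(product([0, 1], repeat=L))
index = {c: k for k, c in enumerate(configs)}
M = np.zeros((2**L, 2**L))

def add(c_from, c_to, rate):
    """Add a transition c_from -> c_to with the given rate to the Markov matrix."""
    i, j = index[c_to], index[c_from]
    M[i, j] += rate
    M[j, j] -= rate

for c in configs:
    n = list(c)
    if n[0] == 0:  add(c, tuple([1] + n[1:]), alpha)      # injection at site 1
    else:          add(c, tuple([0] + n[1:]), gamma)      # extraction at site 1
    if n[-1] == 1: add(c, tuple(n[:-1] + [0]), beta)      # extraction at site L
    else:          add(c, tuple(n[:-1] + [1]), delta)     # injection at site L
    for k in range(L - 1):                                # bulk hops
        if n[k] == 1 and n[k + 1] == 0:
            m = n.copy(); m[k], m[k + 1] = 0, 1; add(c, tuple(m), 1.0)
        if n[k] == 0 and n[k + 1] == 1:
            m = n.copy(); m[k], m[k + 1] = 1, 0; add(c, tuple(m), q)

w, v = np.linalg.eig(M)                  # steady state = eigenvector of eigenvalue 0
P = np.real(v[:, np.argmin(np.abs(w))]); P /= P.sum()

n1 = sum(P[index[c]] for c in configs if c[0] == 1)
print("exact J at L =", L, ":", alpha * (1 - n1) - gamma * n1)

# Mean-field / large-L value from the phase diagram:
a = ((1 - q - alpha + gamma) + np.sqrt((1 - q - alpha + gamma)**2 + 4*alpha*gamma)) / (2*alpha)
b = ((1 - q - beta + delta) + np.sqrt((1 - q - beta + delta)**2 + 4*beta*delta)) / (2*beta)
rho_a, rho_b = 1 / (1 + a), b / (1 + b)
if rho_a < 0.5 and rho_a < 1 - rho_b:   J_inf = (1 - q) * rho_a * (1 - rho_a)   # LD
elif rho_b > 0.5 and rho_b > 1 - rho_a: J_inf = (1 - q) * rho_b * (1 - rho_b)   # HD
else:                                   J_inf = (1 - q) / 4                     # MC (or SL)
print("large-L phase-diagram value:", J_inf)
\end{verbatim}
For such small sizes the two numbers differ by finite-size corrections, which shrink as $L$ grows (slowly near the phase boundaries).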
Matrix Ansatz
In this section, we present the exact form of the steady state which we approximated in the previous one. It is formulated in terms of the famous matrix Ansatz, devised by Derrida, Evans, Hakim and Pasquier in [39], using the recursion relations found by Derrida, Domany and Mukamel in [104] for the TASEP. Since the techniques involved here are rather specific to the ASEP, as they are related to it being integrable (although matrix Ansätze have been used for models which are not known to be integrable, such as the ABC model [155]), we will be much less precise here, and will merely give the main results as well as a few indications on the methods appropriate to approach them. For more details, one may refer to [39] or to section II.2.1 of [1]. The statement of the matrix Ansatz is the following: for any configuration $\mathcal{C}=\{n_i\}$, with $n_i\in \{0,1\}$, the steady state probability $P^\star({\mathcal C})$ can be written in terms of a product of $L$ matrices which may take two values $D$ and $E$, sandwiched between two vectors $\langle\!\langle W|\!|$ and $|\!|V\rangle\!\rangle$, up to a normalisation (those vectors are written using double bras and kets to distinguish the internal space of $D$ and $E$ from the physical space on which $M$ acts). The product of matrices corresponding to $\mathcal{C}$ is obtained by multiplying matrices $D$ for each particle, and $E$ for each hole, in the same order as they appear in the configuration. In other terms: \begin{equation}\label{II-2-Pstar} P^\star({\mathcal C}) = \frac{1}{Z_L}\langle\!\langle W|\!|\prod_{i=1}^L\left(n_i D+(1-n_i)E\right)|\!|V\rangle\!\rangle \end{equation} with the normalisation factor equal to the sum of all possible products, which is to say \begin{equation} Z_L=\langle\!\langle W|\!|(D+E)^L|\!|V\rangle\!\rangle \end{equation} so that $\sum\limits_{{\cal C}}P^\star({\mathcal C})=1$. For instance, the stationary probability of configuration $101101$ for a system of size $6$ is given by $\frac{\langle\!\langle W|\!|DEDDED|\!|V\rangle\!\rangle}{\langle\!\langle W|\!|(D+E)^6|\!|V\rangle\!\rangle}$. These matrices and vectors are of course not arbitrary, but must verify the following conditions: \begin{empheq}[box=\fbox]{align} \langle\!\langle W|\!|(\alpha E-\gamma D) &= (1-q)\langle\!\langle W|\!|,\\ DE-q\,ED &= (1-q)\left(D+E\right),\\ (\beta D-\delta E)|\!|V\rangle\!\rangle &= (1-q)|\!|V\rangle\!\rangle. \end{empheq} Given these, it is rather straightforward to check that $M|P^\star\rangle=0$. We can obtain the stationary current from any of the equations (\ref{II-1-Ji1} - \ref{II-1-Ji3}), in combination with the corresponding relation of the algebra above. For instance, equation (\ref{II-1-Ji2}) becomes \begin{equation} J= \frac{1}{Z_L}\langle\!\langle W|\!| (D+E)^{i-2} DE (D+E)^{L-i} |\!|V\rangle\!\rangle-q\frac{1}{Z_L}\langle\!\langle W|\!| (D+E)^{i-2} ED(D+E)^{L-i} |\!|V\rangle\!\rangle=(1-q)\frac{Z_{L-1}}{Z_L} \end{equation} using the bulk relation $DE-q\,ED=(1-q)(D+E)$ to simplify the expression. In all cases, we obtain \begin{equation}\label{II-2-J}\boxed{ J=(1-q)\frac{Z_{L-1}}{Z_L}} \end{equation} and we are left having only to calculate $Z_L$ for any $L$. This calculation was first done in [68], while the simpler equivalent for the TASEP can be found in [39]. Since $Z_L$ is a projection of $(D+E)^L$, the natural procedure is to diagonalise $(D+E)$. Defining two new matrices $d$ and $e$ such that $D=1+d$ and $E=1+e$, the bulk algebra becomes \begin{equation} de-q~ed=(1-q) \end{equation} which is that of a $q$-deformed harmonic oscillator [156], where $d$ is the annihilation operator, and $e$ the creation operator.
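As an aside, before completing the computation of $Z_L$: the simplest consistency check of these relations is the TASEP ($q=\gamma=\delta=0$) on the line $\alpha+\beta=1$, where they admit one-dimensional (scalar) solutions. Indeed, $DE=D+E$ is solved by the scalars $D=1/\beta$, $E=1/\alpha$ precisely when $\alpha+\beta=1$, and the boundary relations reduce to $\alpha E=1$ and $\beta D=1$, so the Ansatz then predicts a product (Bernoulli) steady state of density $\alpha$. The sketch below (illustrative values, not from the references) compares this prediction with the numerically exact steady state of a small system.
\begin{verbatim}
import numpy as np
from itertools import product

# Scalar-representation check of the matrix Ansatz: TASEP with alpha + beta = 1.
L, alpha, beta = 6, 0.3, 0.7     # illustrative values on the line alpha + beta = 1
D, E = 1 / beta, 1 / alpha       # one-dimensional solution of the algebra above

configs = list(product([0, 1], repeat=L))
index = {c: k for k, c in enumerate(configs)}
M = np.zeros((2**L, 2**L))

def add(c_from, c_to, rate):
    i, j = index[c_to], index[c_from]
    M[i, j] += rate; M[j, j] -= rate

for c in configs:
    n = list(c)
    if n[0] == 0:  add(c, tuple([1] + n[1:]), alpha)          # injection
    if n[-1] == 1: add(c, tuple(n[:-1] + [0]), beta)          # extraction
    for k in range(L - 1):                                    # TASEP bulk hops
        if n[k] == 1 and n[k + 1] == 0:
            m = n.copy(); m[k], m[k + 1] = 0, 1; add(c, tuple(m), 1.0)

w, v = np.linalg.eig(M)
P = np.real(v[:, np.argmin(np.abs(w))]); P /= P.sum()

# Matrix-Ansatz weights with the scalar representation (product measure).
Pma = np.array([np.prod([D if ni else E for ni in c]) for c in configs], dtype=float)
Pma /= Pma.sum()
print("max |P_exact - P_Ansatz| =", np.max(np.abs(P - Pma)))  # should be machine precision
\end{verbatim}
We now return to the diagonalisation of $(D+E)$ in the general case.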
The eigenstates of $(D+E)$ can then be found to be $q$-deformed coherent states, with a complex parameter $z$ of unit modulus. One can then insert the corresponding resolution of the identity into $Z_L$, and obtain, after a few lines of calculation, \begin{equation}\label{II-2-ZLint}\boxed{ Z_L= \frac{1}{2}\oint_{S} \frac{dz}{2 i \pi z}\frac{(1+z)^L(1+z^{-1})^L(z^2,z^{-2})_{\infty}}{ (a z,a/z,\tilde{a}z,\tilde{a}/z,b z,b/z,\tilde{b}z,\tilde{b}/z)_{\infty}},} \end{equation} where $(\cdot)_\infty$ is the $q$-Pochhammer symbol \begin{equation} (x)_{\infty}=\prod_{k=0}^{\infty}(1-q^k x) \end{equation} with the notation convention $(x,y)_{\infty}=(x)_{\infty}(y)_{\infty}$. The two parameters $\tilde{a}$ and $\tilde{b}$ are defined similarly to $a$ and $b$, but with a minus sign in front of the square roots, and have absolutely no importance in any of the calculations that we will do. For $a<1$ and $b<1$, the domain of integration is the unit circle, which is to say that we take the residues at all the poles of the integrand which are in $S=\{0;aq^k,\tilde{a}q^k,bq^k,\tilde{b}q^k\}_{k\in\mathbb{N}}$, and not at the other ones (which are their inverses). Since $Z_L$ is analytic in all the parameters, the poles of the integrand at which we have to take a residue are always the same, even if one of them leaves the unit circle. For that reason, the integral in (\ref{II-2-ZLint}) must be done around $S$ rather than the unit circle. We may rapidly remark on the origin of each term in that expression: $(1+z)^L(1+z^{-1})^L$ is the eigenvalue of $(D+E)$, to the power $L$; $(z^2,z^{-2})_{\infty}$ comes from the normalisation of the eigenvectors of $(D+E)$; $(a z,a/z,\tilde{a}z,\tilde{a}/z)_{\infty}$ and $(b z,b/z,\tilde{b}z,\tilde{b}/z)_{\infty}$ come, respectively, from the scalar product between $\langle\!\langle W|\!|$ and the right eigenvector of $(D+E)$, and between $|\!|V\rangle\!\rangle$ and the left eigenvector of $(D+E)$. We can finally take $L$ to infinity in $Z_L$, to obtain the average current for large sizes. First of all, since the contour integral is done around an infinite number of poles, let us simplify the contour: for any finite value of $a$ and $b$, $S$ contains all the poles inside the unit circle, plus a few poles outside of it (those for which $aq^k>1$ or $bq^k>1$), and excludes the inverses of those poles. Because of the symmetry of the integrand, the residues at the poles inside the unit circle which must not be taken are the opposites of the residues at the corresponding poles outside the unit circle which must be taken, so that, all in all, the integral can be written as an integral around the unit circle, plus twice the residues at every pole of the form $aq^k>1$ or $bq^k>1$ outside of the circle. The poles related to $\tilde{a}$ and $\tilde{b}$ always stay inside of the unit circle, which is why they do not matter. This is summarised on fig.-8.
Figure 8. Only the pole at $0$ and those related to $a$ are represented; the dashed line in the first figure is part of the unit circle; the three sets of contours are equivalent.
To find the large $L$ behaviour of the integral, we need to note that, be it on the unit circle, or on the line $[1,+\infty[$ on which all the poles sit, the dominating part of the integrand is $(1+z)^L(1+z^{-1})^L$, which is largest on the real axis and increases with $z$. If any of the poles in $S$ are outside of the unit circle, the one which is the furthest from $1$ dominates, so that $Z_L\sim \max[(1+a)^L(1+a^{-1})^L,(1+b)^L(1+b^{-1})^L]$.
Otherwise, the integral on the unit circle is dominated by the value at $z=1$ and $Z_L\sim 4^L$. It is left to the reader, as an exercise, to replace these expressions in eq.(\ref{II-2-J}) and recover the phase diagram of $J$. Using similar techniques, one can compute the average density, as well as correlation functions [108,64,109], and validate the mean field calculations performed earlier. Note that we will treat contour integrals of the kind we just encountered in more detail in the next section, where we will see that all the cumulants of the current involve integrals of powers of the very same integrand.
Exact cumulants and large size limit
In the previous section, we saw what could be said of the average current in the open ASEP. In this section, we use the methods presented in 2.2. to extend our analysis to the fluctuations of the current as well, through the computation of the generating function of its cumulants, which, as we saw, is the largest eigenvalue of a deformed Markov matrix. We will first construct that matrix, and remark on some of its symmetries, which will help simplify the problem. We will then see how to obtain exact expressions for its largest eigenvalue, using techniques relying on the integrability of the model: this will be done for the periodic TASEP, using the coordinate Bethe Ansatz [96], and for the open ASEP, using the Q-operator method [125], at only a moderate level of detail, as these methods are specific to integrable models such as the ASEP and do not carry over to generic driven lattice gases. Once these exact expressions have been obtained, we will see how we can take the limit of large sizes and obtain the behaviour of the large deviation function of the current for small fluctuations.
Deformed Markov matrix for the current
The very first thing we need to define is precisely which current we intend to analyse. As we saw in section 3.2., we can define a current $j_i$ for each bond in the system. Their averages $J_i$ need to be equal in the steady state, but we can in principle choose which current to monitor, take any linear combination of them, or even keep them all as independent variables. Each current defines a time-additive jump observable with $U(\mathcal{C}',\mathcal{C})=\pm 1$, in the notations of eq.(\ref{I-2-F}), if the transition increases or decreases that specific current, or $0$ if the jump is at a different location. We can then deform our Markov matrix with respect to all these currents, each with a conjugate parameter $\mu_i$ (fig.-9). We obtain the following matrix \begin{equation}\label{III-1-Mmu} M_{\{\!\mu_i\!\}}=m_0(\mu_0)+\sum_{i=1}^{L-1} M_{i}(\mu_i)+m_L(\mu_L) \end{equation} with \begin{equation}\label{III-1-Mmui} m_0(\mu_0)=\begin{bmatrix} -\alpha & \gamma{\rm e}^{-\mu_0} \\ \alpha{\rm e}^{\mu_0} & -\gamma \end{bmatrix}~,~ M_{i}(\mu_i)=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & -q & {\rm e}^{\mu_i} & 0 \\ 0 & q{\rm e}^{-\mu_i}& -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}~,~m_L(\mu_L)=\begin{bmatrix} -\delta & \beta{\rm e}^{\mu_L} \\ \delta{\rm e}^{-\mu_L} & -\beta \end{bmatrix} \end{equation} (where, as before, it is implied that $m_0$ acts as written on site $1$ in the basis $\{0,1\}$ and as the identity on the other sites, and the same goes for $m_L$ on site $L$; similarly, $M_i$ is expressed by its action on sites $i$ and $i\!+\!1$ in the basis $\{00,01,10,11\}$ and acts as the identity on the rest of the system).
The largest eigenvalue $E({\{\!\mu_i\!\}})$ of that matrix is the joint generating function of the cumulants of all the local currents, and the left and right eigenvectors carry the probabilities of configurations conditioned on the values of the integrated currents going to or coming from the steady state (as explained in section 2.). Before proceeding to analyse that deformed Markov matrix, we should remark on a rather useful symmetry. If one considers the diagonal matrix $R_i(\lambda)$ (with $1\leq i\leq L$) with an entry ${\rm e}^{\lambda}$ for all configurations for which site $i$ is occupied, and $1$ otherwise, one may easily check that the matrix similarity $R_i(\lambda)^{-1}M_{\{\mu_i\}}R_i(\lambda)$ simply replaces $M_{i-1}(\mu_{i-1})$ and $M_{i}(\mu_{i})$ by, respectively, $M_{i-1}(\mu_{i-1}-\lambda)$ and $M_{i}(\mu_{i}+\lambda)$, and leaves the rest of $M_{\{\mu_i\}}$ unchanged. That is to say that part of the deformation is transferred from $M_{i-1}(\mu_{i-1})$ to $M_{i}(\mu_{i})$. Using combinations of these transformations for various sites and parameters $\lambda$, we conclude that all the Markov matrices deformed with respect to the currents are similar, and therefore have the same eigenvalues, as long as the sum of the deformation parameters $\mu=\sum_{i=0}^{L}\mu_i$ is fixed. Note that the eigenvectors, however, are different, but related to each other through those simple transformations. We will therefore write $E(\mu)$ instead of $E({\{\!\mu_i\!\}})$. Since we are mainly interested in the eigenvalues of $M_{\{\mu_i\}}$ rather than its eigenvectors, we may choose a specific combination of $\mu_i$'s to simplify our calculations. We will therefore put all the deformation on the first jump matrix $m_0(\mu)$ and leave the others un-deformed, unless specified otherwise. We will write \begin{equation}\label{III-1-M0mu} M_{\mu}=m_0(\mu)+\sum_{i=1}^{L-1} M_{i}+m_L. \end{equation} We may make one last remark on the deformed Markov matrix. There is a particular set of weights $\{\mu_i\}$ defined by \begin{equation} \{\mu_0=\nu\log{\biggl(\frac{\alpha}{\gamma}\biggr)},~~\mu_i=\nu\log{(1/q)},~~\mu_L=\nu\log{\biggl(\frac{\beta}{\delta}\biggr)} \} \end{equation} for which $M_{\{\mu_i\}}$ becomes: \begin{equation}\label{MLambda} m_0=\begin{bmatrix} -\alpha & \gamma^{1+\nu}\alpha^{-\nu} \\ \alpha^{1+\nu}\gamma^{-\nu} & -\gamma \end{bmatrix}~,~ M_{i}=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & -q & q^{-\nu} & 0 \\ 0 & q^{1+\nu}& -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}~,~m_L=\begin{bmatrix} -\delta & \beta^{1+\nu}\delta^{-\nu} \\ \delta^{1+\nu}\beta^{-\nu} & -\beta \end{bmatrix} \end{equation} which is the deformed Markov matrix measuring the entropy production. We see immediately, as before, that \begin{equation} M_{-1-\nu}= ~^t\!M_\nu \end{equation} which implies the Gallavotti-Cohen symmetry for the eigenvalues and between the left and right eigenvectors of $M_\nu$ with respect to the transformation $\nu\leftrightarrow(-1\!-\!\nu)$.
Considering that $\mu=\nu \log{\Bigl(\frac{\alpha\beta}{\gamma \delta q^{L-1}}\Bigr)}$, we also obtain the Gallavotti-Cohen symmetry related to the current, namely \begin{equation}\boxed{ E(\mu)=E\biggl(-\log{\Bigl(\frac{\alpha\beta}{\gamma \delta q^{L-1}}\Bigr)}-\mu\biggr)} \end{equation} which is also valid for the other eigenvalues of $M_\mu$, and the corresponding relations between the right and left eigenvectors, as well as a simple relation between the microscopic entropy production $s$, conjugate to $\nu$, and the macroscopic current $j$, conjugate to $\mu$: \begin{equation}\label{sj} s= j\log{\Bigl(\frac{\alpha\beta}{\gamma \delta q^{L-1}}\Bigr)}. \end{equation} There are several points to be noted here. First of all, those weights are ill-defined for the TASEP: micro-reversibility (i.e. the fact that for any allowed transition, the reverse transition is also allowed) is essential to have a fluctuation theorem. Moreover, if we take either the $q\rightarrow 0$ or the $L\rightarrow\infty$ limit, the centre of the Gallavotti-Cohen symmetry $\mu=-\frac{1}{2}\log{\Bigl(\frac{\alpha\beta}{\gamma \delta q^{L-1}}\Bigr)}$ is pushed to $-\infty$, so that the 'negative current' part of the fluctuations is lost. Finally, we may consider the detailed balance case, where $\frac{\alpha\beta}{\gamma \delta q^{L-1}}=1$. In that case, we saw in section 2.2.4. that there is no entropy production whatsoever, i.e. that $E(\nu)=0$ (where the letter $E$ is used with a slight abuse of notation, since it is not the same function as $E(\mu)$). We see, indeed, from eq.(\ref{sj}), that $s=0$. This does not mean, however, that $j=0$: the deformations through $\mu$ and $\nu$ are in that case not equivalent, and $E(\mu)\neq 0$. The only implication this has on $E(\mu)$ is that it is an even function: $E(\mu)=E(-\mu)$, all the odd cumulants are zero, and positive and negative currents of the same amplitude are equiprobable.
Exact cumulants of the current from integrability
As we have mentioned earlier, the deformed Markov matrix of which we seek to extract the largest eigenvalue is integrable: its algebraic properties make it tractable in a systematic way, at least in principle, much like a quadratic one-dimensional spin chain Hamiltonian can always be diagonalised through free fermion techniques, although diagonalising it in practice is still a difficult problem. A wide variety of methods have been developed to tackle integrable models, such as the Bethe Ansatz and its variants (coordinate [157], algebraic [28], functional [32], off-diagonal [158,159], modified [160]), the Q-operator approach [29], the q-Onsager algebra approach [161,162], the separation of variables approach [163], and more [164]. In this section, we will focus on two of these approaches, which are well adapted to our endeavour. We will first examine the coordinate Bethe Ansatz for the periodic TASEP [96], which is one of the simplest methods among the ones we mentioned, but requires the number of particles in the system to be fixed. We will then look at the Q-operator approach for the open ASEP [125], which allows us to calculate the generating function of the cumulants of the current exactly, although the analytical form of the solution has a few drawbacks, as we shall see.
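Before turning to these techniques, note that everything stated so far can be verified by brute force on a small system, since $M_\mu$ is a finite matrix whose largest eigenvalue can be computed directly. The following sketch (illustrative rates, with all four boundary rates nonzero so that the symmetry is well defined) computes $E(\mu)$ numerically and checks the boxed Gallavotti-Cohen relation.
\begin{verbatim}
import numpy as np
from itertools import product

# Brute-force E(mu) and Gallavotti-Cohen check for a small open ASEP
# (illustrative rates; the deformation is put on the entrance bond only).
L, q = 6, 0.5
alpha, gamma, beta, delta = 0.8, 0.2, 0.7, 0.1

configs = list(product([0, 1], repeat=L))
index = {c: k for k, c in enumerate(configs)}

def M_mu(mu):
    M = np.zeros((2**L, 2**L))
    def add(c_from, c_to, rate, weight=1.0):
        i, j = index[c_to], index[c_from]
        M[i, j] += rate * weight     # off-diagonal entries carry e^{+-mu}
        M[j, j] -= rate              # escape rates (diagonal) stay undeformed
    for c in configs:
        n = list(c)
        if n[0] == 0:  add(c, tuple([1] + n[1:]), alpha, np.exp(mu))
        else:          add(c, tuple([0] + n[1:]), gamma, np.exp(-mu))
        if n[-1] == 1: add(c, tuple(n[:-1] + [0]), beta)
        else:          add(c, tuple(n[:-1] + [1]), delta)
        for k in range(L - 1):
            if n[k] == 1 and n[k + 1] == 0:
                m = n.copy(); m[k], m[k + 1] = 0, 1; add(c, tuple(m), 1.0)
            if n[k] == 0 and n[k + 1] == 1:
                m = n.copy(); m[k], m[k + 1] = 1, 0; add(c, tuple(m), q)
    return M

def E(mu):
    return max(np.linalg.eigvals(M_mu(mu)).real)

mu = 0.3
mu_GC = -np.log(alpha * beta / (gamma * delta * q**(L - 1))) - mu
print("E(mu)    =", E(mu))
print("E(mu_GC) =", E(mu_GC))           # should coincide with E(mu)
print("E'(0)    ~", (E(1e-5) - E(-1e-5)) / 2e-5, " (mean current)")
\end{verbatim}
The derivative at $\mu=0$ reproduces the mean current of the previous section; what the exact methods below provide is the whole function $E(\mu)$ in closed form, for arbitrary $L$.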
Considering that these methods and calculations are rather specific to the ASEP and its being integrable, and are of little use for other bulk-driven particle gases, we will only give the minimum level of detail necessary to understand how the results are attained and why they have their peculiar structure. References will be given along the way for the readers in need of more detail.
Periodic case: coordinate Bethe Ansatz
The coordinate Bethe Ansatz is perhaps the simplest way to approach the ASEP [157,165], but requires the number of particles to be fixed. It relies on the fact that integrable systems can be understood as a generalisation of free fermions, where the anti-commutation rule depends on the particles being exchanged. As a result, the eigenstates of the system can be written as generalised determinants of one-particle eigenstates, where the coefficient of each term is a function of the permutation instead of its sign. We will see here how this can be used for the totally asymmetric case, as was first done in [96]; this case, while much simpler than the general one, has essentially the same behaviour. These results were later extended to the general periodic case in [32]. To make our calculations simpler, we will count the current on every bond in the system with equal weight, so that the system retains its translational invariance. The deformed Markov matrix we are considering is therefore \begin{equation}\label{III-1-MperMu} M_{\mu}=\sum_{i=1}^{L} M_{i}(\mu/L) \end{equation} with \begin{equation}\label{III-1-Mmui2} M_{i}(\mu/L)=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 &0 & {\rm e}^{\mu/L} & 0 \\ 0 & 0& -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \end{equation} acting on sites $i$ and $i+1$, with $L+1\equiv 1$. Note that the number of particles, $N$, is conserved. Let us first take $N=1$. The system is then simply a totally biased random walk on a circle. Its eigenvectors are of the form \begin{equation} |\psi^{(1)}(z)\rangle=\sum\limits_{x=1}^{L}z^x|x\rangle, \end{equation} where $|x\rangle$ is the configuration with the particle at site $x$. The periodicity condition imposes that $z^L=1$, which has $L$ solutions. The eigenvalue associated with each of these solutions is $\Lambda(z)=\bigl(\frac{{\rm e}^{\mu/L}}{z}-1\bigr)$. We now consider $N=2$. Around configurations where the two particles are far from each other, at positions $x_1<x_2$ (with an arbitrary site being labelled as $1$), the system looks like two independent random walks, so that states such as \begin{equation} \sum\limits_{x_1<x_2}z_1^{x_1}z_2^{x_2}|x_1,x_2\rangle \end{equation} are locally stable, with an eigenvalue $\Lambda(z_1,z_2)=\bigl(\frac{{\rm e}^{\mu/L}}{z_1}+\frac{{\rm e}^{\mu/L}}{z_2}-2\bigr)$. This cannot be an eigenstate, since the two particles are in fact not independent, so that configurations where the two particles are next to each other would not be multiplied by $\Lambda(z_1,z_2)$: the contributions that would have come from the configuration where both particles are at the same site are missing. However, we see that the eigenvalue is invariant under exchange of $z_1$ and $z_2$, so that there are (at least) two locally stable states with that same eigenvalue. Taking a suitable combination of the two allows us to cancel those missing terms, and we obtain a true eigenstate. This is somewhat similar to the 'method of images' which is generally used for random walks with walls.
Let us therefore consider \begin{equation} |\psi^{(2)}(z_1,z_2)\rangle=\sum\limits_{x_1<x_2}\bigl(a~z_1^{x_1}z_2^{x_2}+b~z_2^{x_1}z_1^{x_2}\bigr)|x_1,x_2\rangle. \end{equation} Writing the eigenvalue equation for a configuration $|x_1,x_1+1\rangle$, the coefficients $a$ and $b$ need to be such that \begin{equation} \Lambda(z_1,z_2)(a~z_1^{x_1}z_2^{x_1+1}+b~z_2^{x_1}z_1^{x_1+1})={\rm e}^{\mu/L}(a~z_1^{x_1-1}z_2^{x_1+1}+b~z_2^{x_1-1}z_1^{x_1+1})-(a~z_1^{x_1}z_2^{x_1+1}+b~z_2^{x_1}z_1^{x_1+1}) \end{equation} which simplifies to \begin{equation}\label{Betheab} a({\rm e}^{\mu/L}-z_2)+b({\rm e}^{\mu/L}-z_1)=0. \end{equation} Moreover, due to the periodicity of the system, and because the eigenstates of $M_\mu$ cannot depend on the arbitrary labelling of the sites which we chose, we need to have \begin{equation} a~z_1^{x_1}z_2^{x_2}+b~z_2^{x_1}z_1^{x_2}=a~z_1^{x_2}z_2^{x_1+L}+b~z_2^{x_2}z_1^{x_1+L} \end{equation} for any $x_1$ and $x_2$, which is to say \begin{equation}\label{abz1z2} a=b~z_1^L~~~~{\rm and}~~~~b=a~z_2^L. \end{equation} Since we are only interested in the eigenvalues of $M_\mu$, we do not need to worry about $a$ and $b$. Eliminating them from eq.(\ref{abz1z2}), we get the 'Bethe equations' \begin{equation} z_i^L=\frac{(z_i-{\rm e}^{\mu/L})}{({\rm e}^{\mu/L}-z_j)}~~~~{\rm for}~~~~i=1,2~;~j\neq i \end{equation} whose solution can then be used to obtain $\Lambda(z_1,z_2)$. This simple case can then be extended to more particles without much effort: the integrability of the model ensures that triple collisions (when three particles are on adjacent sites) are merely combinations of double collisions, and do not add extra constraints to the $z_i$'s. It is then straightforward to generalise eq.(\ref{Betheab}) to a relation between coefficients of terms with only two adjacent $z_i$'s being exchanged. Eliminating them from the relations imposed by periodicity, we obtain the Bethe equations for $N$ particles (and we recall the expression of the eigenvalues): \begin{equation}\label{BethePer}\boxed{ \Lambda(\{z_i\})=\sum\limits_{i=1}^N\biggl(\frac{{\rm e}^{\mu/L}}{z_i}-1\biggr)~~~~{\rm with}~~~~z_i^L=\frac{(z_i-{\rm e}^{\mu/L})^{N-1}}{\prod\limits_{j\neq i}({\rm e}^{\mu/L}-z_j)}~~~~{\rm for}~~~~i=1..N.} \end{equation} For more details on how to obtain these equations, one may refer to chapter 8 of [29] or to [1]. The Bethe equations are a system of $N$ coupled non-linear equations, which cannot, in general, be solved systematically. Moreover, since the number of solutions to these equations is unknown a priori, it is not obvious that every eigenstate of the model corresponds to one of these solutions. That being said, the fact that we are looking for the largest eigenvalue of $M_\mu$, which has non-negative entries save for its diagonal, ensures that we can attain our goal: the Perron-Frobenius theorem [166] tells us that the largest eigenvalue $E(\mu)$ of $M_\mu$ is always non-degenerate, which means that we can follow it, and the corresponding eigenvectors, continuously by varying $\mu$. Moreover, we know that $E(0)=0$, and it is easy to check that the right eigenvector for that eigenvalue at $\mu=0$ is uniform in the chosen occupancy sector (i.e. all the weights of configurations with $N$ particles are equal, and all the others are $0$); we will write it as $|1_N\rangle$. It corresponds to a Bethe state with all $z_i$'s equal to $1$. This information, combined with the Bethe equations, is enough to obtain an expression for $E(\mu)$.
To that purpose, we first need to change variables, for a simple matter of convenience: we will write $z_i={\rm e}^{\mu/L}(1+y_i)$, so that the eigenvalues and the Bethe equations become, after a few simple manipulations, \begin{equation}\label{BethePerY} \Lambda(\{z_i\})=\sum\limits_{i=1}^N\biggl(\frac{1}{1+y_i}-1\biggr)~~~~{\rm with}~~~~{\rm e}^{\mu}\frac{(1+y_i)^L}{y_i^N}=(-1)^{N-1}\biggl(\prod\limits_{j=1}^N y_j\biggr)^{-1}~~~~{\rm for}~~~~i=1..N. \end{equation} The state we are interested in then corresponds to the solution with $y_i\rightarrow 0$ when $\mu\rightarrow 0$. Notice that the right-hand side of the Bethe equations is the same for every $i$. Let us define $B=(-1)^{N-1}{\rm e}^{\mu}\prod\limits_{j=1}^N y_j$, and \begin{equation}\label{hBethe} h(y)=\frac{(1+y)^L}{y^N}. \end{equation} The Bethe equations then tell us that all the $y_i$'s are roots of $1-Bh(y)$, with the self-consistency condition given by the definition of $B$. Moreover, it is easy to check that, for $\mu$ small enough, and $B$ smaller than $1$, $1-Bh(y)$ has exactly $N$ roots inside of the unit circle among its $L$ roots. Since we know that the $y_i$'s should be small, these are the roots we are looking for. Summarising all this, we have three relations which we need to combine in order to obtain an expression of $E(\mu)$:
- the $y_i$'s are the roots of $1-Bh(y)$ which are inside of the unit circle;
- $E(\mu)=\sum\limits_{i=1}^N\Bigl(\frac{1}{1+y_i}-1\Bigr)$;
- $\mu=\sum\limits_{i=1}^N\log\bigl(-B^{\frac{1}{N}}/y_i\bigr)$.
Since both $E$ and $\mu$ are of the form $\sum\limits_{i=1}^N f(y_i)$, we can write them as contour integrals around the unit circle $c_1$, using the first relation: \begin{equation} \sum\limits_{i=1}^N f(y_i)=\oint_{c_1}\frac{dz}{\imath2\pi} f(z)\frac{(-Bh'(z))}{1-Bh(z)} \end{equation} which, after an integration by parts, becomes \begin{equation} \oint_{c_1}\frac{dz}{\imath2\pi} f'(z)\log\bigl(1-Bh(z)\bigr) \end{equation} with $f'(z)=-(1+z)^{-2}$ for $E$ and $f'(z)=-z^{-1}$ for $\mu$. We can also check that there is no extra constant part coming from the integration by parts of a logarithm: for $\mu\rightarrow0$, both $E$ and $\mu$ have to vanish, which is the case here through $B\rightarrow0$. We therefore have an implicit expression for $E(\mu)$ in terms of $B$, given by \begin{equation}\boxed{ \mu=\oint_{c_1}\frac{dz}{\imath2\pi z}\log\bigl(1-Bh(z)\bigr)} \end{equation} and \begin{equation}\boxed{ E(\mu)=\oint_{c_1}\frac{dz}{\imath2\pi(1+z)^2}\log\bigl(1-Bh(z)\bigr),} \end{equation} which we may expand as series in $B$, and calculate the coefficients (which are all binomial coefficients; it is left as an exercise to the reader to check this): \begin{equation} \mu=-\sum_{k=1}^{\infty}C_{k}\frac{B^k}{k}~~~~{\rm with}~~~~C_k=\binom{kL}{kN} \end{equation} and \begin{equation} E(\mu)=-\sum_{k=1}^{\infty}D_{k}\frac{B^k}{k}~~~~{\rm with}~~~~D_k=\binom{kL-2}{kN-1}. \end{equation} The periodic ASEP, with $q\neq 0$, yields results with a similar structure, as was found in [32]. From these formulae, one can, in principle, calculate any cumulant in a finite number of steps, by inverting $\mu(B)$ order by order and injecting it into $E(B)$. Since we want to take a Legendre transform of $E(\mu)$ to obtain the large deviation function of the current, we are rather interested in a closed expression of $E(\mu)$, at least in some limit. We will see how to obtain that shortly, but we will first have a look at the open ASEP, where the coordinate Bethe Ansatz only applies in special cases [34,35,38].
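These parametric series are easy to use in practice: inverting $\mu(B)$ order by order and composing with $E(B)$ yields the cumulants one by one, for any finite $L$ and $N$. The following is a small sketch (exact rational arithmetic, illustrative system size); the expressions for $E_1$ and $E_2$ below follow from this inversion, and the third-order combination is the one quoted again further down when we discuss the large size limit. One can check $E_1=D_1/C_1$ independently: in the uniform steady state, the probability that a given bond has a particle to its left and a hole to its right is $\frac{N(L-N)}{L(L-1)}$, which is the mean current per bond.
\begin{verbatim}
from fractions import Fraction
from math import comb

# First cumulants of the current for the periodic TASEP from the series above.
L, N = 10, 4                                      # illustrative system size
C = {k: Fraction(comb(k * L, k * N)) for k in (1, 2, 3)}
D = {k: Fraction(comb(k * L - 2, k * N - 1)) for k in (1, 2, 3)}

# Order-by-order inversion of mu(B) composed with E(B):
E1 = D[1] / C[1]
E2 = (D[1] * C[2] - D[2] * C[1]) / C[1] ** 3
E3 = (3 * D[1] * C[2] ** 2 - 2 * D[1] * C[1] * C[3]
      - 3 * D[2] * C[1] * C[2] + 2 * D[3] * C[1] ** 2) / C[1] ** 5

print("E1 =", E1, " (expected mean current:", Fraction(N * (L - N), L * (L - 1)), ")")
print("E2 =", E2)
print("E3 =", E3)
\end{verbatim}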
Open case: Q-operator method
We now go back to the open ASEP, with a deformation on $m_0$ as in eq.(\ref{III-1-M0mu}). Because the number of particles is not fixed, the coordinate Bethe Ansatz cannot be applied for generic boundary parameters, and we will have to use a different method: that of the so-called 'Q-operator', sometimes called 'auxiliary operator' [167], which gives a relatively simple way to obtain the expressions we seek. The main difference between this method and that of the Bethe Ansatz is that it does not require an Ansatz for the eigenvectors to access the eigenvalues. Moreover, it is entirely algebraic: it yields relations verified by the matrix $M_\mu$, rather than by parameters entering an Ansatz for its eigenvectors (which is the case for the Bethe equations), so that the completeness of the solution does not need to be proven a posteriori. The calculations and proofs involved in applying the Q-operator method are rather lengthy and technical, and we will not dwell on them here. All that we will see here is based on [1,125], to which the reader may refer for more details. At the core of this method is a generalisation of the $d$ and $e$ matrices which we used in section 3.2.2. to build the Matrix Ansatz. Let us consider similar matrices, with a slightly different q-commutation rule, as well as a third matrix $A$, defined as \begin{equation}\label{dexy} de-q~ed=(1-q)(1-x^2 A^2)~~~~,~~~~dA-q~Ad=0~~~~,~~~~Ae-q~eA=0, \end{equation} where $x$ is a free parameter. As before, one can think of $d$ and $e$ as the annihilation and creation operators of a q-deformed harmonic oscillator with an extra parameter, and $A$ as the q-counting matrix $q^n$ (where $n$ is the number of excitations). In this usual representation, they can be written down as \begin{equation} d=\sum\limits_{n=1}^{\infty}(1-q^n)|\!|n-1\rangle\!\rangle\langle\!\langle n|\!|~~,~~e=\sum\limits_{n=0}^{\infty}(1-x^2 q^n)|\!|n+1\rangle\!\rangle\langle\!\langle n|\!|~~,~~A=\sum\limits_{n=0}^{\infty} q^n|\!|n\rangle\!\rangle\langle\!\langle n|\!|. \end{equation} In section 3.2.2., we used these objects to build a vector, with $1+d$ for a particle, and $1+e$ for a hole. Here, we need them to construct a transfer matrix, in a similar way. Let us define a $2\times 2$ matrix $X(x)$ with those objects as entries: \begin{equation} X(x)=\begin{bmatrix} 1+x A & e\\ d &1+x A\end{bmatrix}, \end{equation} expressed in basis $\{0,1\}$. We also need to define a matrix $A_\mu$ such that \begin{equation} dA_\mu-{\rm e}^{-\mu}~A_\mu d=0~~~~,~~~~A_\mu e-{\rm e}^{-\mu}~eA_\mu=0, \end{equation} which is to say $A_\mu={\rm e}^{-n\mu}$, with the same structure as $A$ but with $q$ replaced by ${\rm e}^{-\mu}$. Moreover, just as for the matrix Ansatz in section 3.2.2., we need to define special vectors to be placed at the boundaries, containing the parameters from $m_0$ and $m_L$. In the present case, we need four vectors $|\!| V \rangle\!\rangle$, $\langle\!\langle W|\!|$, $|\!|\tilde{V} \rangle\!\rangle$ and $\langle\!\langle \tilde{W}|\!|$, such that \begin{align} [\beta (d+1+x A) - \delta (e+1+x A) -(1-q)] ~|\!| V \rangle\!\rangle &= 0 \label{V-2-V},\\ \langle\!\langle W|\!|~ [\alpha(e+1+x A)- \gamma (d+1+x A)-(1-q)] &= 0\label{V-2-W},\\ [\beta (d-1-x A)-\delta (e-1-x A) +(1-q)x A]~|\!| \tilde{V} \rangle\!\rangle &=0\label{V-2-Vt}, \\ \langle\!\langle \tilde{W}|\!|~ [\alpha(e-1-x A) - \gamma (d-1-x A)+(1-q)x A] &=0\label{V-2-Wt}.
\end{align} As one can easily check, the first two are generalisations of the ones which we used for the Matrix Ansatz (which we recover by taking $x=0$). Using all those elements, we build two transfer matrices $U_\mu(x)$ and $T_\mu(x)$ (each corresponding to a pair of boundary vectors), with a structure very similar to that of the matrix Ansatz: instead of a product of matrices $D$ and $E$ for particles or holes, we use the elements of $X(x)$ for transitions between configurations (i.e. $X_{i,j}$ at site $k$ if the initial configuration has occupancy $j$ at site $k$, and the final one has occupancy $i$ at site $k$). We also need to add a matrix $A_\mu$ at the place where we are counting the current, which is between the left border and the first site in this case (if we had more general $\mu_i$'s, we would need a matrix $A_{\mu_i}$ for each of those). For instance, the entry of $U_\mu(x)$ from configuration $001010$ (on the right) to configuration $010111$ (on the left) is given by $\langle\!\langle W|\!| A_\mu (1+xA)~d~e~d~(1+xA)~d |\!| V \rangle\!\rangle$. We write these matrices in the following way \begin{align} U_\mu(x)&=\langle\!\langle W|\!| A_\mu \prod_{i=1}^{L}X^{(i)}(x) |\!| V \rangle\!\rangle,\\ T_\mu(x)&=\langle\!\langle \tilde{W}|\!| A_\mu \prod_{i=1}^{L}X^{(i)}(x)|\!| \tilde{V} \rangle\!\rangle, \end{align} with an exponent $(i)$ on each $X$ which serves only to signify that it acts on site $i$. Note that those transfer matrices are in principle defined up to a non-trivial normalisation, which can be a function of $x$, and is implicitly included in the boundary vectors. That normalisation plays a part in particular in equation (\ref{V-2-PQs}), which is not homogeneous in $U_\mu$ and $T_\mu$, and we choose it so that eq.(\ref{t1F}) holds. The main property of these matrices is that any product $U_\mu(x)T_\mu(y)$ commutes with any other product $U_\mu(x')T_\mu(y')$, and with $M_\mu$: \begin{equation}\label{V-2-MUTs} [U_\mu(x)T_\mu(y), U_\mu(x')T_\mu(y')]=0~~~~,~~~~[U_\mu(x)T_\mu(y),M_\mu]=0, \end{equation} which can be shown using so-called 'R-matrices', as is customary for transfer matrix methods [28]. Note that the dependence on $x$ of each of the matrices is not only in the diagonal entries of $X$ and in the relations defining the boundary vectors, but also implicitly in $d$ and $e$ through eq.(\ref{dexy}). Those two transfer matrices do not commute with one another, but we may use them to define two that do: $P(x)$ and $Q(x)$ defined by \begin{equation} P(x)=U_\mu(x)\Bigl[U_\mu(0)\Bigr]^{-1}~~~~,~~~~Q(x)=(1-{\rm e}^{-\mu})^{-1}U_\mu(0)T_\mu(x) \end{equation} such that: \begin{equation} U_\mu(x)T_\mu(y)=(1-{\rm e}^{-\mu})P(x)Q(y). \end{equation} The downside of these is that, unlike $U_\mu$ and $T_\mu$, $P$ is not defined constructively because it involves $[U_\mu(0)]^{-1}$ (which exists for a generic $\mu$), and does not have the same product structure. Apart from their commuting with $M_\mu$ (which means they have the same eigenvectors), the operators $P$ and $Q$ have another useful property: given certain constraints on their variables, their product decouples into a transfer matrix with a finite auxiliary space and a shifted version of the product. More precisely, for every positive integer $k$, \begin{equation}\label{V-2-PQs} P(x)Q(1/q^{k-1}x)=t^{(k)}(x)+{\rm e}^{-2k\mu}P(q^k x)Q(q/x). \end{equation} The matrix $t^{(k)}(x)$ is the transfer matrix of an integrable vertex model with a $k$-dimensional auxiliary space.
For instance, $t^{(2)}(x)$ is the transfer matrix of the six-vertex model [29]. Moreover, $t^{(1)}(x)$ is a scalar matrix, which we can calculate, choosing the appropriate normalisation for $U_\mu$ and $T_\mu$: \begin{equation}\label{t1F} t^{(1)}(x)\equiv F(x)=\frac{(1+x)^L(1+x^{-1})^L(x^2,x^{-2})_{\infty}}{ (a x,a/x,\tilde{a}x,\tilde{a}/x,b x,b/x,\tilde{b}x,\tilde{b}/x)_{\infty}}, \end{equation} where the identity operator on configuration space is implicit. We already encountered that function inside of a contour integral in eq.(\ref{II-2-ZLint}). $t^{(2)}(x)$ and $F(x)$ are strongly connected to $M_\mu$, through the usual relation between the six-vertex transfer matrix and the Hamiltonian of the spin-$\frac{1}{2}$ XXZ chain (which is equivalent to the ASEP): \begin{equation} M_\mu=\frac{1}{2}(1-q)\frac{d}{d x} \log\biggl(\frac{t^{(2)}(x)}{F(qx)}\biggr)\bigg|_{x=-1}. \end{equation} Combining eq.(\ref{V-2-PQs}) for $k=1$ and $k=2$, we can eliminate $P$ and obtain the 'T-Q equation': \begin{equation}\label{V-2-TQs} t^{(2)}(x)Q(1/x)=F(x)Q(1/qx)+{\rm e}^{-2\mu}F(qx)Q(q/x) \end{equation} which allows us to express $M_\mu$ in terms of $Q$ alone: \begin{equation}\label{V-2-Mmut2}\boxed{ M_\mu=\frac{1}{2}(1-q)\frac{d}{d x} \log\biggl(\frac{Q(q/x)}{Q(1/x)}\biggr)\bigg|_{x=-1}.} \end{equation} This relation between $M_\mu$ and $Q$ (which applies to the whole spectrum of $M_\mu$), along with eq.(\ref{V-2-PQs}) for $k=1$: \begin{equation}\label{PQF}\boxed{ P(x)Q(1/x)=F(x)+{\rm e}^{-2\mu}P(q x)Q(q/x)} \end{equation} is enough to obtain $E(\mu)$: they are a more complex version of the relations between $E$, $\mu$ and the Bethe roots we used in the periodic case (in fact, applying the Q-operator method to the periodic case, we find that the Bethe roots are the inverses of the roots of the eigenvalues of $Q$). From this point on, we need to use the 'functional Bethe Ansatz', as was done in [32] for the periodic ASEP, in order to unravel these relations and obtain an expression of $E(\mu)$. We first need to define a matrix $B$ which plays the same role as the intermediate variable we used in the previous section: \begin{equation} B=-{\rm e}^{2\mu}\bigl(Q(0)\bigr)^{-1}=-{\rm e}^{2\mu}(1-{\rm e}^{-\mu})\bigl(U_\mu(0)T_\mu(0)\bigr)^{-1}. \end{equation} Moreover, just as in the previous section, we need to know the behaviour of $B$ and $Q$ in the $\mu\rightarrow 0$ limit. We find, from analytical calculations which we will not go into here but can be found in [1], that the first eigenvalue of $B$ goes to $0$, while the others remain finite, and that the roots of the first eigenvalue of $Q(1/x)$ are inside of the unit circle $c_1$ if $a<1$ and $b<1$ (as was needed in section 3.2.2.), while those of $P$ are outside of it. Restricting ourselves to that first eigenspace from now on, this allows us to separate $P$ and $Q$ in eq.(\ref{PQF}) using a contour integral, and express $M_\mu$ in eq.(\ref{V-2-Mmut2}) only in terms of $F(x)$. In the following, the notations $F$, $P$, $Q$, and $B$ will refer to the eigenvalues of the operators in the dominant eigenspace, and not to the operators themselves. 
Let us define a function $W(x)$ as: \begin{equation}\label{IV-2-W} W(x)=-\frac{1}{2}\log\biggl(\frac{P(x)Q(1/x)}{{\rm e}^{-2\mu}P(q x)Q(q/x)}\biggr), \end{equation} and a convolution kernel $K$, as: \begin{equation}\label{IV-2-K} K(z,\tilde{z})=2\sum_{k=1}^{\infty}\frac{q^k}{1-q^k}\Bigl((z/\tilde{z})^k+(z/\tilde{z})^{-k}\Bigr) \end{equation} along with the associated convolution operator $X$: \begin{equation}\label{IV-2-X} X[f](z)=\oint_{c_1}\frac{d\tilde{z}}{\imath2\pi\tilde{z}}f(\tilde{z})K(z,\tilde{z}). \end{equation} Expanding $\log(P(x))$ and $\log(Q(1/x))$ in powers of $x$ and $1/x$ respectively, we can easily check that \begin{equation} -\log\bigl(P(q x)Q(q/x)/Q(0)\bigr)=X[W](x), \end{equation} which allows us to rewrite eq.(\ref{PQF}) as a functional equation for the single unknown function $W$: \begin{equation}\label{IV-2-WW}\boxed{ W(x)=-\frac{1}{2}\ln\Bigl(1-B F(x) e^{X[W](x)}\Bigr).} \end{equation} The last step is to take eq.(\ref{V-2-Mmut2}) in the first eigenspace of $M_\mu$, and eq.(\ref{IV-2-W}) at $x=0$, to find: \begin{equation} E(\mu)=\frac{1}{2}(1-q)\frac{d}{dx}\log\biggl(\frac{Q(q/x)}{Q(1/x)}\biggr)\biggl|_{x=-1}=\frac{1}{2}(1-q)\oint_{c_1}\frac{dz}{\imath2\pi(1+z)^2}\log\biggl(\frac{Q(q/z)}{Q(1/z)}\biggr)~~~~,~~~~\mu=-W(0) \end{equation} where the contour integral form can be checked formally by expanding $\log(Q(1/x))$ in powers of $1/x$. Considering that, for $\mu$ small enough, $P$ is holomorphic inside of the unit circle, we can replace $\frac{1}{2}\log\Bigl(\frac{Q(q/z)}{Q(1/z)}\Bigr)$ by $-W(z)$ when expressing $E(\mu)$ as a contour integral (since $P$ will not contribute), and obtain: \begin{equation}\label{IV-2-muB}\boxed{ \mu=-\oint_{c_1}\frac{dz}{\imath2\pi z}W(z)} \end{equation} and \begin{equation}\label{IV-2-EB}\boxed{ E(\mu)=-(1-q)\oint_{c_1}\frac{dz}{\imath2\pi(1+z)^2}W(z).} \end{equation} All this was done for $a<1$ and $b<1$, but can then be generalised to any $a$ and $b$ through the same reasoning as in section 3.2.2. for the mean current, replacing the unit circle $c_1$ by small contours around $S=\{0,q^k a,q^k \tilde{a},q^k b,q^k \tilde{b}\}$. As we see, the form of $E(\mu)$ is similar to what we found for the periodic TASEP. It is even more similar to the periodic ASEP case [32]: the expressions only differ by the factors $2$ in $K$ and $\frac{1}{2}$ in $W(x)$, and by the function $F(x)$ which is exchanged with $h(x)$ as defined in eq.(\ref{hBethe}). Moreover, if we choose $a=1$, $\tilde{a}=-q$, $b=\sqrt{q}$ and $\tilde{b}=-\sqrt{q}$, which is to say $\alpha=\frac{1}{2}$, $\gamma=\frac{q}{2}$, $\beta=1$ and $\delta=q$, special cancellations occur in $F(x)$ which reduces to $(1+x)^{L+1}(1+x^{-1})^{L+1}$. This is the same as the function $h$ for the periodic ASEP with $2L+2$ sites and $L+1$ particles. Because of the extra factors $2$ and $\frac{1}{2}$, the generating function of the cumulants of the current is half that which we found in the periodic case, taken at $2\mu$. This also works if we exchange $a$ with $b$ and $\tilde{a}$ with $\tilde{b}$. Those two special points correspond to $\rho_a=\frac{1}{2}$ and $1-\rho_b=\frac{1}{1+\sqrt{q}}$, or the opposite, and are on the transition lines between the MC phase and the LD or HD phase. We will come back to this remark later. The expressions we just obtained are exact, and valid for any values of the parameters of the system, including its size.
They are, however, somewhat unwieldy, especially if we want to perform a Legendre transform in order to obtain the large deviation function of the current, which is our goal. In the next section, we take the limit of large sizes and see how we can obtain closed expressions for $E(\mu)$ and $g(j)$ for small fluctuations of the current.
Large size limit
In this section, we will need to take the result we just obtained for the generating function of the cumulants of the current in the open ASEP, and extract its large size behaviour, through various approximations. We will not go into every fine detail of the necessary calculations, but we will endeavour to make them as easy to follow as possible. That being said, the calculations themselves might not be of interest to every reader, and we will make the results clearly visible at the end of each sub-section (i.e. they will be boxed). Before taking any limit, we will need to examine eqs.(\ref{IV-2-muB}) and (\ref{IV-2-EB}) a little closer. The function $W(z)$ that they contain is defined, in eq.(\ref{IV-2-WW}), through a self-consistency equation. Expanding the logarithm in powers of $B$, we can express every coefficient by calculating $W(x)$ perturbatively in $B$. The coefficient of $B^k$ will be a combination of $k$ functions $F$, either at the same point or convolved together through $K$. For instance, the coefficient of $B^2$ in $\mu$ is \begin{equation}\label{III-3-C2} -\frac{1}{4}\oint_S\frac{dz}{\imath2\pi z}F(z)^2~-~\frac{1}{4}\oint_S\frac{dz_{1}}{\imath2\pi z_{1}}\oint_S\frac{dz_{2}}{\imath2\pi z_{2}}F(z_{1})F(z_{2})K(z_{1},z_{2}). \end{equation} As was painstakingly verified in [1], it turns out that, in the large size limit, the term containing no convolutions is dominant in every case, and that, when all is said and done, the behaviour of $E(\mu)$ differs from the TASEP case (where $q=0$ so that $K=0$) only by a global factor $(1-q)$. We will therefore save ourselves some trouble here and only consider that simpler case: $q=\gamma=\delta=0$. The simplified expressions we will have to examine are the following: \begin{equation}\label{muTASEP} \mu=-\sum_{k=1}^{\infty}C_{k}\frac{B^k}{k}~~~~{\rm with}~~~~C_k=\frac{1}{2}\oint_{\{0,a,b\}}\frac{dz}{\imath2\pi z}F^k(z), \end{equation} \begin{equation}\label{ETASEP} E(\mu)=-\sum_{k=1}^{\infty}D_{k}\frac{B^k}{k}~~~~{\rm with}~~~~D_k=\frac{1}{2}\oint_{\{0,a,b\}}\frac{dz}{\imath2\pi (1+z)^2}F^k(z), \end{equation} where \begin{equation}\label{FTASEP} F(x)=\frac{(1+x)^L(1+x^{-1})^L(1-x^2)(1-x^{-2})}{ (1-a x)(1-a/x)(1-b x)(1-b/x)}. \end{equation} As for the calculation of the mean current that we saw in section 3.2.2., the asymptotic form of these contour integrals, for $L$ large, will depend on the position of $a$ and $b$ with respect to the unit circle (see fig.10).
Figure 10. Positions of $a$ (blue circles) and $b$ (red circles) with respect to the unit circle (black circles) in each phase of the open ASEP.
Moreover, the correct way to handle that large size limit is not entirely straightforward, due to $E(\mu)$ being expressed implicitly: each cumulant is a combination of $C_k$'s and $D_k$'s of the same or lower order, such as \begin{equation} E_{3}=\frac{3D_{1}C_{2}^2-2D_{1}C_{1}C_{3}-3D_{2}C_{1}C_{2}+2D_{3}C_{1}^2}{C_{1}^5} \end{equation} but the leading order of $E_k$ in $L$ might not be obtained by taking the leading orders of each term, as this may simply give $0$ due to certain cancellations.
As it turns out, this is in fact always the case: at leading order in $L$, in every phase, one finds $D_k\sim JC_k$, where $J$ is the average current, so that $E(\mu)\sim J\mu$. This cancels out in every $E_k$ other than $E_1=J$. For that reason, we will sometimes be calculating equivalents of \begin{equation} E(\mu)-J\mu=-\sum_{k=1}^{\infty}\tilde D_{k}\frac{B^k}{k}~~~~{\rm with}~~~~\tilde D_k=D_k-JC_k \end{equation} instead, which we will find to be sufficient in every case we will examine. We will only ignore one point of the phase diagram: $\alpha=\beta=\frac{1}{2}$, which is to say $a=b=1$, at which the first $k$ orders of $E_k$ vanish. Finally, one should note that, because we will make approximations on the coefficients of $E(\mu)$ expanded (implicitly) around $\mu=0$, the expressions we will obtain for $E(\mu)$ or $g(j)$ will in principle be valid only for a small $\mu$ and a small fluctuation $(j-J)$.
Low/high density phases
We start with the low density phase: $a>1$ and $a>b$. The corresponding results for the high density phase are obtained by exchanging $a$ and $b$. As we recall from section 3.2.2., we can replace the integration set $\{0,a,b\}$ by the unit circle plus twice the poles which are outside of it. The dominant part of each contour integral is then the residue around $a$, which is the one where $F(x)$ is maximal. That residue is of order $k$ in both $C_k$ and $D_k$. We can therefore write \begin{align}\label{IV-1-muELD} \mu&=-\sum_{k=1}^\infty \frac{ B^k}{k!} \frac{d^{k-1}}{dz^{k-1}} \Big \{ \frac{\phi^k(z)}{z} \Big\} \Big|_{z =a} ,\\ E(\mu)&=-\sum_{k=1}^\infty \frac{ B^k}{k!} \frac{d^{k-1}}{dz^{k-1}} \Big \{ \frac{\phi^k(z)}{(1+z)^2} \Big\} \Big|_{z =a}, \end{align} with \begin{equation}\label{IV-1-FphiLD} \phi(z)=(z-a)F(z)=z\frac{(1+z)^{L}(1+z^{-1})^{L}(1-z^2)(1-z^{-2})}{(1-a z)(1-b z)(1-b /z)}. \end{equation} One may recognise there a structure related to the Lagrange inversion formula [168]. Considering two variables $w$ and $z$ such that \begin{equation}\label{IV-1-wzB} w = z + B \phi(w) \end{equation} we can express a function $f$ taken at $w$ by expanding it around $z$, as: \begin{equation}\label{IV-1-lagrange} f(w) -f(z)=\sum\limits_{k=1}^{\infty}\frac{B^k}{k!} \frac{d^{k-1}}{dz^{k-1}}\Bigl(\phi^k(z) f'(z)\Bigr). \end{equation} Choosing $f(z)=-\log(z)$ for $\mu$ and $1/(1+z)$ for $E(\mu)$, we can then write: \begin{align} \mu ~~~&= -\log(w) + \log(a),\\ E(\mu) &= \frac{1}{w+1} - \frac{1}{a+1}, \end{align} where $w$ is as defined in (\ref{IV-1-wzB}). Combining those two equations, we get a closed expression for $E(\mu)$: \begin{equation}\label{IV-1-EmuTASEP}\boxed{ E(\mu) = \frac{a}{a+1} \frac{ {\rm e}^\mu -1} {{\rm e}^\mu + a}.} \end{equation} It only remains for us to perform a Legendre transform to obtain the large deviation function of the current: \begin{equation}\label{IV-2-gjASEP}\boxed{\boxed{ g(j)=(1-q)\Bigl[\rho_a-r+r(1-r)\log\Bigl(\frac{1-\rho_a}{\rho_a}\frac{r}{1-r}\Bigr)\Bigr].}} \end{equation} where $r$ is such that $ j=(1-q)r(1-r)$ and we recall that $\rho_a=\frac{1}{(1+a)}$. This expression was first obtained in [83] using a method which we will discuss in section 6.. The formula (\ref{IV-1-EmuTASEP}), with an extra factor $(1-q)$ for the general ASEP, was obtained in [123] through a numerical approach to the Bethe equations. Contrary to what we noted earlier, these expressions are in fact valid in their whole phase, and not only for small $\mu$, as we will see in section 6..
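This Legendre transform is also easy to verify numerically. The following is a sketch for the TASEP ($q=0$), with an illustrative value of $a$ in the low density phase: it evaluates $g(j)=\max_\mu[j\mu-E(\mu)]$ on a grid of $\mu$ restricted to the branch where $E'(\mu)$ is increasing (the transform is meant here for currents close to the mean, where the expansion around $\mu=0$ applies), and compares the result with the closed formula, taken with $1-q=1$.
\begin{verbatim}
import numpy as np

# Numerical Legendre transform of E(mu) in the low density phase (TASEP, q = 0).
a = 3.0                       # illustrative value: a > 1 and a > b (low density phase)
rho_a = 1 / (1 + a)

def E(mu):
    return a / (a + 1) * (np.exp(mu) - 1) / (np.exp(mu) + a)

mu = np.linspace(-4, np.log(a), 400001)     # branch on which E'(mu) is increasing
E_mu = E(mu)

def g_numeric(j):
    return np.max(j * mu - E_mu)

def g_closed(j):
    r = (1 - np.sqrt(1 - 4 * j)) / 2        # j = r(1-r), branch with r < 1/2
    return rho_a - r + r * (1 - r) * np.log((1 - rho_a) / rho_a * r / (1 - r))

for j in (0.10, 0.15, 0.1875, 0.22):        # the mean current is rho_a(1-rho_a) = 0.1875
    print(j, g_numeric(j), g_closed(j))     # the two columns should agree
\end{verbatim}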
Unfortunately, the method we used in the previous section does not work here: the residue at $a$ in $C_k$ and $D_k$ is of order $2k$, so that we are missing all the derivatives of even order in $\mu$ and $E$. Instead, we simply calculate the leading order of each coefficient. After simplifying every term as much as possible (see chapter IV of [1] for a detailed calculation), we obtain the large-size equivalent of our two series: \begin{align} \mu=-\frac{2}{L}\frac{a+1}{a-1}&\sum_{k=1}^{\infty}\frac{k^{2k-1}}{(2k)!} B^k\label{IV-1-muHL},\\ E(\mu)-(1-q)\frac{a}{(1+a)^2}\mu=-(1-q)\frac{2}{L^2}\frac{a}{a^2-1}&\sum_{k=1}^{\infty}\frac{k^{2k-2}}{(2k)!} B^k\label{IV-1-EHL} \end{align} and we find that the cumulants of the current behave as \begin{equation} E_k\sim(1-q)\frac{a}{a^2-1}\bigl(\frac{a-1}{a+1}\bigr)^kL^{k-2} \end{equation} for $k\geq 2$. These two series can be expressed in terms of the Lambert ${\cal W}$ function [169], defined as the solution to $x={\cal W}_{\cal L}{\rm e}^{{\cal W}_{\cal L}}$. The series expansion of ${\cal W}_{\cal L}(x)$ around $0$ is: \begin{equation}\label{IV-1-LambertW} {\cal W}_{\cal L}(x)=-\sum\limits_{n=1}^{\infty}\frac{n^{n-1}}{n!}(-x)^n \end{equation} so that $\mu$ and $E$ become: \begin{align} \mu=\frac{2}{L}\frac{a+1}{a-1}&\Bigl[{\cal W}_{\cal L}(\sqrt{B}/2)+{\cal W}_{\cal L}(-\sqrt{B}/2)\Bigr]\label{IV-2-muHL}\\ E(\mu)-(1-q)\frac{a}{(1+a)^2}\mu=(1-q)\frac{2}{L^2}\frac{a}{a^2-1}&\Bigl[2\Bigl({\cal W}_{\cal L}(\sqrt{B}/2)+{\cal W}_{\cal L}(-\sqrt{B}/2)\Bigr)\nonumber\\ +&{\cal W}_{\cal L}(\sqrt{B}/2)^2+{\cal W}_{\cal L}(-\sqrt{B}/2)^2\Bigr].\label{IV-2-EHL} \end{align} Many things are known about ${\cal W}_{\cal L}$, including its asymptotic behaviour, which is what we need. The main branch of the function, called ${\cal W}_0$, is defined on the whole complex plane except for $]\!-\!\infty,\!-1/e]$, and behaves as $\log(x)$ (even for $x$ complex, in which case the angular part of $x$ can be neglected and we get $\log(|x|)$). This will be appropriate for $B<0$. For $B>0$, however, the functions ${\cal W}_{\cal L}(-\sqrt{B}/2)$ in our expressions have to be continued analytically to the second branch ${\cal W}_{-1}$, on which $x$ goes back from $-1/e$ to $0$ and behaves as $\log(-x)$ (fig.-11). Figure 11 Figure 11. Plot of the Lambert ${\cal W}$ function. The principal branch (blue) behaves as $\log(x)$ for $x\rightarrow\infty$. The second branch (red) behaves as $\log(-x)$ for $x\rightarrow 0^-$. For $B\rightarrow -\infty$, we therefore have ${\cal W}_{\cal L}(\pm\sqrt{B}/2)\sim \log(|B|)/2$, so that: \begin{align} \mu\sim\frac{2}{L}\frac{a+1}{a-1}&\log(|B|)\label{IV-2-muHL-}\\ E(\mu)-(1-q)\frac{a}{(1+a)^2}\mu\sim(1-q)\frac{2}{L^2}\frac{a}{a^2-1}&\log(|B|)^2/2\label{IV-2-EHL-} \end{align} meaning that, for $\mu$ positive and small (since we are working in the $L\rightarrow\infty$ limit ; see comment below), $E$ behaves as \begin{equation}\label{IV-2-ESL+}\boxed{ E_+(\mu)-(1-q)\frac{a}{(1+a)^2}\mu\sim(1-q)\frac{a(a-1)}{4(a+1)^3}\mu^2.} \end{equation} We take the Legendre transform of this and get, for $j>J=(1-q)\rho_a(1-\rho_a)$: \begin{equation}\label{IV-2-g+}\boxed{\boxed{ g_+(j)\sim \frac{(j-J)^2}{J(1-2\rho_a)}}} \end{equation} (where $J=(1-q)\rho_a(1-\rho_a)$ is the average current). The a priori domains of validity of equations (\ref{IV-2-ESL+}) and (\ref{IV-2-g+}) are not entirely obvious, and require a careful analysis. 
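Before turning to that analysis, a quick consistency check of the Legendre transform: writing (\ref{IV-2-ESL+}) as $E_+(\mu)\simeq J\mu+c\,\mu^2$ with $c=(1-q)\frac{a(a-1)}{4(a+1)^3}$, the maximisation of $\mu j-E_+(\mu)$ over $\mu$ gives \begin{equation} g_+(j)\simeq\frac{(j-J)^2}{4c}, \end{equation} and since $4c=(1-q)\frac{a}{(1+a)^2}\,\frac{a-1}{a+1}=J(1-2\rho_a)$, this is indeed (\ref{IV-2-g+}).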
Starting from equations (\ref{muTASEP}) and (\ref{ETASEP}), which are exact for any $L$ and $B$, we first took a large $L$ limit and kept only the leading contributions, yielding equations (\ref{IV-1-muHL}) and (\ref{IV-1-EHL}), still valid for any $B$. We then took a large (negative) $B$, and obtained equations (\ref{IV-2-muHL-}) and (\ref{IV-2-EHL-}). Because of the order in which we took the limits, $L$ is taken to be large enough so that the first correction to (\ref{IV-1-muHL}) be negligible for any $B$ we consider, which is to say that $L$ goes to $\infty$ faster than $\log(|B|)$. For that reason, the equations we obtain here, and similarly in all the following calculations, are in principle valid only for $\mu$ small, which is to say $j$ close to its average. We will indeed see in section 6. that these expressions correspond only to the leading order of those we find from the macroscopic fluctuation theory. To do the same for $\mu<0$, we need to be on the second branch of ${\cal W}_{\cal L}$, for which we have $B\rightarrow 0^+$. In that case, we have ${\cal W}_{\cal L}(-\sqrt{B}/2)\sim \log(B)/2$, but ${\cal W}_{\cal L}(\sqrt{B}/2)\sim 0$ (that part being still on the main branch). This gives us: \begin{align} \mu\sim\frac{2}{L}\frac{a+1}{a-1}&\log(|B|)/2\label{IV-2-muHL+}\\ E(\mu)-(1-q)\frac{a}{(1+a)^2}\mu\sim(1-q)\frac{2}{L^2}\frac{a}{a^2-1}&\log(|B|)^2/4\label{IV-2-EHL+} \end{align} so that, for $\mu$ negative and small, we have \begin{equation}\label{IV-2-ESL-}\boxed{ E_-(\mu)-(1-q)\frac{a}{(1+a)^2}\mu\sim(1-q)\frac{a(a-1)}{2(a+1)^3}\mu^2.} \end{equation} We take the Legendre transform of this and get, for $j<J$: \begin{equation}\boxed{\boxed{ g_-(j)\sim \frac{(j-J)^2}{2J(1-2\rho_a)}.}} \end{equation} Notice that the dependence in $L$ has vanished from both cases, so that even though all the cumulants of the current depend on $L$ at $\mu=0$, none of them do for a finite $\mu$. Notice also that $g_-$ differs from $g_+$ by a factor $\frac{1}{2}$, which comes from the fact that for $\mu>0$, both functions ${\cal W}_{\cal L}$ in $\mu$ and ${\cal W}_{\cal L}^2$ in $E(\mu)$ contribute to the limit, whereas for $\mu<0$, only one of each does. This results in $g(j)$ not being analytic at $j=J$, which is the signature of a non-equilibrium phase transition (see section 6. for a description of the phase on each side of the transition). LD/MC transition line We now consider the transition line between the low density and the maximal current phase, for which $a=1$ and $b<1$. As we noted earlier, the cumulants of the current on this line are related to those for a half-filled periodic TASEP of size $2L+2$. The calculations we will need to perform here are therefore the same as can be found in [97], which we will reproduce, and use as a reference for the maximal current phase in the next section (which is similar but not identical). Taking the limit of large $L$ in $\mu$ and $E$ (see chapter IV of [1] for a detailed calculation), we obtain \begin{align} \mu=-\frac{L^{-1/2}}{2\sqrt{\pi}}&\sum_{k=1}^{\infty}\frac{B^k}{k^{3/2}} \label{IV-1-muHDMC2},\\ E(\mu)-(1-q)\frac{1}{4}\mu=-(1-q)\frac{L^{-3/2}}{16\sqrt{\pi}}&\sum_{k=1}^{\infty}\frac{B^k}{k^{5/2}}\label{IV-1-EHDMC2}, \end{align} so that the cumulants behave as \begin{equation} E_k\sim\pi(\pi L)^{(k-3)/2} \end{equation} for $k\geq 2$. Notice that the pre-factor $L^{-3/2}$ in (\ref{IV-1-EHDMC2}) is a sign of the dynamical scaling $z=\frac{3}{2}$ of the KPZ universality class. 
These series can be written in terms of the polylogarithm ${\rm Li}_{5/2}(B)$, defined as \begin{equation}\label{IV-2-HMC1} H(B)=-{\rm Li}_{5/2}(B)=-\sum_{k=1}^{\infty}\frac{B^k}{k^{5/2}}=\frac{2}{\sqrt{\pi}}\int_{-\infty}^{+\infty} d\theta~\theta^2 \log\bigl[1-B{\rm e}^{-\theta^2} \bigl], \end{equation} so that \begin{align}\label{IV-2-gMC} \mu&=\frac{L^{-1/2}}{2\sqrt{\pi}}B ~H'(B)\\ E(\mu)-\frac{1-q}{4}\mu&=\frac{L^{-3/2}}{16\sqrt{\pi}}~H(B) \end{align} As in the previous section, the cases $\mu<0$ and $\mu>0$ require different approaches. For $\mu>0$, we need to take $B\rightarrow-\infty$. In this case, the integrand in $H(B)$ can be approximated by: \begin{equation} \log\bigl[1-B{\rm e}^{-\theta^2} \bigl]\sim \log\bigl[|B|~{\rm e}^{-\theta^2} \bigl]~ \mathbb{I}\bigl[\theta^2<\log(|B|)\bigr] \end{equation} where $\mathbb{I}[X]$ is the indicator of $X$, equal to $1$ if $X$ is true, and $0$ if $X$ is false. We can then estimate: \begin{equation}\label{IV-2-HMC2} H(B)\sim \frac{4}{\sqrt{\pi}}\int_{0}^{\log(|B|)^{1/2}} d\theta~\theta^2 \bigl(\log(|B|)-\theta^2\bigr)=\frac{8}{15\sqrt{\pi}}\log(|B|)^{5/2} \end{equation} and \begin{equation}\label{IV-2-HMC3} B~H'(B)\sim \frac{4}{3\sqrt{\pi}}\log(|B|)^{3/2} \end{equation} so that \begin{equation}\label{IV-2-gMC2}\boxed{ E_+(\mu)-\frac{1-q}{4}\mu\sim(1-q)\frac{1}{20}\Bigl(\frac{3}{2}\Bigr)^{2/3} L^{-2/3}\pi^{2/3}\mu^{5/3}.} \end{equation} We can then take the Legendre transform of this result. We find, for $j>J=\frac{1-q}{4}$: \begin{equation}\label{IV-2-gMC4}\boxed{\boxed{ g_+(j)\sim(j-J)^{5/2}\frac{32\sqrt{3}L}{5\pi(1-q)^{3/2}}}} \end{equation} and notice that, for once, it depends on $L$. For $\mu<0$, we have to take into account the fact that the polylogarithm $Li_{5/2}(x)$ has a branch cut at $x=1$ and is not defined for $x>1$. To remedy this, we have to go back to the expressions of $\mu$ and $E$ in terms of the roots of $(1-Bh(x))$ we saw in section 4.2.1.. These expressions apply in principle to all eigenvalues of $M_\mu$, and depend on which roots are included in the integral. In the case of the steady state, for $B$ small enough, all the roots we consider are inside of the unit circle. However, it can be shown that, as $B$ gets closer to $1$, one of the roots $z_0$ goes to $1$, and merges with its counterpart $z_0^{-1}$ from outside of the unit circle (fig.-12). Since we know, from the Perron-Frobenius theorem, that $E(\mu)$ never crosses any other eigenvalue of $M_\mu$, the choice of roots that consists in taking $z_0^{-1}$ instead of $z_0$ must correspond to $E(\mu)$ as well (because they coincide for $B=1$). We can therefore find the correct analytic continuations for $\mu$ and $E(\mu)$ in terms of $B$ by finding $z_0$, replacing its contribution in those series by that of $z_0^{-1}$, and taking $B$ back from $1$ to $0$. This procedure is explained in more detail in [170]. Figure 12 Figure 12. Bethe roots for a periodic system with $20$ sites and $10$ particles. The roots are at the centres of the white discs. The unit circle is represented in black. On the left, where $B<1$, a pair of roots can be seen to approach the unit circle on the real axis. On the right, where $B=1$, those roots have merged. We can find those two roots using equation (\ref{IV-2-HMC1}). An integration by parts turns it into: \begin{equation}\label{IV-2-HMC1p} H(B)\sim\frac{2}{3\sqrt{\pi}}\int_{-\infty}^{+\infty} d\theta~\theta^3 \frac{2\theta B {\rm e}^{-u}}{B {\rm e}^{-u}-1}. 
\end{equation} For $0<B<1$, the poles in this expression are at $\theta_{\pm}=\pm i \sqrt{-\log(B)}$. The corresponding residues are $i\frac{4\sqrt{\pi}}{3}\theta_{\pm}^3$ (as explained in [170]), and we must subtract the one corresponding to $\theta_-$ and add the other one to $H$. We get, for $B\rightarrow 0$: \begin{align} H(B)&=\frac{8}{3}\sqrt{\pi}\bigl[-\log(B)\bigl]^{3/2}-\sum_{k=1}^{\infty}\frac{B^k}{k^{5/2}}\sim\frac{8}{3}\sqrt{\pi}\bigl[-\log(B)\bigl]^{3/2}\\ B H'(B)&=-4\sqrt{\pi}\bigl[-\log(B)\bigl]^{1/2}-\sum_{k=1}^{\infty}\frac{B^k}{k^{3/2}}\sim-4\sqrt{\pi}\bigl[-\log(B)\bigl]^{1/2}. \end{align} Putting those together, we find, for $\mu<0$: \begin{equation}\label{IV-2-gMC5}\boxed{ E_-(\mu)-\frac{1-q}{4}\mu\sim-(1-q)\frac{1}{48}\mu^{3}} \end{equation} and, for $j<J=\frac{1-q}{4}$, \begin{equation}\label{IV-2-gMC7}\boxed{\boxed{ g_-(j)\sim(J-j)^{3/2}\frac{8}{3(1-q)^{1/2}}.}} \end{equation} Once more, we find a result which is independent of $L$. Moreover, we see that the phase transition that takes place here at $\mu=0$ is of a different nature than the one on the shock line: the behaviour of $g(j)$ with respect to $L$ changes from one side to the other. Maximal current phase Finally, we inspect the maximal current phase, for which $a<1$ and $b<1$. Once more, we start by calculating the large $L$ behaviour of every $C_k$ and $\tilde D_k$: \begin{align} \mu=-\frac{L^{-1/2}}{2\sqrt{\pi}}&\sum_{k=1}^{\infty}\frac{(2k)!}{k!k^{(k+3/2)}} B^k\label{IV-1-muMC2}\\ E(\mu)-(1-q)\frac{1}{4}\mu=-(1-q)\frac{L^{-3/2}}{16\sqrt{\pi}}&\sum_{k=1}^{\infty}\frac{(2k)!}{k!k^{(k+5/2)}} B^k\label{IV-1-EMC2}, \end{align} so that, as in the previous section, \begin{equation} E_k\sim(1-q)\pi(\pi L)^{(k-3)/2} \end{equation} for $k\geq 2$. We will use the same method as for the LD/MC line. We may still write \begin{align} \mu&=\frac{L^{-1/2}}{2\sqrt{\pi}}B ~H'(B)\\ E(\mu)-\frac{1-q}{4}\mu&=\frac{L^{-3/2}}{16\sqrt{\pi}}~H(B) \end{align} but this time, $H(B)$ is defined as \begin{equation}\label{IV-2-HLMC1} H(B)=-\sum_{k=1}^{\infty}\frac{(2k)!}{k!k^{(k+5/2)}}\Bigl(\frac{B}{4}\Bigr)^k=\frac{2}{\sqrt{\pi}}\int_{-\infty}^{+\infty}d\theta~(\theta^2-1) \log\bigl[1-B\theta^2{\rm e}^{-\theta^2} \bigl]. \end{equation} For $\mu>0$, i.e. $B\rightarrow-\infty$, we have: \begin{equation} \log\bigl[1-B\theta^2{\rm e}^{-\theta^2} \bigl]\sim \log\bigl[|B|\theta^2{\rm e}^{-\theta^2} \bigl]~ \mathbb{I}\bigl[|B|\theta^2{\rm e}^{-\theta^2}>1\bigr]. \end{equation} The upper bound of the integral can therefore be set at $\theta_B$ such that $|B|\theta_B^2{\rm e}^{-\theta_B^2}=1$, in which we recognise the square root of the Lambert $W$ function: $\theta_B=\sqrt{-W_{-1}(-1/B)}$. For large $B$, it behaves as $\log(|B|)^{1/2}$, as we saw in section 4.3.2.. After estimating $H(B)$ in the same way as in the previous section we find the exact same expressions for $\mu$ and $E$, leading to \begin{equation}\label{IV-2-gLMC2}\boxed{ E_+(\mu)-\frac{1-q}{4}\mu\sim(1-q)\frac{1}{20}\Bigl(\frac{3}{2}\Bigr)^{2/3} L^{-2/3}\pi^{2/3}\mu^{5/3}} \end{equation} and, for $j>J=\frac{1-q}{4}$, \begin{equation}\label{IV-2-gLMC4}\boxed{\boxed{ g_+(j)\sim(j-J)^{5/2}\frac{32\sqrt{3}L}{5\pi(1-q)^{3/2}}.}} \end{equation} This tells us that the phase which is reached by selecting positive fluctuations of the current is the same whether one starts from the inside of the MC phase or from its boundary. For $\mu<0$, the situation is slightly different from that of the previous section. 
The roots of $(1-BF(x))$ behave similarly, but this time, there are two pairs of roots crossing the unit circle instead of one (fig.-13), both close to the real axis. Using the same procedure as before, we find them to have exactly the same behaviour with respect to $B$, the only difference being that we have twice as many residues as we had then. Figure 13. Bethe roots for an open system with $9$ sites. The unit circle is represented in black. This time, there are two pairs of roots that merge for a critical value of $B$. Those roots get closer to the real axis as $L$ becomes large. This gives us: \begin{equation}\label{IV-2-HLMC12} H(B)\sim\frac{16}{3}\sqrt{\pi}\bigl[-\log(B)\bigl]^{3/2} \end{equation} and \begin{equation}\label{IV-2-HLMC32} B H'(B)\sim-8\sqrt{\pi}\bigl[-\log(B)\bigl]^{1/2} \end{equation} so that, for $\mu<0$, \begin{equation}\label{IV-2-gLMC5}\boxed{ E_-(\mu)-\frac{1-q}{4}\mu\sim-(1-q)\frac{1}{192}\mu^{3}} \end{equation} and, for $j<J=\frac{1-q}{4}$, \begin{equation}\label{IV-2-gLMC7}\boxed{\boxed{ g_-(j)\sim(J-j)^{3/2}\frac{16}{3(1-q)^{1/2}}.}} \end{equation} Note that we do not find the same behaviour for negative fluctuations of the current inside of the MC phase and on its boundaries, which indicates a phase transition between those two regions for $\mu<0$. We will explore that transition, and all those we mentioned in the previous sections, in more detail in section 6.. But before that, we will see what may be said of $E(\mu)$ and $g(j)$ in the limit of large fluctuations. Asymptotic limits for extreme currents In the previous section, we obtained the behaviour of the generating function of the cumulants and of the large deviation function of the current for small fluctuations. In particular, we found evidence for a number of dynamical phase transitions, in the different forms these functions take for positive or negative fluctuations. In this section, we examine the opposite limit: that of extreme fluctuations. We will do this by taking $\mu$ to $\pm \infty$ in the deformed Markov matrix $M_\mu$, and diagonalising the limiting matrix directly, without the use of integrability-related methods. There are a few caveats which need to be pointed out before we start calculating. First of all, as we saw in section 4.1., the current in the open ASEP is proportional to the entropy production, and satisfies the Gallavotti-Cohen symmetry, which means that the $\mu\rightarrow-\infty$ limit, corresponding to a large negative current, can be deduced from the $\mu\rightarrow+\infty$ limit, corresponding to a large positive current. However, if we consider the totally asymmetric case instead, the Gallavotti-Cohen symmetry is destroyed, and the $\mu\rightarrow-\infty$ limit becomes that of a null current (simply because negative currents are strictly impossible). Whether the behaviour of the TASEP for $\mu\rightarrow-\infty$ and that of the ASEP for $j\rightarrow0^+$ (which corresponds to $\mu\rightarrow-\frac{1}{2}\log(\alpha\beta/\gamma \delta q^{L-1})^+$) are the same is not entirely obvious, but we will see evidence for it in section 6.. In the meantime, we will focus on the TASEP when taking the $\mu\rightarrow-\infty$ limit. As for the $\mu\rightarrow+\infty$ limit, we will immediately see that considering the ASEP or the TASEP makes no difference whatsoever, and we will focus on the latter simply for the sake of consistency. In dealing with these limits properly, one needs to be careful.
Nothing guarantees that the equivalent of $M_\mu$ will be diagonalisable, and it will in fact not be in general. Consider for instance the case where we monitor the current on the first bond, and take $\mu$ to infinity. We are left with the matrix \begin{equation} M_\mu\sim m_0(\mu)\sim\begin{bmatrix}0 &0 \\ \alpha{\rm e}^{\mu} & 0 \end{bmatrix} \end{equation} acting only on site $1$, which is not diagonalisable. This comes from the fact that the eigenvectors do not have good limits in that basis: some entries, to which vanishing elements of $M_\mu$ apply, will diverge, which makes it inappropriate to neglect those elements. However, if we find a basis in which the equivalent of $M_\mu$ is diagonalisable, then the eigenelements of the limit are the limits of the eigenelements. Luckily, we already have a set of basis transformations at our disposal, which consist in changing the way in which we measure the current (see section 4.1.). It turns out that putting a weight $\mu_i=\frac{\mu}{L+1}$ on each bond is the simplest distribution which will yield diagonalisable limits. In this section, we will therefore consider the deformed Markov matrix defined as \begin{equation}\label{IV-3-Mmu} M_\mu=m_0(\mu_0)+\sum_{i=1}^{L-1} M_{i}(\mu_i)+m_L(\mu_l), \end{equation} where \begin{equation}\label{IV-3-Mmui} m_0(\mu_0)=\begin{bmatrix} -\alpha &0 \\ \alpha{\rm e}^{\mu_0} & 0 \end{bmatrix}~,~ M_{i}(\mu_i)=\begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0& {\rm e}^{\mu_i} & 0 \\ 0 &0& -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}~,~m_L(\mu_l)=\begin{bmatrix} 0 & \beta{\rm e}^{\mu_L} \\ 0 & -\beta \end{bmatrix}, \end{equation} with $\mu_i=\frac{\mu}{L+1}$ for all $i$'s, as our starting point. Low current limit We first examine the $\mu\!\rightarrow\!-\infty$ limit. Noting $\varepsilon\!=\!{\rm e}^{\mu/(L+1)}\rightarrow 0$, we can decompose $M_\mu$ into its diagonal elements, which are finite, and its non-diagonal elements, which vanish: \begin{equation}\label{IV-3-Mlc} M_\mu=M_d+\varepsilon M_j \end{equation} where $M_d$ is the matrix containing the diagonal (escape) rates, and $M_j$ is the matrix containing the non-diagonal (jumping) rates. The entries of $M_d$ are given by: \begin{equation}\label{IV-3-Md} M_d({\cal C},{\cal C})=-(1-n_1)\alpha-\sum\limits_{i=1}^{L-1} n_i(1-n_{i+1})-n_L \beta \end{equation} where $n_i$ is the occupancy of site $i$ in ${\cal C}$. At lowest order in $\varepsilon$, those are the eigenvalues of $M_\mu$, since the non-diagonal part vanishes. Since we are looking for the highest eigenvalue of $M_\mu$, we see that there are four possible situations (assuming that $\alpha$ and $\beta$ are limited to $[0,1]$): if $\alpha<\beta$, then the best configuration is empty ($n_i=0$ for all $i$'s), with an eigenvalue of $E=-\alpha$. If $\beta<\alpha$, we have the same in reverse: the best configuration is full ($n_i=1$ for all $i$'s) and $E=-\beta$ (those two first cases are symmetric to one another, and we will only be considering the first one) ; If $\alpha=\beta<1$, then $E=-\alpha$, and we have two competing configurations: empty or full ; if $\alpha=\beta=1$, then any configuration with a block of $1$'s followed by a block of $0$'s has an eigenvalue of $E=-1$, which is the highest, and there are $L+1$ of those. The phase diagram of the model in that limit thus consists of two phases, one transition line, and one special point (fig.-14). Figure 14 Figure 14. Phase diagram of the open ASEP for very low current. The mean density profiles are represented in red the insets. 
The profiles in orange are the individual configurations which compose the steady state. However, we will be looking to perform a Legendre transform on $E(\mu)$, so that this first order of the largest eigenvalue, which is independent of $\mu$, will not be enough: we will have to calculate the following order as well. We will therefore treat the non-diagonal part of $M_\mu$ perturbatively to extract its first non-trivial contribution to the largest eigenvalue. Empty/full phases We first consider the case where $\alpha <\beta$, where the dominant eigenvalue of $M_\mu$ is equal to $-\alpha$ at leading order in $\varepsilon$. We expand that eigenvalue and its corresponding eigenvector as a series in $\varepsilon$: \begin{equation}\label{IV-3-EMP} E(\mu)=E_0+\sum E_k \varepsilon^k~~,~~|P_\mu\rangle=|P_0\rangle+\sum \varepsilon^k |P_k\rangle \end{equation} where \begin{equation}\label{IV-3-EM} E(\mu)|P_\mu\rangle=M_\mu|P_\mu\rangle. \end{equation} We already know that \begin{equation}\label{IV-3-P0} |P_0\rangle=|00\dots00\rangle~~,~~E_0=-\alpha. \end{equation} The first order in $\varepsilon$ in (\ref{IV-3-EM}) gives: \begin{equation}\label{IV-3-P1} E_1|P_0\rangle+E_0|P_1\rangle=M_j|P_0\rangle+M_d|P_1\rangle. \end{equation} Since the only state that can be reached from $|P_0\rangle$ through $M_j$ is $|10\dots00\rangle$, which has no overlap with $|P_0\rangle$, we get: \begin{equation}\label{IV-3-E1} |P_1\rangle=\frac{1}{E_0-M_d}M_j|P_0\rangle\sim|10\dots00\rangle ~~,~~E_1=0 \end{equation} so that the first correction to $E(\mu)$ is $0$. The second order in $\varepsilon$ in (\ref{IV-3-EM}) gives: \begin{equation}\label{IV-3-P2} E_2|P_0\rangle+E_0|P_2\rangle=M_j|P_1\rangle+M_d|P_2\rangle. \end{equation} Again, the only state that can be reached from $|P_1\rangle$ through $M_j$ is $|010\dots00\rangle$, which has no overlap with $|P_0\rangle$: \begin{equation}\label{IV-3-E2} |P_2\rangle=\Bigl(\frac{1}{E_0-M_d}M_j\Bigr)^2|P_0\rangle\sim|010\dots00\rangle ~~,~~E_2=0 \end{equation} and once more, the correction to $E(\mu)$ is $0$. The first possible non-zero correction to $E(\mu)$ that we might get is when we find a $|P_k\rangle$ which has an overlap with $|P_0\rangle$. The shortest way to go back to $|P_0\rangle$ through jumps is to have one particle enter the system (the first step being $|P_1\rangle$), and then travel all the way to the other end, and exit the system. This can be done in $L+1$ steps, so that $E_k=0$ for $k:1..L$, and \begin{equation}\label{IV-3-P22} E_{L+1}|P_0\rangle+E_0|P_{L+1}\rangle=M_j|P_L\rangle+M_d|P_{L+1}\rangle. \end{equation} Putting those $L+1$ first equations together, we get \begin{equation}\label{IV-3-E3} |P_{L+1}\rangle= M_j\Bigl(\frac{1}{E_0-M_d}M_j\Bigr)^L|P_0\rangle\sim|P_0\rangle+\dots \end{equation} and \begin{equation} E_{L+1}=\langle P_0|P_{L+1}\rangle=\frac{\alpha}{1-\alpha}. \end{equation} Putting this back into $E(\mu)$, we get: \begin{equation}\label{IV-3-EmuLD}\boxed{ E(\mu)\sim -\alpha+{\rm e}^{\mu}\frac{\alpha}{1-\alpha}} \end{equation} and \begin{equation}\label{IV-3-gjLD}\boxed{\boxed{ g(j)=\alpha+j\log(j)-j\bigl(\log(\alpha/(1-\alpha))+1\bigr).}} \end{equation} We may note that taking the limit $\mu\!\rightarrow\!-\infty$ in (\ref{IV-1-EmuTASEP}) gives the same result: \begin{equation}\label{IV-3-EmuTASEP} E(\mu)=\frac{a}{a+1} \frac{ {\rm e}^\mu -1} {{\rm e}^\mu + a}\sim-\frac{1}{1+a}+\frac{1}{a}{\rm e}^\mu=-\alpha+{\rm e}^{\mu}\frac{\alpha}{1-\alpha} \end{equation} which indicates that this expression for $E(\mu)$ remains valid for all $\mu<0$. 
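As a sanity check of (\ref{IV-3-EmuLD}), one may diagonalise the deformed matrix (\ref{IV-3-Mmu}) numerically for a small system and a large negative $\mu$. The sketch below is purely illustrative (it uses an arbitrary size $L=6$, arbitrary rates with $\alpha<\beta$, and assumes numpy is available): it builds $M_\mu$ in the configuration basis, with the weight $\mu$ spread evenly over the $L+1$ bonds, and compares its largest eigenvalue with $-\alpha+{\rm e}^{\mu}\frac{\alpha}{1-\alpha}$:

\begin{verbatim}
import numpy as np
from itertools import product

def deformed_matrix(L, alpha, beta, mu):
    """Deformed generator of the open TASEP; columns index the starting
    configuration, and the weight mu is spread evenly over the L+1 bonds."""
    states = list(product([0, 1], repeat=L))
    index = {s: i for i, s in enumerate(states)}
    w = np.exp(mu / (L + 1))              # e^{mu_i} for each bond
    M = np.zeros((2**L, 2**L))
    for s in states:
        i, escape = index[s], 0.0
        if s[0] == 0:                     # injection at site 1
            M[index[(1,) + s[1:]], i] += alpha * w
            escape += alpha
        for m in range(L - 1):            # bulk jumps m -> m+1
            if s[m] == 1 and s[m + 1] == 0:
                t = list(s); t[m], t[m + 1] = 0, 1
                M[index[tuple(t)], i] += w
                escape += 1.0
        if s[-1] == 1:                    # extraction at site L
            M[index[s[:-1] + (0,)], i] += beta * w
            escape += beta
        M[i, i] = -escape                 # the diagonal part is not deformed
    return M

L, alpha, beta = 6, 0.3, 0.7              # arbitrary illustrative values, alpha < beta
for mu in (-4.0, -8.0, -12.0):
    E_num = np.linalg.eigvals(deformed_matrix(L, alpha, beta, mu)).real.max()
    E_pred = -alpha + np.exp(mu) * alpha / (1 - alpha)
    print(f"mu = {mu:5.1f}   E_num = {E_num:.10f}   E_pred = {E_pred:.10f}")
\end{verbatim}

Since the next return path to the empty configuration requires two full traversals, the first correction neglected in (\ref{IV-3-EmuLD}) is of order ${\rm e}^{2\mu}$, and the agreement improves accordingly as $\mu$ decreases.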
We will be more specific in section 6.. Finally, note that in this case, the second largest eigenvalue is $-\beta$ for $\varepsilon\rightarrow 0$ (and corresponds to a completely full system), so that the gap between the first two eigenvalues of $M_\mu$ is finite and equal, at leading order, to $\Delta E=(\beta-\alpha)$. The corresponding results for $\beta<\alpha$ (the 'full' phase) can be obtained by exchanging $\alpha$ with $\beta$. Coexistence line We now consider the slightly more complex case where $\alpha=\beta<1$. This time, there are two states with equal eigenvalues for $\mu=-\infty$: \begin{equation}\label{IV-3-P0c} |P_0\rangle=|0\rangle=|00\dots00\rangle~~{\rm with}~~E_0=-\alpha \end{equation} and \begin{equation}\label{IV-3-P0c2} |\tilde{P}_0\rangle=|1\rangle=|11\dots11\rangle~~{\rm with}~~\tilde{E}_0=-\alpha. \end{equation} As in the previous case, the first corrections to those eigenvalues are the rates with which we can go from these configurations back to themselves, but since they are degenerate, we must also consider how we can go from one to the other. As before, it takes the same $L+1$ steps to go from $|0\rangle$ to itself, or from $|1\rangle$ to itself, so that the first correction to both $E$ and $\tilde{E}$ is ${\rm e}^{\mu}\frac{\alpha}{1-\alpha}$. At this stage, those states are still degenerate. To lift the degeneracy, we have to consider the shortest way to go from $|0\rangle$ to $|1\rangle$, or the opposite. This means completely filling or emptying the system, and can be done in $L(L+1)/2$ steps. This tells us that the difference between the two highest eigenvalues is of order $\varepsilon^{L(L+1)/2}={\rm e}^{\frac{L}{2}\mu}$. For symmetry reasons, the main eigenvector is then $\frac{1}{2}(|0\rangle+|1\rangle)$, and the second one is $\frac{1}{2}(|0\rangle-|1\rangle)$. In conclusion, we have, as in the previous case, \begin{equation}\label{IV-3-EmuLDHD}\boxed{ E(\mu)\sim -\alpha+{\rm e}^{\mu}\frac{\alpha}{1-\alpha}} \end{equation} and \begin{equation}\label{IV-3-gjLDHD}\boxed{\boxed{ g(j)=\alpha+j\log(j)-j\bigl(\log(\alpha/(1-\alpha))+1\bigr)}} \end{equation} but this time, the gap behaves as $\Delta E\sim{\rm e}^{\frac{L}{2}\mu}$. To put these considerations in a more systematic format, we may use the so-called 'resolvent formalism' [171]. We present it here for two reasons: it will be useful to us in the next section, and it allows, in the case of a perturbation around degenerate states, to rigorously define an effective interaction matrix between those states. This gives us, in essence, a reduced dynamics for the system in the subset of phase space which contains only the dominant configurations. This formalism can be stated as follows: for a general matrix $M$ with eigenvalues $E_i$ and eigenvectors $ |P_i\rangle$ and $\langle P_i|$, we may write \begin{equation}\label{IV-3-proj} \oint_{C}\frac{dz}{i 2\pi}\frac{1}{z-M}=\sum\limits_{E_i\in C} |P_i\rangle\langle P_i| \end{equation} where the sum is over the eigenvalues of $M$ which lie inside of the contour $C$. Moreover, we have \begin{equation}\label{IV-3-Eproj} \oint_{C}\frac{dz}{i 2\pi}\frac{z}{z-M}=\sum\limits_{E_i\in C} E_i|P_i\rangle\langle P_i|. \end{equation} Going back to the matter at hand, which is $M_\mu$ with $\alpha=\beta$, a good way to isolate the two dominant eigenvectors is to consider that same contour integral, with a contour close enough to $-\alpha$ so that the two highest eigenvalues are inside it, but not any of the others. 
This allows us to define an effective matrix $M_{eff}$ such that: \begin{equation}\label{IV-3-Eproj2} M_{eff}=-\alpha+\oint_{C}\frac{dz}{i 2\pi}\frac{z}{z-\alpha-M_d-\varepsilon M_j} \end{equation} where $C$ is a small circle centred at $0$. We can now expand this expression in terms of $\varepsilon$: \begin{equation} M_{eff}=-\alpha+\oint_{C}\frac{dz}{i 2\pi} \sum\limits_{k=0}^{\infty}\frac{z}{z-(M_d+\alpha)}\Bigl(M_j \frac{1}{z-(M_d+\alpha)} \Bigr)^k \varepsilon ^k \end{equation} which is a sum over paths of length $k$, with transitions given by $M_j$ and a 'potential' given by $(z-M_d-\alpha)^{-1}$. We see that the only terms which contribute to the integral (i.e. that give first order poles which yield non-zero residues) are those for which $M_d$ is taken at $-\alpha$ exactly twice, which is to say the paths that go through $|0\rangle$ or $|1\rangle$ twice. It is now fairly straightforward to find the amplitudes of $M_{eff}$ between $|0\rangle$ and $|1\rangle$: we only have to project that expression between those states, and since $M_d=-\alpha$ in both of those, we only have to consider all the paths going from one of those states to another without going through them at any other point. Since we are doing a perturbative expansion in $\varepsilon$, we only need the term with the lowest number of steps. Between $|0\rangle$ and itself, or $|1\rangle$ and itself, there is only one path of the lowest length, which is $L+1$, and the amplitude for that path is $\frac{\alpha}{1-\alpha}$. Between $|0\rangle$ and $|1\rangle$, or the opposite, the shortest length is $L(L+1)/2$. There are many suitable paths for that transition, and the total amplitude is a quantity $X$ which we do not need explicitly, since that factor does not appear in the dominant term in $E(\mu)$ (it would be of order ${\rm e}^{\frac{L}{2}\mu}$). All in all, we have an effective matrix given by: \begin{equation} M_{eff}=\begin{bmatrix} -\alpha +{\rm e}^{\mu}\frac{\alpha}{1-\alpha} &X {\rm e}^{\frac{L}{2}\mu}\\ X {\rm e}^{\frac{L}{2}\mu} &-\alpha +{\rm e}^{\mu}\frac{\alpha}{1-\alpha} \end{bmatrix} \end{equation} which is easily diagonalised, and we can retrieve the results we found earlier. Equal rates point For the last case, where all the jumping rates are equal ($\alpha=\beta=1$), we find $L+1$ states with an eigenvalue equal to $-1$ for $\mu=-\infty$. Those states are given by $|k\rangle=|\{1\}_k \{0\}_{L-k}\rangle$, i.e. configurations made of a block of $1$'s followed by a block of $0$'s. Those are called 'anti-shocks', being symmetric to the usual shocks which have a low density region followed by a high density one. Using the resolvent formalism, we find: \begin{align} \langle k| M_{eff} |k\rangle&\sim-1+\varepsilon^{L+1},\\ \langle k+1| M_{eff} |k\rangle&\sim\varepsilon^{k+1},\\ \langle k-1| M_{eff} |k\rangle&\sim\varepsilon^{L-k+1}, \end{align} as well as terms of the type \begin{align} \langle k+2| M_{eff} |k\rangle&\sim X\varepsilon^{2k+3},\\ \langle k-2| M_{eff} |k\rangle&\sim Y\varepsilon^{2L-2k+3},\\ \langle k+3| M_{eff} |k\rangle&\sim Z\varepsilon^{3k+6}, \end{align} and so on. We can check those last terms to be of sub-leading order in $E(\mu)$, and we will neglect them right away. We are left with \begin{equation} M_{eff}=-1+\varepsilon^{L+1}+\sum\limits_{k=1}^{L}\varepsilon^{k}|k\rangle\langle k-1|+\varepsilon^{L-k+1}|k-1\rangle\langle k|. 
\end{equation} We can transform it through a matrix similarity to have all the non-diagonal coefficients be equal to $\varepsilon^{(L+1)/2}$, which yields: \begin{equation}\label{IV-3-MeffHuckel} \tilde M_{eff}=-1+\varepsilon^{L+1}+\varepsilon^{(L+1)/2}\sum\limits_{k=1}^{L}|k\rangle\langle k-1|+|k-1\rangle\langle k|. \end{equation} This is a well known tridiagonal matrix, used for instance to model the electronic interactions in conjugated dienes through the Hückel method [172]. It is easily diagonalised (which is left as an exercise to the reader). Its eigenvalues are \begin{equation}\label{IV-3-Ek} E^{(k)}=-1+2\varepsilon^{(L+1)/2} \cos(k \pi/(L+2)) \end{equation} for $k\in[\![1,L+1]\!]$. The highest one is \begin{equation}\label{IV-3-Ek2} E^{(1)}=-1+2\varepsilon^{(L+1)/2}\cos(\pi/(L+2)) \end{equation} and the gap to the second one is \begin{equation} \Delta E=2\varepsilon^{(L+1)/2}\Bigl(\cos(\pi/(L+2))-\cos(2 \pi/(L+2))\Bigr)\sim \frac{3\pi^2}{L^2}{\rm e}^{\mu/2}. \end{equation} Ultimately, we find that \begin{equation}\label{IV-3-EER-}\boxed{ E(\mu)\sim-1+2{\rm e}^{\mu/2}} \end{equation} and \begin{equation}\label{IV-3-gjLDc}\boxed{\boxed{ g(j)=1+2j\log(j)-2j.}} \end{equation} Moreover, knowing that the eigenvector associated to that first eigenvalue is distributed according to a sine function, which is to say that the probability of $|k\rangle$ is: \begin{equation} {\rm P}(k)\sim\sin\biggl(\frac{\pi k}{L+1}\biggr)^2 \end{equation} we find that the mean density $\rho_n$ at site $n$ is of the form: \begin{equation}\boxed{ \rho_n=1-\frac{n}{L+1}+\frac{1}{2\pi}\sin\biggl(\frac{2\pi n}{L+1}\biggr).} \end{equation} Note that this probability, being given by the product of the coefficients of the right and left dominant eigenvectors on state $|k\rangle$, corresponds to the probability of observing $|k\rangle$ conditioned on a low current in the original process $M_{eff}$, even though it is obtained from $\tilde M_{eff}$, because these matrices are similar. High current limit We now consider the limit where $\mu\rightarrow\infty$. In this case, the diagonal part of $M_\mu$ is negligible, and the non-diagonal part is dominant. Since this non-diagonal part depends on $\mu$, we do not need to conserve the sub-dominant part in order to transform $E(\mu)$ to $g(j)$. To make certain upcoming calculations easier, we will shift our choice of $\mu_i$'s slightly, to \begin{align} \mu_0&=\frac{\mu}{L+1}+\frac{1}{L+1}\log(2\alpha\beta)-\log(\sqrt{2}\alpha),\\ \mu_i&=\frac{\mu}{L+1}+\frac{1}{L+1}\log(2\alpha\beta),\\ \mu_L&=\frac{\mu}{L+1}+\frac{1}{L+1}\log(2\alpha\beta)-\log(\sqrt{2}\beta), \end{align} so that we may write \begin{equation}\label{IV-3-Mhc} M_\mu\sim(2\alpha\beta{\rm e}^{\mu})^{\frac{1}{L+1}} M_j \end{equation} with \begin{equation}\label{IV-3-M+hc} M_j=\frac{1}{\sqrt{2}} S_1^+ +\sum\limits_{n=1}^{L-1}S_n^- S_{n+1}^+ +\frac{1}{\sqrt{2}}S_L^- \end{equation} where $S_n^\pm$ are the operators for the creation or annihilation of a particle at site $n$. Note that the dependence in $\mu$, but also $\alpha$ and $\beta$, is only in a global pre-factor in $M_\mu$, and hence in $E(\mu)$. 
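As a rough numerical illustration of this factorisation (again purely a sketch, with arbitrary parameter values, and with the weight distributed evenly over the bonds, which does not change the spectrum), one can check on a small system that the largest eigenvalue of $M_\mu$, divided by $(2\alpha\beta{\rm e}^{\mu})^{\frac{1}{L+1}}$, approaches a constant which no longer depends on $\alpha$, $\beta$ or $\mu$ as $\mu$ grows:

\begin{verbatim}
import numpy as np
from functools import reduce

Sp = np.array([[0., 0.], [1., 0.]])      # creation of a particle
Sm = Sp.T                                 # annihilation of a particle
n, v, I2 = np.diag([0., 1.]), np.diag([1., 0.]), np.eye(2)

def embed(L, ops):
    """Tensor product placing the given single-site operators at the given sites."""
    return reduce(np.kron, [ops.get(k, I2) for k in range(L)])

def M_mu(L, alpha, beta, mu):
    w = np.exp(mu / (L + 1))              # equal weight on each of the L+1 bonds
    M = alpha * (w * embed(L, {0: Sp}) - embed(L, {0: v}))
    M += beta * (w * embed(L, {L - 1: Sm}) - embed(L, {L - 1: n}))
    for i in range(L - 1):                # bulk jumps i -> i+1
        M += w * embed(L, {i: Sm, i + 1: Sp}) - embed(L, {i: n, i + 1: v})
    return M

L = 5                                     # arbitrary small size
for alpha, beta in [(0.3, 0.7), (0.9, 0.4)]:
    for mu in (20.0, 40.0):
        E = np.linalg.eigvals(M_mu(L, alpha, beta, mu)).real.max()
        print(alpha, beta, mu, E / (2 * alpha * beta * np.exp(mu))**(1 / (L + 1)))
\end{verbatim}

The residual dependence on the rates should die out as ${\rm e}^{-\mu/(L+1)}$, which is the relative size of the diagonal part that is being neglected.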
This tells us that we should not expect any phase transitions in this limit (except perhaps at $\alpha=0$ or $\beta=0$): the largest eigenvalue of $M_\mu$ will simply be that pre-factor multiplied by a constant: \begin{equation} E(\mu)\propto(2\alpha\beta{\rm e}^{\mu})^{\frac{1}{L+1}} \end{equation} This tells us already most of what we want to know about the behaviour of the large deviations of the current for extremely large positive fluctuations, but we can learn much more, as it turns out that $M_j$ is exactly diagonalisable. We may recognise $M_j$ to be the upper half of the Hamiltonian of an open XX spin chain [173]. Moreover, it happens to commute with its transpose. We know, from the Perron-Frobenius theorem, that the highest eigenvalue of that matrix is real and non-degenerate. It is therefore also the highest eigenvalue of its transpose, with the same eigenvectors (because they commute). This allows us to define $H=\frac{1}{2}(M_j+{}^t\!M_j)$, which has the same highest eigenvalue and the same eigenvectors as $M_j$. $H$ is given by: \begin{equation}\boxed{ H=\frac{1}{\sqrt{8}} S_1^x +\frac{1}{2}\sum\limits_{n=1}^{L-1}(S_n^- S_{n+1}^++S_n^+ S_{n+1}^-) +\frac{1}{\sqrt{8}}S_L^x} \end{equation} which is the Hamiltonian for the open XX chain with spin $1/2$ and extra boundary terms $S_1^x$ and $S_L^x$ (with $S^x=S^++S^-$). This spin chain was studied for general boundary conditions in [173]. We will present here a simpler version of their calculations, only applicable to our situation but much less intricate than the general solution. The main problem in dealing with $H$ is that it is not entirely quadratic. The first step in diagonalising it is to remedy this by considering two extra sites, one at $0$ and one at $L+1$, which we couple with our system by defining an new Hamiltonian: \begin{equation} \tilde{H}=\frac{1}{\sqrt{8}}S_0^x S_1^x +\frac{1}{2}\sum\limits_{n=1}^{L-1}(S_n^- S_{n+1}^++S_n^+ S_{n+1}^-) +\frac{1}{\sqrt{8}}S_L^xS_{L+1}^x. \end{equation} Since $[\tilde{H},S_0^x]=[\tilde{H},S_{L+1}^x]=0$, this modified Hamiltonian has four sectors, corresponding to the eigenspaces of $S_0^x$ and $S_{L+1}^x$. Since each of those has two eigenvalues $1$ and $-1$, we can recover $H$ by projecting $\tilde{H}$ onto the eigenspaces of $S_0^x$ and $S_{L+1}^x$ where both eigenvalues are $1$: \begin{equation} H=\frac{1}{4}\bigl(\!\langle 0_0|+\langle 1_0|\bigr)\!\otimes\!\bigl(\!\langle 0_{L+1}|+\langle 1_{L+1}|\bigr)\tilde{H}\bigl(|0_0\rangle+|1_0\rangle\!\bigr)\!\otimes\!\bigl(|0_{L+1}\rangle+|1_{L+1}\rangle\!\bigr). \end{equation} We are now left with diagonalising $\tilde{H}$. The rest of the calculation is a rather standard approach to quadratic spin chains. We first perform a Jordan-Wigner transformation on the operators $S_n^\pm$ : \begin{align} c_n=\Biggl(\prod\limits_{m=0}^{n-1}(-1)^{n_m}\Biggr)S_n^-~~~~&,~~~~c_n^\dagger=\Biggl(\prod\limits_{m=0}^{n-1}(-1)^{n_m}\Biggr)S_n^+,\\ S_n^-=\Biggl(\prod\limits_{m=0}^{n-1}(-1)^{c_m^\dagger c_m}\Biggr)c_n~~~~&,~~~~S_n^+=\Biggl(\prod\limits_{m=0}^{n-1}(-1)^{c_m^\dagger c_m}\Biggr)c_n^\dagger, \end{align} where $n_m$ is the number of particles on site $m$, with values in $\{0,1\}$. This yields fermionic operators: \begin{align} \{c_n^\dagger,c_m\}&=\delta_{n,m},\\ \{c_n^\dagger,c_m^\dagger\}&=0,\\ \{c_n,c_m\}&=0. 
\end{align} The elements of $\tilde{H}$ become: \begin{align} S_n^+S_{n+1}^-&=c_n^\dagger c_{n+1},\\ S_n^-S_{n+1}^+&=c_{n+1}^\dagger c_n,\\ S_0^x S_1^x&=(c_0^\dagger-c_0)(c_1^\dagger+c_1),\\ S_L^xS_{L+1}^x&=(c_L^\dagger-c_L)(c_{L+1}^\dagger+c_{L+1}), \end{align} so that \begin{equation} \tilde{H}=\frac{1}{\sqrt{8}}(c_0^\dagger-c_0)(c_1^\dagger+c_1) +\frac{1}{2}\sum\limits_{n=1}^{L-1}(c_n^\dagger c_{n+1}+c_{n+1}^\dagger c_n) +\frac{1}{\sqrt{8}}(c_L^\dagger-c_L)(c_{L+1}^\dagger+c_{L+1}). \end{equation} We now perform a Bogoliubov transformation [174] on $\tilde{H}$, writing it as \begin{equation} \tilde{H}={\cal E}_0+\sum\limits_{k}{\cal E}_k d_k^\dagger d_k \end{equation} with all the ${\cal E}_k>0$, and the $d_k$'s to be determined. We want the $d_k$'s to be fermionic, so that $[\tilde{H},d_k^\dagger]={\cal E}_k d_k^\dagger$, which is the equation we will now try to solve. We have two trivial solutions with energy $0$ (called zero-modes): $(c_0^\dagger+c_0)$ and $(c_{L+1}^\dagger-c_{L+1})$. For the other solutions, we write: \begin{equation} d_k^\dagger=\frac{X^{(k)}}{\sqrt{2}}(c_0^\dagger-c_0)+\sum\limits_{n=1}^{L}A_i^{(k)}c_n^\dagger+B_n^{(k)}c_n+\frac{Y^{(k)}}{\sqrt{2}}(c_{L+1}^\dagger+c_{L+1}) \end{equation} and $[\tilde{H},d_k^\dagger]={\cal E}_k d_k^\dagger$ becomes: \begin{align} A_{n+1}^{(k)}+A_{n-1}^{(k)}&=2{\cal E}_k A_n^{(k)}~~~~{\rm for}~~ n\in[\![2,L-1]\!],\\ -B_{n+1}^{(k)}-B_{n-1}^{(k)}&=2{\cal E}_k B_n^{(k)}~~~~{\rm for}~~ n\in[\![2,L-1]\!],\\ X^{(k)}+A_2^{(k)}&=2{\cal E}_k A_1^{(k)},\\ X^{(k)}-B_2^{(k)}&=2{\cal E}_k B_1^{(k)},\\ A_{1}^{(k)}+B_{1}^{(k)}&=2{\cal E}_k X^{(k)},\\ Y^{(k)}+A_{L-1}^{(k)}&=2{\cal E}_k A_L^{(k)},\\ -Y^{(k)}-B_{L-1}^{(k)}&=2{\cal E}_k B_L^{(k)},\\ A_{L}^{(k)}-B_{L}^{(k)}&=2{\cal E}_k Y^{(k)}. \end{align} All those equations can be written in a more compact form by defining: \begin{align} A_{L+1}^{(k)}&=Y^{(k)},\\ A_{L+1+n}^{(k)}&=(-1)^{n}B_{L+1-n}^{(k)},\\ A_{0}^{(k)}&=X^{(k)}, \end{align} for which they become: \begin{align} A_{n+1}^{(k)}+A_{n-1}^{(k)}&=2{\cal E}_k A_n^{(k)}~~~~{\rm for}~~ n\in[\![1,2L]\!] \label{IV-3-Anbulk},\\ A_{2L}^{(k)}+(-1)^{L}A_{0}^{(k)}&=2{\cal E}_k A_{2L+1}^{(k)},\\ (-1)^{L}A_{2L+1}^{(k)}+A_{1}^{(k)}&=2{\cal E}_k A_{0}^{(k)}. \end{align} These are the same equations which we would have found for a periodic $XX$ spin chain with $2L+2$ sites, the only (but important) difference being that $d_k^\dagger$ mixes $c_k$'s and $c_k^\dagger$'s, so that the total spin is not conserved. We look for plane wave solutions of the form $A_n=r^n$, with $2{\cal E}=r+\frac{1}{r}$. This automatically solves eq.(\ref{IV-3-Anbulk}). The other two equations become: \begin{align} r^{2L}+(-1)^{L}&=r^{2L}+r^{2L+2},\\ (-1)^{L}r^{2L+1}+r&=r+\frac{1}{r} \end{align} and both simplify into \begin{equation} r^{2L+2}=(-1)^{L}. 
\end{equation} We have $2L+2$ solutions to this equation, given by $r=\omega_k={\rm e}^{\frac{i\pi(L-2k+2)}{2L+2}}$ for $ k\in[\![1,2L+2]\!]$, so that $A_n^{(k)}=\omega_k^n$, and the energies are given by: \begin{equation}\boxed{ {\cal E}_k=\cos\biggl(\frac{(L-2k+2)\pi}{2L+2}\biggr)=\sin\biggl(\frac{(2k-1)\pi}{2L+2}\biggr)~~~~~~~~{\rm for}~~ k\in[\![1,2L+2]\!].} \end{equation} We can then write the $d_k^\dagger$'s as: \begin{equation} d_k^\dagger=\frac{1}{\sqrt{2L+2}}\Biggl(\frac{1}{\sqrt{2}}(c_0^\dagger-c_0)+\sum\limits_{n=1}^{L}\omega_k^n c_n^\dagger-(-\omega_k)^{-n}c_n+\frac{\omega_k^{L+1}}{\sqrt{2}}(c_{L+1}^\dagger+c_{L+1})\Biggr) \end{equation} and the inverse relations as: \begin{align} \frac{1}{\sqrt{2}}(c_0^\dagger-c_0)&=\frac{1}{\sqrt{2L+2}}\sum\limits_{k=1}^{2L+2}d_k^\dagger,\\ c_n^\dagger&=\frac{1}{\sqrt{2L+2}}\sum\limits_{k=1}^{2L+2} \omega_k^{-n} d_k^\dagger,\\ c_n&=\frac{1}{\sqrt{2L+2}}\sum\limits_{k=1}^{2L+2}- (-\omega_k)^{n} d_k^\dagger,\\ \frac{1}{\sqrt{2}}(c_{L+1}^\dagger+c_{L+1})&=\frac{1}{\sqrt{2L+2}}\sum\limits_{k=1}^{2L+2} \omega_k^{-L-1} d_k^\dagger\label{CL+1}. \end{align} Note that the $d_k^\dagger$'s are fermions, but, because there are $2L+2$ of them, and only $L+2$ of the $c_k^\dagger$'s, they are not all independent: we have $\omega_{L+1+k}=-\omega_k$, so that $d_{L+1+k}^\dagger=-d_k$. We now need to determine the constant term ${\cal E}_0$ in $\tilde{H}$. Considering only the scalar terms in $\sum\limits_{k}{\cal E}_k d_k^\dagger d_k$, we find: \begin{align} &\sum\limits_{k=1}^{L+1}{\cal E}_k\frac{1}{2L+2}\Biggl(\frac{1}{2}(c_0^\dagger-c_0)(c_0-c_0^\dagger)+\sum\limits_{n=1}^{L}c_n^\dagger c_n+c_n c_n^\dagger+\frac{1}{2}(c_{L+1}^\dagger+c_{L+1})(c_{L+1}+c_{L+1}^\dagger)\Biggr)\nonumber\\ &~~=\frac{1}{2}\sum\limits_{k=1}^{L+1}{\cal E}_k. \end{align} Since there is no scalar part in $\tilde{H}$, we must therefore have: \begin{equation} {\cal E}_0=-\frac{1}{2}\sum\limits_{k=1}^{L+1}{\cal E}_k. \end{equation} The highest eigenvalue can then be obtained by considering the state with all the energy levels occupied: \begin{equation}\boxed{ E={\cal E}_0+\sum\limits_{k=1}^{L+1}{\cal E}_k=\frac{1}{2}\sin\Bigl(\frac{\pi}{2L+2}\Bigr)^{-1}\sim \frac{L}{\pi}.} \end{equation} The corresponding eigenstate is defined by \begin{equation}\label{IV-3-psi}\boxed{ |\psi\rangle=\prod\limits_{k=1}^{L+1}d_k^\dagger|\Omega\rangle} \end{equation} which is such that $d_k^\dagger|\psi\rangle=0$ for $k\in[\![1,L+1]\!]$. The vector $|\Omega\rangle$ is arbitrary (provided that it is not in the kernel of any of the $d_k^\dagger$'s that we apply to it). Remembering the global factor which we took out of $M_\mu$ at the beginning of this section, we finally get: \begin{equation}\boxed{ E(\mu)\sim\frac{L}{\pi}{\rm e}^{\mu/L}} \end{equation} and \begin{equation}\label{IV-3-gMMC}\boxed{\boxed{ g(j)\sim L\bigl( j\log(j)-j(1-\log(\pi))\bigl)}} \end{equation} which is proportional to $L$, consistently with eq.(\ref{IV-2-gLMC4}) which was obtained for positive fluctuations of the current in the MC phase. We now have what we wanted, but we can have a look at the eigenvector (\ref{IV-3-psi}) as well. The first and easiest calculation that we can do here is that of the two-point correlations in $|\psi\rangle$. 
The connected correlation between the occupancies of sites $n$ and $m$ is given by: \begin{align} C_{nm}&=\langle \psi|c^\dagger_n c_n c^\dagger_m c_m|\psi\rangle-\langle \psi|c^\dagger_n c_n |\psi\rangle \langle \psi|c^\dagger_m c_m|\psi\rangle\nonumber\\ &=-\langle \psi|c^\dagger_n c^\dagger_m|\psi\rangle \langle \psi|c_n c_m|\psi\rangle+\langle \psi|c^\dagger_n c_m |\psi\rangle \langle \psi|c_n c^\dagger_m |\psi\rangle \end{align} (using Wick's theorem). We find that: \begin{align} \langle \psi|c^\dagger_n c_m |\psi\rangle&=\frac{1}{2L+2}\sum\limits_{k=1}^{L+1} \omega_k^{m-n}, \label{IV-cdncm}\\ \langle \psi|c_n c^\dagger_m |\psi\rangle&=\frac{1}{2L+2}\sum\limits_{k=1}^{L+1} (-\omega_k)^{n-m}, \label{IV-cncdm}\\ \langle \psi|c^\dagger_n c^\dagger_m|\psi\rangle&=-\frac{1}{2L+2}\sum\limits_{k=1}^{L+1} \omega_k^{-n}(-\omega_{k})^{-m}, \label{IV-cdncdm}\\ \langle \psi|c_n c_m|\psi\rangle&=-\frac{1}{2L+2}\sum\limits_{k=1}^{L+1} (-\omega_{k})^{n}\omega_k^{m}, \label{IV-cncm} \end{align} If $n$ and $m$ have same parity, each of those terms sum to $0$ (unless $n=m$ in the first two sums, but here we consider two different sites). If not, we get: \begin{align} \langle \psi|c^\dagger_n c_m |\psi\rangle&=\frac{1}{L+1}\frac{{\rm e}^{i\pi(L)(m-n)/(2L+2)}}{1-{\rm e}^{-i\pi(m-n)/(L+1)}} , \\ \langle \psi|c^\dagger_n c^\dagger_m|\psi\rangle&=-\frac{1}{L+1}(-1)^m\frac{{\rm e}^{i\pi(L)(-m-n)/(2L+2)}}{1-{\rm e}^{i\pi(m+n)/(L+1)}} , \\ \langle \psi|c_n c_m|\psi\rangle&= -\frac{1}{L+1}(-1)^n\frac{{\rm e}^{i\pi(L)(m+n)/(2L+2)}}{1-{\rm e}^{-i\pi(m+n)/(L+1)}}, \end{align} so that \begin{equation}\boxed{ C_{mn}=\frac{1}{4(L+1)^2\sin^2\Bigl(\frac{\pi(m+n)}{(2L+2)}\Bigr)}-\frac{1}{4(L+1)^2\sin^2\Bigl(\frac{\pi(m-n)}{(2L+2)}\Bigr)}.} \end{equation} The correlations are therefore exactly $0$ for sites which are an even number of bonds apart (as is the case for a half-filled periodic chain [103]), and behave as \begin{equation}\label{IV-3-corrMC} C_{mn}\sim -\frac{1}{\pi^2 (m-n)^2} \end{equation} otherwise, if the two sites are far away enough from the boundaries. Note that those correlations do not vanish with the size of the system, in contrast with the steady state of the ASEP at $\mu=0$, where they behave as $L^{-1}$ in the maximal current phase and vanished exponentially in the high and low density phases [108]. We can now examine the (un-normalised) probability of any given configuration. This can be expressed as \begin{equation}\label{Phnpm} P(\{h_n,p_n\})=\langle \psi|\prod\limits_{n=0}^{N}c_{h_n}^{} c_{h_n}^{\dagger}\frac{(c_{L+1}^{\dagger}+c_{L+1})^2}{2}\prod\limits_{n=N+1}^{L}c_{p_n}^{\dagger} c_{p_n}^{}|\psi\rangle \end{equation} which is to say that we select only the configuration that has holes at positions $\{h_n\}$ and particles at positions $\{p_n\}$ in $|\psi\rangle$, and project it onto its Hermitian conjugate. Note that the term $\frac{(c_{L+1}^{\dagger}+c_{L+1})^2}{2}$ makes no difference (since it is equal to a constant factor $\frac{1}{2}$), but is there to have effectively $L+1$ sites instead of $L$, which will be useful shortly. From the expression of this term in (\ref{CL+1}), we see that it corresponds to having a hole (or, in fact, a particle) at site $L+1$. Note also that the terms in (\ref{Phnpm}) can be reordered as long as any pair $\{c_n,c^\dagger_n\}$ is kept in the same order, so that we can regroup all the $c$'s from the first product to the left, for instance. 
We can now use Wick's theorem [175] on this expression, and write it as the Pfaffian of an anti-symmetric matrix ${\cal A}$ whose upper triangle consists of all the mean values $\langle \psi|ab|\psi\rangle$ where $a$ and $b$ are two terms from the product in (\ref{Phnpm}), taken in the same order. We can write it as a block matrix: \begin{equation} {\cal A}=\begin{bmatrix} A_1 & \langle \psi|c_{h_n}c_{h_m}^\dagger |\psi\rangle & \langle \psi|c_{h_n} c_{p_m}^\dagger |\psi\rangle & \langle \psi|c_{h_n} c_{p_m}|\psi\rangle \\ -\langle \psi|c_{h_m}c_{h_n}^\dagger |\psi\rangle & A_2 & \langle \psi|c_{h_n}^\dagger c_{p_m}^\dagger |\psi\rangle & \langle \psi|c_{h_n}^\dagger c_{p_m} |\psi\rangle\\ -\langle \psi|c_{h_m} c_{p_n}^\dagger |\psi\rangle & -\langle \psi|c_{h_m}^\dagger c_{p_n}^\dagger |\psi\rangle & A_3 & \langle \psi|c_{p_n}^\dagger c_{p_m} |\psi\rangle\\ -\langle \psi|c_{h_m} c_{p_n} |\psi\rangle & -\langle \psi|c_{h_m}^\dagger c_{p_n} |\psi\rangle & -\langle \psi|c_{p_m}^\dagger c_{p_n} |\psi\rangle & A_4 \end{bmatrix} \end{equation} where $A_1|_{n,m}= \langle \psi|c_{h_n}c_{h_m} |\psi\rangle$ if $n<m$ and $A_1|_{n,m}=- \langle \psi|c_{h_m}c_{h_n} |\psi\rangle$ if $n>m$. The same goes for $A_2$ with $c_{h_n}^\dagger$, $A_3$ with $c_{p_n}$ and $A_4$ with $c_{p_n}^\dagger$. Looking at the expression given in (\ref{IV-cdncm}) to (\ref{IV-cncm}), we see that $- \langle \psi|c_m c_n |\psi\rangle=\langle \psi|c_n c_m |\psi\rangle$, $- \langle \psi|c_m^\dagger c_n^\dagger |\psi\rangle=\langle \psi|c_n^\dagger c_m^\dagger |\psi\rangle$ and $- \langle \psi|c_m c_n^\dagger |\psi\rangle=\langle \psi|c_n^\dagger c_m |\psi\rangle-\delta_{n,m}$. We also note that all those block matrices can be factorised in a simple way: if we define \begin{align} X^-_{n,k}=-(-w_k)^{h_n},~~~~~~&~~~~~~X^+_{n,k}=w_k^{-h_n},\\ Y^-_{n,k}=-(-w_k)^{p_n},~~~~~~&~~~~~~Y^+_{n,k}=w_k^{-p_n}, \end{align} then ${\cal A}$ can be rewritten as \begin{equation} {\cal A}=\begin{bmatrix} X^-(X^+)^\dagger & X^-(X^-)^\dagger & X^-(Y^-)^\dagger & X^-(Y^+)^\dagger\\ X^+(X^+)^\dagger -1_h& X^+(X^-)^\dagger & X^+(Y^-)^\dagger& X^+(Y^+)^\dagger \\ Y^+(X^+)^\dagger & Y^+(X^-)^\dagger & Y^+(Y^-)^\dagger& Y^+(Y^+)^\dagger \\ Y^-(X^+)^\dagger & Y^-(X^-)^\dagger & Y^-(Y^-)^\dagger-1_p& Y^-(Y^+)^\dagger \end{bmatrix} \end{equation} where $1_h$ and $1_p$ are identity matrices whose respective sizes are the numbers of holes and particles (one of which is occupying site $L+1$, so that the sum of the two numbers is $L+1$). In order to calculate $P(\{h_n,p_n\})={\rm Pf}\bigl[{\cal A} \bigr]$, we need to first consider its square $P(\{h_n,p_n\})^2={\rm Det}\bigl[{\cal A} \bigr]$. After exchanging a few lines and columns in that determinant, we can write it in a factorised form: \begin{equation} {\rm Det}\bigl[{\cal A} \bigr]={\rm Det}\left[\begin{array}{ c| c} \substack{X^+\\ Y^-}&-1_{L+1}\\ \hline \substack{X^-\\ Y^+}&0\end{array}\right]\centerdot\left[\begin{array}{ c| c} (X^+)^\dagger ~(Y^-)^\dagger&(X^-)^\dagger ~(Y^+)^\dagger\\ \hline 1_{L+1}&0\end{array}\right]. \end{equation} Each block in this last expression is of a square matrix of size $L+1$ (which is where the fact that we included site $L+1$ becomes useful). That determinant then reduces to: \begin{equation} P(\{h_n,p_n\})^2={\rm Det}\bigl[\substack{X^-\\ Y^+} \bigr]{\rm Det}\bigl[(X^-)^\dagger ~(Y^+)^\dagger\bigr]=\big|{\rm Det}\bigl[\substack{X^-\\ Y^+} \bigr]\big|^2. 
\end{equation} After a few final simplifications, we find \begin{equation}\boxed{ P(\{h_n,p_n\})={\rm Det}\bigl[\omega^{h_n k}, \omega^{-p_n k} \bigr]} \end{equation} where $\omega={\rm e}^{\frac{i\pi}{L+1}}$. We recognise this to be Vandermonde determinant which gives, for any configuration: \begin{equation}\boxed{\boxed{ P(\{n_n\})=\prod\limits_{n_n=n_m}[\sin(r_m - r_n)]\prod\limits_{n_n\neq n_m}[\sin(r_m + r_n)]}} \end{equation} where $n_n$ is the occupancy of site $n$, and $r_n=n \pi/(2L+2)$. Note that all these probabilities are still un-normalised. This distribution is exactly that of a Dyson-Gaudin gas [176], which is a discrete version of the Coulomb gas, on a periodic lattice of size $2L+2$, with two defect sites (at $0$ and $L+1$) that have no occupancy, and a reflection anti-symmetry between one side of the system and the other (fig.-15). The first (upper) part of the gas is given by the configuration we are considering, and the second (lower) is deduced by anti-symmetry. The interaction potential between two particles at positions $r_n$ and $r_m$ is then given by: \begin{equation} V(r_n,r_m)=-\log\bigl(\sin(r_m - r_n)\bigr). \end{equation} It was shown in [103] that the large current limit of the steady state of the periodic ASEP of size $L$ also converges to a Dyson-Gaudin gas (the standard periodic case, without defects or symmetry). Figure 15 Figure 15. Dyson-Gaudin gas equivalent for the configuration $(110101000110111)$ for the open ASEP conditioned on a large current. The lower part of the system is deduced from the upper part by an axial anti-symmetry. We should note that the trick consisting in taking the sum of $M_j$ and its transpose to reconstruct an XX spin chain is not in fact necessary [177]. All the calculations we saw can be performed, in a slightly different way, on $M_j$ directly, which has the added advantage that the imaginary part of the other eigenvalues is not lost. Generic dynamical phase transition As we stated in the introduction to the present section, none of the methods we used to analyse the $\mu\rightarrow\pm\infty$ limits appear to rely on the ASEP being integrable. It could be, however, that they do require it in a subtle way. This is, in fact, not the case, as it turns out that those two limits can be treated in the exact same way, and yield very similar results, for a relatively broad class of models which are in general not integrable [178]. Let us consider a generalisation of the open TASEP, with site-dependent jump rates $p_i$, as well as an arbitrary (but finite) extra interaction potential $V({\cal C})$ which may depend on the whole configuration ${\cal C}$, so that the rate for a particle to jump from site $i$, turning configuration ${\cal C}$ into ${\cal C}'$, is of the form $p_i{\rm e}^{(V({\cal C}')-V({\cal C}))/2}$. The methods we used for the TASEP can be applied in precisely the same way to that new model. In the low current limit, we found for the TASEP that the first order in perturbation of the largest eigenvalue did not depend on the size of the system, in every case. This comes from the fact that, whether the dominant eigenstate at order $0$ is degenerate or not, the fastest way for the system to leave one of these states and come back to it is to have one particle jump through the whole system once, which corresponds to one quantum of current. This remains true for the generalised model we are considering now, so that the large deviation function of the current does not depend on the size of the system for low currents. 
In the large current limit, the similarity is even stronger, as was noticed in [179]. The non-diagonal part of the deformed Markov matrix which we are keeping in that limit is entirely equivalent to that of the simple TASEP, up to a matrix similarity, so that its whole spectrum is the same (up to a multiplicative constant, in fact). The eigenvectors are also related to those of the TASEP, and the product of the right and left dominant eigenvectors, which gives the stationary density conditioned on a large current, is identical. More importantly, the large deviation function of the current is linear in the size of the system for large currents. A detailed account of these statements can be found in [178]. In the case of the open TASEP, these two limits are compatible with the results obtained for small fluctuations of the current around the MC phase: the large deviation function is independent of the size of the system for negative fluctuations, and proportional to it for positive fluctuations, so that a dynamical phase transition takes place at the average current. In the next section, we will see that this comes from a transition in the appropriate coarse-graining scheme in the large size limit. For currents which are below $\frac{1}{4}$, which is the average current in the MC phase, the system can be described through a hydrodynamic (i.e. mean-field) description, where the large size behaviour of the system depends only on the local density. In that scenario, as we shall see shortly, the optimal way to produce a fluctuation of the current is to have a localised fluctuation of the density. The localised nature of that fluctuation means that its cost, in terms of probability, will not depend on the size of the system. However, as we saw in section 3.2., the largest current which can be produced in the mean-field approach is $\frac{1}{4}$. In order to have the current fluctuate above that value, one has to introduce correlations, spread out through the entire system, which can be seen in the large current limit. Since these correlations are needed everywhere in the system, the probability cost is extensive, hence the factor $L$ in the large deviation function of the current. Moreover, because of those correlations, the local density fluctuates much less than in the hydrodynamic case, so that these states are sometimes called 'hyperuniform' [93]. One would hope that these observations remain true in the case of generalised rates, and that the behaviour of the large deviation function of the current in these two limits is a good indication of the presence of a dynamical phase transition somewhere in between. It is not known, for the moment, whether there are any universal features to that transition. It would be particularly interesting to examine the case where the jumping rates are disordered. Hydrodynamic description and dynamical phase diagram In the previous sections, we obtained, by various exact calculation methods, the behaviour of the large deviation function of the current of the open TASEP in three different limits: around the average current, and for very high and very low currents. In this final section, we will see how the gaps between those limits can be bridged through a hydrodynamic description of the system, based on the macroscopic fluctuation theory (MFT, [73]).
Although this method is much less rigorous than the exact calculations performed earlier (because it relies on a poorly controlled exchange of limits), we will see that it gives much more information on the fluctuations of the currents, in the cases where it applies, and gives us access to the complete dynamical phase diagram of the open ASEP. Five phases will be obtained: a low and a high density phase, which are the continuation of those same phases for the average current, and which meet the empty and full phases in the low current limit ; a shock phase, which stems from the shock line, for positive fluctuations of the current ; an anti-shock phase, which is obtained by decreasing the current from the maximal current phase of the average current ; and a maximal current phase, which is obtained for $j>\frac{1}{4}$. In the first four of these phases, the large deviation function of the current is independent of the size of the system, and the results obtained with this method agree with what we obtained earlier in the appropriate limits. In the last phase, the hydrodynamic description breaks down, as the system goes through the dynamical phase transition which we discussed at the end of the previous section. We will start by presenting the principle of the method, which was first used in [83] on the open ASEP, but works equally well for more general one-dimensional driven lattice gases. We will then use it to describe all the dynamical phases of the ASEP, obtaining the large deviation function of the current, as well as the form of the typical profiles conditioned on that current, save for the maximal current phase where the method breaks down. This will allow us to draw the dynamical phase diagram of the system. Finally, we will comment on evidence showing that this method yields not only the dominant state and eigenvalue of the deformed Markov matrix, but also a large family of higher eigenstates. Macroscopic Fluctuation Theory: from WASEP to ASEP The method which we will be using in this section, and which can be found in [83], is based on the macroscopic fluctuation theory [73], or MFT for short. The principle is the following: for a noisy Langevin equation with a Gaussian noise, such as a noisy Burgers equation, the probability of a certain history is given by the Gaussian weight of the noise which generates it. For instance, for a driven one-dimensional interacting particle system, with a conductivity $\sigma(\rho)$ and a diffusion coefficient $D(\rho)$, subject to a field $F$, the local density verifies the equation \begin{equation}\label{LangevinGas} {\rm d}_t \rho=-\nabla j~~~~{\rm with}~~~~j=F\sigma(\rho)-D(\rho)\nabla\rho+\sqrt{\sigma(\rho)}\xi, \end{equation} where $\xi$ is a Gaussian noise with mean $0$ and variance $1$, and it is implied that $\rho$ and $j$ depend on $x$ and $t$. The probability of a certain history $\{j(x,t),\rho(x,t)\}$ is thus given by \begin{equation} -\log\Bigl({\rm P}\bigl[\{j(x,t),\rho(x,t)\}\bigr]\Bigr)\sim\int_0^t d\tau\int_0^1\frac{\bigl[j-F\sigma(\rho)+D(\rho)\nabla\rho\bigr]^2}{2\sigma(\rho)}dx \end{equation} with the constraint that ${\rm d}_t \rho=-\nabla j$. This gives explicitly the joint large deviation function of the current and density. In order to obtain that of the current alone, one then has to minimise that quadratic action with respect to the density (as stated by the contraction principle ; c.f. section 2.1.). 
This minimisation will give us not only the large deviation function of the current, but also the optimal density profile to produce that current, which is the value of $\rho$ realising the minimum. However, this cannot be directly applied to the ASEP: looking back at to eq.(\ref{II-1-Jx}), which is the deterministic part of the Langevin equation, we see that $D=\frac{1+q}{2L}$ vanishes at large sizes, or, equivalently (through a rescaling), that the field $F\sim L(1-q)$ diverges, leaving us with an inviscid Burgers equation. We will use this approach nonetheless, by starting from the weakly asymmetric simple exclusion process (or WASEP), for which $F$ is finite (which is to say $(1-q)\sim L^{-1}$), or in fact the general system described by eq.(\ref{LangevinGas}). We will then minimise the MFT action with respect to the density, after which we will take $F$ to be of order $L$ again. We therefore start with the joint large deviation function of the current and density, given by \begin{equation}\label{IV-2-gjSigmaDt} g(j,\rho)=\lim\limits_{t\rightarrow+\infty}\frac{1}{t}\int_0^t d\tau\int_0^1\frac{\bigl[j-F\sigma(\rho)+D(\rho)\nabla\rho\bigr]^2}{2\sigma(\rho)}dx \end{equation} with ${\rm d}_t \rho=-\nabla j$. We intend to contract this to the large deviation function of only the space integrated current. We will assume that the best density to produce a given constant average current, in the long time limit, is time-independent. The constraint thus becomes ${\rm d}_t \rho=0=-\nabla j$, which is to say that $j$ does not depend on $x$ or $t$ any more. The time integral can then be taken out of the large deviation function, and we get \begin{equation}\label{IV-2-gjSigmaD} g(j,\rho)=\int_0^1\frac{\bigl[j-F\sigma(\rho)+D(\rho)\nabla\rho\bigr]^2}{2\sigma(\rho)}dx \end{equation} which is the action we will minimise over $\rho$. The first step in that direction is to expand the square \begin{equation} g(j,\rho)=\int_0^1\frac{\bigl[j-F\sigma(\rho)\bigr]^2+\bigl[D(\rho)\nabla\rho\bigr]^2}{2\sigma(\rho)}dx+\int_0^1\frac{\bigl[j-F\sigma(\rho)\bigr]}{\sigma(\rho)}D(\rho)\nabla\rho ~dx \end{equation} and notice that the cross product between the gradient and the rest produces a constant term \begin{equation} \int_0^1\frac{\bigl[j-F\sigma(\rho)\bigr]}{\sigma(\rho)}D(\rho)\nabla\rho ~dx=\int_{\rho_a}^{\rho_b}\frac{\bigl[j-F\sigma(\rho)\bigr]}{\sigma(\rho)}D(\rho)d\rho \end{equation} which does not need to be minimised. We are left with having to cancel the functional derivative of the first part alone. Let us write \begin{equation} A(\rho)=\frac{\bigl[j-F\sigma(\rho)\bigr]^2}{2\sigma(\rho)}~~~~{\rm and}~~~~B(\rho)=\frac{\bigl[D(\rho)\bigr]^2}{2\sigma(\rho)}. \end{equation} We want to minimise $\int_0^1dx~A(\rho)+B(\rho)(\nabla\rho)^2$. The minimising profile satisfies the Euler-Lagrange equation: \begin{equation} A'(\rho)-B'(\rho)(\nabla\rho)^2-2B(\rho)\Delta\rho=0 \end{equation} which, multiplied by $\nabla\rho$, gives \begin{equation} \nabla\bigl[A(\rho)-B(\rho)(\nabla\rho)^2\bigr]=0. \end{equation} The minimising profile is therefore such that \begin{equation} A(\rho)-B(\rho)(\nabla\rho)^2=K \end{equation} where $K$ is an integration constant. That constant can be found to be $0$ for $F\rightarrow\infty$ [83]. 
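As a quick symbolic check of this first integral, one can verify (here with the exclusion-process expressions $\sigma(\rho)=\rho(1-\rho)$ and $D=\frac{1}{2}$, though any smooth $\sigma$ and $D$ would do) that the derivative of $A(\rho)-B(\rho)(\nabla\rho)^2$ is indeed $\nabla\rho$ times the Euler-Lagrange expression; this is a minimal sympy sketch, not part of the derivation itself.

import sympy as sp

x, j, F = sp.symbols('x j F', positive=True)
rho = sp.Function('rho')(x)

sigma = rho * (1 - rho)          # conductivity of the exclusion process
D = sp.Rational(1, 2)            # diffusion coefficient

A = (j - F * sigma) ** 2 / (2 * sigma)
B = D ** 2 / (2 * sigma)

rp = sp.diff(rho, x)
euler_lagrange = sp.diff(A, rho) - sp.diff(B, rho) * rp ** 2 - 2 * B * sp.diff(rho, x, 2)
first_integral = A - B * rp ** 2

# d/dx [A - B (rho')^2] equals rho' times the Euler-Lagrange expression:
print(sp.simplify(sp.diff(first_integral, x) - rp * euler_lagrange))   # 0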
Replacing $A(\rho)$ and $B(\rho)$ by their expressions, and taking a square root, we obtain the equation defining the optimal profile: \begin{equation}\label{optRho}\boxed{ j-F\sigma(\rho)\pm D(\rho)\nabla\rho=0.} \end{equation} This equation is the same as the mean field equation, except for the sign of the gradient. To construct this profile, for a given current, one may therefore use the same construction as we saw in section 3.2., but allow the gradient to have either sign. We will come back to this in the case of the ASEP. We also obtain, from this minimisation, the large deviation function of the current, by injecting eq.(\ref{optRho}) into $g(j,\rho)$. The portions of the optimal profile where $\nabla\rho$ has the same sign as in the mean field equation do not contribute to the integral, as they make the integrand vanish, so that: \begin{equation} g(j)=\int\frac{\bigl[j-F\sigma(\rho)+D(\rho)\nabla\rho\bigr]^2}{2\sigma(\rho)}dx=\int\frac{4\bigl[j-F\sigma(\rho)\bigr]^2}{2\sigma(\rho)}dx=\int\frac{2\bigl[j-F\sigma(\rho)\bigr]D(\rho)\nabla\rho}{\sigma(\rho)}dx \end{equation} where the integral is over all the portions of space where $j-F\sigma(\rho)=D(\rho)\nabla\rho$. We end up with a combination of the values of the primitive of $(j-F\sigma(\rho))/\sigma(\rho)$ at the points delimiting those portions of space. We now consider the case of the TASEP, where $F=L$, $\sigma(\rho)=\rho(1-\rho)$ and $D=\frac{1}{2}$, and we rescale time by a factor $L$, so that the optimal profile equation for a current $j$ is given by \begin{equation}\label{IV-4-Jx} j=\rho(1-\rho)\mp\frac{1}{2L}\nabla\rho. \end{equation} For a given current $j$, and two boundary conditions $\rho_a$ and $\rho_b$, we can repeat the construction of density profiles we saw in section 3.2., without the constraint on the sign of the variations of $\rho$. Note that this constraint enforced that for given boundary conditions, only one value of the current was possible. In the present case, the current is a free variable. The allowed building blocks for the profiles are represented in fig.-16 ; the portions in red are the only ones contributing to $g(j)$. Figure 16. Various optimal profiles for a fixed $r$. The parts in green satisfy the mean field equation, and do not contribute to $g(j)$, whereas the portions in red do. The large deviation function of the current for that profile is then given by an integral over the red portions of the profile: \begin{equation} g(j)=\int\frac{\bigl[j-\rho(1-\rho)\bigr]}{\rho(1-\rho)}\nabla\rho~ dx=\sum\limits_{i}\biggl[j\log\Bigl(\frac{\rho}{1-\rho}\Bigr)-\rho \biggr]_{\rho_i^-}^{\rho_i^+} \end{equation} where $\rho_i^-$ and $\rho_i^+$ are the densities at the boundary of each red section. Note that in every case, one of the boundaries is $r$ or $1-r$. We shall write each of these terms as \begin{equation}\label{IV-2-bound}\boxed{ f(j;r_1,r_2)=\int_{r_1}^{r_2}\frac{\bigl[j-\rho(1-\rho)\bigr]}{\rho(1-\rho)}d\rho=j\log\Bigl(\frac{1-r_1}{r_1}\frac{r_2}{1-r_2}\Bigr)+r_1-r_2} \end{equation} where $j$ has to be equal to either $r_1(1-r_1)$ or $r_2(1-r_2)$. This is the same as the function $F^{\rm res}$ from [83]. Through this procedure, there are many profiles we can build for a given set $\{j,\rho_a,\rho_b\}$, since any number of shocks and anti-shocks can be added between $r$ and $1-r$. The true optimal profile is the one which minimises the large deviation function, which is to say the one with the fewest red portions.
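As a sanity check of the boundary term eq.(\ref{IV-2-bound}), one can compare the integral with its closed form numerically; the following short Python sketch uses arbitrary illustrative values of $r_1$ and $r_2$, with $j=r_1(1-r_1)$ as the formula requires.

import numpy as np
from scipy.integrate import quad

def f_integral(j, r1, r2):
    # left-hand side of eq. (IV-2-bound): direct numerical integration
    return quad(lambda rho: (j - rho * (1 - rho)) / (rho * (1 - rho)), r1, r2)[0]

def f_closed(j, r1, r2):
    # right-hand side of eq. (IV-2-bound): the closed form
    return j * np.log((1 - r1) / r1 * r2 / (1 - r2)) + r1 - r2

r1, r2 = 0.3, 0.6
j = r1 * (1 - r1)
print(f_integral(j, r1, r2), f_closed(j, r1, r2))   # the two numbers agree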
If at least one of $r$ and $1-r$ is between $\rho_a$ and $\rho_b$, then the optimal profile is monotonic. Otherwise, two candidates must be compared: one where $\rho$ goes from $\rho_a$ to $r$, and then to $\rho_b$, and one where $\rho$ goes from $\rho_a$ to $1-r$, and then to $\rho_b$. The other extremising profiles are conjectured to correspond to excited states of the system, conditioned on the average current. We will say more about these in section 6.3. In the next section, we will list all the possible configurations for $j=r(1-r)$, $\rho_a$ and $\rho_b$ that lead to different forms of $g(j)$ (i.e. with different combinations of the function $f$), and determine, in each of those phases, the expressions of $g(j)$, $E(\mu)$, $j(\mu)$, and the boundaries of the phase. We will also compare the asymptotic behaviours of those results with everything we found in the previous sections, to confirm their validity. We will then summarise all we know about the maximal current phase, which is not accessible by this method (and which is defined by $j>\frac{1}{4}$). Finally, we will put all this together in order to draw the phase diagram of the open ASEP with respect to $\rho_a$, $\rho_b$ and $\mu$. In all cases, we will denote by $r$ the density for which $j=r(1-r)$ that lies below $\frac{1}{2}$. Also note that we will do all the calculations for the TASEP, knowing that the corresponding results for the ASEP can be obtained merely by multiplying $E(\mu)$ by $(1-q)$. We also recall that we have defined two other boundary parameters $a=\frac{1-\rho_a}{\rho_a}$ and $b=\frac{\rho_b}{1-\rho_b}$, which we will use in certain formulae to make them more compact. Finally, we will, for the same reason, sometimes use, instead of $\mu$, the variable $u$ defined by: \begin{equation} u=\frac{1}{1+{\rm e}^{\mu}}. \end{equation} The non-perturbed case is given by $u=\frac{1}{2}$, $u=0$ corresponds to an infinite current, and $u=1$ to zero current. Dynamical phase diagram of the open ASEP Each phase is described in turn. High/low density phases We start with the low density phase, from which we can deduce the high density phase through $\rho_a\leftrightarrow 1-\rho_b$. This phase is defined by $\rho_a<1-\rho_b$, $\rho_a<1-r$ and $\rho_b<1-r$. The optimal profile is almost always on $\rho=r$, with possible boundary layers at both ends, with only the one at the left boundary contributing to $g(j)$ (fig.-17). Figure 17. Optimal profiles for a fixed $\rho_c$ in the low density phase. Only the portion in red contributes to $g(j)$. The large deviation function of the current is in this case: \begin{equation} g(j)=f(j;\rho_a,r)=j\log\Bigl(\frac{1-\rho_a}{\rho_a}\frac{r}{1-r}\Bigr)+\rho_a-r, \end{equation} which agrees with what we found in eq.(\ref{IV-2-gjASEP}) from exact calculations. This was first obtained in [83]. The generating function of the cumulants of the current is \begin{equation}\label{IV-4-ELD} E(\mu)=\frac{a}{a+1} \frac{ {\rm e}^\mu -1} {{\rm e}^\mu + a}= \frac{{\rm e}^\mu} {{\rm e}^\mu + a}-\frac{1}{1+a}, \end{equation} which was also obtained in [123] through a numerical approach to the Bethe equations, and the current is, in terms of $\mu$: \begin{equation} j(\mu)=\frac{a~{\rm e}^{\mu}}{({\rm e}^{\mu}+a)^2}=\rho_a(1-\rho_a)\frac{u(1-u)}{(\rho_a+u-2u\rho_a)^2}.
\end{equation} The boundaries of the phase are given by: \begin{align} \rho_a&<1-\rho_b\\ u&>\frac{\rho_a^2}{1-2\rho_a+2\rho_a^2}~~~~~~~~~~~{\rm with}~~\rho_a>\frac{1}{2},\\ u&>\frac{\rho_a \rho_b}{1-\rho_b-\rho_a+2\rho_a\rho_b}~~~~{\rm with}~~\rho_b>\frac{1}{2},\\ u&>\rho_a~~~~~~~~~~~~~~~~~~~~~~~~~~~{\rm with}~~\rho_a<\frac{1}{2}~~,~~\rho_b<\frac{1}{2}, \end{align} where this last condition corresponds to $j<\frac{1}{4}$, which is the boundary with the MC phase. According to this, the LD phase goes all the way up to $u=1$. This expression of $E(\mu)$ is consistent with what we found for $\mu\rightarrow -\infty$ (i.e. $u\rightarrow 1$), namely eq.(\ref{IV-3-gjLD}). We may also note that, on the line $\rho_a=\frac{1}{2}$, which corresponds to the LD-MC transition line for $\mu=0$, we find: \begin{equation} E(\mu)=\frac{1}{2} \frac{{\rm e}^\mu -1}{{\rm e}^\mu+1} \end{equation} which is consistent with the expression found in [97] for the half-filled periodic TASEP (we recall that the open system with $\rho_a=\frac{1}{2}$ and $\rho_b<1/2$ is equivalent to a half-filled periodic system of twice the size). The $\mu\rightarrow 0^-$ limit gives: \begin{equation} E(\mu)\sim \frac{\mu}{4}-\frac{\mu^3}{48} \end{equation} which is the same as eq.(\ref{IV-2-gMC5}). Shock phase We consider the case where $\rho_a<r$ and $\rho_b>1-r$. Here, the number of optimal profiles is of order $L$. Each of them has a boundary layer around each boundary, both of them contributing to $g(j)$, and two constant regions, where $\rho=r$ near the left boundary and $\rho=1-r$ near the right boundary, separated by a shock that can be placed anywhere in the system (fig.-18). Figure 18. A few optimal profiles for a fixed $\rho_c$ in the shock phase. Only the portions in red contribute to $g(j)$. The large deviation function of the current is given by: \begin{equation} g(j)=f(j;\rho_a,r)+f(j;1-r,\rho_b)=j\log\biggl(\frac{(1-\rho_a)\rho_b}{\rho_a(1-\rho_b)}\frac{r^2}{(1-r)^2}\biggr)+\rho_a-\rho_b+1-2r. \end{equation} The generating function of the cumulants of the current is \begin{equation}\label{IV-4-ELD2} E(\mu)=\frac{2{\rm e}^{\mu/2}} {{\rm e}^{\mu/2} + \sqrt{ab}}-\frac{1}{1+a}-\frac{1}{1+b} \end{equation} and the current is: \begin{equation} j(\mu)=\frac{\sqrt{ab}~{\rm e}^{\mu/2}} {({\rm e}^{\mu/2} + \sqrt{ab})^2}=\sqrt{\frac{(1-\rho_a)\rho_b}{\rho_a(1-\rho_b)}\frac{(1-u)}{u}}\Biggl(\sqrt{\frac{(1-\rho_a)\rho_b}{\rho_a(1-\rho_b)}}+\sqrt{\frac{(1-u)}{u}}\Biggr)^{-2}. \end{equation} The boundaries of the phase are given by: \begin{align} u&<\frac{\rho_a \rho_b}{1-\rho_b-\rho_a+2\rho_a\rho_b}~~~~{\rm with}~~\rho_a<\frac{1}{2}~~,~~\rho_b>\frac{1}{2},\\ u&<\frac{(1-\rho_a) (1-\rho_b)}{1-\rho_b-\rho_a+2\rho_a\rho_b}~~~~{\rm with}~~\rho_a<\frac{1}{2}~~,~~\rho_b>\frac{1}{2},\\ u&>\frac{\rho_a(1-\rho_b)}{\rho_b+\rho_a-2\rho_a\rho_b}~~~~~~~~~~{\rm with}~~\rho_a<\frac{1}{2}~~,~~\rho_b>\frac{1}{2}, \end{align} where the last condition corresponds to $j<\frac{1}{4}$. We may note that the volume which is defined by these boundaries is symmetric under any permutation of $\rho_a$, $1-\rho_b$ and $u$. The shock phase is related to one of the asymptotic results we found before. For $\mu\rightarrow 0^+$, which imposes $\rho_a=1-\rho_b$, we find: \begin{equation} E(\mu)\sim\frac{a}{(1+a)^2}\mu+\frac{a(a-1)}{4(a+1)^3}\mu^2 \end{equation} which is what we found in eq.(\ref{IV-2-ESL+}). Anti-shock phase The last phase we can access through the MFT is for $\rho_a>(1-r)$ and $\rho_b<r$.
In this case, the number of optimal profiles is also of order $L$: they first go down from $\rho_a$ to $(1-r)$, then down from $(1-r)$ to $r$ through an anti-shock that can be placed anywhere, and which contributes to $g(j)$, and finally down from $r$ to $\rho_b$ (fig.-19). Figure 19. A few optimal profiles for a fixed $\rho_c$ in the anti-shock phase. Only the portions in red contribute to $g(j)$. The large deviation function of the current is given by: \begin{equation} g(j)=f(j;1-r,r)=2j\log\Bigl(\frac{r}{1-r}\Bigr)+1-2r. \end{equation} The generating function of the cumulants of the current is \begin{equation}\label{IV-4-ELD3} E(\mu)=\frac{2{\rm e}^{\mu/2}} {{\rm e}^{\mu/2} + 1}-1=\tanh(\mu/4) \end{equation} and the current is: \begin{equation} j(\mu)=\frac{1-\tanh^2(\mu/4)}{4}=\frac{-2(u-u^2)+\sqrt{u-u^2}}{1-4(u-u^2)}. \end{equation} The boundaries of the phase are given by: \begin{align} u&<\frac{\rho_a^2}{1-2\rho_a+2\rho_a^2}~~~~{\rm with}~~\rho_a>\frac{1}{2}~~,~~\rho_a>1-\rho_b,\\ u&<\frac{(1-\rho_b)^2}{1-2\rho_b+2\rho_b^2}~~~~{\rm with}~~\rho_b<\frac{1}{2}~~,~~\rho_a<1-\rho_b,\\ u&>\frac{1}{2}, \end{align} where the last condition corresponds to $j<\frac{1}{4}$. We note that this phase corresponds to one of the examples that can be found in [83]. The expression for $E(\mu)$ also comes up as a side note in [123]. The limit $\mu\rightarrow 0^-$ gives: \begin{equation} E(\mu)\sim\frac{\mu}{4}-\frac{\mu^3}{192} \end{equation} which is consistent with eq.(\ref{IV-2-gLMC7}). The limit $\mu\rightarrow -\infty$, which implies $\rho_a=1-\rho_b=1$, gives: \begin{equation} E(\mu)\sim-1+2{\rm e}^{\mu/2} \end{equation} which is the same as equation (\ref{IV-3-EER-}), and this is the last asymptotic limit that we had to check. Figure 20. Phase diagram of the open ASEP in the s-ensemble. Top: diagrams at fixed $u$. Centre: complete diagram with phase boundaries and exploded view. Bottom: diagrams at fixed $\rho_a$. There is one phase left for us to examine, to a much lesser extent than all the others because the MFT breaks down in this case: the maximal current phase. Once we take out the phases we have already considered, we are left with a volume, in the three-dimensional phase space with variables $\rho_a$, $\rho_b$ and $u$, defined by: \begin{align} u&<\frac{1}{2}~~~~~~~~~~~~~~~~~~~~~~{\rm with}~~\rho_a>\frac{1}{2}~~,~~\rho_b<\frac{1}{2},\\ u&<1-\rho_b~~~~~~~~~~~~~~~~{\rm with}~~\rho_a>\frac{1}{2}~~,~~\rho_b>\frac{1}{2},\\ u&<\rho_a~~~~~~~~~~~~~~~~~~~~~{\rm with}~~\rho_a<\frac{1}{2}~~,~~\rho_b<\frac{1}{2},\\ u&<\frac{\rho_a(1-\rho_b)}{\rho_b+\rho_a-2\rho_a\rho_b}~~~~{\rm with}~~\rho_a<\frac{1}{2}~~,~~\rho_b>\frac{1}{2}. \end{align} We know that, asymptotically: \begin{equation} g(j)\sim(j-J)^{5/2}\frac{32\sqrt{3}L}{5\pi(1-q)^{3/2}} \end{equation} for $\mu\rightarrow 0^+$ with $\rho_a>\frac{1}{2}$ and $\rho_b<\frac{1}{2}$ (i.e. right next to the MC phase for the steady state), which we found in eq.(\ref{IV-2-gMC4}), and that: \begin{equation} g(j)\sim L j\log(j)-Lj(1-\log(\pi)) \end{equation} for $\mu\rightarrow\infty$, as we saw in eq.(\ref{IV-3-gMMC}). The best estimate of $g(j)$ from the MFT is obtained for $\rho=\frac{1}{2}$ in the whole system, which yields $g(j)\sim L(j-J)^2$, showing that this hydrodynamic description does indeed break down in the MC phase. Since this last result is valid independently of the boundary parameters, we know that the whole plane $u=0$ belongs to the same phase.
We have, however, no way to be certain that all of the volume we have described above is just one phase. That being said, we know that the entire region corresponds to a mean current higher than $\frac{1}{4}$. There is no way for the system to produce such a current through a hydrodynamic profile, for which the maximal possible current is $\frac{1}{4}$ if $\rho=\frac{1}{2}$, so that, in order to increase the current, the system must produce correlations, which is why the MFT breaks down. Those correlations must be negative for neighbouring sites (if the particles are next to holes, they will jump more easily and produce more current), which is consistent with what we found in the large current limit in eq.(\ref{IV-3-corrMC}). What's more, those correlations must be created everywhere in the system, because a single non-correlated zone would cause a blockage and bring the current back down to $\frac{1}{4}$. We can also argue that the mean density should be around $\frac{1}{2}$, because it is easier to get to a large current starting from $\frac{1}{4}$ than from anything lower. In all these remarks, the boundaries play little part: independently of them, the system must be around $\rho=\frac{1}{2}$, and anti-correlated at every point. We therefore don't expect any sub-phases in this region. Phase diagram Now that we have considered all the possible combinations of $\rho_a$, $\rho_b$ and $u$, we can draw the phase diagram of the ASEP in the s-ensemble (fig.-20). Each phase is represented using a different colour: blue for the low density phase, green for the high density phase, orange for the shock phase, purple for the anti-shock phase, and red/pink for the maximal current phase. The full diagram can be seen in the centre of the figure, with black lines marking the corners of the phases, and an exploded view of the LD, MC, shock (S) and anti-shock (AS) phases is also shown, with coloured lines representing slices for regularly spaced values of $u$. The HD phase can be deduced from the LD phase through the symmetry $\rho_a\leftrightarrow 1-\rho_b$. The top and bottom parts of the figure contain slices of the diagram for specific values of $u$ (top) and $\rho_a$ (bottom), with a few iso-current lines drawn in all phases except the MC phase. Note that those iso-current lines do not represent evenly spaced values of the current ($j$ varies, in fact, more slowly as one approaches the MC phase). Hydrodynamic excited states As we noted earlier, the extremisation of the MFT action yields many density profiles, which are local minima of that action, and of which we only considered the optimal ones to build the dynamical phase diagram of the current. One may naturally wonder if the other extremising profiles have any physical significance. We conjecture that they do in fact correspond to the low excitations (i.e. relaxation modes) of the model. In particular, profiles which correspond to a vanishing $\mu$, i.e. $\frac{{\rm d}}{{\rm d}j} g(j)=0$, if they exist, are relaxation modes of the unperturbed dynamics, and the corresponding values of the Legendre transform of $g(j)$ are the eigenvalues of these relaxation modes, which is to say the inverses of the relaxation times. In this section, we give some evidence towards that conjecture. More details on this conjecture will appear in [180]. Consider the system in its low density phase, with some fixed values of $j=r(1-r)$ and $\rho_a$, and $\rho_b<1-r$. 
Looking for the second best profile obtained from the aforementioned extremisation, we notice that it depends on the position of $\rho_b$ with respect to $r$: if $\rho_b<r$, it is obtained by adding a shock and an anti-shock to the optimal profile, whereas if $\rho_b>r$, one may go directly down to $\rho_b$ after the shock, so that the profile with an anti-shock becomes the third best (fig.-21). Figure 21. Second best extremising profiles in the LD phase, for $\rho_b$ smaller or larger than $r$. The profile on the left still exists for $\rho_b>r$, but becomes the third best. In the case where $\rho_b<r$, the large deviation function of the current for the second best profile is given by \begin{equation} g(j)=j\log\biggl(\frac{1-\rho_a}{\rho_a}\Bigl(\frac{r}{1-r}\Bigr)^3\biggr)+1+\rho_a-3r. \end{equation} The deformation parameter $\mu$ vanishes for \begin{equation} r=\Bigl[1+\bigl(\frac{1-\rho_a}{\rho_a}\bigr)^{1/3}\Bigr]^{-1} \end{equation} and we get a value for the Legendre transform of $g(j)$ equal to \begin{equation} E^\star=-1-\rho_a+3r. \end{equation} If $\rho_b>r$, that profile still exists, but another one appears, which is more probable (fig.-21 right). The large deviation function of the current for that profile is \begin{equation} g(j)=j\log\biggl(\frac{1-\rho_a}{\rho_a}\frac{\rho_b}{1-\rho_b}\bigl(\frac{r}{1-r}\bigr)^2\biggr)+1+\rho_a-\rho_b-2r. \end{equation} The deformation parameter $\mu$ vanishes for \begin{equation} r=\Bigl[1+\bigl(\frac{1-\rho_a}{\rho_a}\frac{\rho_b}{1-\rho_b}\bigr)^{1/2}\Bigr]^{-1} \end{equation} and we get a value for the Legendre transform of $g(j)$ equal to \begin{equation} E^\star=-1-\rho_a+\rho_b+2r. \end{equation} Figure 22. Phase diagram of the first excited state of the open ASEP ; an extra transition appears in the LD and HD phases, corresponding to a change of behaviour of the associated density profile. The line on which the second profile appears is that for which \begin{equation} \rho_b=r=\Bigl[1+\bigl(\frac{1-\rho_a}{\rho_a}\bigr)^{1/3}\Bigr]^{-1}, \end{equation} which is to say, using the alternative boundary parameters $a$ and $b$: \begin{equation}\boxed{ a~ b^3=1.} \end{equation} This line is therefore a transition line in the phase diagram of the second best density profile of the open ASEP (fig.-22). The same calculations can be done for the HD phase by exchanging $a$ and $b$. The value obtained for $E^\star$, as well as the location of this transition line, agree perfectly with results obtained in [36] for the eigenvalue of the first excited state of the open ASEP, through a numerical resolution of the Bethe equations, results which were later verified in [122]. This leads us to conjecture that all the non-optimal extremal profiles obtained from the MFT are excited states of the (biased, unless $\mu=0$) open ASEP, with the value of the Legendre transform of the large deviation function giving the corresponding eigenvalue. One should note that each shock and anti-shock in the profile adds a factor of order $L$ to the degeneracy of that eigenvalue (because the eigenvalue does not depend on their positions), and that finite size corrections will lift that degeneracy, in which case the true eigenstates will be specific superpositions of these profiles. More evidence for that statement will appear in [180].
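To close this section, the agreement between the hydrodynamic results and the microscopic definition of the problem can also be probed numerically. The following minimal Python sketch compares the LD-phase prediction eq.(\ref{IV-4-ELD}) with a brute-force diagonalisation of the deformed Markov matrix of a small open TASEP; the rates and system size are arbitrary illustrative choices, and some finite-size discrepancy is expected.

import numpy as np
from itertools import product

def tilted_tasep(L, alpha, beta, mu):
    # mu-deformed Markov matrix of the open TASEP (bulk hopping rate 1,
    # entry rate alpha, exit rate beta); exp(mu) weights each injection.
    M = np.zeros((2 ** L, 2 ** L))
    for idx, cfg in enumerate(product([0, 1], repeat=L)):
        c = list(cfg)
        if c[0] == 0:                                   # injection at site 1
            M[int("".join(map(str, [1] + c[1:])), 2), idx] += alpha * np.exp(mu)
            M[idx, idx] -= alpha
        for i in range(L - 1):                          # bulk hops i -> i+1
            if c[i] == 1 and c[i + 1] == 0:
                new = c.copy(); new[i], new[i + 1] = 0, 1
                M[int("".join(map(str, new)), 2), idx] += 1.0
                M[idx, idx] -= 1.0
        if c[-1] == 1:                                  # extraction at site L
            M[int("".join(map(str, c[:-1] + [0])), 2), idx] += beta
            M[idx, idx] -= beta
    return M

L, alpha, beta, mu = 8, 0.3, 0.7, 0.2   # LD phase: rho_a = alpha = 0.3, rho_b = 1 - beta = 0.3
a = (1 - alpha) / alpha
E_exact = np.max(np.linalg.eigvals(tilted_tasep(L, alpha, beta, mu)).real)
E_LD = a / (a + 1) * (np.exp(mu) - 1) / (np.exp(mu) + a)
print(E_exact, E_LD)   # close, up to finite-size corrections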
Conclusion and outlook We have seen, in this review, how to obtain the large deviation function of the average current of particles in the steady state of one-dimensional bulk-driven lattice gases, and in particular of the asymmetric simple exclusion process with open boundaries. We first reviewed the mathematical tools necessary to pose and treat the problem at hand. We defined large deviation functions, which are a natural generalisation of free energies, and saw that they relate to generating functions of cumulants through a Legendre transform. We then considered the special case of time-additive observables in time-continuous Markov processes, and saw how the problem of obtaining the generating function of those observables in the long time limit reduces to that of computing the largest eigenvalue of a deformed Markov matrix. We then introduced the reader to the open ASEP, starting with a rapid overview of the existing literature related to the model and its variants. We also presented two important results pertaining to its steady state: first, the phase diagram of the average current, as well as the typical density profiles, using a mean-field approach ; then, the famous matrix Ansatz, giving the exact probability distribution of the steady state using a matrix product formulation, and which allows to obtain that same phase diagram through an exact calculation. Being interested in the fluctuations of the current rather than its average, we posed the problem of obtaining its large deviation function, which is equivalent to that of calculating the largest eigenvalue of a deformed Markov matrix. That matrix being integrable, we saw how to use the coordinate Bethe Ansatz (in the periodic case) and the Q-operator method (in the open case) in order to obtain an exact expression of the generating function of the cumulants of the current, written as an implicit pair of series in an intermediate parameter. Treating those series in the large size limit, we obtained the asymptotic behaviour of the large deviation function of the current for small fluctuations, in each of the phases of the average current, and found a few dynamical phase transitions. We noticed in particular that for fluctuations of the current smaller than $\frac{1}{4}$, which is the maximal current obtained from a hydrodynamic (mean field) description of the system, the large deviation function does not depend on the size of the system, whereas if the current is larger than $\frac{1}{4}$, it is proportional to the size of the system. Having obtained the behaviour of the large deviation function of the current in the limit of small fluctuations, we then looked at the limit of extreme fluctuations, either positive or negative, where it turns out that the deformed Markov matrix can be diagonalised directly, without invoking the integrability of the system. In the large negative fluctuation limit, where the current goes to $0$, the deformed Markov matrix is a perturbation of a diagonal matrix, and we find that the large deviation function of the current is independent of the size of the system. In the large positive fluctuation limit, where the current goes to infinity, the system is equivalent to an open XX spin chain, and the large deviation function of the current is proportional to the size of the system. We also note that these properties do not depend on many details of the system, and would be equally valid with added interactions between the particle or site-wise disorder of the jumping rates. 
The difference in scaling with respect to the size of the system suggests that a generic dynamical phase transition exists at an intermediate current for all those systems. Finally, we considered a less exact but more powerful approach, based on the macroscopic fluctuation theory, to obtain the large deviation function of any finite fluctuation of the current, using a hydrodynamic description of the system in the large size limit and a non-rigorous exchange of limits. This allowed us to build the complete dynamical phase diagram of the current for the open ASEP. In four of the five phases we obtained, which are the ones corresponding to a current smaller than $\frac{1}{4}$, we found large deviation functions independent of the size of the system, and perfect agreement with the exact results obtained before. In the last phase, we found no such agreement, and we surmised that the MFT breaks down due to the necessary presence of local correlations in the system in that phase. In the phases where the MFT does apply, we gave some evidence for a conjecture stating that the non-optimal extremisers of the MFT action are the slow relaxation modes of the system. Many challenges related to the contents of this review remain to be tackled, and we will conclude by mentioning a few of them. First of all, the Q-operator method summarised in section 4.2.2. has only yielded the dominant eigenvalue of the deformed Markov matrix so far, but it allows in principle access to the whole spectrum. Obtaining it, through that method or any of the other ones mentioned in section 4.2., would in particular allow one to verify that the non-optimal profiles mentioned in section 6.3. are really related to the low excitations of the model. On the subject of those local minima of the MFT action, it would be interesting to understand for which models they might appear and whether they always correspond to excited states. For one-dimensional lattice gases, a finite drive in the bulk seems essential: those local minima appear for the ASEP because of the appearance of shocks and anti-shocks which can be combined in many ways, which is a consequence of the diffusion part of the Langevin equation being inversely proportional to the size of the system. It is natural to wonder what might be the case in other types of systems, such as higher-dimensional ones, for instance. Finally, it would certainly be interesting to understand exactly how generic the dynamical phase transition we discussed in section 5.3. really is, and whether it has any universal features. In the case of the open ASEP, that transition arises because of the fact that the current which can be obtained from a hydrodynamic description is bounded. If one tries to impose a current higher than $\frac{1}{4}$ (which is the upper bound), which is not forbidden in the underlying microscopic model, the hydrodynamic description breaks down, and another macroscopic description of the model is needed. It would stand to reason that such a transition would appear for any model where a hydrodynamic limit introduces bounds for an observable. The question of whether those transitions have any universal features is for the moment entirely open. Acknowledgements: The author would like to thank K. Mallick, V. Pasquier, F. van Wijland, J. Tailleur, M. Evans, R. Blythe, G. Schütz, D. Karevski, T. Sadhu, C. Maes, W. de Roeck, C. Nardini and C. Pérez-Espigares for useful and interesting discussions.
This work was financed by the Interuniversity Attraction Pole - Phase VII/18 (Dynamics, Geometry and Statistical Physics) at KU Leuven. A. Lazarescu. Exact Large Deviations of the Current in the Asymmetric Simple Exclusion Process with Open Boundaries (PhD thesis). Phd, UPMC (2013). H. Touchette. The large deviation approach to statistical mechanics. Physics Reports 478(1-3), 1–69 (2009). H. Touchette and R. J. Harris. Large Deviation Approach to Nonequilibrium Systems. Nonequilibrium Statistical Physics of Small Systems: Fluctuation Relations and Beyond pp. 335–360 (2013). H. Touchette, R. J. Harris and J. Tailleur. First-order phase transitions from poles in asymptotic representations of partition functions. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 81(3), 030101 (2010). M. D. Donsker and S. R. S. Varadhan. Asymptotic evaluation of certain Markov process expectations for large time—III. Communications on Pure and Applied Mathematics 29(4), 389–461 (1976). M. D. Donsker and S. R. S. Varadhan. Asymptotic evaluation of certain markov process expectations for large time. IV. Communications on Pure and Applied Mathematics 36(2), 183–212 (1983). M. D. Donsker and S. R. S. Varadhan. Asymptotic evaluation of certain markov process expectations for large time, II. Communications on Pure and Applied Mathematics 28(2), 279–301 (1975). M. D. Donsker and S. R. S. Varadhan. Asymptotic evaluation of certain markov process expectations for large time, I. Communications on Pure and Applied Mathematics 28(1), 1–47 (2010). F. den Hollander. Large deviations, vol. 14 (Academic press, 2000). ISBN 0-8218-1989-5. T. Nemoto and S. I. Sasa. Computation of large deviation statistics via iterative measurement-and-feedback procedure. Physical Review Letters 112(9) (2014). R. L. Jack and P. Sollich. Large deviations and ensembles of trajectories in stochastic models. Progress of Theoretical Physics Supplement 184(Supplement 1), 304–317 (2009). R. Chetrite and H. Touchette. Nonequilibrium Markov processes conditioned on large deviations. Annales Henri Poincaré (2014). R. Chetrite and H. Touchette. Variational and optimal control representations of conditioned and driven processes. arXiv:1506.05291 (2015). J. L. Lebowitz and H. Spohn. A Gallavotti-Cohen Type Symmetry in the Large Deviation Functional for Stochastic Dynamics. Journal of Statistical Physics 95, 333–365 (1998). J. Kurchan. Fluctuation theorem for stochastic dynamics. Journal of Physics A: Mathematical and General 31(16), 3719 (1997). G. Gallavotti and E. G. D. Cohen. Dynamical ensembles in nonequilibrium statistical mechanics. Physical Review Letters 74(14), 2694–2697 (1995). D. J. Evans, E. G. D. Cohen and G. P. Morriss. Probability of second law violations in shearing steady states. Physical Review Letters 71(15), 2401–2404 (1993). D. J. Evans and D. J. Searles. Equilibrium microstates which generate second law violating steady states. Physical Review E 50(2), 1645–1648 (1994). C. T. MacDonald, J. H. Gibbs and a. C. Pipkin. Kinetics of biopolymerization on nucleic acid templates. Biopolymers 6(1), 1–5 (1968). C. T. MacDonald and J. H. Gibbs. Concerning the kinetics of polypeptide synthesis on polyribosomes. Biopolymers 7(5), 707–725 (1969). D. A. Adams, B. Schmittmann and R. K. P. Zia. Far-from-equilibrium transport with constrained resources. Journal of Statistical Mechanics: Theory and Experiment 2008(06), P06009 (2008). P. Greulich, L. Ciandrini, R. J. Allen and M. C. Romano. 
Mixed population of competing totally asymmetric simple exclusion processes with a shared reservoir of particles. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 85(1), 011142 (2012). P. Greulich and A. Schadschneider. Single-Bottleneck Approximation for Driven Lattice Gases with Disorder and Open Boundary Conditions. Journal of Statistical Mechanics: Theory and Experiment 2008(04), P04009 (2007). L. Ciandrini, I. Stansfield and M. C. Romano. Role of the particle's stepping cycle in an asymmetric exclusion process: A model of mRNA translation. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 81(5), 051904 (2010). L. Reese, A. Melbinger and E. Frey. Crowding of molecular motors determines microtubule depolymerization. Biophysical Journal 101(9), 2190–2200 (2011). T. Chou, K. Mallick and R. K. P. Zia. Paradigmatic Model To Biological Transport. Reports on Progress in Physics 74(11), 116601 (2011). S. Sandow. Partially asymmetric exclusion process with open boundaries. Physical Review E 50(4), 2660–2667 (1994). L. D. Faddeev. How Algebraic Bethe Ansatz works for integrable model. Les-Houches lectures p. 59 (1996). R. J. Baxter. Exactly solved models in statistical mechanics (Academic Press, 2007). ISBN 0486462714. S. Prolhac. Fluctuations and skewness of the current in the partially asymmetric exclusion process. Journal of Physics A: Mathematical and Theoretical 41(36), 21 (2008). S. Prolhac. A combinatorial solution for the current fluctuations in the exclusion process. arXiv preprint arXiv:0904.2356 (2), 7 (2009). S. Prolhac. Tree structures for the current fluctuations in the exclusion process. Journal of Physics A: Mathematical and Theoretical 43(10), 105002 (2010). S. Prolhac and K. Mallick. Current Fluctuations in the exclusion process and Bethe Ansatz. Journal of Physics A: Mathematical and Theoretical 41(17), 17 (2008). N. Crampé, E. Ragoucy and D. Simon. Eigenvectors of open XXZ and ASEP models for a class of non-diagonal boundary conditions. Journal of Statistical Mechanics: Theory and Experiment 2010(11), P11038 (2010). N. Crampe, E. Ragoucy and D. Simon. Matrix Coordinate Bethe Ansatz: Applications to XXZ and ASEP models. Journal of Physics A: Mathematical and Theoretical 44(40), 18 (2011). J. de Gier and F. H. L. Essler. Exact Spectral Gaps of the Asymmetric Exclusion Process with Open Boundaries. Journal of Statistical Mechanics: Theory and Experiment 2006(12), P12011 (2006). J. de Gier and F. H. L. Essler. Bethe ansatz solution of the asymmetric exclusion process with open boundaries. Physical Review Letters 95(24), 240601 (2005). D. Simon. Construction of a Coordinate Bethe Ansatz for the asymmetric simple exclusion process with open boundaries. Journal of Statistical Mechanics: Theory and Experiment 2009(07), P07017 (2009). B. Derrida, M. R. Evans, V. Hakim and V. Pasquier. Exact solution of a 1D asymmetric exclusion model using a matrix formulation. Journal of Physics A: Mathematical and General 26(7), 1493–1517 (1999). M. Kardar, G. Parisi and Y. C. Zhang. Dynamic scaling of growing interfaces. Physical Review Letters 56(9), 889–892 (1986). K. Johansson. Shape Fluctuations and Random Matrices. Communications in mathematical physics 209(2), 437–476 (1999). C. A. Tracy and H. Widom. Total current fluctuations in the asymmetric simple exclusion process. Journal of Mathematical Physics 50(9), 095204 (2009). T. Sasamoto and H. Spohn. One-Dimensional kardar-Parisi-zhang equation: An exact solution and its universality. 
Physical Review Letters 104(23), 230602 (2010). M. Praehofer and H. Spohn. Current fluctuations for the totally asymmetric simple exclusion process. In and out of equilibrium 51, 185–204 (2001). G. B. Arous and I. Corwin. Current fluctuations for TASEP: A proof of the Prähofer-Spohn conjecture. Annals of Probability 39(1), 104–138 (2011). P. L. Ferrari. From interacting particle systems to random matrices. Journal of Statistical Mechanics: Theory and Experiment 2010(10), P10016 (2010). T. Sasamoto. Fluctuations of the one-dimensional asymmetric exclusion process using random matrix techniques. Journal of Statistical Mechanics: Theory and Experiment 2007(07), P07007 (2007). S. Prolhac and H. Spohn. The one-dimensional KPZ equation and the Airy process. Journal of Statistical Mechanics: Theory and Experiment 2011(03), P03020 (2011). G. Amir, I. Corwin and J. Quastel. Probability Distribution of the Free Energy of the Continuum Directed Random Polymer in 1+1 dimensions. Communications on Pure and Applied Mathematics 64(4), 466–537 (2010). P. Calabrese, P. L. Doussal and A. Rosso. Free-energy distribution of the directed polymer at high temperature. EPL (Europhysics Letters) 90(2), 6 (2010). T. Imamura, T. Sasamoto and H. Spohn. KPZ, ASEP and Delta-Bose Gas. Journal of Physics: Conference Series 297, 012016 (2011). H. Spohn. Stochastic integrability and the KPZ equation. IAMP News Bulletin pp. 1–6 (2012). L. Bertini and G. Giacomin. Stochastic Burgers and KPZ Equations from Particle Systems (1997). V. Lecomte, U. C. Täuber and F. van Wijland. Current distribution in systems with anomalous diffusion: renormalization group approach. Journal of Physics A: Mathematical and Theoretical 40, 1447–1465 (2007). K. A. Takeuchi and M. Sano. Universal fluctuations of growing interfaces: Evidence in turbulent liquid crystals. Physical Review Letters 104(23), 230601 (2010). I. Corwin. The Kardar-Parisi-Zhang equation and universality class. Random Matrices: Theory and Applications 1(01), 1130001 (2011). T. Halpin-Healy and Y.-C. Zhang. Kinetic roughening phenomena, stochastic growth, directed polymers and all that. Aspects of multidisciplinary statistical mechanics. Physics Reports 254(4-6), 215–414 (1995). T. Kriecherbauer and J. Krug. A pedestrian's view on interacting particle systems, KPZ universality, and random matrices. Journal of Physics A: Mathematical and Theoretical 43(40), 403001 (2008). J. Quastel. Weakly Asymmetric Exclusion and KPZ. Proceedings of the International Congress of Mathematicians 2010 (ICM 2010) - Vol. I: Plenary Lectures and Ceremonies, Vols. II-IV: Invited Lectures, pp. 2310–2324 (Hindustan Book Agency, India, 2010). ISBN 9789814324304. T. Karzig and F. {Von Oppen}. Signatures of critical full counting statistics in a quantum-dot chain. Physical Review B - Condensed Matter and Materials Physics 81(4), 045317 (2010). M. T. Batchelor, J. de Gier and B. Nienhuis. The quantum symmetric XXZ chain at Delta=-1/2, alternating sign matrices and plane partitions. Journal of Physics A: Mathematical and General 34(19), 7 (2001). J. de Gier, M. T. Batchelor, B. Nienhuis and S. Mitra. The XXZ spin chain at Delta = - 1/2: Bethe roots, symmetric functions, and determinants. Journal of Mathematical Physics 43(8), 4135–4146 (2002). R. A. Blythe, W. Janke, D. A. Johnston and R. Kenna. Continued Fractions and the Partially Asymmetric Exclusion Process. Journal of Physics A: Mathematical and Theoretical 42(32), 325002 (2009). B. Derrida, C. Enaud and J. L. Lebowitz. 
The asymmetric Exclusion Process and Brownian Excursions. Journal of Statistical Physics 115(1/2), 365–382 (2003). S. N. Majumdar and A. Comtet. Airy distribution function: From the area under a brownian excursion to the maximal height of fluctuating interfaces. Journal of Statistical Physics 119(3-4), 777–826 (2005). S. N. Majumdar and A. Comtet. Exact maximal height distribution of fluctuating interfaces. Physical Review Letters 92(22), 225501–1 (2004). S. Corteel and L. K. Williams. Tableaux combinatorics for the asymmetric exclusion process. Advances in Applied Mathematics 39(3), 293–310 (2007). T. Sasamoto. One-dimensional partially asymmetric simple exclusion process with open boundaries: orthogonal polynomials approach. Journal of Physics A: Mathematical and General 32(41), 7109–7131 (1999). M. Uchiyama, T. Sasamoto and M. Wadati. Asymmetric simple exclusion process with open boundaries and Askey–Wilson polynomials. Journal of Physics A: Mathematical and General 37(18), 4985–5002 (2004). X. G. Viennot. Canopy of binary trees, Catalan tableaux and the asymmetric exclusion process (2009). G. Schutz. Exactly Solvable Models for Many-Body Systems Far from Equilibrium. vol. 19 of Phase Transitions and Critical Phenomena, pp. 3–251 (Academic Press, 2001). ISBN 0122203194. B. Derrida. An exactly soluble non-equilibrium system: The asymmetric simple exclusion process. Physics Reports 301(1-3), 65–83 (1998). B. Derrida. Non equilibrium steady states: fluctuations and large deviations of the density and of the current. Journal of Statistical Mechanics: Theory and Experiment 2007(07), P07023 (2007). B. Schmittmann and R. K. P. Zia. Statistical mechanics of driven diffusive systems. B. Schmittmann and R. K. P. Zia (Eds.), Phase Transitions and Critical Phenomena, vol. 17 of Phase Transitions and Critical Phenomena, pp. 3–214 (Academic Press, 1995). ISBN 9780122203176. C. Appert-Rolland, B. Derrida, V. Lecomte and F. {Van Wijland}. Universal cumulants of the current in diffusive systems on a ring. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 78(2), 21122 (2008). T. Bodineau and B. Derrida. Current fluctuations in nonequilibrium diffusive systems: An additivity principle. Physical Review Letters 92(18), 180601–1 (2004). L. Bertini, a. De Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim. Macroscopic fluctuation theory for stationary non-equilibrium states. Journal of Statistical Physics 107(3-4), 635–675 (2002). L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim. Stochastic interacting particle systems out of equilibrium. Journal of Statistical Mechanics: Theory and Experiment 2007(07), P07014 (2007). L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim. Current Fluctuations in Stochastic Lattice Gases. Physical Review Letters 94(January), 030601 (2005). L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim. Large deviation approach to non equilibrium processes in stochastic lattice gases. Bulletin of the Brazilian Mathematical Society 37(4), 611–643 (2006). H. Spohn. Large scale dynamics of interacting particles. Texts and monographs in physics (Springer-Verlag, 1991). ISBN 9783642843730. B. Derrida and C. Enaud. Large deviation functional of the weakly asymmetric exclusion process. Journal of Statistical Physics 114(3/4), 537–562 (2003). T. Bodineau and B. Derrida. Current large deviations for asymmetric exclusion processes with open boundaries. Journal of Statistical Physics 123(2), 277–300 (2006). B. Derrida, B. 
Doucot and P. E. Roche. Current fluctuations in the one dimensional Symmetric Exclusion Process with open boundaries. Journal of Statistical Physics 115(3/4), 717–748 (2003). J. Tailleur, J. Kurchan and V. Lecomte. Mapping nonequilibrium onto equilibrium: The macroscopic fluctuations of simple transport models. Physical Review Letters 99(15), 150602 (2007). V. Lecomte, A. Imparato and F. {Van Wijland}. Current fluctuations in systems with diffusive dynamics, in and out of equilibrium. Progress of Theoretical Physics Supplement 184, 276–289 (2009). S. Prolhac and K. Mallick. Cumulants of the current in the weakly asymmetric exclusion process. Journal of Physics A: Mathematical and Theoretical 42(17), 24 (2009). T. Bodineau and B. Derrida. Distribution of current in nonequilibrium diffusive systems and phase transitions. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 72(6), 66110 (2005). D. Simon. Bethe Ansatz for the Weakly Asymmetric Simple Exclusion Process and Phase Transition in the Current Distribution. Journal of Statistical Physics 142(5), 931–951 (2011). V. Belitsky and G. M. Schütz. Microscopic Structure of Shocks and Antishocks in the ASEP Conditioned on Low Current. Journal of Statistical Physics 152, 93–111 (2013). V. Lecomte, J. P. Garrahan and F. van Wijland. Inactive dynamical phase of a symmetric exclusion process on a ring. Journal of Physics A: Mathematical and Theoretical 45(17), 175001 (2012). R. L. Jack, I. R. Thompson and P. Sollich. Hyperuniformity and Phase Separation in Biased Ensembles of Trajectories for Diffusive Systems. Physical Review Letters 114(1), 060601 (2015). P. L. Krapivsky, K. Mallick and T. Sadhu. Dynamical properties of single-file diffusion. arXiv:1505.01287 (2015). P. L. Krapivsky, K. Mallick and T. Sadhu. Tagged particle in single-file diffusion. arXiv:1506.00865 . B. Derrida and J. L. Lebowitz. Exact Large Deviation Function in the Asymmetric Exclusion Process. Physical Review Letters 80(2), 8 (1998). B. Derrida and C. Appert. Universal large-deviation function of the Kardar–Parisi–Zhang equation in one dimension. Journal of Statistical Physics 94(1-2), 1–30 (1999). B. Derrida and K. Mallick. Exact diffusion constant for the one-dimensional partially asymmetric exclusion model. Journal of Physics A: Mathematical and General 30(4), 1031–1046 (1999). O. Golinelli and K. Mallick. Bethe Ansatz calculation of the spectral gap of the asymmetric exclusion process. Journal of Physics A: Mathematical and General 37(10), 3321–3331 (2003). O. Golinelli and K. Mallick. Spectral gap of the totally asymmetric exclusion process at arbitrary filling. Journal of Physics A: Mathematical and General 38(7), 1419–1425 (2004). S. Prolhac. Spectrum of the totally asymmetric simple exclusion process on a periodic lattice—bulk eigenvalues. Journal of Physics A: Mathematical and Theoretical 46(41), 415001 (2013). S. Prolhac. Current fluctuations for totally asymmetric exclusion on the relaxation scale. Journal of Physics A: Mathematical and Theoretical 48, 06FT02 (2015). V. Popkov, G. M. Schütz and D. Simon. Asymmetric simple exclusion process on a ring conditioned on enhanced flux. Journal of Statistical Mechanics: Theory and Experiment 2010(10), P10007 (2010). B. Derrida, M. R. Evans and K. Mallick. Exact diffusion constant of a one-dimensional asymmetric exclusion model with open boundaries. Journal of Statistical Physics 79(5-6), 833–874 (1995). J. Krug. Boundary-induced phase transitions in driven diffusive systems. 